Patent 11862129
DETAILED DESCRIPTION Apparatuses, machine-readable media, and methods related to image location based on a perceived interest and display position are described herein. Computing device displays (e.g., monitors, mobile device screens, laptop screens, etc.) can be used to view images (e.g., static images, video images, and/or text) on the display. Images can be received by the computing device from another device and/or generated by the computing device. A user of a computing device may prefer some images over other images and sort those images to various viewing locations on a display. Images can be organized into viewing locations by the computing device for the convenience of the user. For instance, a computing device can include a controller and a memory device to organize the images based on a preference of the user. The preference can be based on a perceived interest of the image by the user. In an example, a method can include assigning, by a controller coupled to a memory device, a perceived interest to an image of a plurality of images, wherein the perceived interest is assigned based in part on a change in position of a display coupled to the memory device while the image is viewable on the display, selecting the image from an initial viewing location on the display responsive to the assigned perceived interest, and transferring the image to a different viewing location, wherein the initial viewing location and the different viewing location are visible on the display. As used herein, the term “viewing location” refers to a location that can be visible on the display of a computing device. The display can be part of a user interface for a computing device, where the user interface allows the user to receive information from the computing device and provide inputs to the computing device. The viewing location can be selected by a user of the computing device. For example, a user can select a viewing location visible on the display to view the images allocated to that viewing location. The images allocated to a particular viewing location can share a common perceived interest. As used herein, the term “perceived interest” refers to a level of importance an image is determined to possess. For instance, a perceived interest of an image may be an assignment corresponding to a user's subjective interest in the image. For example, a user may use a computing device such as a mobile device (e.g., a smartphone) equipped with an image sensor (e.g., a camera) to generate an image. In other examples, a computing device can receive (or otherwise obtain) an image from the internet, a screenshot, an email, a text message, or other transmission. Additionally, a computing device can generate groups of images based on criteria in an attempt to associate a perceived interest with the grouped images. Computing devices can group images without requiring the input of a user. For example, some approaches to generating groups of images without input from the user of the computing device include grouping images by a geographical location (e.g., GPS) in which they were generated and/or received, grouping by facial recognition of the subject in the image (e.g., grouping images according to who/what is included in the image), and/or grouping by a time (e.g., a time of day, month, year, and/or season). 
However, the images that are grouped by a computing device using location, facial recognition of a subject of the image, and/or time can be inaccurate and fail to capture a user's subjective perception of interest in an image. For example, the grouped images may not represent what the user subjectively (e.g., actually) perceives as interesting, but instead can group repetitive, poor quality, disinteresting, or otherwise undesired images. The inaccurate grouping of images can result in cluttered image viewing locations on a display of a computing device and result in situations where the user is frequently searching for a particular image. This may result in frustration, wasted time, resources, and computing power (e.g., battery life). A user of a computing device may show another person an image when the user determines an image to be interesting. In some examples, the act of showing another person an image on a computing device involves moving the display of the computing device such that the display is at an angle that another person can view the image. In other examples, the act of showing another person an image on a computing device involves the different person being close enough to the display to be at an angle where the person can view the image. For example, a person can position him or herself next to or behind the user such that the display of the computing device is visible to the user and the person. Examples of the present disclosures can ease frustration, clutter, conserve resources and/or computing power by grouping images together that share a perceived interest of the user. In an example embodiment, a perceived interest can be assigned to an image generated, received, and/or otherwise obtained by a computing device (e.g., a smartphone) based on a change in position of a display of a computing device while the image is viewable on the display. Said differently, if a user locates the image such that it is visible on the display, and moves the display to a suitable angle such that a different person can view the image, the computing device can assign the image a perceived interest corresponding to a desired preference. In another example embodiment, a perceived interest can be assigned to an image generated, received, and/or otherwise obtained by a computing device (e.g., the camera of a smartphone) based on receiving an input from an image sensor coupled to the display when the image is visible on the display. Said differently, an image sensor coupled to the display can transmit facial recognition data if a person other than the user is at an angle such that the image is visible on the display (the person is standing next to or behind the user). The computing device can assign the image a perceived interest corresponding to a desired preference. Embodiments described herein include the computing device transferring (e.g., copying) images with a shared perceived interest to viewing locations on the display such that at a user can easily find images frequently presented and/or viewed by other people. As used herein, the term “transfer” refers to moving and/or creating a copy of an image and moving it from an initial viewing location to a different viewing location. In some examples, respective viewing locations can include other images that share common perceptions of interest. Further, the computing device can group images based on the facial recognition input received that corresponds to the person that viewed the image. 
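As a rough illustration of the approach summarized above, the following sketch models a controller that assigns a perceived interest when an image is visible during a change in display position (or when sensor input indicates another viewer) and then transfers the image to a different viewing location. All class, method, and viewing-location names here are illustrative assumptions, not terms defined by the disclosure.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Set


class PerceivedInterest(Enum):
    UNASSIGNED = "unassigned"
    DESIRED = "desired"
    UNDESIRED = "undesired"


@dataclass
class Image:
    image_id: str
    interest: PerceivedInterest = PerceivedInterest.UNASSIGNED
    # An image stays visible in its initial viewing location even after it is
    # "transferred" (copied) to a different viewing location.
    viewing_locations: Set[str] = field(default_factory=lambda: {"initial"})


class Controller:
    """Hypothetical controller sketch for the behavior described above."""

    def on_display_position_changed(self, visible_image: Optional[Image]) -> None:
        # A change in display position while an image is viewable is treated
        # as a signal of the user's perceived interest in that image.
        if visible_image is not None:
            self.assign_interest(visible_image, PerceivedInterest.DESIRED)

    def on_viewer_detected(self, visible_image: Optional[Image]) -> None:
        # Facial-recognition input while an image is visible is treated the
        # same way: another person is viewing the image.
        if visible_image is not None:
            self.assign_interest(visible_image, PerceivedInterest.DESIRED)

    def assign_interest(self, image: Image, interest: PerceivedInterest) -> None:
        image.interest = interest
        if interest is PerceivedInterest.DESIRED:
            self.transfer(image, "preferred")
        elif interest is PerceivedInterest.UNDESIRED:
            self.transfer(image, "discard")

    def transfer(self, image: Image, location: str) -> None:
        # "Transfer" adds the image (a copy) to the new viewing location while
        # keeping it viewable in the initial viewing location.
        image.viewing_locations.add(location)


controller = Controller()
photo = Image("IMG_0001")
controller.on_display_position_changed(photo)
print(photo.interest.value, sorted(photo.viewing_locations))  # desired ['initial', 'preferred']
```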
In other embodiments, undesired images generated by the computing device can be identified and be made available on the display such that a user can review and discard the images, thus removing clutter. For example, images generated by the computing device that are not visible on the display when the display position is altered, and/or not provided for another person to view, may be assigned a perceived interest (e.g., a lack of perceived interest) corresponding to an undesired preference and moved to a viewing location such that a user can review and discard the images. Said differently, sometimes users can capture, receive, and/or otherwise obtain images on a computing device (e.g., a smartphone) that may not necessarily be important to the user, repetitive, etc. These infrequently viewed images can be grouped together and the computing device can prompt the user to discard the images. In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure can be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments can be utilized and that process, electrical, and structural changes can be made without departing from the scope of the present disclosure. As used herein, designators such as “N,” “M,” etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designation can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more” (e.g., a number of memory devices) can refer to one or more memory devices, whereas a “plurality of” is intended to refer to more than one of such things. Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled,” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms “data” and “data values” are used interchangeably herein and can have the same meaning, as appropriate to the context. The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures can be identified by the use of similar digits. For example,222can reference element “22” inFIG.2, and a similar element can be referenced as322inFIG.3. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. 
In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense. FIG.1is a functional block diagram in the form of a computing system including an apparatus100having a display102, a memory device106, and a controller108(e.g., a processor, control circuitry, hardware, firmware, and/or software) in accordance with a number of embodiments of the present disclosure. The memory device106, in some embodiments, can include a non-transitory machine-readable medium (MRM), and/or can be analogous to the memory device792described with respect toFIG.7. The apparatus100can be a computing device, for instance, the display102may be a touchscreen display of a mobile device such as a smartphone. The controller108can be communicatively coupled to the memory device106and/or the display102. As used herein, “communicatively coupled” can include coupled via various wired and/or wireless connections between devices such that data can be transferred in various directions between the devices. The coupling need not be a direct connection, and in some examples, can be an indirect connection. The memory device106can include non-volatile or volatile memory. For example, non-volatile memory can provide persistent data by retaining written data when not powered, and non-volatile memory types can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and Storage Class Memory (SCM) that can include resistance variable memory, such as phase change random access memory (PCRAM), three-dimensional cross-point memory (e.g., 3D XPoint™), resistive random access memory (RRAM), ferroelectric random access memory (FeRAM), magnetoresistive random access memory (MRAM), and programmable conductive memory, among other types of memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random-access memory (DRAM), and static random access memory (SRAM), among others. In other embodiments, as illustrated inFIG.1, the memory device106can include one or more memory media types.FIG.1illustrates a non-limiting example of multiple memory media types in the form of a DRAM112including control circuitry113, SCM114including control circuitry115, and a NAND116including control circuitry117. While three memory media types (e.g., DRAM112, SCM114, and NAND116) are illustrated, embodiments are not so limited, however, and there can be more or less than three memory media types. Further, the types of memory media are not limited to the three specifically illustrated (e.g., DRAM112, SCM114, and/or NAND116) inFIG.1, other types of volatile and/or non-volatile memory media types are contemplated. In a number of embodiments, the controller108, the memory media DRAM112, SCM,114, and/or NAND116, can be physically located on a single die or within a single package, (e.g., a managed memory application). Also, in a number of embodiments, a plurality of memory media (e.g., DRAM112, SCM,114, and NAND116), can be included on a single memory device. A computing device can include an image sensor (e.g., a camera)103. The image sensor103can generate images (video, text, etc.) which can be visible on the display102. Additionally, the image sensor103can capture and/or receive input from objects, people, items, etc. 
and transmit that input to the controller108to be analyzed. In some examples, the images sensor103is a camera and can provide input to the controller108as facial recognition input. For example, the display102can be a portion of a mobile device including a camera (e.g., a smartphone). The images generated by an image sensor103can be written (e.g., stored) on the memory device106. The controller108can present the images on the display102responsive to a selection made by a user on the display102. For instance, a user may select via a menu (e.g., a “settings” menu, a “images” or “pictures” menu, etc.) displayed on the display102to show images available to view on the display102. Such a menu may give the user options as to what images the user wants to view and/or the user can manually select and customize images into groups. For example, a user may make a group of images that the user selects as a “favorite image” and other “favorite images” can be grouped together to create albums and/or folders which can be labeled as a user desires. Manually selecting images as a “favorite image” can be tedious, and, as mentioned above, grouping the images without user input (e.g., by geographic location, facial recognition, etc.) can be inaccurate and include repetitive images that are undesired, thus leaving the user to still manually search and select a desired image. Grouping images by assigning them a perceived interest of the user can increase group accuracy and efficiency of the computing device and/or memory device106. Perceived interest can be assigned to images by determining if an image is visible on a display when the position of the display changes and/or by receiving input (e.g., facial recognition input) from the image sensor103. A change in position of the display102includes the display102changing from an initial position to a subsequent position. An example of a change in position of a display102can include turning the display102from the perspective of a user viewing the display102a quantity of degrees such that it is viewable by another person, animal, and/or device. Selecting an image to be visible on a display102, and changing the position of the display while the image is visible on the display102can be indicative that the image is perceived as interesting by the user. In other words, a user viewing an image on a display102, and turning the display102to show another person can be indicative that the user has a preference for the image. In a non-limiting embodiment, the controller108can be configured to assign, by the controller108coupled to a memory device106, a perceived interest to an image of a plurality of images, where the perceived interest is assigned based in part on a change in position of a display102coupled to the memory device106while the image is viewable on the display. For instance, a user may be viewing an image on the display102of a smartphone and turn the smartphone such that the display102is viewable to a different person. Responsive to the change in position, the controller108can assign a perceived interest to the image viewable on the display102. The controller108can be configured to select the image from an initial viewing location on the display102responsive to the assigned perceived interest; and transfer the image to a different viewing location, where the initial viewing location and the different viewing location are visible on the display102. 
In this example, the controller108can copy the image from the initial viewing location (e.g., a default album or folder) and transfer the copy to a different viewing location (e.g., for images that have been detected to include a perceived interest). In some examples, the controller108can be configured to include a threshold quantity of changes in position of the display102while an image is visible on the display102. A threshold determined by a user can prevent accidental assignments of perceived interest to an image due to accidental changes in position of the display102. For example, a user can use setting on the computing device to set a threshold at three or more changes in display102position before assigning a perceived interest corresponding to a desired preference to an image and/or prompting a computing device (e.g., a user) to confirm a perceived interest and/or a new viewing location on the display102. While the number three is used herein, the threshold quantity can be more or less than three. Using this method, a user would be required to change the position of a display while an image is visible on the display three or more times before the computing device assigns a perceived interest corresponding to a desired preference. In some examples, a computing device can assign a perceived interest by receiving input into the image sensor103. For example, the apparatus100can be a computing device and include a memory device106coupled to the display102via the controller108. An image sensor103can be coupled to the display102either directly or indirectly via the controller108. To group images to viewing locations on the display based on a perceived interest, the controller108can be configured to select an image from a plurality of images to be viewable on the display102. The image can be selected from an initial viewing location (e.g., a default album, and/or folder) on the display102, generated by the image sensor103, received (from another computing device via text or email), and/or otherwise obtained by the computing device. While the image is visible on the display, the user may desire to show the image to another person. The controller108can be configured to receive an input from the image sensor103when the image of the plurality of images is visible on the display102. The display102may experience a change in position and/or the display102may be in view of the other person (e.g., standing near the user). The input received by the controller108from the image sensor103may be facial recognition input related to the person viewing the image. The controller108can assign a perceived interest to the image based at least in part on the received input from the image sensor103. The controller108may transfer the image from an initial viewing location on the display to a different viewing location on the display responsive to the assigned perceived interest. In a non-limiting example, the computing device can be a smartphone, the image sensor103can be a camera of the smartphone and a user can configure the settings of the camera to capture facial recognition input when the camera is positioned such that it may collect facial data of a person, animal, etc. In this example, the camera can capture the facial recognition data while an image is visible on the display102. 
The controller108coupled to the camera (e.g., the image sensor103) can generate a new viewing location based on the facial recognition input and prompt the smartphone (e.g., the user of the smartphone) for confirmation of the new viewing location. The controller108can be configured to group together subsequent images with a common assigned perceived interest corresponding to the facial recognition input. For instance, if a user selects an image on their smartphone, the user may show the image to their Mother (e.g., or any other person), the camera of the smartphone may receive facial recognition data from Mother, and the controller108may prompt the user to generate a new folder (e.g., a new viewing location) labeled as “Mother”. Subsequently, the user may select a different image to show their Mother, and the controller108can add the different picture to the “Mother” folder when the facial data is collected. This can be accomplished without user input. In other examples, the controller108can determine that one or more images has a perceived interest that corresponds to a dislike, indifference, or an undesired preference by the user. The controller108can assign a perceived interest corresponding to an undesired preference. This can be responsive to an image on the display102not changing position while the image is visible on the display102. Additionally, other images that have not been selected by the user and/or been viewable on the display102while the display has changed position can be grouped together as having a perceived interest corresponding to an undesired preference. The grouped images having an undesired preference can be transferred to a folder to be reviewed by the user to be discarded. For instance, the controller108can transfer the image to a particular viewing location on the display102. In some examples, the controller108can write image data corresponding to the images in viewing locations on the display102to a plurality of memory types. In an example embodiment, the controller108can be coupled to a plurality of memory media types (e.g., DRAM112, SCM114, and/or NAND116), where the images included in an initial viewing location can be written in a first memory media type (e.g., DRAM112) and images included in the different viewing location can be written in a second memory media type (e.g., NAND116). For example, the different viewing location on the display102may include images that are written to a memory media type that is more secure and/or more suitable for long term storage on the computing device. As such, the viewing locations written to the respective memory media types (e.g., DRAM112, SCM114, and/or NAND116) can include other images that have been selected by the controller108based on a respective perceived interest. FIG.2is a diagram representing an example of a computing device210including a display202with visible images218in accordance with a number of embodiments of the present disclosure.FIG.2illustrates a computing device210such as a mobile device including an image sensor203which is analogous to the image sensor103ofFIG.1, and a display202which is analogous to the display102ofFIG.1. The computing device210further includes a memory device206, which is analogous to the memory device106ofFIG.1. The memory device206can be coupled to a controller208which can be analogous to the controller108ofFIG.1.FIG.2illustrates the display202as including a plurality of images218-1,218-2,218-3,218-4, and218-N which can be referred to herein as images218. 
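A minimal sketch of how the grouping behavior described above might look in code: images shown to a recognized person are added to a viewing location named after that person (e.g., "Mother"), optionally after a confirmation prompt, and each viewing location is mapped to a memory media type. The names, the prompt mechanism, and the media-type mapping are assumptions made for illustration.

```python
from typing import Callable, Dict, List


class ViewingLocationManager:
    """Hypothetical sketch; not an API defined by the disclosure."""

    def __init__(self, confirm: Callable[[str], bool]):
        self.confirm = confirm  # e.g., a user prompt; may be skipped per settings
        self.locations: Dict[str, List[str]] = {"initial": [], "preferred": [], "discard": []}
        # Illustrative mapping of viewing locations to memory media types
        # (e.g., longer-term NAND for preferred images, DRAM otherwise).
        self.media_type: Dict[str, str] = {"initial": "DRAM", "preferred": "NAND", "discard": "DRAM"}

    def on_face_recognized(self, image_id: str, face_label: str) -> None:
        if face_label not in self.locations:
            # Create a new viewing location for the recognized person,
            # either automatically or after user confirmation.
            if not self.confirm(f"Create viewing location '{face_label}'?"):
                return
            self.locations[face_label] = []
            self.media_type[face_label] = "NAND"
        if image_id not in self.locations[face_label]:
            self.locations[face_label].append(image_id)


manager = ViewingLocationManager(confirm=lambda prompt: True)  # auto-confirm for the example
manager.on_face_recognized("IMG_0002", "Mother")
manager.on_face_recognized("IMG_0003", "Mother")
print(manager.locations["Mother"], manager.media_type["Mother"])  # ['IMG_0002', 'IMG_0003'] NAND
```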
FIG. 2 illustrates a non-limiting example of a particular image 218-3 denoted with a star and other images 218-1, 218-2, 218-4, and 218-N which are denoted with a circle. The other squares illustrated in the display 202 are analogous to images 218 but are unmarked here so as not to obscure examples of the disclosure. The display 202 includes a plurality of images 218. In some examples, the plurality of images 218 may be included in an initial viewing location on the display 202 and presented in chronological order. Said another way, the plurality of images 218 can be the contents of an initial viewing location. For example, the plurality of images 218 can be images that are presented to a user in the order in which they have been generated by an image sensor 203 (e.g., a camera) and/or received, transmitted, or otherwise obtained by the computing device 210. A user can use an appendage (e.g., a finger) or a device (e.g., a stylus, a digital pen, etc.) to select one or more images 218-1, 218-2, 218-3, 218-4, 218-N from the plurality of images 218. The selection of a particular image 218-3 rather than other images 218-1, 218-2, 218-4, and/or 218-N can indicate a perceived interest corresponding to a desired preference of the user. The controller 208 can use multiple methods to assign a perceived interest to an image 218. For example, the controller 208 can assign a perceived interest based on a selection of a particular image 218-3 such that the image 218-3 is visible on the display 202 while the display 202 changes position, as will be described in connection with FIGS. 3A-3B. When the particular image 218-3 is selected, it can be enlarged such that it encompasses all or a majority of the display 202. A user can configure the computing device 210 (e.g., the controller 208) to assign a perceived interest corresponding to a desired preference of the user to an image (e.g., image 218-3) when it is selected from a group of images 218 to be visible on the display while the display changes position three or more times. While three or more is used as an example herein, the quantity of times that the display is required to change position can be greater or less than three. The computing device 210 can store metadata, including a metadata value, associated with the image that can indicate the perceived interest of the image, the location of the image on the display, and a grouping of the image, among other information that can be included in the metadata associated with an image. Eliminating the requirement for a user to manually denote an image as a “favorite image” can reduce clutter and frustration in the user experience of the computing device 210. In another non-limiting example, the controller 208 can assign a perceived interest to one or more images 218 when an image 218 is shown to another person and the image sensor 203 can collect facial recognition data. For example, the controller 208 can assign a perceived interest corresponding to a desired preference to a particular image 218-3 when the image is selected and positioned on the display 202 such that another person (and/or an animal or device) can view the image. In some examples, the computing device 210 and/or the controller 208 can be configured (e.g., through settings, etc.) to generate a new viewing location corresponding to the facial recognition data collected, without user input. In other examples, the computing device 210 and/or the controller 208 can be configured to prompt the user for confirmation prior to generating a new viewing location corresponding to the facial recognition data collected. 
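One way to express the configurable threshold and the associated metadata value mentioned above is sketched below: the perceived interest is only assigned after the display has changed position a set number of times (three in the example) while the image is visible, and the result is recorded as metadata. The threshold default, field names, and dictionary-based metadata store are illustrative assumptions.

```python
from collections import defaultdict
from typing import Dict


class InterestTracker:
    """Hypothetical sketch of threshold-based assignment with metadata."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self._position_changes = defaultdict(int)
        # Stands in for metadata written to the memory device for each image.
        self.metadata: Dict[str, Dict] = {}

    def record_position_change(self, visible_image_id: str) -> None:
        self._position_changes[visible_image_id] += 1
        if self._position_changes[visible_image_id] >= self.threshold:
            # Record the perceived interest and grouping as a metadata value.
            self.metadata[visible_image_id] = {
                "perceived_interest": "desired",
                "viewing_location": "preferred",
                "position_changes": self._position_changes[visible_image_id],
            }


tracker = InterestTracker(threshold=3)
for _ in range(3):
    tracker.record_position_change("IMG_0001")
print(tracker.metadata["IMG_0001"]["perceived_interest"])  # desired
```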
In a non-limiting example, a user may be positioned in front of a computing device210such that the display202is visible to the user while the particular image218-3is visible on the display202. A different person may position themselves next to and/or behind the user such that the person can also view the display202and the particular image218-3. The controller208may assign perceived interest corresponding to a desired preference of the user to the image218-3when the image sensor203detect the other person is positioned to view the particular image218-3. The image sensor203may collect facial recognition data from the person and the controller208may generate a new viewing location corresponding to the person to transfer the image218-3. In another non-limiting example, a user may be positioned in front of a computing device210such that the display202is visible to the user while the particular image218-3is visible on the display202. The user may change the position of the display202such that a different person can also view the display202and the particular image218-3. The controller208may assign perceived interest corresponding to a desired preference of the user to the image218-3when the image sensor203detect the other person is positioned to view the particular image218-3and/or when the display202is changed from an initial position to a subsequent position. The image sensor203may collect facial recognition data from the person and the controller208may generate a new viewing location on the display202corresponding to the person. In some examples, a perceived interest can be assigned to images218that have not been selected, viewed by another person, and/or made visible on the display202while the position of the display202changes from an initial position to a subsequent position. For example, assume the images218-1,218-2,218-4, and218-N have not been selected, viewed by another person, and/or made visible on the display202while the position of the display202changes from an initial position to a subsequent position. In this example, the controller208may assign a perceived interest that that corresponds to an image that is undesired by the user. In this example, the images with a perceived interest that reflects a disinterest by the user can be sorted and transferred to a different viewing location on the display202. In some examples, this viewing location may be used to prompt the user to discard these images to ease clutter and memory space on the memory device206. In some embodiments, the controller208can change a perceived interest for an image218. For example, an image218-1can be assigned a perceived interest that corresponds to an undesired preference to a user of the computing device210. Subsequently, responsive to the image218-1being selected, viewed by another person, and/or made visible on the display202while the position of the display202changes from an initial position to a subsequent position the controller208can assign a new perceived interest that corresponds to a desired preference by the user. As will be discussed in connection withFIGS.4A and4B, the controller208can sort the plurality of images218by grouping the plurality of images218based on the perceived interest. This can be done without user input (e.g., upon setting up the computing device210the controller208can be configured with user preferences) or a user may select a prompt asking if sorting and/or grouping is a preference. 
For instance, upon loading the application, the controller208determines that the user may want to include a perceived interest in particular images218and may prompt the user for affirmation. Alternatively, the controller208can determine that the user may want to include a perceived interest in images that have not been selected, viewed by another person, and/or made visible on the display202while the position of the display202changes from an initial position to a subsequent position. FIGS.3A-3Bare diagrams representing an example display302including visible image318in accordance with a number of embodiments of the present disclosure.FIGS.3A-3Beach illustrate a display302which is analogous to the displays102and202ofFIGS.1and2. The display302may be part of a computing device (e.g., the computing device210ofFIG.2) and be coupled to a controller (e.g., the controller208ofFIG.2) and a memory device (e.g., the memory device206ofFIG.2).FIGS.3A-3Beach include an image318which can be analogous to the images218ofFIG.2.FIGS.3A-3Balso illustrate a person321. While theFIGS.3A-3Bare illustrated as including a single person, there may be more than one person. Further, while the depiction ofFIGS.3A-3Binclude an illustration of a human, any animal or device could be used. FIG.3Aillustrates the display302including the visible image318. In the illustration inFIG.3A, the computing device310is in an initial position where the user (not illustrated) may be facing the display318such that the image318is visible to the user. As illustrated inFIG.3A, the person321is not in a position to view the image318.FIG.3Billustrates an example of the display302coupled to the computing device310in a subsequent position. In this example, the display302has changed position from the initial position illustrated inFIG.3Ato a subsequent position illustrated byFIG.3B. While the subsequent position of the person321ofFIGS.3A and3Bis to the right of the computing device310, the person321could be oriented to the left of the computing device310, in front of the computing device310, and/or anywhere in between. In the subsequent position illustrated byFIG.3B, the image318is visible to the person321. The controller of the computing device310can assign a perceived interest corresponding to a desired preference of the image318based on the image318being visible on the display302when the position of the display302changes from the initial position (ofFIG.3A) to the subsequent position (ofFIG.3B). In another example, the controller of the computing device310can receive an input from an image sensor303coupled to the controller when the display302is in the subsequent position; and transfer the image318to a new viewing location based on the input received from the image sensor. Said differently, the subsequent position changes the angle of the display such that the person321can view the image318. The image sensor303can collect input (e.g., facial recognition input) and generate a new viewing location to transfer the image318(and/or a copy of the image318). In this example, the new viewing location may correspond to the person321, and other subsequent images that are shown to the person321can be transferred to the new viewing location on the display302. This can be done without user input. 
For instance, upon receiving a subsequent image, the controller can determine that the facial recognition input is of the person 321 and transfer the subsequent image to the new viewing location without user prompts, or a user may select a prompt asking if this is a preference. For instance, upon receiving the subsequent image, the controller can determine that the user may want to transfer the image to the new viewing location based on the facial recognition input corresponding to the person 321 and may prompt the user for affirmation. In some examples, the controller of the computing device 310 may refrain from transferring the image 318 to a new viewing location. In another non-limiting example, the controller of the computing device 310 can receive an input from an image sensor 303 coupled to the controller when the display 302 is in the subsequent position, and refrain from transferring the image 318 to a new viewing location based on the input received from the image sensor. For instance, the image sensor 303 may collect input (e.g., facial recognition input) and the controller can generate a new viewing location based on the input received from the image sensor 303 to transfer the image 318 (and/or a copy of the image 318). The controller may prompt the user to confirm creating a new viewing location. The person 321 may be unknown to (or infrequently encountered by) the user, and the user may not wish to dedicate a new viewing location to the unknown person 321. In the above example, where the person 321 is unknown to the user, the controller may assign a perceived interest corresponding to an undesired preference to the image 318. In this example, the controller may further transmit a prompt to the computing device 310 and/or the user to discard the image 318 based on the perceived interest being that of an undesired preference. FIGS. 4A-4B are functional diagrams representing computing devices 410 for image location on a display 402 based on a perceived interest and display position in accordance with a number of embodiments of the present disclosure. FIGS. 4A and 4B each illustrate a display 402, which is analogous to the displays 102, 202, and 302 of FIGS. 1, 2, and 3A-3B, and images 418-1 to 418-N, which are analogous to images 218 and 318 of FIGS. 2 and 3 and can be referred to herein as images 418. The display 402 may be part of a computing device 410, analogous to the computing device 210 of FIG. 2, and coupled to a controller 408, which can be analogous to the controllers 108 and 208 of FIGS. 1 and 2, and a memory device 406, which can be analogous to the memory devices 106 and 206 of FIGS. 1 and 2. FIG. 4A illustrates images 418-1 to 418-N, which are included in the initial viewing location 424-1. FIG. 4B illustrates image viewing locations visible on the display 402. The initial viewing location 424-1 can include each of the plurality of images 418. The images 418 can be viewable in the initial viewing location 424-1 in chronological order, and/or it can be the default image viewing location for images generated, received, or otherwise obtained by the computing device 410. Another viewing location can be the preferred image viewing location 424-2; the images viewable here can include images that have been assigned (by the controller 408) a perceived interest corresponding to a desired preference of the user. The discard viewing location 424-3 can include images that have been assigned (by the controller 408) a perceived interest corresponding to an undesired preference of the user. 
The discard viewing location 424-3 can include images that a user may not want to keep, as they have not been viewed frequently or shown to another person. The controller 408 can prompt a user to review the images included in the discard viewing location 424-3 and discard the images from the computing device 410. Yet another viewing location can include images that correspond to facial recognition input collected by the image sensor 403. The facial recognition viewing location 424-M can include images that have been viewed by a person (e.g., the person 321 of FIG. 3). The images 418 may be grouped and transferred to a viewing location on the display 402 based at least in part on the perceived interest assigned by the controller 408. As described herein, transferring an image 418 can include generating a copy of the image 418 and transferring the copy to a different viewing location 424. In other words, the controller 408 can be further configured to generate a copy of an image 418 and transfer the copy of the image 418 from the initial viewing location 424-1 to the different viewing location 424-2, 424-3, 424-M. As illustrated in FIG. 4A, the controller 408 can be configured to assign a perceived interest to each of the plurality of images 418. For instance, the controller 408 can be further configured to determine an assigned perceived interest for each of the plurality of images 418 and, as illustrated in FIG. 4A, sort the plurality of images 418 into a plurality of groups based on the assigned perceived interest. For example, the images denoted with stars and triangles, 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N, can be included in a first group. The images denoted with circles, 418-2, 418-4, 418-6, and 418-7, can be included in a second group. In the above non-limiting example, each image 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N included in the first group of the plurality of groups has an assigned perceived interest corresponding to a desired preference. The images 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N may have been assigned the perceived interest corresponding to the desired preference because they were shown to another person, the image(s) were viewable on the display 402 when the display 402 changed position from an initial position to a subsequent position, or a combination thereof. Further, in the above non-limiting example, each image 418-2, 418-4, 418-6, and 418-7 included in the second group of the plurality of groups has an assigned perceived interest corresponding to an undesired preference. The images 418-2, 418-4, 418-6, and 418-7 may have been assigned the perceived interest corresponding to the undesired preference because they were not shown to another person, the image(s) were not viewable on the display 402 when the display 402 changed position from an initial position to a subsequent position, or a combination thereof. The controller 408 may be further configured to transmit a prompt to the computing device 410 to discard the second group of images. As mentioned, the controller 408 can group and sort the images 418 based on a perceived interest. The controller 408 can further transfer the images to viewing locations 424 based on the perceived interest assigned at box 422 of FIG. 4A. In some examples, the images 418 can exist in multiple viewing locations 424. For example, all of the images 418-1 to 418-N are viewable in the initial viewing location 424-1. 
The controller 408 may assign (at 422) images 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N the perceived interest corresponding to the desired preference and transfer the images to the preferred viewing location 424-2 such that they are now viewable in the initial viewing location 424-1 and the preferred viewing location 424-2. Further, the images denoted with a triangle, 418-9 and 418-N, may correspond to input from the image sensor corresponding to a person who has viewed the images 418-9 and 418-N and be transferred to the facial recognition viewing location 424-M. In this example, the images denoted with a triangle, 418-9 and 418-N, may be viewable in the initial viewing location 424-1, the preferred viewing location 424-2, and the facial recognition viewing location 424-M. The images 418-2, 418-4, 418-6, and 418-7 may have been assigned the perceived interest (at 422) corresponding to the undesired preference because they were not shown to another person, the image(s) were not viewable on the display 402 when the display 402 changed position from an initial position to a subsequent position, or a combination thereof. These images may be viewable in the initial viewing location 424-1 and the discard viewing location 424-3 such that a user can review the discard viewing location 424-3 and discard the images as desired. In some examples, discarding an image from any of the plurality of viewing locations 424 can discard the image from the computing device 410. FIG. 5 is a block diagram 539 for an example of image location based on a perceived interest and display position in accordance with a number of embodiments of the present disclosure. FIG. 5 describes a computing device (e.g., the computing device 410 of FIG. 4) which is equipped with a camera to generate images and a controller (e.g., the controller 108 of FIG. 1) to receive, transmit, or otherwise obtain images. At box 540, the computing device can generate (or receive, etc.) an image and the controller can receive the image. The image can be saved to an initial viewing location (e.g., the initial viewing location 424-1 of FIG. 4B). At box 542, the controller can determine a change in position of the display of the mobile device. For example, the controller can determine when the display is in an initial position and a subsequent position, where the change in position of the display includes the display moving from the initial position to the subsequent position. At block 544, the controller can assign a perceived interest to the image. If the image was not visible on the display while the display changed position from the initial position to the subsequent position, the controller may assign a perceived interest that corresponds to an undesired preference. If the image was visible on the display while the display changed position from the initial position to the subsequent position, the controller may assign a perceived interest that corresponds to a desired preference. At box 546, the controller can transfer the image from the initial viewing location (e.g., the initial viewing location 424-1) on the display to a different viewing location (e.g., the preferred viewing location 424-2 or the discard viewing location 424-3 of FIG. 4) on the display. At block 548, the controller may receive facial recognition input from an input sensor (e.g., a camera on the mobile device). The facial recognition input can be from a person to whom the user showed the image when the display changed position from the initial position to the subsequent position while the image was visible on the display. 
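The sequence of blocks 540 through 548 could be sketched as a single function, as below. The viewing-location names, the boolean flag, and the optional face label are assumptions used only to illustrate the flow.

```python
from typing import Dict, List, Optional


def process_image(image_id: str,
                  visible_during_position_change: bool,
                  face_label: Optional[str],
                  locations: Dict[str, List[str]]) -> str:
    """Illustrative walk through blocks 540-548 described above."""
    locations.setdefault("initial", []).append(image_id)         # block 540: receive and save the image
    if visible_during_position_change:                           # blocks 542-544: detect change, assign interest
        interest = "desired"
        locations.setdefault("preferred", []).append(image_id)   # block 546: transfer to preferred location
    else:
        interest = "undesired"
        locations.setdefault("discard", []).append(image_id)     # block 546: transfer to discard location
    if face_label is not None:                                   # block 548: facial recognition input received
        locations.setdefault(face_label, []).append(image_id)
    return interest


viewing_locations: Dict[str, List[str]] = {}
print(process_image("IMG_0004", True, "Mother", viewing_locations))  # desired
print(viewing_locations)
```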
At block 550, the controller may assign a new perceived interest to the image. For example, the controller may assign a new perceived interest and/or refrain from transferring the image at 558 to a viewing location that corresponds to the facial recognition input. In this example, the user may have declined a prompt to generate a viewing location that corresponded to the person. In another example, the controller can transfer the image at 556 to a viewing location that corresponds to the facial recognition input. While a “preferred viewing location,” a “discard viewing location,” and an “initial viewing location” are discussed, additional and/or different viewing locations, such as an “edit viewing location,” a “frequently emailed and/or texted viewing location,” etc., could be used. In a non-limiting example, the mobile device may be configured by the user to include a threshold. The user may have configured settings on the mobile device to set a threshold requiring the change in the display from the initial position (of FIG. 3A) to the subsequent position (of FIG. 3B) to occur three or more times prior to assigning a perceived interest that corresponds to a desired preference to a user of the computing device. In a non-limiting example, the controller can (at 542) determine when the display is in an initial position and a subsequent position, where a change in position of the display includes the display moving from the initial position to the subsequent position, the plurality of viewing locations include a discard viewing location, and a subset of the respective plurality of images (e.g., the images 418 denoted with a circle of FIG. 4) are sorted into the discard viewing location responsive to having been viewable on the display while the display is in the subsequent position less than a threshold quantity of times. In another non-limiting example, the controller can determine (at 542) when the display is in an initial position and a subsequent position, where a change in position of the display includes the display moving from the initial position to the subsequent position, the plurality of viewing locations include a preferred viewing location, and the respective plurality of images sorted into the preferred viewing location have been viewable on the display while the display is in the subsequent position greater than a threshold quantity of times. FIG. 6 is a flow diagram representing an example method 680 for image location based on a perceived interest and display position in accordance with a number of embodiments of the present disclosure. At 682, the method 680 includes assigning, by a processor coupled to a memory device, a perceived interest to an image of a plurality of images, wherein the perceived interest is assigned based in part on a change in position of a display coupled to the memory device while the image is viewable on the display. For example, the change in position of the display includes the display moving from the initial position to the subsequent position. In other examples, a perceived interest can be assigned based on input received by the computing device via an image sensor. At 682, the method 680 includes selecting the image from an initial viewing location on the display responsive to the assigned perceived interest. The perceived interest can correspond to an undesired preference to a user of the computing device, and the image can be transferred from an initial viewing location to a discard viewing location. 
In other examples, the perceived interest can correspond to a desired preference to a user of the computing device and the image can be transferred from an initial viewing location to a preferred viewing location. Said differently, at684, the method680can include transferring the image to a different viewing location, wherein the initial viewing location and the different viewing location are visible on the display. In a number of embodiments, methods according the present disclosure can include identifying data for an image displayed via a user interface, determining a relative position of the user interface or input from a sensor, or both, while the image is displayed on the user interface, and writing, to memory coupled to the user interface, metadata associated with the data for the image based at least in part on the relative position of the user interface or input from the sensor. Embodiments of the present disclosure can also include reading the metadata from the memory, and displaying the image at a location on the user interface or for a duration, or both, based at least in part on a value of the metadata. Embodiments of the present disclosure can also include reading the metadata from the memory, and writing the data for the image to a different address of the memory or an external storage device based at least in on a value of the metadata. Embodiments of the present disclosure can also include reading the metadata from the memory, and modifying the data for the image based at least in part on the value of the metadata. FIG.7is a functional diagram representing a processing resource791in communication with a memory resource792having instructions794,796,798written thereon for image location based on a perceived interest and display position in accordance with a number of embodiments of the present disclosure. The memory device792, in some embodiments, can be analogous to the memory device106described with respect toFIG.1. The processing resource791, in some examples, can be analogous to the controller108describe with respect toFIG.1. A system790can be a server or a computing device (among others) and can include the processing resource791. The system790can further include the memory resource792(e.g., a non-transitory MRM), on which may be stored instructions, such as instructions794,796, and798. Although the following descriptions refer to a processing resource and a memory resource, the descriptions may also apply to a system with multiple processing resources and multiple memory resources. In such examples, the instructions may be distributed (e.g., stored) across multiple memory resources and the instructions may be distributed (e.g., executed by) across multiple processing resources. The memory resource792may be electronic, magnetic, optical, or other physical storage device that stores executable instructions. Thus, the memory resource792may be, for example, a non-transitory MRM comprising Random Access Memory (RAM), an Electrically-Erasable Programmable ROM (EEPROM), a storage drive, an optical disc, and the like. The memory resource792may be disposed within a controller and/or computing device. In this example, the executable instructions794,796, and798can be “installed” on the device. Additionally, and/or alternatively, the memory resource792can be a portable, external or remote storage medium, for example, that allows the system790to download the instructions794,796, and798from the portable/external/remote storage medium. 
In this situation, the executable instructions may be part of an “installation package”. As described herein, the memory resource792can be encoded with executable instructions for image location based on perceived interest. The instructions794, when executed by a processing resource such as the processing resource791, can include instructions to determine, by a controller coupled to a mobile device including a plurality of images, a change in position of a display coupled to the mobile device when one or more images of the plurality of images is viewable on the display. In some examples mentioned herein, the computing device may be configured by the user to include a threshold. In a non-limiting example, the user may have configured settings on the computing device to set a threshold requiring the change in the display from the initial position (ofFIG.3A) to the subsequent position (ofFIG.3B) to occur three or more times prior to assigning a perceived interest that corresponds to a desired preference to a user of the computing device. The instructions796, when executed by a processing resource such as the processing resource791, can include instructions to assign a respective perceived interest to each of the respective plurality of images, wherein each respective perceived interest is based in part on whether the respective plurality of images has been viewable on the display when the position of the display has changed. The plurality of images can be assigned different perceived interest. In some examples, one or more of the images can correspond to a person that has viewed the images (e.g., via facial recognition data received by the computing device). The instructions798, when executed by a processing resource such as the processing resource791, can include instructions to sort the respective plurality of images based on the assigned respective perceived interest into a plurality of viewing locations, wherein the plurality of viewing locations are visible on a display of the mobile device. The plurality of viewing locations can include a discard viewing location, a preferred viewing location, and/or a facial recognition viewing location. Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. 
Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Patent 11862130
DETAILED DESCRIPTION The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are described in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic. Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure). Various terms are used herein. To the extent a term used in a claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing. Embodiments explained herein relate to musical instruments. Particularly, they pertain to a simple and cost-effective portable drum kit. The present disclosure provides an efficient solution to the major problems associated with conventional drum kits. The disclosed drum kit maintains and sustains the same feel and experience of an acoustic drum kit, while being easy and quick to set up and dismantle and convenient to transport, saving time, money, and effort. In an embodiment, the disclosed drum kit solves a major problem faced by professionals and percussionists, which is related to dismantling, packing, transporting, and assembling their drum kits without risk of damaging or misplacing any component. The proposed drum kit eliminates the need for expensive, cumbersome hard cases for each component. Instead, it provides a better and cost-effective alternative as a single preassembled, ready-to-move, adjustable 5-piece acoustic drumming solution or product. Since all components are preassembled with the assistance of fully adjustable clamps, the invention saves a large amount of time during setup, repacking, and transport. Re-designing and modifying a drum kit in a manner which allows it to fit within a box, encasement, or a cubic frame of metal with a combined outer dimension of less than 87 inches (length + height + depth) was challenging. This is done without compromising on the quality of the instruments and sound. A 5-piece drum kit consists of five drums; one, three, or more cymbals; cymbal stands; and several other parts, such as clamps, to join and assemble them. The majority of these vary in size and are oddly shaped. 
Hence, their typical structures have made it difficult for even skilled professionals to confine them in a box/frame without increasing its size and weight to an extent where it becomes too cumbersome to transport. Furthermore, all parts are delicate and carry a risk of touching/fouling/rubbing against each other, which could result in damage. It was challenging to create a drum kit wherein all parts fit in a compact, lightweight carry-box/frame and still do not rattle or get damaged in transportation. In addition, it was challenging to create a drum kit which can be assembled and repacked by the percussionist alone with minimal time and effort. Another challenge was to create a drum kit which takes minimum stage space, which is valuable as it is also used by other performers. Moreover, different percussionists are of different sizes and heights, and it was challenging to arrange the kit so that it does not take away or restrict the fluid movements made by the percussionists. Therefore, all drums, cymbals, pedals, hi-hats etc. remain adjustable for percussionists of all sizes and heights while respecting their varying playing styles and preferences. The present disclosure eliminates the hassle associated with conventional drum kits, as all components fit within a box/frame with wheels which can be effortlessly transported even by a single person. Since there is minimal disassembly and assembly involved, the risk of components being misplaced is close to none. Despite being clamped on to support members, all drums and cymbals are adjustable in order to provide a satisfactory drumming experience to drummers of all sizes and playing styles. Value for money is always what customers want, especially in the music industry. Buying hard-cases for a basic 5-piece acoustic drum kit costs as much as buying the kit itself, while the non-musical components of the disclosed drum kit (such as rod members, clamps, wheels, etc.) add only about 25% to the actual kit cost. Hence, the present invention is a far more effective alternative, costing approximately 75% less than buying all of these components separately, and it also compares favorably in the costs associated with storing, packing and transporting. Thus, the proposed drum kit is economical, affordable, convenient, easy to transport, and easy to handle, assemble and pack, without fear of damage or losing any part, and it saves the time and effort of the percussionist. The proposed drum kit can be handled, used, assembled, packed and transported by one person without the help of any other person. It is easy to carry on flights as a check-in bag and easy to carry on stairs or roads. It can be made in many variations with different materials according to the needs of a percussionist. It is durable, having a shelf life of years. In short, the proposed drum kit can create a revolution in the music industry and the markets for musical instruments, specifically drum kits. 
Referring to FIG. 1A to FIG. 3, where the proposed drum kit assembly is shown in an open configuration, a closed configuration and with an outer casing, the proposed drum kit assembly 100 can include a base 102 and one or more musical instruments, including one or more drums and at least one cymbal, to be fitted on the base 102; one or more vertical columns, such as but not limited to a vertical column 106-1, 106-2, 106-3 and 106-4 (collectively vertical columns 106), fixed to the base 102 to support at least one of the musical instruments at a predefined height above the base 102; one or more rod members 502 (shown in FIGS. 5B to 5E) detachably coupled to the one or more musical instruments; and a plurality of pairs of modular clamps 108 (hereinafter, individually referred to as modular clamp 108, and collectively referred to as modular clamps 108) adapted to enable fitment of at least one of the one or more drums and the at least one cymbal to the vertical columns and/or horizontal columns/beams. In an embodiment, the one or more drums can include a bass drum 110 fitted on the base 102, rack toms 112 including a mid tom and a high tom, a floor tom 114 and a snare drum 116. The rack toms 112 are supported on the bass drum 110 with the assistance of a rod 118 and modular clamps 108. The floor tom 114 and the snare drum 116 are mounted on the respective vertical columns 106 at predefined or desired heights above the base 102 with the assistance of the modular clamps 108. In an embodiment, the at least one cymbal can include a crash cymbal 120, a ride cymbal 122 and a hi-hat cymbal 124. The crash cymbal 120 and the ride cymbal 122 are mounted on the respective vertical columns 106 at the predefined or desired heights above the base 102 with the assistance of the modular clamps 108 and rod-like structures. The hi-hat cymbal 124 is also mounted on the respective vertical column 106 at the desired position with the assistance of the modular clamps 108 and the rod-like structure. In an exemplary embodiment, the drum dimensions can be: bass drum 110—20×9 inches, floor tom 114—14×5 inches, snare drum 116—13×3 inches, high tom—10×5 inches and mid tom—12×5 inches. In an exemplary embodiment, the base 102 can be a wooden base with dimensions of 27.5×29 inches. In another exemplary embodiment, the base 102 can be a wooden base with dimensions of 24.8×34.5 inches. In another embodiment, the base 102 can be made of a polymeric material such as polycarbonate, ABS, polypropylene and the like, or a metallic material such as aluminium. In an embodiment, the base 102 can include a plurality of wheels or casters 126 configured on a lower surface of the base 102 to allow movement of the base 102. Each of the plurality of casters 126 can include a lock arrangement to lock the corresponding caster to prevent movement of the base 102. The locking arrangement for the casters provides stability for a percussionist while playing the drums and movability for transportation. The casters 126 can also add 4 inches of height to the base 102, which not only makes the kit portable but also saves the wooden base from getting wet in case of spillage of any liquid, or during rain if the floor is wet, and thus indirectly protects the drum kit 100. In an embodiment, an extendable base 128 can be provided to support a remote hi-hat pedal 130. 
The extendable base 128 can be pivotally coupled to the base 102 to move between a folded position, in which the extendable base 128 is folded (shown in FIGS. 2A and 2B), and an extended position, in which the extendable base 128 is positioned adjacent to the base 102 (shown in FIGS. 1A to 1C) to support the remote hi-hat pedal 130. In another embodiment, one or more additional extendable bases can be provided to increase the footprint of the base for any reason. In an embodiment, the portable drum kit 100 can include a throne 132 for the percussionist to sit on. In an embodiment, the portable drum kit 100 can include an audio interface 134 for the mics of the drum kit 100. In an embodiment, the portable drum kit 100 can include an outer cover 302 (shown in FIG. 3) to cover the musical instruments. One or more latches 204 are provided with the outer cover 302 to lock the outer cover 302 with the base 102. The outer cover 302 can include a handle 306 fitted to an outer surface of the outer cover 302 for easy handling of the drum kit 100. In an embodiment, the handle 306 can be a telescopic handle fitted on one side of the cover, which can convert the whole drum kit 100 into a trolley bag that is easy to transport and carry without feeling the weight of the drum kit 100. In an embodiment, one or more pockets (not shown) can be provided on the sides of the outer cover 302 to store drum playing sticks, books, spare accessories, cables and other small items. In an embodiment, the outer cover 302 can be made of a polymeric material, a wooden material or any other suitable hard material, with walls/panels forming an encasement on all sides giving it a box structure, depending on the budget and preference of the percussionist. This outer cover 302 can be opened and spread on the floor to give additional flooring, which can be useful when the stage or floor is not comfortable, or it can be separated and kept aside if not required while playing. In an embodiment, the modular clamps 108 can allow movement of at least one of the drums and the cymbals between a collapsed or closed position (shown in FIGS. 2A to 2D), in which the drums and the cymbals are placed closer to each other, and a deployed position (shown in FIGS. 1A to 1F), in which the drums and the cymbals are placed at desired positions to allow a percussionist to use the drum kit 100. In an embodiment, the portable drum kit can include one or more modular brackets (shown in FIGS. 5A to 5E) to enable coupling of the one or more rod members to the one or more musical instruments such as drums. The one or more rod members are generally L-shaped. In an embodiment, the modular clamps 108 and the modular brackets can be designed for a quicker, easier and faster set-up and packing process, as they can be adjusted by turning their wing nuts. Once a clamp is loosened, one can adjust the particular component to the desired position and tighten the clamp using the corresponding wing nut. Unlike other clamps, the modular clamps 108 and the modular brackets offer close to 360 degrees of adjustment, which is better for the musical instruments. In an embodiment, for set-up, the clamps 108 have to be loosened by turning the respective wing nuts in one direction, adjusted to the playing position and then tightened by turning the respective wing nuts in the opposite direction. As the bass drum 110 is stored in a different spot, it has to be pulled out a few inches in order to play. The drum pedal, which is fastened to the base 102 using Velcro, has to be removed and attached to the bass drum 110. The hi-hat stand is collapsed and fastened vertically to the corresponding vertical column 106. 
It has to be removed and set up separately. All cymbals 120, 122 and hi-hats 124 are moved from the resting or storage position inside the bass drum 110 to the designated or desired positions. Remove the hi-hat cymbals 124 and the throne seat top from the storage unit at the back of the bass drum 110. Remove the cymbals from the locked positions. Move the extendable base 128 from the folded position to the deployed position. In an embodiment, the components of the drum kit 100 have to be packed the same way as they were opened up, by adjusting the modular clamps 108 and the brackets. The bass drum 110 has to be moved to its storage spot. The cymbals 120, 122 and 124 should be removed and stored inside the bass drum 110. The hi-hat stand and bass pedal are attached to the base 102 at their respective storage locations. The bass pedal is used to play the bass drum. Move the extendable base 128 to the folded position. In an embodiment, the dimensions of the drum kit 100 can be reduced (made more compact) in such a way that the total length+height+width can be closer to 62 inches; this allows it to be suitable as checked-in luggage on a flight. This may revolutionise drum transportation. In an embodiment, the outer cover 302 can have five panels/walls that are totally removable before playing or can be spread over the floor, creating additional floor covering. These panels/walls can be constructed out of reinforced plastic, fibre glass, carbon fibre, aluminium, wood, polycarbonate, ABS, polypropylene or any other similar material. These panels can be designed to completely encase the drum kit. When the panels are locked into place, they will not only protect all the components of the drum kit 100 but also ensure that no water or dust can get in. The panels can be designed to be dismantled and assembled quickly and managed by one person only. The panels can have inbuilt ribs or embossed elements that give strength and stability. To play the drum kit 100, a person is required to open the outer cover 302, loosen the clamps 108, adjust the drums, cymbals and other components outward and tighten the clamps 108 again. Similarly, to pack the drum kit 100, the clamps 108 need to be loosened and the components need to be repositioned in the collapsed position. It may take about 10-15 minutes for a single person to pack or open the drum kit 100 (and components) for playing, compared to a conventional drum kit, which requires 30-45 (or more) minutes to be assembled, or to be dismantled and packed, by one person. The cover 302 has to be put on to complete the packing process. With the cover on and the wheels unlocked, the drum kit is ready to be transported. In an embodiment, the vertical columns 106, rod members and other rod-like structures of the drum kit 100 itself can be made of strong and light materials such as aluminium, stainless steel, carbon fibre or mild steel, making it much lighter. In an embodiment, the drum kit 100 can be designed as a “plug and play” drum kit: microphones can be pre-fitted at appropriate positions, all of which can then further be connected to a junction box or audio interface 134. Power can be supplied and the audio interface can be directly connected to the venue's sound system. This way, the microphones at the venue can be used by other performers. This may substantially reduce the time wasted during audio set-up and sound checks. Additionally, this will lead to a tidy stage and good cable management. Referring to FIGS. 4A to 4E, a modular clamp (or clamps) of the proposed drum kit is shown. 
The modular clamp 108 can include a double V-clamp 402 to engage with one of the columns 106, one of the rod members 502 (shown in FIG. 5B) detachably coupled to the musical instruments such as drums, or any other rod-like structure; at least one first cylindrical coupling member, such as a first cylindrical coupling member 404 having a female coupling 406; and at least one second cylindrical coupling member, such as a second cylindrical coupling member 408 having a male coupling 410 to engage with the corresponding female coupling of the corresponding modular clamp to define a pair of clamps 108 (shown in FIG. 4B). Either of the first cylindrical coupling member 404 or the second cylindrical coupling member 408 can be engaged to the double V-clamp 402. The modular clamp 108 also includes a first wing nut 412 for connecting the first cylindrical coupling member 404 and the second cylindrical coupling member 408 with the double V-clamp 402. In another embodiment, the modular clamp 108 can include, but is not limited to, the first cylindrical coupling member 404, a pair of second cylindrical coupling members 408 and one double V-clamp 402 (shown in FIG. 4D). In another embodiment, the modular clamp 108 can include any number of the first cylindrical coupling members 404 and the second cylindrical coupling members 408. In an embodiment, the first cylindrical coupling member 404 and the second cylindrical coupling member 408 have serrations 416 on either or both of a lower surface and an upper surface to enable engagement of the first cylindrical coupling member 404, the second cylindrical coupling member 408 and the double V-clamp 402 with each other in different orientations along an axis 418 of the respective modular clamp 108. In an embodiment, a spare male coupling 420 or a spare female coupling 422 of the pair of modular clamps 108 allows coupling with the corresponding female coupling or male coupling of another modular clamp to allow coupling of another musical instrument or other element/accessory, such as but not limited to a mic, a music sheet holder and the like. This also allows extension of the length of a group of modular clamps 108 as per requirement (shown in FIG. 4E). In an embodiment, the male coupling 410 and the female coupling 406 of the corresponding modular clamps 108 are joined with the assistance of lock nuts. The lock nuts of the pair of modular clamps 108 allow angular rotation or adjustment of one modular clamp 108 with respect to the corresponding modular clamp 108 as per requirement. In an embodiment, the double V-clamp 402 of the modular clamp 108 can include an upper member 424, a lower member 426 and a lock wing nut 428 to enable tightening or loosening of the double V-clamp. In an embodiment, the modular clamps 108 allow movement of the drums and the cymbals between the collapsed position, in which the drums and the cymbals are placed closer to each other, and the deployed position, in which the drums and the cymbals are placed at desired positions, by loosening and/or tightening of the first wing nut 412, the lock wing nut 428 and the lock nuts joining the male coupling 410 and the female coupling 406. In an embodiment, the assembly of the clamp may allow the connected musical instrument to move in at least 3 planes (at least 2 and a maximum of 6), and also allow the musical instrument to face normal (perpendicular) to the player. Referring to FIGS. 5A to 5E, a modular bracket of the proposed drum kit 100 is shown. 
The modular brackets 500 can be configured for fitment with a drum to enable coupling of the one or more rod members, such as a rod member 502, to the one or more musical instruments such as drums. The rod members 502 can be generally L-shaped. In another embodiment, the rod members 502 can be of any shape, for example, U-shaped, S-shaped, etc. In an embodiment, each of the modular brackets 500 can include a base bracket member 504 configured for fitment to a drum of the drum kit 100 and a holding member 506 configured for fitment with the base bracket member 504 with the assistance of a second wing nut 508. The holding member 506 can incorporate at least one hole, such as a hole 510, for fitment of the corresponding rod member 502 with the assistance of a third wing nut 512. In an embodiment, each of the one or more modular brackets can include at least one coupling member, such as a coupling member 514 having a male coupling 410, to allow coupling of one or more other elements to the corresponding bracket 500. The coupling member can be configured for fitment between the base bracket 504 and the holding member 506. In an embodiment, the modular brackets 500 can allow rotation of the respective drum along a first axis 516 and a second axis 518 of the respective modular bracket 500 by loosening and/or tightening of the second wing nut 508 and the third wing nut 512 (shown in FIG. 5E). It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C . . . and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc. The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims. While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. 
The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art. Advantages of the Invention The present disclosure provides an efficient solution to the aforementioned problems associated with typical drum kits. The present disclosure provides a portable drum kit which can obviate the limitations of typical drum sets. The present disclosure provides a pre-assembled drum kit that is quick and convenient to open and repack, easily by a single person. Further, with the components being attached to each other, there is no fear of missing any part while travelling, packing or unpacking. The present disclosure provides a simple and cost-effective portable drum kit, a drum kit with all the components of a traditional 5-piece drum kit, without compromising on the quality and utility of a conventional acoustic drum kit but with the convenience of transporting, packing, unpacking and utilising the same in an economical way.
24,655
11862131
DETAILED DESCRIPTION FIGS. 1 and 2A are perspective views in which the present invention is carried out, and FIG. 2B is a cross-sectional configuration diagram for describing the perspective views. As illustrated in FIGS. 1, 2A and 2B, the present invention is configured in a form in which a body 1 is provided in a lower portion, an image monitor 2a is provided at a rear end of an inside of the body 1, and an installation direction of the image monitor 2a is configured in a form in which an erect image, in which the upper and lower, and left and right portions are correct, is viewed. A reflector 3 reflecting at an upward 45° oblique angle is provided at a front end of the image monitor 2a, and a page movement device 4 capable of moving the music sheet image of the image monitor 2a to the left and right sides is configured on a front surface portion of the body 1. Upper and lower supports are configured upward at a rear end portion of the body 1: a lower support 5 is provided in a lower portion of the body 1 and is inserted into the body 1, an upper support 6 of which the height is varied in a vertical direction is provided, and a height fixture 5a is configured at an upper end of the lower support 5 to fix a height of the lower support 5. A semi-transparent mirror 7, which perpendicularly reflects the music sheet image of the image monitor 2a incident on the lower reflector 3 at a 45° oblique angle toward a player 8, is configured at an upper end portion of the upper support 6. The image monitor 2a is configured by a monitor which is combined with a tablet PC or a separate small computer having a computer function embedded therein. The page movement device 4 is connected to a computer part of the image monitor 2a, and is wiredly or wirelessly connected so as to be combined or separated, and used. In the page movement device 4, when a right pedal 4b is pressed, the current page is turned to the next page of the music sheet as illustrated in FIGS. 1 and 2B, and when a left pedal 4a is stepped on, the current page is turned to the previous page of the music sheet. The structure of the semi-transparent mirror 7 is configured at the 45° oblique angle, so a part of an image which is straightly incident is transmitted and a part is reflected on the inclination surface. The transmission rate and the reflection rate can be adjusted according to the use purpose within a range from 80%:20% to 20%:80%, based on 50%:50%. Further, as illustrated in FIG. 2A, a monitor insertion slot 2 is provided at the rear end portion of the body 1, so the player 8 may separately bring and mount, or separately use, a tablet PC with the music sheet embedded therein. Further, when a monitor having the computer embedded therein is used, only a memory such as a USB drive may be brought, mounted and used. In general, in a structure in which there is one reflection surface, when the monitor is inverted and mounted in order to invert the upper and lower portions of the image of the monitor and straighten the upper and lower portions, the left and right portions are reversed. Therefore, one side of the upper and lower portions or the left and right portions of the image should be inverted, so a separate image inversion device is required, the resolution deteriorates with the image inversion device, and a separate power supply device and a separate expensive, complicated structure should be added. 
As illustrated in FIG. 2B, in the present invention, an image of which the upper and lower portions are erect on the image monitor 2a is reflected upward on the reflector 3 provided at the front end of the image monitor 2a at the 45° oblique angle, and thus the upper and lower portions of the image are reversed. While the reversed image is reflected to the player 8 perpendicularly on the reflection surface of the semi-transparent mirror 7 configured at the upper end, the upper and lower portions are erected again, and the player 8 views the reversed image as a music sheet image in which the upper and lower, and left and right portions are erect. Therefore, a separate image inversion device is not required. A center a of the image monitor 2a, a center a of the reflector 3, a center a of the semi-transparent mirror 7, and the eye height of the player 8 are provided on one optical axis (a) line. Further, when a camera 10 is configured at a position straight in line with the eye height of the player 8, i.e., outside a rear surface of the semi-transparent mirror 7, the position of the camera 10 is also configured on the same optical axis (a) line as the eye height of the player 8. In such a configuration, the eye of the player 8 matches the eye of the camera 10 even though the player 8 plays the musical instrument while viewing only the music sheet image provided on the image monitor 2a. Further, with respect to an audience which is at a remote distance, the semi-transparent mirror 7 allows the face of the player to be transparently transmitted, while the music sheet image is viewed only by the player. As illustrated in FIG. 1, when the player 8 plays the musical instrument while viewing the music sheet on the semi-transparent mirror 7, the music sheet is viewed only by the player 8, and the external audience views the semi-transparent mirror 7 with a straight eye, so the audience may view the face and the expression of the player 8, and the eye of the player 8 who views the music sheet matches the eye of the camera 10 as it is, even upon photographing the image. That is, since the eye of the player 8 matches the eye of the camera 10 or the audience even though the player 8 views only the music sheet, an effect such as playing the musical instrument while directly viewing the camera and the audience, and a natural expression, are provided. Therefore, the player 8 may turn the music sheet while pressing the pedal of the lower body 1, naturally viewing the music sheet and simultaneously viewing the audience or the camera 10. That is, the player 8 may play the musical instrument while turning the music sheet pages alone, without a separate page turner. In the present invention, as illustrated in FIG. 2B, when the eye position of the player 8 is matched to the optical axis a, which is the center of the semi-transparent mirror 7, by raising the upper support 6 to the eye height of the player 8 and fixing the height fixture 5a, the eye height of the player 8, the center of the semi-transparent mirror 7, the center of the reflector 3, and the center of the image monitor 2a match as one optical axis a. Therefore, the upper and lower, and left and right portions of the music sheet image of the monitor 2a and a photographed image of the player 8 match each other, and the eye of the player 8 and the eye of the photographed image match each other. The present invention further includes a camera support 9, which may hold the camera 10, on a part of the upper support 6 as illustrated in FIG. 3A. 
In the camera support 9, the camera 10 rotates in a vertical or horizontal direction; the semi-transparent mirror 7 and the camera 10 are provided on the same optical axis (a) line upon photographing, and the camera 10 is rotated in the horizontal or vertical direction when not photographing, so the camera support 9 is provided such that the camera 10 is positioned at one side thereof to prevent the camera from shielding the playing scene of the player 8. In such a configuration, the camera support 9 is provided at one end of the upper support 6, and the camera 10 is provided so as to be positioned on the same optical axis a which matches the central axis a of the semi-transparent mirror 7. With respect to the position of the camera 10, the eye of the player 8 who views the music sheet matches the eye of the camera 10. When the support is configured to rotate the camera support 9 around the upper support 6, an effect is obtained in that the player 8 executes photographing simultaneously with playing the musical instrument, while turning the music sheet pages and viewing the music sheet alone, according to the rotational direction. Further, as illustrated in FIG. 3A, a microphone support 9a is added to the upper support 6 and a microphone is mounted and used as necessary, and when the microphone is not used, the microphone support 9a may be rotated horizontally or vertically. In such a structure, a singer may sing toward the audience or camera while viewing the music sheet without a separate microphone stand, and the action logic is the same as the logic of the present invention described above. Further, as illustrated in FIG. 3B, one end of the upper support 6 is rotated horizontally, a horizontal movement stand 61 by which the length of the upper support 6 is varied is provided, the tablet PC is provided at the front end thereof, and the semi-transparent mirror 7 is provided thereabove. In such a configuration, when the player plays a musical instrument having a large volume, such as a drum or a piano, the player may view the music sheet image at a close position. Even though the player is separated from the music sheet, the player may turn the pages of the music sheet at the close position. As illustrated in FIG. 3C, a music sheet presenting monitor further includes an image inversion device in a configuration such as the tablet PC 2a, and is provided in link with the page movement device 4. That is, the image monitor 2a or the tablet PC 2a is provided facing upward, the image inversion device is additionally provided, and the semi-transparent mirror 7 and the page movement device 4 are combined therewith. As illustrated in FIG. 3D, when the player views the music sheet downward at the time of playing a musical instrument such as a violin, a slope support 51 is additionally provided at a lower end of the lower support 5, a rotation stand 52 is provided below the rear end of the body 1, and the page movement device 4 is additionally provided in front of the rotation stand 52. The page movement device 4 includes left and right switches 4a and 4b which may be pressed with a hand, and pedals are further provided above the left and right switches 4a and 4b to add convenience; it may also be provided as a wireless device using a separate sensor when playing a musical instrument that uses both hands and both feet, such as drum playing. 
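As a minimal sketch of the page movement device 4 described above (hypothetical names; the patent does not specify any implementation or API), the left and right pedal or switch inputs simply step backward and forward through the pages of the music sheet image:

```python
# Minimal sketch of pedal/switch-driven page turning for the page movement
# device 4 (hypothetical names; the patent does not specify an implementation).

class MusicSheetPager:
    """Tracks which page of the music sheet image the monitor displays."""

    def __init__(self, total_pages: int):
        self.total_pages = total_pages
        self.current_page = 1

    def on_input(self, side: str) -> int:
        """Right pedal/switch (4b) -> next page; left pedal/switch (4a) -> previous page."""
        if side == "right" and self.current_page < self.total_pages:
            self.current_page += 1
        elif side == "left" and self.current_page > 1:
            self.current_page -= 1
        return self.current_page


pager = MusicSheetPager(total_pages=12)
pager.on_input("right")   # page 2
pager.on_input("left")    # back to page 1
```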
Therefore, in the present invention, since the music sheet is viewed only by the player and the playing view of the player is transmitted as it is, the playing is photographed while the eye of the player who views the music sheet matches the eye of the camera without an obstacle, the audience may view the playing view of the player without the eye obstacle of a conventional music stand, and the player may play the musical instrument while turning the music sheet pages without the assistance of a separate page turner.
10,635
11862132
DETAILED DESCRIPTION The device according to one embodiment is now described with reference toFIGS.1to10. Frame FIGS.1to4represent a musical device according to embodiments of the invention. Device1comprises a frame2comprising a plurality of receptacles11for receiving three-dimensional parts10. The frame may comprise a one-piece structure. The frame comprises receptacles11for receiving three-dimensional parts10and an audio system5,51. Inserting two parts10into two receptacles11triggers the sound diffusion of two music tracks, selected based on the orientation of the parts in their receptacle, by the audio system5,51. Music tracks are soundly diffused simultaneously and synchronously. In an example illustrated inFIGS.1to4, frame2substantially has a shape of revolution. The frame can stand on a plurality of legs20. All receptacles11are arranged on the upper surface21of the frame2, preferably angularly equidistributed about a central axis. Sound Output Frame2further comprises an audio system for producing sound from an electrical signal. The audio system may comprise an amplifier. The audio system also includes a loudspeaker or speaker system5. Preferably, the loudspeaker or speaker system5is arranged in the center of the frame2. In one example shown inFIGS.1to3B, the frame2comprises a central cavity comprising a speaker system5which can be protected by a speaker grid. Receptacles11are preferably arranged around the speaker grid. In one complementary or alternative embodiment, the audio system includes an audio output interface51. Audio output interface51allows an offset speaker to be connected to the frame. The audio output interface51consists of, for example, an electrical socket or a jack socket. The audio output interface51enables, for example, the musical device1to be connected to an offset speaker or to a headset or headphones. In another embodiment, the audio output interface51is an interface for connection to a wireless audio device (speaker, headset, headphones), for example via a Bluetooth connection. Receptacles Receptacles11are designed to receive three-dimensional parts10described later in the description. Receptacles11have a receiving cavity for cooperating with a three-dimensional part10by inserting said part10into said cavity. In the first embodiment illustrated inFIG.3AandFIG.3B, the receptacles11can be provided by a recess in the frame2. In a second alternative embodiment, illustrated inFIGS.1,2and6A to7B, receptacles11are removable relative to frame2. Frame2comprises recesses4. Each recess4is designed to receive one receptacle11. Each receptacle11can be removably integrated into the recess4. Recess4may comprise means for cooperating with receptacle11. The cooperation means make it possible to keep the receptacle11inside recess4. Each receptacle11has a shape for cooperating with a face of at least one three-dimensional part10. Receptacle11includes a bottom12for receiving the three-dimensional part. As illustrated inFIGS.6A through6B, the bottom12of receptacle11is to receive a three-dimensional part10or to cooperate with a face of at least one three-dimensional part10. One advantage of the removable receptacles11in a recess4of the frame2is that the shape of the bottom of the receptacles11can be easily adapted. For example, a device1for children may include shapes of the receptacle11comprising star, square, crescent moon shapes. On the contrary, a device1for adults may comprise more neutral shapes. 
The receptacles11inserting into the recesses4of the frame2therefore allow the musical device1to be modulated. In both cases, the bottom12of receptacle11has a shape cooperating with the shape of one face of the three-dimensional part10. Receptacles11may also have a protrusion to the frame or be aligned with the upper surface21of the frame2. Three-Dimensional Parts Shape of the Three-Dimensional Part Three-dimensional parts10are designed to cooperate with at least one receptacle11. The three-dimensional parts10comprise a contact face14and an external face15opposite to the contact face14. Contact face14is designed to cooperate with bottom12of a receptacle11. The contact face14may comprise a shape substantially similar to the shape of the bottom12of a receptacle designed to cooperate with said part10. Preferably, the contact face14comprises dimensions similar to or slightly smaller than the dimensions of the bottom12of a receptacle11, thus allowing insertion and cooperation of the contact face14with the bottom of the receptacle12. In examples illustrated inFIGS.5A through5E, the contact face of the three-dimensional part may comprise a star shape (FIG.5A), a square shape (FIG.5B), a circular shape (FIG.5C), a cross shape (FIG.5D), or a crescent moon shape (FIG.5E). Each three-dimensional part10may be associated with a receptacle11of frame2, whose shape of the bottom12of receptacle11cooperates with the shape of the contact face14of said three-dimensional part10. In another example, all parts10and all receptacles11are identical and each part10can cooperate with any receptacle11. Swapping Orientation of the Part in the Receptacle The shapes of the contact face14of a three-dimensional part10and the shape of the bottom12of receptacle11associated with said three-dimensional part10cooperate by insertion. The three-dimensional part10can cooperate by inserting into receptacle11in several different orientations. Preferably, part10can swap orientation in the receptacle by rotating part10along an axis16substantially perpendicular to the contact surface14of part10. The axis of rotation16described above should be understood as an axis of rotation between two orientations of the part in the receptacle, even if this part has to be removed from the receptacle and then reinserted to swap orientation as described hereinbefore. The axis of rotation16may be a projection of the geometric center of the contact face into a plane perpendicular to the plane of the contact face. In one embodiment, the shapes of receptacle11and three-dimensional part10associated therewith cooperate to lock rotational movement of said part10into receptacle11. For example, if the bottom of receptacle12and contact surface14of part10are star, cross, or square shaped, part10can only swap orientation by removing part10and reinserting it into the receptacle in a different orientation. Three-dimensional parts10preferably comprise a marker13. Marker13is preferably a visual marker arranged on a zone on the external face15of the three-dimensional part10. Marker13allows the user to view orientation of part10inserted into receptacle11. The visual marker13is preferably arranged in a lateral zone of the external face15, that is in a zone that does not comprise the axis of rotation16. Marker13therefore advantageously enables the three-dimensional part orientation10in receptacle11to be viewed. In a first example illustrated inFIGS.9A through9D, part10and receptacle11are cross-shaped. 
Marker13is arranged on the external face of part10and allows its orientation to be viewed. Such a cross shape does not allow the part10to rotate in its receptacle11. Indeed, contact between the radial edges17of part10and receptacle11locks rotation of part10. Part10should therefore be removed from receptacle11, rotated by the user along its axis of rotation16, then reinserted in another orientation.FIG.9Billustrates part10ofFIG.9A, whose orientation in the receptacle has been swapped 90° clockwise.FIGS.9C and9Dillustrate a 180° and 270° clockwise orientation swap of the part fromFIG.9A. Marker13enables this orientation swap to be viewed. In another embodiment, the shape of part10and the shape of receptacle11permit rotation of the three-dimensional part10in receptacle11, for example a circular shape or a crescent moon shape in a circular shaped receptacle. Preferably, the height of the three-dimensional part10is greater than the height of the receiving cavity of the receptacle11. The three-dimensional part thus protrudes from the upper surface21of the frame2or from the receptacle11. This protrusion makes it easier to grip a part10inserted into a receptacle11. It will be understood that the height of part10can be defined by the distance between the contact face14and the external face of part15. Indexing Element Musical device1can detect orientations of parts10in receptacles11in order to select a music track according to the orientation detected. As illustrated inFIGS.5A to5E, each three-dimensional part10comprises an indexing element7. Frame2comprises a plurality of detectors8. Each detector is associated with a receptacle11to determine the orientation of the part10through the detection of indexing element7in said receptacle. Indexing element7allows cooperation with at least one detector8of frame2associated with the receptacle to determine orientation of the part in the receptacle. Detector8and indexing element7can be chosen so that detector8can detect the presence of indexing element7and/or can detect a quantity representative of the distance between indexing element7and said detector8. FIG.8is a cross-section view of part of the frame2illustrating a receptacle11/part10pair. In this figure,3detectors8associated with the represented receptacle11are represented. Indexing element7is preferably arranged within the volume of part10. Such an arrangement advantageously enables the indexing element7to be protected from user manipulation. Indeed, parts10are intended to be extensively manipulated by the user. The risk of damage to the indexing element7is advantageously reduced, in particular when the user is a young child. Indexing element7is preferably arranged in a side zone of part10. Especially, the indexing element7is arranged in a zone not comprising the axis of rotation16of part10in receptacle11. Due to its offset position in relation to the axis of rotation16, rotation of the three-dimensional part10advantageously results in to a position change of the indexing element7relative to the frame2(in the same way as the marker illustrated inFIGS.9A to9D). By determining position of the indexing element7in relation to the frame2, it is advantageously possible to determine orientation of the three-dimensional part10in receptacle11. The detector(s)8associated with a receptacle are arranged to determine orientation of the three-dimensional part10in said receptacle11via the detection of the indexing element7. 
In one example, at least 3 detectors are associated with a receptacle11and one of the detectors8is not on a segment connecting the other two detectors8. Such a number and arrangement of the detectors advantageously enables position of the indexing element7to be determined by triangulation. In another example, the number of detectors8associated with each receptacle11is equal to the possible number of orientations of part10in receptacle11or equal to the number of orientations desired to be detected. Preferably, as illustrated inFIG.8, each detector8is arranged in frame2in a zone comprising projection19of the expected position of an indexing element7in relation to the plane of the contact surface14or the bottom12. Thus, each detector8can be arranged below a position that the indexing element7would take in one orientation of part10. Such a large number and arrangement of detectors advantageously enable the orientation of part10in relation to the detector sensing the best response to be determined. In a first preferred embodiment, the indexing element7comprises a permanent magnet and the detectors8comprise Hall effect sensors. Hall effect sensors advantageously enable a magnetic field change, which is representative of the distance to said permanent magnet, to be measured. Thus, one or more Hall sensors can be associated with each receptacle11. In an example illustrated inFIG.8, these Hall effect sensors8are located in frame2under the bottom12of receptacle11. The Hall sensors are distributed so as to estimate a position of the indexing element7according to the magnetic field detected by each Hall sensor. One advantage of a permanent magnet and Hall effect sensors is to provide a detection means that is not accessible on the surface of a part or the frame. Indexing elements7and detectors8are buried and inaccessible to the user. This reduces the risk of breakage, especially when the user is a young child. Another advantage is to provide a “passive” detection system, that is without emission of waves, light or signal between the three-dimensional part10and a detector8. In another alternative embodiment not represented, the plurality of detectors8comprises lasers and optical detection means disposed at different zones of the bottom12and the indexing element7comprises a reflective means to reflect laser light to an optical detection means when the part swaps into a predetermined orientation. In another alternative embodiment not represented, the indexing element includes a signal emitter and the detectors comprise signal receivers to estimate the emitter position based on the signals received. The signal emitted from the indexing element can be generated by the indexing element or it can be a signal reflected by the indexing element. Inserted Face In one embodiment mode, musical device1is designed so that detection of the orientation of the three-dimensional part10is also possible when part10is inserted into the receptacle11by any of the external face15and contact face14. For this, the indexing means7can be arranged in such a way that it is detected by the plurality of detectors8when part10is inserted in both directions. Part10may also comprise a second indexing means similar to the first indexing means to be detected by the plurality of detectors8when part1is inserted into receptacle11from the external face15. One advantage is that part10can be used in both directions to make it easier to use, for example for young children. 
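As a minimal sketch of this detection scheme (hypothetical names and threshold; the patent does not prescribe a specific algorithm), one simple reading of the "one Hall sensor under each expected magnet position" arrangement is to report the orientation whose sensor measures the strongest field, provided that field indicates a part is actually seated:

```python
# Minimal sketch of orientation detection for one receptacle (hypothetical
# names and threshold values; the patent does not prescribe an algorithm).
# One Hall sensor 8 sits under each position the indexing element 7 would
# occupy, so the orientation is taken from the sensor with the strongest
# reading, if any reading indicates that a part is present.
from typing import Optional

PRESENCE_THRESHOLD = 0.005  # tesla, illustrative value

def detect_orientation(readings: dict) -> Optional[str]:
    """readings maps an orientation label ('i', 'j', 'k', 'l') to the field
    measured by the Hall sensor under that expected magnet position.
    Returns the detected orientation, or None if no part is seated."""
    orientation, field = max(readings.items(), key=lambda item: item[1])
    return orientation if field >= PRESENCE_THRESHOLD else None

print(detect_orientation({"i": 0.001, "j": 0.018, "k": 0.002, "l": 0.001}))  # -> "j"
print(detect_orientation({"i": 0.001, "j": 0.001, "k": 0.001, "l": 0.001}))  # -> None
```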
Optionally, the plurality of detectors 8 enables the orientation of part 10 in its receptacle and the direction of part 10 in its receptacle to be determined by detecting an indexing element 7. The advantage of this option is that from a single receptacle 11/part 10 pair, it is possible to double the number of possible orientations. Recognition of the Three-Dimensional Part In one embodiment, musical device 1 allows detection of an identifier of the three-dimensional part 10 inserted into a receptacle 11. The three-dimensional parts 10 thereby comprise an identification element. This identification element makes it possible to recognize the identifier of a three-dimensional part 10 inserted into a receptacle. Determining the identification of a part 10 in a receptacle 11 advantageously enables the number of selectable tracks M from a receptacle 11 to be increased. The track M to be soundly diffused will then be selected based on the receptacle 11, the orientation of part 10, and the part 10 that has been inserted into said receptacle 11. This embodiment is particularly advantageous when the shapes of parts 10 and receptacles 11 are designed so that each part 10 can cooperate with each receptacle 11. In a first example, indexing elements 7 are designed to allow detectors 8 to determine the identifier of the part 10 in the receptacle 11 associated with said detectors 8. If the indexing elements 7 of the parts 10 are permanent magnets, then the permanent magnet of each three-dimensional part 10 comprises a magnetization different from those of the other permanent magnets. Magnetization of a material is characterizable by its magnetic moment volume density and can be measured in amperes per meter. Hall effect sensors associated with receptacle 11 can therefore: determine the position of the permanent magnet as described hereinabove; calculate the magnetization of the permanent magnet from its known position and its measured magnetic field; and determine the identifier of the detected part 10 from the calculated magnetization. In a second example, the three-dimensional part 10 can comprise an RFID tag. Frame 2 then comprises a plurality of RFID sensors, with each RFID sensor associated with a receptacle 11 to read the RFID tag of a part 10 in its receptacle 11. Each RFID tag is associated with an identifier of a part 10. The identifier of the part 10 in a receptacle 11 can then be determined from its RFID tag. Memory The musical device 1 according to the invention may also comprise a memory, including a memory comprising a set P of music tracks as described hereinbelow. In another embodiment illustrated in FIGS. 2, 3A, 3B and FIG. 10, device 1 includes a connector 6 for the connection of a memory including such a set P of tracks. Connector 6 includes, for example, a USB port or an SD card slot. Visual Indicator Musical device 1 can generate information, preferably light information, based on the presence and determined orientation of a part 10 in a receptacle 11. For this purpose, frame 2 may comprise at least one display means. The at least one display means may comprise a plurality of luminescent diodes 9, each luminescent diode 9 being associated with a receptacle 11. In one embodiment illustrated in FIGS. 1 to 3B, the luminescent diodes 9 are arranged on the upper surface 21 of the frame 2. In one alternative embodiment not represented, the luminescent diode 9 can be arranged on the bottom 12 of the receptacle 11. The three-dimensional part 10 then comprises a transparent element (not represented) to scatter light from the luminescent diode 9 through said part 10. 
Alternatively, the display means comprises a screen to display information about the detected orientations of the parts in receptacles 11. The screen can be integrated into frame 2 or be offset. Operating Interface The musical device 1 according to one embodiment of the invention comprises a control interface 3. The control interface 3 preferably comprises a control knob as illustrated in FIGS. 1 to 4. The control interface 3 can be used to switch the musical device 1 on or off, for example. The control interface 3 also makes it possible to control volume and/or select a set of tracks from multiple sets of tracks stored in the memory. The control interface 3 is preferably arranged on a surface of frame 2, highly preferably on a side wall of frame 2. Method One mode of execution of the method for operating a musical device 1 is described hereinbelow, in particular with reference to FIGS. 11 and 12. Providing a Set of Tracks The operation of the device comprises a providing step FOU of a set P of music tracks. The set P of music tracks M is provided to device 1 by said memory or by a memory connected to connector 6. The set P of music tracks M is comprised of a plurality of music tracks M. The music tracks M of a same set P are designed to be soundly diffused simultaneously and synchronously. Preferably, the music tracks M include melodies or sound sequences with a common tempo or a tempo whose value is a multiple of a common tempo. Preferably, each music track M in a set P has the same duration. Such a set P is illustrated in FIG. 11. The set P comprises a plurality of music tracks M distributed into a plurality of sub-sets N. Each sub-set N comprises at least two music tracks M. Preferably, the set P comprises a number of sub-sets N equal to the number of receptacles 11 in frame 2. Preferably, each sub-set N comprises a number of music tracks M less than or equal to the number of orientations that device 1 may determine and/or the number of different orientations that part 10 may take in a receptacle 11 associated with said sub-set N. Preferably, each music track M includes a sound sequence of an instrument. The music tracks M are designed so that when combining tracks M from different sub-sets, a synchronized and coherent music sequence is achieved. Preferably, each music track M includes a recording of an instrument's sound or of a piece of music. Preferably, the set P comprises groups S of music tracks M from a musical symphonic composition, and each instrument or group of instruments is recorded on one of the music tracks M of this group S of music tracks M for selective playing. In a first example, a musical symphonic composition is a piano concerto. One of the music tracks MAi then comprises the piano sound of this composition. One of the tracks MBi includes the accompaniment violin sound of this same composition. One of the tracks MEi comprises the accompaniment guitar sound of this same composition. One of the tracks MCi includes the cello sound, and one of the tracks MDi includes the percussion sound of this same composition. In one embodiment, each sub-set N comprises a music track from that group forming a symphonic musical composition S. In addition, tracks from the same musical composition are preferably each organized in a distinct sub-set N. The set P shown in FIG. 11 is comprised of 20 music tracks (from MAi to MEl) distributed into 5 sub-sets (from N1 to N5) of 4 music tracks M each. Preferably, these 20 tracks M come from 4 musical symphonic compositions S. Each musical symphonic composition S comprises 1 music track M per sub-set. 
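As an illustration of this organization, the sketch below lays out a set P like the one of FIG. 11 as a nested mapping (hypothetical file names; the patent does not prescribe any data format): five sub-sets, one per receptacle A to E, each holding four tracks M drawn from four compatible compositions, with the instrument roles taken from the piano-concerto example above.

```python
# Illustrative layout of a set P (hypothetical file names; the patent does not
# prescribe a data format). One sub-set N per receptacle A..E, one track M per
# orientation i..l; each orientation column corresponds to one of four
# compatible symphonic compositions.

SET_P = {
    "A": {"i": "s1_piano.ogg",  "j": "s2_piano.ogg",  "k": "s3_piano.ogg",  "l": "s4_piano.ogg"},
    "B": {"i": "s1_violin.ogg", "j": "s2_violin.ogg", "k": "s3_violin.ogg", "l": "s4_violin.ogg"},
    "C": {"i": "s1_cello.ogg",  "j": "s2_cello.ogg",  "k": "s3_cello.ogg",  "l": "s4_cello.ogg"},
    "D": {"i": "s1_perc.ogg",   "j": "s2_perc.ogg",   "k": "s3_perc.ogg",   "l": "s4_perc.ogg"},
    "E": {"i": "s1_guitar.ogg", "j": "s2_guitar.ogg", "k": "s3_guitar.ogg", "l": "s4_guitar.ogg"},
}
# 5 sub-sets x 4 tracks = 20 tracks M, all sharing a common tempo and duration.
```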
Preferably, each sub-set N represents a musical function. For example, in the case of five sub-sets N, a first sub-set N1 includes music tracks from a solo instrument, while a second sub-set N2 and a third sub-set N5 may include accompaniment sound tracks. A fourth sub-set N3 includes music tracks featuring an instrument with a bass function (bass, cello, double bass), and a fifth sub-set N4 can include music tracks comprising percussion. The various symphonic compositions are thereby “compatible”. By compatible, it is meant that they have the same tempo or that their tempo is a multiple of a common tempo. Simultaneously playing one track per sub-set, whether or not from the same musical composition, will always lead to the playback of a global symphonic set, the different tracks of which melodically merge with each other. One advantage of the invention is the ability to compose melodic arrangements, especially by selecting one track per musical function (for example: bass, percussion, soloist, first and second accompaniment), each track M being taken from a different symphonic musical composition S. Detecting a Part in a Receptacle A second step comprises detecting DET1 the presence and orientation of a three-dimensional part 10 in a receptacle 11. This step comprises generating and transmitting information about the presence and orientation of a part 10 in each receptacle 11. This step may comprise generating orientation indicators based on the orientation detected by the detectors 8 of each receptacle 11. For each receptacle A, B, C, D, E, the plurality of detectors 8 determines an orientation θA, θB, θC, θD, θE of a part in the receptacle. As previously seen, the orientation of part 10 can be detected by estimating the position of the indexing element 7 of said part 10 in relation to frame 2. In one embodiment, if the presence of indexing element 7 (and thus the three-dimensional part 10) is not detected in receptacle 11, no indicator is generated for this receptacle 11. Alternatively, an indicator of absence of part 10 can be generated in this case. If the presence of indexing element 7 is detected, the orientation of part 10 in receptacle 11 is determined and an orientation indicator θ is generated. Preferably, each orientation indicator θ generated comprises at least: information on the receptacle in which part 10 has been detected (A, B, C, D, E), this information being obtained using the identifier of the detector(s) 8, each detector 8 being associated with a receptacle; and information about the determined orientation of part 10 in the receptacle (i, j, k, l), determined by the detectors via the indexing element as described previously. In the example illustrated in FIG. 11, part 10 in receptacle E can take 4 different orientations: i, j, k or l. The indicator generated therefore corresponds to θEi, θEj, θEk or θEl, respectively. In one embodiment, the orientation indicator θ generated further comprises the identifier of the part 10 inserted into receptacle 11. Selecting Music Tracks The method comprises a step of selecting SEL1 a first music track M1 to be soundly diffused. The first music track M1 to be soundly diffused is selected from the set P of music tracks M. This selection is made automatically once the orientation of the part is detected. The first music track M1 is selected based on the receptacle 11 in which a part 10 has been detected and on the determined orientation of said part 10 in said receptacle 11. The first music track M1 is selected from the information of the generated indicator θ, preferably the information on the receptacle (A, B, C, D, E). 
As illustrated in FIG. 11, the information on the receptacle (A, B, C, D, E) in which the part 10 has been inserted allows the selection of a sub-set N of music tracks. For example, if the receptacle in which a part 10 has been inserted is receptacle E, the first track M1 will be selected from the sub-set N5 associated therewith, comprising the tracks MEi, MEj, MEk, MEl. The first music track M1 is selected according to the orientation of the part 10 in the receptacle (i, j, k, l). The first music track M1 is selected from the information of the generated indicator θ, preferably the information on the orientation (i, j, k, l). As illustrated in FIG. 11, the information on the orientation (i, j, k, l) in which the part 10 has been inserted allows a music track to be selected from those of the selected sub-set. For example, if the receptacle 11 into which a part 10 has been inserted is receptacle E and the orientation is k, the first track M1 will be MEk. In one embodiment, swapping the orientation of part 10 in receptacle 11 thereby allows the selected music track M to be modified. This swapping can also be used to select the symphonic composition S from which the selected music track has been extracted. In one embodiment, the first track M1 can also be selected based on the identifier of the part 10 cooperating with receptacle 11, especially when a receptacle 11 is adapted to receive several parts and the plurality of detectors allows determination of the identifier of the part cooperating with the associated receptacle. In one embodiment not represented, the track(s) M selected can be chosen from the set P of tracks based on the determined identifier of the three-dimensional part 10 detected in a receptacle. In this mode, each sub-set N comprises at least two subdivisions, each comprising music tracks. Each subdivision is associated with one identifier of a three-dimensional part 10. This advantageously increases the number of selectable tracks M from receptacle 11, by selecting a track M based on the receptacle 11, the orientation of the part, and also the determined identifier of the part. Sound Diffusion of the First Track The method comprises a step of sound diffusion DIFF1 of the first selected music track M1. This playback DIFF1 is automatically carried out once the first music track M1 is selected. The first music track M1 can be soundly diffused via the audio system 5 and/or through the audio output interface 51 of the musical device 1. Preferably, the first music track M1 is soundly diffused from the beginning. Preferably, the first music track M1 is soundly diffused automatically when the orientation of a first part 10 in a first receptacle 11 is detected. Second Music Track The method comprises detecting DET2 the presence and orientation of a second three-dimensional part 10 in a second receptacle 11. This step is similar to detecting DET1 the presence and orientation of the first three-dimensional part 10 in the first receptacle 11. The method also comprises selecting SEL2 a second music track M2 to be soundly diffused. This step SEL2 is similar to the step SEL1 of selecting the first music track M1. The second music track M2 to be soundly diffused is selected from a second sub-set N, different from the first sub-set of music tracks. Indeed, each sub-set N of the set of music tracks is associated with only one receptacle. Sound Diffusion of the Second Music Track Once selected, device 1 automatically triggers the sound diffusion DIFF2 of said second music track M2. 
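A minimal sketch of the selection steps SEL1/SEL2 described above (hypothetical names; the set is assumed to be organized as one sub-set N per receptacle and one track M per orientation, as sketched earlier): the receptacle reported by an indicator θ picks the sub-set, and the orientation picks the track within it.

```python
# Minimal sketch of track selection from an orientation indicator
# (hypothetical names; the patent does not prescribe a data format or API).

def select_track(set_p: dict, receptacle: str, orientation: str) -> str:
    """SEL step: the receptacle picks the sub-set N, the orientation picks M."""
    return set_p[receptacle][orientation]

demo_set_p = {"E": {"i": "MEi.ogg", "j": "MEj.ogg", "k": "MEk.ogg", "l": "MEl.ogg"}}
print(select_track(demo_set_p, "E", "k"))  # -> "MEk.ogg", as in the FIG. 11 example
```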
Said second music track M2is soundly diffused simultaneously with the first track. The second music track M2is soundly diffused simultaneously with the first music track M1in a temporally synchronized manner. Preferably, the second music track M2is soundly diffused automatically when the orientation of a second part10in a second receptacle11is detected. When a second music track M2is selected, its sound diffusion is added to the sound diffusion of the first music track in a synchronized manner. The time synchronization with the first music track M1can comprise synchronization of the bar of the second music track M2with the bar of the first music track M1so that the notes of the second music track M2blend into the melody of the first music track M1without modifying the sound diffusion of the music track M1. In one embodiment, the sound diffusion DIFF2of the second music track M2is triggered at the same time location as the time location of the first music track M1upon triggering the sound diffusion of the second track. By time location, it is meant a date or duration since the beginning of the music track. For example, if when the sound of the second music track M2is triggered, the sound diffusion of the first music track M1is at a time location of 35 seconds (that is the time between the beginning of the track and the currently soundly diffused instant is 35 seconds), then the music track M2is soundly diffused starting directly at 35 seconds so that its sound diffusion is synchronized in time with the sound diffusion of the first music track M1. In one embodiment, the method comprises a step of starting all the music tracks M of set P simultaneously. All tracks M are played, for example by a means for playing an audio file. All tracks M of set P are played simultaneously from the same date or location, advantageously guaranteeing synchronization between all tracks M. The volume of each music track M is then individually variable. The volume of each M is variable between at least a first volume and a second volume in which the first volume corresponds to a so-called “mute” volume preventing its playback and in which the second volume generates its sound diffusion. When starting all music tracks, the volume of all music tracks M is adjusted to the first volume. The playback of a selected track includes a step of modifying the track volume from the first volume to the second volume for its sound diffusion. When an orientation of a part is detected in a receptacle, the device automatically modifies the volume of the track associated with that receptacle and said orientation to the second volume for generating the sound diffusion of said track associated. This embodiment advantageously improves the response time between the selection of the music track and its sound diffusion synchronously with the first sound track. In one embodiment, the step of starting all music tracks simultaneously is triggered by selecting SEL1the first music track, by the detection step DET1, or by the step of the sound diffusion of the first track DIFF1. In another embodiment, the step of starting all music tracks is controlled by the control interface3. This advantageously allows the first track to be soundly diffused from the beginning when the device is switched on or when a first part is inserted into a receptacle. Similarly, device1allows the sound playback of a plurality of sound tracks M simultaneously and synchronously by increasing the number of receptacles11receiving a part10. 
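The embodiment described above, in which all tracks M are started together at a mute volume and selecting a track merely raises its volume, can be sketched as follows. This is a rough illustration under that assumption; the Mixer class and its method names are invented for the example and are not an actual audio API.

```python
# Sketch of the mute-based synchronization: every track shares one time base
# because all tracks are started together; selection only toggles volume.
class Mixer:
    MUTE = 0.0      # first volume: prevents the track from being heard
    AUDIBLE = 1.0   # second volume: generates the sound diffusion

    def __init__(self, track_names):
        # All tracks are assumed to be started simultaneously, muted.
        self.volumes = {name: Mixer.MUTE for name in track_names}

    def on_part_detected(self, track_name):
        # The selected track joins the symphony already in time with the
        # tracks currently being diffused, since it never stopped playing.
        self.volumes[track_name] = Mixer.AUDIBLE

    def on_part_removed(self, track_name):
        # Removing a part mutes its track without stopping global playback.
        self.volumes[track_name] = Mixer.MUTE

mixer = Mixer(["ME_k.wav", "MA_i.wav", "MB_j.wav"])
mixer.on_part_detected("ME_k.wav")   # first track becomes audible
mixer.on_part_detected("MA_i.wav")   # second track blends in, synchronized
mixer.on_part_removed("ME_k.wav")    # first track drops out; playback continues
```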
Preferably, the method comprises sound playing back a number of music tracks equal to the number of receptacles11in which a part is detected. In one embodiment, when removing a part10from a receptacle11, detectors8no longer detect its presence and the corresponding track M selected is then no longer soundly diffused. Similarly, if detectors8determine an orientation swap of a part, the corresponding selected track is no longer soundly diffused, and a new music track corresponding to the new orientation is then selected and soundly diffused as described above. The method and musical device according to the invention thus allow simultaneous and synchronized sound diffusion of several music tracks forming a symphony, to add, remove or replace a music track as much as desired without stopping said symphony. The method may include a prior step of pre-selecting a set of music tracks from a plurality of music tracks and providing the set P selected. In one example, the memory connected to connector6may comprise a plurality of sets P. Selection of a set P can be made by the control interface3, for example by a control knob. Calculation System Musical device1comprises hardware and/or software means to implement the steps of the method described hereinabove. For this purpose, the musical device may include a calculation system K. The calculation system K comprises a calculator CALC. The calculator CALC contains software elements for implementing the method described hereafter. Preferably, a calculator CALC is connected to a second MEM memory containing instructions readable and executable by the calculator CALC, in particular for the implementation of the method described below. The control system K also includes an audio processor KLT. The audio processor KLT is connected to the calculator CALC. The audio processor KLT is also connected to the audio system5and/or an audio output interface51. The control system K is also connected to a memory containing a set P of music tracks M or to connector6for connection to such a memory. Preferably, the audio processor KLT is connected to said memory or said connector6. The calculator CALC is connected to sensors8to receive orientation data from parts7in receptacles11. Preferably, the calculator CALC is connected to the control interface3to receive control information from the control interface3. The calculator can also be connected to the indicator9to transmit information to be displayed. Preferably, the set P of tracks M is provided to the audio processor KLT via connector6or via a memory integrated into musical device1, for example in the frame. The calculator CALC receives information from the plurality of detectors8. The calculator is configured to select music tracks M from the information from the plurality of detectors8. The calculator can generate the orientation indicators θ. In another embodiment, detectors8each generate their orientation indicator θ and transmit it to the calculator CALC. The selected track(s) is (are) transmitted to the audio processor KLT. The audio processor KLT allows sound diffusion of the selected track(s) by the audio system5,51. The audio processor KLT is configured to play music tracks simultaneously and synchronously as described hereinabove. In addition, the calculation system is connected to the display means to display information relating to the orientation of the parts in the receptacles. 
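As a rough illustration of how the calculator CALC might build an orientation indicator θ from the readings of the detectors 8, given that each detector identifier is associated with one receptacle as stated above, consider the sketch below. DETECTOR_TO_RECEPTACLE and make_indicator are hypothetical names introduced only for this example.

```python
# Sketch of indicator generation: detector identifier -> receptacle,
# plus the detected orientation and (optionally) the part identifier.
DETECTOR_TO_RECEPTACLE = {"det-1": "A", "det-2": "B", "det-3": "C",
                          "det-4": "D", "det-5": "E"}

def make_indicator(detector_id, orientation, part_id=None):
    """Return the indicator θ as a small record, or None when no indexing
    element (hence no part) is detected by that detector."""
    if orientation is None:
        return None   # alternatively, an explicit "part absent" indicator
    return {"receptacle": DETECTOR_TO_RECEPTACLE[detector_id],
            "orientation": orientation,
            "part_id": part_id}

# Detector associated with receptacle E reports orientation k:
print(make_indicator("det-5", "k"))
# -> {'receptacle': 'E', 'orientation': 'k', 'part_id': None}
```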
In one embodiment, each time an orientation of a part is detected, an indicator light representative of the orientation and the receptacle is generated IND1, IND2. Device1may also comprise a power supply, in particular to power the audio system5,51, the calculation system K and/or the display means9.
35,297
11862133
DETAILED DESCRIPTION In the Summary above and in this Detailed Description, the claims below, and in the accompanying drawings, reference is made to particular features (including method steps) of the invention. It is to be understood that the disclosure of the invention in this specification includes all possible combinations of such particular features. For example, where a particular feature is disclosed in the context of a particular aspect or embodiment of the invention, or a particular claim, that feature can also be used, to the extent possible, in combination with and/or in the context of other particular aspects and embodiments of the invention, and in the invention generally. The term “comprises” and grammatical equivalents thereof are used herein to mean that other components, ingredients, steps, among others, are optionally present. For example, an article “comprising” (or “which comprises”) components A, B, and C can consist of (i.e., contain only) components A, B, and C, or can contain not only components A, B, and C but also contain one or more other components. Where reference is made herein to a method comprising two or more defined steps, the defined steps can be carried out in any order or simultaneously (except where the context excludes that possibility), and the method can include one or more other steps which are carried out before any of the defined steps, between two of the defined steps, or after all the defined steps (except where the context excludes that possibility). The term “at least” followed by a number is used herein to denote the start of a range beginning with that number (which may be a range having an upper limit or no upper limit, depending on the variable being defined). For example, “at least 1” means 1 or more than 1. The term “at most” followed by a number is used herein to denote the end of a range ending with that number (which may be a range having 1 or 0 as its lower limit, or a range having no lower limit, depending upon the variable being defined). For example, “at most 4” means 4 or less than 4, and “at most 40%” means 40% or less than 40%. When, in this specification, a range is given as “(a first number) to (a second number)” or “(a first number)— (a second number),” this means a range whose lower limit is the first number and whose upper limit is the second number. For example, to 100 mm means a range whose lower limit is 25 mm and upper limit is 100 mm. Certain terminology and derivations thereof may be used in the following description for convenience in reference only and will not be limiting. For example, words such as “upward,” “downward,” “left,” and “right” would refer to directions in the drawings to which reference is made unless otherwise stated. Similarly, words such as “inward” and “outward” would refer to directions toward and away from, respectively, the geometric center of a device or area and designated parts thereof. References in the singular tense include the plural, and vice versa, unless otherwise noted. The term “coupled to” as used herein may mean a direct or indirect connection via one or more components. Referring now to the drawings and the following written description of the present invention, it will be readily understood by those persons skilled in the art that the present invention is susceptible to broad utility and application. 
Many embodiments and adaptations of the present invention other than those herein described, as well as many variations, modifications, and equivalent arrangements will be apparent from or reasonably suggested by the present invention and the detailed description thereof, without departing from the substance or scope of the present invention. This disclosure is only illustrative and exemplary of the present invention and is made merely for purposes of providing a full and enabling disclosure of the invention. FIG.1shows an example top-side view of a stand100. The stand100may include a face110, a bracket130, and a base120. The face110may be a flat board made of wood, metal, plastic, or other similar durable material. The face110may be generally rectangular in shape in a first plane extending in a first direction and a second direction with holes112defined through the face110in a third direction through the first plane. The holes112may have a length in the first direction that extends a majority of the length of the face110. The holes112may have a width in the second direction that is much smaller than the length. The bracket130may extend parallel to the holes in the first direction under the face110. The bracket130may extend on either side of each hole112. The bracket130may include metal, plastic, wood, or other durable materials. The bracket130may extend for the entire length of the holes112. The base120may support the face110and may connect to the face110on a bottom side of the face110. The base120may include several separate parts. For example, as shown inFIG.1, the base may include three legs that support the face110at an angle (e.g., not parallel to the ground). The base120may support the face110. The base120may also support the brackets130and any other objects attached to the face110and brackets130. The base120may include wood, metal, plastic, or other similar durable materials. FIG.2shows an example exploded view of the stand100. The stand100may include a spacer114between the face110and the bracket130. The spacer may be arranged to provide a gap between the bracket130and the bottom of the face110to create a slot through which a grommet (not shown in this figure) may slide. The spacer114may be made of a similar material as the face110and bracket130. The base120may include three legs which extend primarily in parallel in the second direction to support the face110at an angle. The legs of the base120may define openings arranged to be below the holes112so that a portion of the grommet (not shown in this figure) can be slid through the opening between the base and the face110. The opening in the legs of the base120may be arranged to be between fingers of the brackets130. The middle of the three legs of the base120may have additional openings defined in the leg which may allow a peg in the grommet (not shown in this figure) to have room to pass over/through the opening in the leg when the grommet slides in the slot between the bracket130and the face110. FIG.3shows an example bottom view of the face110. The spacers114may extend parallel to the holes112for all or most of the length of the holes112. The spacers114may be on either side of each of the holes112. FIG.4shows an example bottom view of the bracket130. The bracket130may include fingers132which are arranged to extend on either side of each of the holes112under the spacers114. 
The bracket130may also include a back134connecting the fingers132and configured to prevent a grommet sliding in the gap between the bracket130and the face110from sliding out of the gap. Thus, the bracket130may be arranged below the holes112and on three sides of the holes112. FIG.5shows an example side view of the stand100with a grommet200shown entering into the slot formed between the bracket130and the face110. The grommet200may be shaped and sized such that the grommet200may slide through the slot. FIG.6shows an example exploded view of a grommet200and peg300. The grommet may include a body210, a spring220, a lock230and a pin240. The body210may have a generally cylindrical shape with a lip on the top portion of the body210shaped and arranged to fit in the gap between the bracket130and the face110. The body may also include an opening for the peg to enter and an opening for the spring220and the lock230to enter. The spring220may be arranged to press the lock230in a direction to lock the lock230in a position to secure the peg300to the grommet200. The lock230and body210may be made of plastic or another durable material. The lock230may be configured to slide into the opening for the lock230and engage with the peg300to secure the peg300to the grommet200. The pin240may be arranged in the opening in the body210to prevent the lock230from completely exiting the opening in the body230. The pin240may be made of metal or another durable material. The peg300may include a shaft310, head320, and a bumper330. The shaft310may include notches or extensions312for engaging with the lock230of the grommet200to secure the shaft in the opening in the body210of the grommet200when the lock is engaged with the shaft310. The lock230may be disengaged with the shaft310by pressing on the lock230. The shaft310may be sized such that the shaft310can fit through the hole112in the face110. The shaft310may also include a connection end314with threading or other mechanism for connecting the connection end314with the head320. The shaft310may include metal, plastic, and other durable materials. The head320may connect to the connection end314of the shaft310and extend horizontally and connect to the bumper330. The bumper330may have threading or another mechanism to connect to the head and may include a material such as silicone or rubber with high friction that may contact a modulator or other device that is desired to be secured to the face110of the stand100. The bumper may also be adjustable in height below the head320by rotating the bumper330relative to the head320on the threads. FIG.7shows an example top view of the stand100with modulators400attached. The stand100may have several grommets200in the slots below the face100with pegs300extending through the holes112in the face into the grommets200. The pegs300may each be attached to one of the grommets200. Multiple pegs300and grommets200may be used to attach each modulator400to the face110. Objects other than modulators400may also be attached to the face110using the grommets200and pegs300. The grommets200may be slid in the slots under each hole112so that the grommets200may be positioned to secure (using the pegs300) each modulator400(or other device) in a desired position. FIG.8shows an example back view of a second embodiment of the stand100. The stand100may include a height adjustable base140. The height adjustable base140may include a foot142, adjustment screw144, and scissoring legs146. 
The foot142may be a frame or other form of structure that supports the stand and does not adjust in shape. The scissoring legs146may change or adjust in shape in a scissoring motion to adjust a height of a portion of the face110(e.g., lift the back end of the face110) or adjust the angle of the face110(e.g., adjust the angle of the face110relative to the foot142and the ground. The scissoring legs146may also lay flat causing the overall height of the stand100to be decreased and making transportation easier. The adjustment screw144may be attached between the foot142and the scissoring legs146such that when the adjustment screw144is turned, the scissoring legs adjust in shape to adjust the height and/or angle of the face110. Other ways of making the height adjustable base140may include, using pistons, gears, electronic actuators, etc. to cause the scissoring legs146to change shape. Furthermore, the scissoring legs could be replaced by various other forms of height adjustable legs. The foot142and scissoring legs146may be made of similar materials as the face110and bracket130. The adjustment screw144may include metal or another durable material. Many different embodiments of the inventive concepts have been shown. A person of ordinary skill in the art will appreciate that the features from different embodiments may be combined or replaced with other features from different embodiments. Advantageously, the stand100may allow for quick and secure connection/disconnection of modulators due to the ease of sliding a grommet200into a slot and then pressing a peg300into the grommet200through the hole112in the face110. The stand100also allows for easy customization of locations of the modulators400on the stand due to the ease of moving the grommets200in the slots, allowing a user to secure modulators400(or other devices) anywhere on the face where a peg300attached to a grommet200can reach the modulator400. In one embodiment the device may include a face110, a bracket130and a base120. The face110may define a hole112that extends completely through the face110with a first length in a first direction and a first width in a second direction perpendicular to the first direction, wherein the first length is greater than the first width. The bracket130may connect under the face110and extend in the first direction parallel to the hole112and be configured to allow a grommet200to slide along the bracket130in the first direction. The base may be configured to support the face. The bracket130may be arranged between the base120and the face110. The bracket130may include a first rail (e.g., first finger132of the bracket130) below the hole in the second direction and a second rail (e.g., second finger132of the bracket130) above the hole in the second direction. The base120may define an indent for the grommet200to slide through while sliding along the bracket130. The device may further include the grommet200sized and configured to slide along the bracket130in the first direction. The grommet200may include an opening and a releasable locking mechanism (e.g., lock230and spring220). The device may further include a peg300configured to enter into the opening in the grommet200and be releasably locked into the opening by the locking mechanism. The peg300may be shaped and configured to secure an object (e.g., modulator400) to the face100when the grommet200is in a slot defined by the bracket130and the face110and the peg300is inserted through the hole112in the face110. 
The bracket130and the grommet200may be sized and arranged for a plurality of grommets200including the grommet200to be inserted into the slot. The device may further include a plurality of holes112including the hole112and a plurality of brackets (e.g., fingers132of the bracket) including the bracket130. Each of the plurality of brackets may be arranged under the face and extend in the first direction parallel to a respective one of the plurality of holes112and be configured to allow the grommet200to slide along the bracket in the first direction. The base140may include a lifting mechanism (e.g., foot142, scissoring arms146, and adjustment screw144) configured to adjust an angle of the face110. The base140may include a lifting mechanism configured to adjust a height of a portion of the face110. The lifting mechanism may include scissoring arms146and an adjustment screw144configured to move the scissoring arms146when the adjustment screw144is rotated in order to adjust the height of the portion of the face110. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. The present invention according to one or more embodiments described in the present description may be practiced with modification and alteration within the spirit and scope of the appended claims. Thus, the description is to be regarded as illustrative instead of restrictive of the present invention.
16,009
11862134
DESCRIPTION OF THE EMBODIMENTS The above-mentioned keyboard device requires the same number of the flexible member as the number of keys. In addition, there is a need for a structure for mounting the flexible member on both the frame side and the key side. Therefore, it is desired to realize a keyboard device using a simpler support structure. One of the objects of the present disclosure is to improve the appearance of the keyboard device by a support structure of a simple key. Hereinafter, embodiments of the present disclosure will be described in detail referring to the drawings. The following embodiments are examples of embodiments of the present disclosure, and the present disclosure is not to be construed as being limited to these embodiments. In the drawings referred to in the present embodiment, portions having the same or similar functions are denoted by the same or similar reference numerals, and a description thereof may be omitted. For convenience of description, the dimensional ratio in the drawings may be different from the actual ratio, or a part of the configuration may be omitted from the drawings. In the present specification, for the keyboard device, the expressions indicating directions such as up, down, left, right, front, and back are based on a player when playing. For convenience of explanation, the direction may be indicated based on the key, but in this case, a front-end side (front side) and a rear-end side (rear side) of the key correspond to a front side and a rear side, respectively, based on the player. The term “arc” herein includes not only the arc in a strict sense, but also shapes (approximately arc) that can be regarded as approximately arc. For example, even if an external form of the member is not a perfect arc due to the effects of manufacturing errors, etc., the external form of the member can be regarded as an arc if it has a function equivalent to the member whose external form is an arc. Similarly, the terms “coincident” or “identical” include cases that differ slightly (i.e., approximately coincident, or approximately identical) so as not to impair the function. [Configuration of Keyboard Device] FIG.1is a plan view showing a configuration of a keyboard device1. In this embodiment, the keyboard device1is an electronic keyboard device that sounds in response to a key depression by a user (player) such as an electronic piano. The keyboard device1may be a keyboard-type controller that outputs control data (e.g., MIDI) for controlling an external sound source device in response to a key depression. In this instance, the keyboard device1may not have the sound source device. The keyboard device1includes a keyboard assembly10. The keyboard assembly10includes a white key100W and a black key100B. A plurality of white keys100W and a plurality of black keys100B are arranged side by side. The number of the key100is N, in this example 88. The direction in which the keys are arranged is referred to as a scale direction. InFIG.1, a first direction (D1) is a scale direction. A second direction (D2) orthogonal to the first direction is a longitudinal direction of the key100. With the player as a reference, the first direction is the lateral direction, and the second direction is the anterior-posterior direction. If the white key100W and the black key100B can be described without particular distinction, it may be referred to as the key100. 
Also, in the following description, when “W” is attached to the end of the code, it means that the configuration corresponds to the white key. When “B” is attached to the end of the code, this means that the configuration corresponds to the black key. A part of the keyboard assembly10exists within a housing90. When the keyboard device1is viewed from above, a portion of the keyboard assembly10covered by the housing90is referred to as a non-appearance portion NV. A portion exposed from the housing90and visible to the user (a portion located closer to the front side than the non-appearance portion NV) is referred to as an appearance portion PV. That is, the appearance portion PV is a part of the key100and indicates a region that can be played by the user. Hereinafter, a portion of the key100exposed by the appearance portion PV may be referred to as a key body portion. Inside the housing90, a sound source device70and a speaker80are disposed. The sound source device70generates a sound waveform signal with the depression of the key100. The speaker80outputs the sound waveform signal generated in the sound source device70to an external space. The keyboard device1may include a slider for controlling the volume, a switch for switching the timbre, a display for displaying various data, and the like. FIG.2is a block diagram showing a configuration of the sound source device. The sound source device70includes a signal converting unit710, a sound source unit730and an output unit750. A sensor300is provided corresponding to each key100. The sensor300detects an operation of the key and outputs a signal corresponding to the detected content. In this example, the sensor300outputs a signal in accordance with the key depressing amount of three stages. Key depression rate can be detected in response to intervals of these signals. The signal converting unit710acquires an output signal of the sensor300(the sensors300-1,300-2, . . . ,300-88corresponding to the 88 keys100) and generates and outputs an operation signal corresponding to the operation state in each key100. In this instance, the operating signal is a signal in the form of MIDI. Therefore, in response to the key depression operation, the signal converting unit710outputs note on. At this time, the key number indicating which of the 88 keys100has been operated and the velocity corresponding to the key depression rate are also output in association with note on. On the other hand, in response to a key release operation, the signal converting unit710outputs the key number and note off in association with each other. A signal corresponding to other operations such as a pedal is input to the signal converting unit710and may be reflected in the operation signal. The sound source unit730generates the sound waveform signal based on the operation signal output from the signal converting unit710. The output unit750outputs the sound waveform signal generated by the sound source unit730. The sound waveform signal is output to, for example, the speaker80or an output terminal for outputting the sound waveform signal. [Configuration of Keyboard Assembly] FIG.3is a side view showing a configuration of the keyboard assembly10. The keyboard assembly10includes the key100, a supporting portion150, a hammer assembly200, the sensor300and a frame500. The keyboard assembly10is a resin-made structure manufactured by injection molding or the like in most configurations. 
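As a concrete, hypothetical illustration of the three-stage sensor handling described above, the sketch below derives a MIDI-style velocity from the interval between two of the sensor stages and builds note-on/note-off messages. The timing constants and the velocity mapping are assumptions chosen for the example, not values taken from the disclosure.

```python
# Sketch: a shorter interval between sensor stages (faster key depression)
# maps to a larger MIDI velocity. Constants below are illustrative only.
def velocity_from_interval(t_first_stage: float, t_last_stage: float,
                           fastest: float = 0.002, slowest: float = 0.060) -> int:
    """Map the interval (in seconds) between two sensor stages to velocity 1-127."""
    dt = max(fastest, min(slowest, t_last_stage - t_first_stage))
    ratio = (slowest - dt) / (slowest - fastest)   # 0.0 (slow) .. 1.0 (fast)
    return max(1, min(127, round(1 + ratio * 126)))

def note_on(key_number: int, velocity: int) -> tuple:
    return (0x90, key_number, velocity)    # 0x90: MIDI note-on, channel 1

def note_off(key_number: int) -> tuple:
    return (0x80, key_number, 0)           # 0x80: MIDI note-off, channel 1

# Key 60 depressed quickly: the two stages arrive 5 ms apart.
print(note_on(60, velocity_from_interval(0.100, 0.105)))
```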
InFIG.3, a third direction (D3) is a direction orthogonal to the first direction (D1) and the second direction (D2). The third direction is the vertical direction based on the player. The third direction may also be referred to as a key depression direction or a stroke direction. The frame500is fixed to a housing (not shown). A plurality of keys100are rotatably supported on the frame500, thereby forming the keyboard assembly10. The key100rotates about a pivot point around an axis extending in a first direction. The supporting portion150rotatably supports the key100relative to the frame500. The supporting portion150is composed of an axis portion110and a bearing portion120shown inFIG.10to be described later. Specific structures of the supporting portion150will be described later. The key100includes a front-end key guide102. The front-end key guide102is provided on a tip of the white key100W. The front-end key guide102is a plate-shaped member having a surface facing forward (player side). A vertically elongated slit (not shown) is provided in the approximately center of the surface of the front-end key guide102. The frame500includes a front-end frame guide511. The front-end frame guide511is a plate-shaped member protruding forward. When the white key100W is mounted on the frame500, the front-end frame guide511is slidably inserted into the slit of the front-end key guide102. At the time of the key depression operation, the white key100W moves in the vertical direction while sliding the inside of the slit of the front-end key guide102to the front-end frame guide511. This movement limits the movement of the key100in the scale direction, yawing direction, and rolling direction at the front-end, as will be described below. In the example shown inFIG.3, an example in which a guide structure composed of the front-end key guide102and the front-end frame guide511is provided at the front-end of the white key100W is shown, but the present invention is not limited to this example. Such a guide structure may be provided in a portion of the white key100W within a range of half from the front (front-end) (preferably, a portion in which the width of the white key100W is wider than other portions). Further, an example in which a plate-shaped member is used as the front-end frame guide511is shown, but it is not limited to this example, it may be a structure in contact with the inside of the slit of the front-end key guide102at two portions in the vertical direction. In this case, the slit of the front-end key guide102may be constituted by two slits arranged in the vertical direction. That is, the above-mentioned two portions in the vertical direction may be in contact with the insides of the two slits, respectively. Further, although an example in which the front-end key guide102is a slit is shown, the present invention is not limited to this example. That is, the shape of the front-end key guide102is arbitrary as long as the front-end key guide102contacts the front-end frame guide511at two points in the vertical direction. The hammer assembly200is disposed in a space below the key100and is rotatably attached to the frame500. A shaft supporting portion220of the hammer assembly200and a rotational axis520of the frame500are slidably in contact with each other at least at three points. A front-end portion210of the hammer assembly200is in contact with the hammer assembly so that it slides approximately anterior-posterior direction in an interior space116of the hammer supporting portion115. 
This sliding portion (i.e., the portion where the front-end portion 210 and the hammer supporting portion 115 contact each other) is located below the key 100 in the appearance portion PV. In the hammer assembly 200, a metallic weight portion 230 is disposed at the rear side of the rotational axis 520. Normally (when the key is not depressed), the weight portion 230 is in contact with a lower stopper 410. In this state, the front-end portion 210 of the hammer assembly 200 pushes the key 100 upward. When the key is depressed, the key 100 pushes the front-end portion 210 of the hammer assembly 200 downward. As a result, the weight portion 230 moves upward and comes into contact with an upper stopper 430. The hammer assembly 200 adds weight to the key depression operation by means of the weight portion 230. The lower stopper 410 and the upper stopper 430 are made of a cushioning material or the like, such as a nonwoven fabric or an elastic body. The sensor 300 is mounted on the frame 500 below the hammer supporting portion 115 and the front-end portion 210 of the hammer assembly 200. The sensor 300 is composed of an upper electrode portion 310 having a structure in which an electrode is attached to a flexible member, and a lower electrode portion 320 having a structure in which an electrode is provided on a circuit substrate. When the upper electrode portion 310 of the sensor 300 is crushed against the lower surface side of the front-end portion 210 of the hammer assembly 200 by the key depression, the electrodes of the upper electrode portion 310 and of the lower electrode portion 320 come into contact in order from the rear side (the side closer to the rotational axis 520 of the hammer assembly 200). Through this contact, the sensor 300 outputs a detection signal according to the key depression amount in three stages. As described above, the sensor 300 is provided corresponding to each key 100. FIG. 4 is a plan view showing a configuration of the keyboard assembly 10. In FIG. 4, a part of the configuration of the frame 500 is omitted for convenience of explanation. As shown in FIG. 4, a supporting portion 150B of the black key 100B is arranged on the rear side of a supporting portion 150W of the white key 100W. This position relates to the position of the pivot point (center of rotation) of the key 100. The arrangement shown in FIG. 4 reproduces the difference in fulcrum position between the white key and the black key of an acoustic piano. Further, in the present embodiment, the positions of the white key 100W and the black key 100B are adjusted so that the touch feeling of a grand piano can be reproduced. More specifically, as shown in FIG. 4, when the distance between a rotational axis 101W of the white key 100W and a rotational axis 101B of the black key 100B is a, and the length of the white key 100W (the distance from the rotational axis 101W to the front-end of the white key 100W) is b, the value of a/b is adjusted so as to fall within a range of 0.061 or more and 0.075 or less. The value of a/b in a typical grand piano is about 0.068. In the present embodiment, the touch feeling of the grand piano is reproduced by setting the value of a/b in the keyboard assembly 10 to within ±10% of the value of a/b in the keyboard assembly of the grand piano. For example, when the length (b) of the white key 100W is 210 mm or more and 250 mm or less, the distance (a) between the rotational axis 101W of the white key 100W and the rotational axis 101B of the black key 100B may be 14 mm or more and 17 mm or less. FIGS. 5 and 6 are diagrams for explaining a definition regarding the direction of movement of the key 100.
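Before turning to those figures, here is a quick numeric check of the a/b range discussed above, assuming the target ratio is that of a typical grand piano (about 0.068) with a ±10% tolerance; the helper name and the example length are illustrative.

```python
# Sketch: allowed distance a between the white-key and black-key rotational
# axes for a given white-key length b, keeping a/b within ±10% of 0.068.
GRAND_PIANO_RATIO = 0.068

def allowed_axis_offset(white_key_length_mm: float, tolerance: float = 0.10):
    """Return (min, max) of a, in mm, for the given white-key length b."""
    lo = GRAND_PIANO_RATIO * (1 - tolerance) * white_key_length_mm
    hi = GRAND_PIANO_RATIO * (1 + tolerance) * white_key_length_mm
    return lo, hi

# For b = 230 mm, a should lie roughly between 14.1 mm and 17.2 mm,
# consistent with the 14 mm to 17 mm figure given above.
print(allowed_axis_offset(230.0))
```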
Although the white key100W is exemplified inFIGS.5and6, the same definition is used for the black key100B. InFIG.5, the scale direction S is, as described above, the direction in which the white key100W and the black key100B are arranged (the lateral direction as viewed from the player). In the present embodiment, the first direction corresponds to the scale direction S. The yawing direction Y is a direction in which the white key100W rotates about the third direction (D3) shown inFIG.3when the white key100W is viewed from above. For example, inFIG.5, when it is assumed that only the rear-end (the supporting portion150W shown inFIG.4) of the white key100W is fixed, the direction in which the white key100W curves left and right is the yawing direction Y. InFIG.6, the rolling direction R is a direction in which the white key100W rotates about the longitudinal direction of the white key100W. In other words, the rolling direction R can be said to be the direction of rotation around the axis extending in the second direction (D2) shown inFIG.5. The vertical direction V is a direction in which the white key100W is depressed (also referred to as the stroke direction). In the present embodiment, the third direction (D3) corresponds to the vertical direction V. As described in the prior art, the key may be deformed due to aging or the like. The deformation may be, for example, a curve in the yawing direction Y or a twist in the rolling direction R. If such a deformation appears in the key in the appearance portion PV, the appearance of the keyboard device is impaired. Therefore, it is necessary to have a structure that restricts the deformation of the key so that the deformation of the key is not visually recognized in the appearance portion PV. In the keyboard assembly10of the present embodiment, the key100is relatively short, and a large part of the entire key appears in the appearance portion PV. Therefore, the keyboard assembly10of the present embodiment is configured to restrict the deformation of the key100at the front-end portion and the rear-end portion of the key100. Specifically, the front-end portion of the key100, the movement of the scale direction S, the yawing direction Y, and the rolling direction R is restricted by the front-end key guide102and the front-end frame guide511. The rear-end portion of the key100is restricted from moving in the scale direction S by the supporting portion150. In the present embodiment, the front-end portion of the key100restricts the movement in the scale direction S, the yawing direction Y, and the rolling direction R, while providing the degree of freedom of movement in the yawing direction Y and the rolling direction R in the rear-end portion. That is, in the key100of the present embodiment, the movement in the scale direction S is restricted and the movement in the yawing direction Y and the rolling direction R is allowed in the supporting portion150. [Configuration of Supporting Portion] FIG.7is a perspective view showing a configuration of a key rear-end portion105.FIG.8is a perspective view showing a configuration of a first bearing member130.FIG.9is a perspective view showing a configuration of a second bearing member140. As shown inFIG.7, the key rear-end portion105has a hook-shaped configuration as a whole. A portion of the key rear-end portion105that extends in the second direction is the axis portion110. The key100rotates by rotating the axis portion110around the axis extending in the first direction. 
InFIG.8, the first bearing member130has a groove portion131. The groove portion131functions to support the axis portion110as a bearing. The second bearing member140shown inFIG.9functions to press the axis portion110in the third direction when the axis portion110is disposed inside the groove portion131of the first bearing member130. As will be described later, the second bearing member140is elastic in the third direction. Therefore, the axis portion110can move in the third direction within a certain range if a force greater than the elastic force received from the second bearing member140is applied. FIG.10is a plan view showing a configuration of the supporting portion150. Specifically, it corresponds to a plan view obtained by enlarging the supporting portion150B of the black key100B shown inFIG.4.FIG.11is a cross-sectional view showing a configuration of the supporting portion150. Specifically,FIG.11corresponds to a cross-sectional view in which the supporting portion150shown inFIG.10is cut along A-A line. InFIGS.10and11, the supporting portion150B of the black key100B is exemplified, but the supporting portion150W of the white key100W has a similar structure. As shown inFIG.10, the supporting portion150includes the axis portion110and the bearing portion120. The bearing portion120is composed of the first bearing member130and the second bearing member140. The axis portion110is rotatably supported by the bearing portion120around the axis extending in the first direction. That is, the key100rotates with the axis extending in the first direction as the rotational axis. The pivot point when the key100rotates is at the rotational axis in the plane where the axis portion110is cut along the second direction. The axis portion110is located in the key rear-end portion105of the key100. The axis portion110is a rod-shaped portion extending in the second direction. As shown inFIG.11, in the present embodiment, the cross section of the axis portion110is approximately elliptical, but the cross section may be polygonal or approximately circular. The first bearing member130is a part of the frame500. The first bearing member130of the present embodiment has the groove portion131. As shown inFIG.11, the axis portion110is disposed inside the groove portion131of the first bearing member130. The axis portion110contacts the groove portion131with contact points41and42. In the present embodiment, by the axis portion110is in contact with the first bearing member130at the contact point41and the contact point42, the movement of the axis portion110in the first direction (scale direction S) is restricted. The second bearing member140is a member that sandwiches the axis portion110between itself and the first bearing member130. The second bearing member140has a function of pressing the axis portion110against the first bearing member130. Therefore, the movement of the axis portion110in the third direction (vertical direction V) is restricted by the second bearing member140. In the present embodiment, the second bearing member140is coupled to and fixed to the frame500. Specifically, the second bearing member140has a configuration in which a body portion140ashown inFIG.9is held by a holding portion140bconnected to the frame. That is, the body portion140ais held in a cantilever structure to the holding portion140b. The second bearing member140contacts the axis portion110at a contact point43. That is, the axis portion110is supported by the bearing portion120by the contact point41, the contact point42and the contact point43. 
The second bearing member140is elastic. In the present specification, “member A is elastic” means that when the member A and the member B have a contact point, the member A has a property that it can deform while holding the contact point in response to a change in the force applied from the member B. In other words, the contact point described above is movable between the member A and the member B along the directions in which the elastic force of the member A acts. Therefore, in the structure shown inFIG.11, even if the axis portion110moves in the third direction, the contact point43is held in accordance with the movement of the second bearing member140in the third direction. Thus, the movement of the axis portion110in the third direction is allowed within a certain range, although restricted by the elastic force of the second bearing member140. The axis portion110has a curved surface110a, a curved surface110b, and a curved surface110c. The curved surface110ais in contact with a curved surface131aof the first bearing member130at the contact point41. The curved surface110bis in contact with a curved surface131bof the first bearing member130at the contact point42. The curved surface110cis in contact with the second bearing member140at the contact point43. At this time, as shown inFIG.11, the external form of the axis portion110is arc shape at the contact point41, the contact point42and the contact point43. That is, the contact point41, the contact point42and the contact point43are points on arcs which are the external form of the axis portion110. In the example shown inFIG.11, when the radius of an arc having the contact point41is r1and the radius of an arc having the contact point42is r2, the radius r1and the radius r2are equal. That is, a lower end of the axis portion110is semicircular shape. The radius of an arc having the contact point43is r3. The radius r3is greater than the radius r1and the radius r2. That is, the axis portion110is a member having a longitudinal direction in the third direction in a cross-sectional view. InFIG.11, the arc having the contact point41, the arc having the contact point42, and the arc having the contact point43are arcs having the same center O. Therefore, the axis portion110is rotatable with the center O as the pivot point. That is, the key100is rotatable in the rolling direction R that is the direction of rotation around the axis extending in the second direction. In other words, the key100has a degree of freedom of movement in the rolling direction R. Thus, in the supporting portion150of the present embodiment, at the contact point41, the contact point42and the contact point43, the axis portion110rotates while intermittently sliding to the bearing portion120. FIG.12is a plan view schematically illustrating a configuration of the key100.FIG.13is a cross-sectional view schematically illustrating a configuration of the key100. Specifically,FIG.13corresponds to a cross-sectional view in which the key100is cut by the B-B line through the contact point42, the center O and the contact point43according to the cross-sectional view shown inFIG.11. As shown inFIG.12, the supporting portion150is disposed on the key rear-end portion105of the key100. However, the present invention is not limited to this embodiment, and the supporting portion150may be disposed at a position other than the key rear-end portion105of the key100. 
As shown in FIG. 13, in a cross-sectional view in a plane orthogonal to the first direction (the D2-D3 plane), the cross-sectional shape of the curved surface 110b of the axis portion 110 is linear. However, without being limited to this example, the cross-sectional shape of the curved surface 110b may instead be curved. On the other hand, the cross-sectional shape of the curved surface 110c is curved. In this case, it is preferable that the cross-sectional shape of the curved surface 110c is an arc whose radius is the length between the contact point 42 and the contact point 43. Further, an inner wall of the groove portion 131 of the first bearing member 130 has the curved surface 131b. Therefore, at the contact point 42, the curved surface 110b of the axis portion 110 and the curved surface 131b of the groove portion 131 are in contact with each other. Although not shown in FIG. 13, similarly to the contact point 42, at the contact point 41 the curved surface 110a of the axis portion 110 and the curved surface 131a of the groove portion 131 are in contact with each other. With the configuration shown in FIG. 13, during the key depression operation the key 100 rotates around the rotational axis 101, which passes through the contact point 41 and the contact point 42. In FIG. 13, the contact point 42 is the center of rotation. At this time, since the axis portion 110 has the curved surface 110c at the contact point 43 where the axis portion 110 and the second bearing member 140 are in contact with each other, the supporting portion 150 does not hinder the rotation when the key 100 rotates. FIG. 14 is a plan view schematically showing a configuration of the supporting portion 150. FIGS. 15 and 16 are cross-sectional views schematically showing a configuration of the supporting portion 150. Specifically, FIG. 15 corresponds to a cross-sectional view in which the supporting portion 150 shown in FIG. 14 is cut along the C-C line. FIG. 16 corresponds to a cross-sectional view in which the supporting portion 150 shown in FIG. 14 is cut along the D-D line. As shown in FIG. 14, the supporting portion 150 has a first positioning portion 51 and a second positioning portion 52 for determining the position of the axis portion 110 in the anterior-posterior direction (the second direction). The first positioning portion 51 and the second positioning portion 52 are portions that face the first bearing member 130. The first positioning portion 51 faces a front inclined portion 22a of the first bearing member 130. The second positioning portion 52 faces a rear inclined portion 22b of the first bearing member 130. That is, the first positioning portion 51 and the second positioning portion 52 are disposed so as to face each other across the first bearing member 130. Here, in FIG. 14, the aforementioned D-D line matches the rotational axis of the axis portion 110 in the key depression operation. That is, the first positioning portion 51 and the second positioning portion 52 sandwich the rotational axis of the axis portion 110, each at a position spaced from the rotational axis. As shown in FIG. 15, in a cross-sectional view, the first bearing member 130 has, in the third direction, a shape (a trapezoidal shape) that tapers toward its tip (upward). In contrast, the first positioning portion 51 and the second positioning portion 52 are inclined oppositely to the front inclined portion 22a and the rear inclined portion 22b, respectively.
That is, in a cross-sectional view, the space between the first positioning portion 51 and the second positioning portion 52 has, contrary to the first bearing member 130, a shape in the third direction (a trapezoidal shape) that tapers downward. In the present embodiment, by disposing the axis portion 110 in the groove portion 131 of the first bearing member 130, the lower ends of the first positioning portion 51 and the second positioning portion 52 are brought into contact with the first bearing member 130 at a contact point 45 and a contact point 46. Therefore, when the axis portion 110 is supported by the bearing portion 120, the movement of the axis portion 110 in the anterior-posterior direction (the second direction) is restricted. The lower ends of the first positioning portion 51 and the second positioning portion 52 are preferably curved to improve durability. At this time, it is desirable that the position in the third direction of a line segment 44 connecting the contact point 41 and the contact point 42 in FIG. 16, and the position in the third direction of a line segment 47 connecting the contact point 45 and the contact point 46 in FIG. 15, be approximately coincident or as close as possible. By bringing the line segment 44 and the line segment 47 close together in the third direction, it is possible to suppress, at the time of the key depression operation, the change in the gap (clearance) between the first bearing member 130 and the first positioning portion 51 and between the first bearing member 130 and the second positioning portion 52. FIG. 17 is a plan view schematically showing a configuration of the key rear-end portion 105. As shown in FIG. 17, the first positioning portion 51 of the present embodiment forms an angle θ1 larger than 90° with a side 53 substantially parallel to the second direction. That is, the first positioning portion 51 is inclined with respect to the first direction. Similarly, the second positioning portion 52 forms an angle θ2 larger than 90° with the side 53 and is inclined with respect to the first direction. However, the configuration shown in FIG. 17 is merely an example, and the angles θ1 and θ2 may be 90 degrees or less. In FIG. 17, an example in which the first positioning portion 51 and the second positioning portion 52 are formed of straight sides is shown, but the present invention is not limited to this example, and curved sides may be used. The axis portion 110 may have a structure in which the first positioning portion 51 and the second positioning portion 52 are each in contact with the first bearing member 130 at at least one point. That is, as long as the axis portion 110 and the first bearing member 130 are in contact with each other at two positions, one at the front and one at the rear, it is possible to restrict the movement of the axis portion 110 in the anterior-posterior direction (the second direction). FIG. 18 is a plan view schematically showing a state in which the axis portion 110 is disposed in the first bearing member 130. As described above, the first positioning portion 51 and the second positioning portion 52 are inclined with respect to the first direction. Therefore, gaps are formed between the front inclined portion 22a of the first bearing member 130 and the first positioning portion 51, and between the rear inclined portion 22b of the first bearing member 130 and the second positioning portion 52, which widen as the distance from the groove portion 131 increases. Therefore, the axis portion 110 is rotatable in the yawing direction Y. That is, the key 100 has a degree of freedom of movement in the yawing direction Y.
However, when the axis portion 110 rotates in the yawing direction Y, the positions of the contact points (the contact point 45 and the contact point 46) at which the axis portion 110 and the first bearing member 130 are in contact with each other change. As described with reference to FIGS. 10 and 11, the axis portion 110 is pressed against the first bearing member 130 by the second bearing member 140. Therefore, the axis portion 110 is restricted by the second bearing member 140 from moving in the third direction (the direction away from the first bearing member 130). However, if a force greater than the elastic force of the second bearing member 140 is exerted on the axis portion 110, the axis portion 110 can move upward within a certain range. As shown in FIGS. 15 and 16, in the supporting portion 150 of the present embodiment, the axis portion 110 is disposed in the groove portion 131 of the first bearing member 130 and is in contact with the first bearing member 130 at the contact point 41, the contact point 42, the contact point 45 and the contact point 46. Thus, when the axis portion 110 moves in the yawing direction Y, the axis portion 110 moves upward against the elastic force of the second bearing member 140 as the positions of the respective contact points change. That is, in the present embodiment, since the second bearing member 140 is elastic, it is possible to give the axis portion 110 a degree of freedom of movement in the yawing direction Y. As described above, in the support structure (the supporting portion 150) of the key 100 in the keyboard device 1 of the present embodiment, the axis portion 110, which is a part of the key 100, has a degree of freedom of movement in the rolling direction R and the yawing direction Y. Therefore, according to the support structure of the present embodiment, it is possible to absorb deformation of the key caused by aging or the like, and to improve the appearance of the keyboard device 1 with a simple support structure. In addition, the key 100 can be supported by a simple structure in which a part of the key 100 (the rear-end portion in this embodiment) is disposed in a part of the frame 500 (the bearing portion 120), instead of a structure in which the key 100 and the frame 500 are connected to each other by a separate member. (Modification 1) In the present embodiment, an example in which the first bearing member 130 is constituted by a part of the frame 500 and the second bearing member 140 is coupled to the frame 500 has been shown. However, the present invention is not limited to this embodiment; the first bearing member 130 and the second bearing member 140 may be integrated. That is, the first bearing member 130 and the second bearing member 140 may be formed of the same material as an integral structure. By integrating the first bearing member 130 and the second bearing member 140, the number of components can be further reduced, which reduces the manufacturing cost. (Modification 2) In the present embodiment, an example has been described in which a rod-shaped member whose cross-sectional shape is approximately elliptical is used as the axis portion 110. However, the present invention is not limited to this embodiment; the axis portion 110 may be a spherical member. That is, the axis portion 110 may be a structure in which a spherical portion is provided in a part of the key 100. In this case, for the first bearing member 130, for example, a member having a polygonal pyramid-shaped opening can be used.
By forming the opening that functions as the bearing into a polygonal pyramid (e.g., a triangular pyramid, a quadrangular pyramid, or the like), the axis portion 110 comes into contact with the first bearing member 130 at a plurality of points (e.g., three points for a triangular pyramid, four points for a quadrangular pyramid). In this case as well, the axis portion 110 may be pressed against the first bearing member 130 by the second bearing member 140. (Modification 3) In this embodiment, as shown in FIG. 9, an example in which the second bearing member 140 has a cantilever structure is shown. However, the present invention is not limited to this embodiment, and the second bearing member 140 may have a double-supported structure (supported at both ends). The second bearing member 140 may also be a member having a mesh-like (net-shaped) body. Further, the second bearing member 140 does not itself need to be an elastic member and may function as an elastic member by being combined with an elastic body such as a spring, a rubber, or the like. Thus, any member may be used as the second bearing member 140 as long as it is capable of applying an elastic force to the axis portion 110. (Modification 4) In this embodiment, an example in which the second bearing member 140 is elastic has been described. However, the present invention is not limited to this embodiment, and the axis portion 110 or the first bearing member 130 may be elastic. In other words, the elastic member may be at least one of the axis portion 110, the first bearing member 130, and the second bearing member 140, and a plurality of these members may be elastic. For example, both the first bearing member 130 and the second bearing member 140 may be elastic. Next, in this embodiment, an example will be described in which the positional relation between the axis portion and the bearing portion (the first bearing member and the second bearing member) in the support structure (the supporting portion) is different from that of the embodiment shown in FIG. 11. In the description of the present embodiment, the focus is on points that differ from the keyboard device 1 of the above-described embodiment. The same reference numerals are used to denote the same structures as those of the keyboard device 1 in the above-described embodiment, and descriptions thereof are omitted. FIG. 19 is a cross-sectional view schematically illustrating a configuration of a supporting portion 150-1. The example shown in FIG. 19 is an example in which, conversely to the aforementioned embodiment, the bearing portion is provided on the key and the axis portion is provided on the frame. The supporting portion 150-1 includes an axis portion 110-1, a first bearing member 130-1, and a second bearing member 140-1. The axis portion 110-1 is a part of the frame 500. The first bearing member 130-1 and the second bearing member 140-1 are both parts of the key 100. Further, although not shown, the first bearing member 130-1 and the second bearing member 140-1 are an integral member. However, the structure is not limited to this example, and the first bearing member 130-1 and the second bearing member 140-1 may be separate members. The axis portion 110-1 is in contact with the first bearing member 130-1 or the second bearing member 140-1 at the contact points 41, 42, and 43, respectively. FIG. 20 is a cross-sectional view schematically illustrating a configuration of a supporting portion 150-2. The example shown in FIG. 20 is an example in which the form of the axis portion is different from that of the embodiment described above.
The configurations of the first bearing member130and the second bearing member140are the same as those of the above-described embodiment. The supporting portion150-2includes an axis portion110-2, the first bearing member130and the second bearing member140. The axis portion110-2is a part of the key100. As shown inFIG.20, the axis portion110-2includes three portions110-2ato110-2c. The axis portion110-2is in contact with the first bearing member130or the second bearing member140at the contact points41,42and43, respectively. Specifically, the axis portion110-2is in contact with the first bearing member130or the second bearing member140by each portions110-2ato110-2c. Each portions110-2ato110-2chas arcuate external forms in a cross-sectional view. That is, each portions110-2ato110-2chave curved surfaces, respectively. Thus, the axis portion110-2may have an arcuate external form in the portion in contact with the first bearing member130or the second bearing member140. That is, in the axis portion110-2, is the shape other than the portion in contact with the first bearing member130or the second bearing member140is arbitrary. FIG.21is a cross-sectional view schematically illustrating a configuration of a supporting portion150-3. The example shown inFIG.21is an example in which the shapes of the axis portion and the first bearing member is different from those of the aforementioned embodiment. The configuration of the second bearing member140is the same as that of the embodiment described above. The supporting portion150-3includes an axis portion110-3, a first bearing member130-3and the second bearing member140. The axis portion110-3is a part of the key100. As shown inFIG.21, the axis portion110-3has a linear first side110-3aand a linear second side110-3bin a cross-sectional view. The first bearing member130-3has a protrusion part130-3ahaving a semicircular external form in a cross-sectional view. That is, the protrusion part130-3ahas a curved surface in the portion in contact with the axis portion110-3. The protrusion part130-3aof the first bearing member130-3is in contact with the first side110-3aand the second side110-3bof the axis portion110-3at the contact point41and the contact point42, respectively. In this case, the axis portion110-3rotates while contacting the first side110-3aand the second side110-3bto the protrusion part130-3a. As described above, the positional relation and the shape between the axis portion and the bearing portion (in particular the first bearing member) may have various embodiment. In any case, in the key support structure of the keyboard device1of the present embodiment, the axis portion and the bearing portion are in contact with each other at least at three points in a cross-sectional view. At this time, the external form of the member that slides intermittently at the respective contact points is the circular arc. Here, the positional relationship between the contact point41, the contact point42, and the contact point43shown inFIGS.19to21satisfies a predetermined relationship. This point will be described referring toFIGS.22and23. FIG.22is a diagram for explaining positional relation between the contact point41, the contact point42, and the contact point43of an axis portion161, a first bearing member162, and a second bearing member163. A supporting portion160shown inFIG.22corresponds to the configuration of the supporting portion150shown inFIG.11, the supporting portion150-1shown inFIG.19, and the supporting portion150-2shown inFIG.20. 
However, for convenience of explanation, in the example shown inFIG.22shows an example in which the axis portion161is circular in a cross-sectional view. At the contact point41and the contact point42, the axis portion161and the first bearing member162are in contact with each other. At the contact point43, the axis portion161and the second bearing member163are in contact with each other. That is, in a cross-sectional view, the contact point41, the contact point42and the contact point43are points on an arc that are the external form of the axis portion161. Here, the contact point41is set as a starting point, and a normal vector toward the axis portion161(i.e., a normal vector from the contact point41toward the rotation center O of the axis portion161) is set as a first normal vector41a. Similarly, the contact point42and the contact point43are set as the starting points, and the normal vector toward the axis portion161is set as a second normal vector42aand a third normal vector43a, respectively. At this time, as shown inFIG.22, when the starting points of each of the first normal vector41a, the second normal vector42a, and the third normal vector43aare moved to one point, the angles θ1 to 03 formed by the adjacent normal vectors are less than 180°. Since the axis portion161is supported by the bearing portion composed of the first bearing member162and the second bearing member163, the force received by the axis portion161from the first bearing member162or the second bearing member163is balanced at the contact point41, the contact point42and the contact point43. In other words, the supporting portion160shown inFIG.22has zero resultant vector of forces applied to the axis portion161from the first bearing member162and the second bearing member163at the contact points41,42and43. FIG.23is a diagram for explaining positional relation between the contact point41, the contact point42, and the contact point43of an axis portion166, a first bearing member167, and a second bearing member168. A supporting portion165shown inFIG.23corresponds to the configuration of the supporting portion150-3shown inFIG.21. However, for convenience of explanation, in the example shown inFIG.23shows an example in which the first bearing member167is circular in a cross-sectional view. At the contact point41and the contact point42, the axis portion166and the first bearing member167are in contact with each other. At the contact point43, the axis portion166and the second bearing member168are in contact with each other. That is, in a cross-sectional view, the contact point41and the contact point42are points on the arc that is the external form of the first bearing member167. In contrast, the contact point43is a point on the arc that is the external form of the axis portion166. At this time, also in the example shown inFIG.23, when the starting points of each of the first normal vector41a, the second normal vector42a, and the third normal vector43aare moved to one point, the angles θ1 to θ3 formed by the adjacent normal vectors are less than 180°. In the example shown inFIG.23, the axis portion166is supported by the bearing portion composed of the first bearing member167and the second bearing member168. Thus, in the example shown inFIG.23, similarly to the example shown inFIG.22, at the contact point41, the contact point42and the contact point43, the resultant vector of the force applied to the axis portion166from the first bearing member167and the second bearing member168is zero. 
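The two conditions just stated for FIGS. 22 and 23 (every angle between adjacent contact normals is below 180 degrees, and the contact forces can produce a zero resultant) are easy to check numerically. The sketch below is only an editor's illustration: the circular profile, the 90 degree V-groove, the contact coordinates, and the helper names (inward_normal, adjacent_angles_deg, supports_zero_resultant) are assumptions, not geometry from the specification.

```python
import numpy as np

# Numerical check of the stated support condition (illustrative geometry only,
# not taken from the figures): a circular axis profile of unit radius resting
# in a 90-degree V-groove (two contacts 45 degrees either side of the bottom)
# and pressed from above by a third contact.

def inward_normal(contact, centre):
    """Unit normal at a contact point, directed from the contact toward the centre."""
    v = np.asarray(centre, dtype=float) - np.asarray(contact, dtype=float)
    return v / np.linalg.norm(v)

def adjacent_angles_deg(normals):
    """Angles between adjacent normals once all are moved to a common origin."""
    ang = np.sort(np.array([np.arctan2(n[1], n[0]) for n in normals]))
    gaps = np.diff(np.append(ang, ang[0] + 2 * np.pi))
    return np.degrees(gaps)

def supports_zero_resultant(normals):
    """True if non-negative contact forces along the normals can sum to zero.
    For three distinct planar directions this holds exactly when every angular
    gap between adjacent normals is below 180 degrees (they positively span the plane)."""
    return bool(np.all(adjacent_angles_deg(normals) < 180.0))

centre = np.array([0.0, 0.0])
contacts = [np.array([-np.sin(np.pi / 4), -np.cos(np.pi / 4)]),  # lower-left groove contact
            np.array([ np.sin(np.pi / 4), -np.cos(np.pi / 4)]),  # lower-right groove contact
            np.array([0.0, 1.0])]                                # contact from the second bearing member
normals = [inward_normal(c, centre) for c in contacts]
print(adjacent_angles_deg(normals))      # gaps of about 135, 90 and 135 degrees, all below 180
print(supports_zero_resultant(normals))  # -> True
```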
As described above, in the support structures (the supporting portions150-1,150-2and150-3) of the present embodiment, in a cross-sectional view, when the bearing portion (i.e., the first bearing member and the second bearing member) is in contact with the axis portion at three contact points, the respective contact points are located on the arc which is at least one of the external forms of the axis portion, the first bearing member and the second bearing member. When the starting points of each of the normal vector from the contact point toward the axis portion are moved to one point, the angle formed by each adjacent normal vector is less than 180°. Such a relationship is the same in the above-described embodiment. According to the support structure of the present embodiment, it is possible to improve the external appearance of the keyboard device1by a simple support structure similarly to the above-described embodiment. Next, in this embodiment, an example in which the support structure (the supporting portion) of the embodiment shown inFIG.11is rotated in the first direction, the second direction, or the third direction will be described. In the description of this embodiment, a description will be given focusing on points different from the keyboard device1of the above-described embodiment. The same reference numerals are used to denote the same structures as those of the keyboard device1in the above-described embodiment, and descriptions thereof are omitted. FIG.24is a plan view schematically illustrating a configuration of a supporting portion150-4.FIG.25is a cross-sectional view schematically illustrating a configuration of the supporting portion150-4. Specifically,FIG.25corresponds to a cross-sectional view in which a key100-1and the supporting portion150-4shown inFIG.24are cut along E-E line. The examples shown inFIGS.24and25correspond to the configuration shown inFIGS.12and13in which the supporting portion150is rotated approximately 90° around the axis in the first direction. A key rear-end portion105-4of the key100-1bends downward (in the third direction) to constitute an axis portion110-4. As shown inFIG.25, the axis portion110-4is supported between a first bearing member130-4and a second bearing member140-4in the second direction. In the embodiment shown inFIGS.24and25, the axis portion110-4rotates around the axis extending in the first direction and has a degree of freedom of movement in the rolling direction R and the yawing direction Y. FIG.26is a side view schematically illustrating a configuration of a supporting portion150-5.FIG.27is a cross-sectional view schematically illustrating a configuration of the supporting portion150-5. Specifically,FIG.27corresponds to a cross-sectional view in which a key100-2and the supporting portion150-5shown inFIG.26are cut along F-F line. The examples shown inFIGS.26and27correspond to the configuration in which the supporting portion150shown inFIGS.12and13is rotated approximately 90° around the axis in the second direction. An axis portion110-5is supported between a first bearing member130-5and a second bearing member140-5in the first direction. Further, the axis portion110-5is restricted from moving in the second direction by sandwiching the first bearing member130-5between a first positioning portion51-1and a second positioning portion52-1. 
The inclination angle of the first positioning portion51-1and the second positioning unit52-1are determined within a range that does not interfere with the rotation of the key100-2by the key depression operation. In the embodiment shown inFIGS.26and27, the axis portion110-5rotates around the axis extending in the first direction and has a degree of freedom of movement in the rolling direction R and the yawing direction Y. FIG.28is a plan view schematically illustrating a configuration of a supporting portion150-6.FIG.29is a cross-sectional view schematically illustrating the configuration of the supporting portion150-6. Specifically,FIG.29corresponds to a cross-sectional view in which a key100-3and the supporting portion150-6shown inFIG.28are cut along G-G line. The examples shown inFIGS.28and29correspond to the configuration in which the supporting portion150shown inFIGS.12and13is rotated approximately 90° around the axis in the third direction. An axis portion110-6is supported between a first bearing member130-6and a second bearing member140-6in the third orientation. Further, the axis portion110-6is restricted from moving in the first direction by sandwiching the first bearing member130-6between a first positioning portion51-2and a second positioning portion52-2. In the embodiment shown inFIGS.28and29, the axis portion110-6rotates around the axis extending in the first direction and has a degree of freedom of movement in the rolling direction R and the yawing direction Y. As described above, even when the key support structure (the supporting portion) of the keyboard device1is rotated in the first direction, the second direction, or the third direction, the external appearance of the keyboard device1can be improved by a simple support structure, similarly to the above-described embodiment. In the present embodiment has shown an example in which the support structure is rotated approximately 90°, the rotation angle is not limited to 90°. According to the support structure of this embodiment, it is possible to improve the external appearance of the keyboard device1by a simple support structure similarly to the above-described embodiment. Next, in this embodiment, a description will be given of an example in which the axis portion is the elastic support structure (the supporting portion) rather than the elastic bearing portion. In the description of this embodiment, a description will be given focusing on points different from the keyboard device1of the embodiment shown inFIG.11. The same reference numerals are used to denote the same structures as those of the keyboard device1in the above-described embodiment, and descriptions thereof are omitted. FIG.30is a cross-sectional view schematically illustrating a configuration of a supporting portion150-7.FIG.31is a cross-sectional view schematically illustrating the configuration of the supporting portion150-7. Specifically,FIG.31corresponds to a cross-sectional view in which the supporting portion150-7shown inFIG.30is cut along H-H line. As shown inFIGS.30and31, an axis portion110-7of the present embodiment includes a body portion110-7a, an elastic portion110-7band a connecting portion110-7c. The axis portion110-7is disposed inside an opening portion130-7aprovided in a first bearing member130-7. The body portion110-7acontacts the lower inner wall of the opening portion130-7a, and the elastic portion110-7bcontacts the upper inner wall of the opening portion130-7a. 
At this time, since the elastic force of the elastic portion110-7bpushes the first bearing member130-7upward, a downward force is exerted on the body portion110-7a. As a result, the body portion110-7ais pressed to the lower inner wall of the opening portion130-7a. The body portion110-7ahas curved surfaces60aand60bin which the external form is an arc. As shown inFIG.30, the curved surfaces60aand60bof the body portion110-7aare in contact with the first bearing member130-7at the contact point41and the contact point42. The structure at the contact point41and the contact point42in which the axis portion110-7and the first bearing member130-7are in contact with each other is the same structure as the aforementioned embodiment (e.g., referring toFIG.11). The elastic portion110-7bhas a curved surface60cin which the external form is an arc. As shown inFIGS.30and31, the curved surface60cof the elastic portion110-7bis in contact with the first bearing member130-7at the contact point43. The structure in a third contact point where the elastic portion110-7band the first bearing member130-7are in contact with each other is the same structure in which the axis portion110and the second bearing member140are in contact with each other in the aforementioned embodiment (e.g., referring toFIGS.11and13). According to the support structure of this embodiment, it is possible to improve the external appearance of the keyboard device1by a simple support structure similarly to the above-described embodiment. Further, since the axis portion110-7has the elastic portion110-7b, it is possible to reduce the manufacturing cost by reducing the number of components. In this embodiment, an example is shown in which the body portion110-7a, the elastic portion110-7b, and the connecting portion110-7care integrally formed. However, the present invention is not limited to this embodiment, and the body portion110-7aand the elastic portion110-7bmay be a separate member and connected by the connecting portion110-7c. Next, in this embodiment, an example in which the form of the axis portion is different from that of the embodiment shown inFIG.11will be described. In the description of this embodiment, a description will be given focusing on points different from the keyboard device1of the above-described embodiment. The same reference numerals are used to denote the same structures as those of the keyboard device1in the above-described embodiment, and descriptions thereof are omitted. FIG.32is a cross-sectional view schematically illustrating a configuration of a supporting portion150-8.FIG.32corresponds to a cross-sectional view described referring toFIG.11in the embodiment described above. In the supporting portion150-8, an axis portion110-8has a curved surface110-8a, a curved surface110-8b, and a curved surface110-8c. The curved surface110-8ais in contact with the first bearing member130at the contact point41. The curved surface110-8bis in contact with the first bearing member130at the contact point42. The curved surface110-8cis in contact with the second bearing member140at the contact point43. At the contact point41, the contact point42and the contact point43, the external form of the axis portion110-8is arcuate. That is, the contact point41, the contact point42and the contact point43are points on the arc which is the external form of the axis portion110. 
In this embodiment, the radius r1of the arc having the contact point41, the radius r2of the arc having the contact point42, and the radius r3of the arc having the third contact point are different from each other. However, the arc having the contact point41, the arc having the contact point42, and the arc having the contact point43are arcs having the same center O. Therefore, the axis portion110-8is rotatable with the center O as the pivot point. That is, the axis portion110-8has a degree of freedom of movement in the rolling direction R. According to the support structure of this embodiment, it is possible to improve the external appearance of the keyboard device1by a simple support structure similarly to the above-described embodiment. Next, in this embodiment, an example in which the configuration of the positioning portion determining the longitudinal position of the axis portion is different from that of the embodiment described above. In the description of this embodiment, a description will be given focusing on points different from the keyboard device1of the above-described embodiment. The same reference numerals are used to denote the same structures as those of the keyboard device1in the above-described embodiment, and descriptions thereof are omitted. FIG.33is a plan view schematically illustrating a configuration of a supporting portion150-9.FIG.33corresponds to a plan view described referring toFIG.14in the embodiment described above. In the embodiment described above, the first positioning portion51and the second positioning portion52are arranged to face each other with the rotational axis of the axis portion110interposed therebetween. In contrast, in the example shown inFIG.33, a first positioning portion51-3and a second positioning portion52-3are provided at different positions in the first direction. That is, in the supporting portion150-9, the first positioning portion51-3and the second positioning portion52-3are not overlapped in the second direction. In a plan view shown inFIG.33, there is a gap between the first positioning portion51-3and the front inclined portion22a. However, in practice, as described referring toFIG.15in the above embodiment, the first positioning portion51-3and the front inclined portion22aare in contact with each other. Therefore, the first positioning portion51-3can restrict the backward movement of the axis portion110-9. Similarly, since the second positioning portion52-3and the rear inclined portion22bare in contact with each other, the second positioning portion52-3can also restrict the forward movement of the axis portion110-9. FIG.34is a plan view schematically illustrating a configuration of a supporting portion150-10. In the example shown inFIG.34, a first positioning portion51-4of an axis portion110-10and a front inclined portion22a-1of a first bearing member130-10are in contact with each other at a position (or close position) overlapping the axis of the rotational axis101. In a plan view shown inFIG.34, there is a gap between the first positioning portion51-4and the front inclined portion22a-1. However, in practice, as described above, the first positioning portion51-4and the front inclined portion22a-1are in contact with each other. Similarly, a second positioning portion52-4also contacts at a position overlapping the axis of a rear inclined portion22b-1and the rotational axis101(or close position). Therefore, the first positioning portion51-4can restrict the backward movement of the axis portion110-10. 
Similarly, since the second positioning portion52-4and the rear inclined portion22b-1are in contact with each other, the second positioning portion52-4can also restrict the forward movement of the axis portion110-10. In the embodiment shown inFIG.34, the axis portion110-10and the first bearing member130-10are in contact with each other at a position (or close position) overlapping the axis of the rotational axis101. In this case, even if the axis portion110-10rotates around the rotational axis101, the anterior-posterior positional relation will not change at the contact point between the axis portion110-10and the first bearing member130-10. That is, even if the axis portion110-10rotates, there is no gap between the axis portion110-10and the first bearing member130-10. Therefore, according to the structure shown inFIG.34, it is possible to restrict the movement of the axis portion110-10toward the anterior-posterior direction with higher accuracy. Also, in the example shown inFIG.34, the first positioning portion51-4and the second positioning portion52-4are provided at different positions in the first direction. That is, in the supporting portion150-10, the first positioning portion51-4and the second positioning portion52-4are not overlapped in the second direction. According to the support structure of this embodiment, it is possible to improve the external appearance of the keyboard device1by a simple support structure similarly to the above-described embodiment. Next, in this embodiment, an example in which the configuration of the positioning portion determining the longitudinal position of the axis portion is different from that of the above-described embodiment. In the description of this embodiment, a description will be given focusing on points different from the keyboard device1of the above-described embodiment. The same reference numerals are used to denote the same structures as those of the keyboard device1in the above-described embodiment, and descriptions thereof are omitted. FIG.35is a plan view schematically illustrating a configuration of a supporting portion150-11.FIG.36andFIG.37are cross-sectional views schematically illustrating the configuration of the supporting portion150-11. Specifically,FIG.36corresponds to a cross-sectional view in which the supporting portion150-11shown inFIG.35is cut along J-J line.FIG.37corresponds to a cross-sectional view in which the supporting portion150-11shown inFIG.35is cut along K-K line. As shown inFIGS.35to37, an axis portion110-11has a protrusion portion65protruding in the third direction. A first bearing member130-11has a groove portion131-11. As shown inFIG.36, when the axis portion110-11is combined with the first bearing member130-11, the protrusion portion65is inserted into the groove portion131-11. At this time, a first surface23aand a second surface23bof the inner wall of the groove portion131-11that are approximately orthogonal to the second direction, respectively, faces a first surface65aand a second surface65bof the protrusion portion65that are approximately orthogonal to the second direction. Therefore, the axis portion110-11is restricted to move to the anterior-posterior direction. FIG.38is a cross-sectional view schematically illustrating a configuration of a supporting portion150-12. Specifically, this corresponds to the modification of the cross-sectional view shown inFIG.36. As shown inFIG.38, an axis portion110-12has a protrusion portion65-1protruding in the third direction. 
In the embodiment shown inFIG.38, a tip of the protrusion portion65-1has a semicircular shape. Further, a first bearing member130-12has an approximately triangular groove portion131-12in a cross-sectional view. When combining the axis portion110-12and the first bearing member130-12, the protrusion portion65-1is inserted into the groove portion131-12. At this time, in a cross-sectional view, the tip of the protrusion portion65-1is in contact with the inner wall of the groove portion131-12at a contact point48and a contact point49. Therefore, the axis portion110-12is restricted to move to the anterior-posterior direction. According to the support structure of this embodiment, it is possible to improve the external appearance of the keyboard device1by a simple support structure similarly to the above-described embodiment. The support structure described in the above embodiments are applied as the key support structure of the keyboard device. However, the present invention is not limited to these embodiments and is also applicable to the support structure of the rotation member other than the key in the keyboard device. For example, it may be applied as a hammer support structure of the keyboard device. One embodiment of the present disclosure will be briefly summarized below. A key support structure of a keyboard device according to an embodiment of the present disclosure includes a first member and a second member that supports the first member. The first member is pivotal about a first axis that extends in a first direction and movable with a degree of freedom of movement in a rolling direction, which is a direction of rotation around a second axis extending in a second direction that is substantially orthogonal to the first direction. the second member restricts a movement of the first member in the first direction, while pivotally supporting the first member about the first axis. The support structure can also be configured as follows. The first member may have a degree of freedom of movement in a yawing direction accompanied that changes positions of the contacts of the first member and the second member. The keyboard may include a key. The first member may be a part of the key. A key support structure of the keyboard device according to an embodiment of the present disclosure includes a first member, a second member contacting the first member at a first contact point and a second contact point, and a third member contacting the first member at a third contact point. At least one member, among the first member, the second member, and the third member, is slideable at one contact, among the first contact, second contact, and third contact. In a state where a first normal vector at the first contact point, a second normal vector at the second contact point, and the third normal vector intersect at a common point, an angle between a pair of neighboring normal vectors, among the first, second, and third vectors, is less than 180 degrees. The support structure can also be configured as follows. At least one of the first member, the second member, or the third member may be elastic. The third member may be connected to or integrated with the second member. The first member may be a rotational member. The rotational member may include positioning portions that determine a longitudinal position of the rotational member. A rotational axis of the rotational member may be located between the positioning portions. The positioning portions may be separated from each other from the rotational axis. 
The rotational member may include positioning portions that determine a longitudinal position of the rotational member. A rotational axis of the rotational member may be located between the positioning portions. The positioning portions contact the second member on the rotational axis of the rotational member. The first member may include an arc surface at at least one of the first contact, the second contact, or the third contact. The first member may include an arc surface at each of the first contact, the second contact, and the third contact. Each of the first, second, and third vectors may intersect a center of one of the respective arc surfaces. Each of the arc surfaces may have the same radius of curvature. A key support structure of the keyboard device according to an embodiment of the present disclosure includes a first member, a second member contacting the first member at a first contact point and a second contact point, and a third member contacting the first member at a third contact point. The second member and the third member support the first member at the first contact point, the second contact point, and the third contact point. The support structure can also be configured as follows. At least one of the first member, the second member, or the third member may be elastic. The first member may be a rotational member (rotatable member). The rotational member may have positioning portions that determine a longitudinal position of the rotational member. A rotational axis of the rotational member is located between the positioning portions. The positioning portions are separated from the rotational axis. The rotational member may have positioning portions that determine a longitudinal position of the rotational member. A rotational axis of the rotational member is located between the positioning portions. The positioning portions contact the second member on the rotational axis of the rotational member. The third member may be connected to or integrated with the second member. A keyboard device according to an embodiment of the present disclosure may have a frame, a key, and the key support structure(s) described above. The first member may be a part of the key. The second member may be a part of the frame. An electronic musical instrument according to an embodiment of the present disclosure may have the keyboard device described above and a sound output device. As long as the gist of the present disclosure is maintained, it is within the scope of the present disclosure for a person skilled in the art to add, delete, or modify the design of a component, or to add, omit, or modify a process, based on the configurations described above as embodiments of the present disclosure. The above-described embodiments or modifications can be combined as appropriate as long as they are not mutually contradictory. Technical matters common to the embodiments are included in each embodiment even if they are not explicitly described. It is to be understood that the present disclosure also naturally brings about operational effects other than those provided by the aspects of the respective embodiments or modifications described above, insofar as such effects are apparent from the description herein or can be readily predicted by those skilled in the art.
11862135
DETAILED DESCRIPTION The above and other objectives, features and advantages of the present invention will be more clearly understood from the following preferred embodiments taken in conjunction with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed herein, and may be modified into different forms. These embodiments are provided to thoroughly explain the invention and to sufficiently transfer the spirit of the present invention to those skilled in the art. Throughout the drawings, the same reference numerals will refer to the same or like elements. For the sake of clarity of the present invention, the dimensions of structures are depicted as being larger than the actual sizes thereof. It will be understood that, although terms such as “first”, “second”, etc. may be used herein to describe various elements, these elements are not to be limited by these terms. These terms are only used to distinguish one element from another element. For instance, a “first” element discussed below could be termed a “second” element without departing from the scope of the present invention. Similarly, the “second” element could also be termed a “first” element. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise”, “include”, “have”, etc., when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof. Also, it will be understood that when an element such as a layer, film, area, or sheet is referred to as being “on” another element, it can be directly on the other element, or intervening elements may be present therebetween. Similarly, when an element such as a layer, film, area, or sheet is referred to as being “under” another element, it can be directly under the other element, or intervening elements may be present therebetween. Unless otherwise specified, all numbers, values, and/or representations that express the amounts of components, reaction conditions, polymer compositions, and mixtures used herein are to be taken as approximations including various uncertainties affecting measurement that inherently occur in obtaining these values, among others, and thus should be understood to be modified by the term “about” in all cases. Further, unless specifically stated or obvious from context, as used herein, the term “about” is understood as within a range of normal tolerance in the art, for example within 2 standard deviations of the mean. “About” can be understood as within 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, 0.5%, 0.1%, 0.05%, or 0.01% of the stated value. Unless otherwise clear from the context, all numerical values provided herein are modified by the term “about.” When a numerical range is disclosed in this specification, the range is continuous, and includes all values from the minimum value of said range to the maximum value thereof, unless otherwise indicated. Moreover, when such a range pertains to integer values, all integers including the minimum value to the maximum value are included, unless otherwise indicated. 
In the present specification, when a range is described for a variable, it will be understood that the variable includes all values including the end points described within the stated range. For example, the range of “5 to 10” will be understood to include any subranges, such as 6 to 10, 7 to 10, 6 to 9, 7 to 9, and the like, as well as individual values of 5, 6, 7, 8, 9 and 10, and will also be understood to include any value between valid integers within the stated range, such as 5.5, 6.5, 7.5, 5.5 to 8.5, 6.5 to 9, and the like. Also, for example, the range of “10% to 30%” will be understood to include subranges, such as 10% to 15%, 12% to 18%, 20% to 30%, etc., as well as all integers including values of 10%, 11%, 12%, 13% and the like up to 30%, and will also be understood to include any value between valid integers within the stated range, such as 10.5%, 15.5%, 25.5%, and the like. It is understood that the term “vehicle” or “vehicular” or other similar term as used herein is inclusive of motor vehicles in general such as passenger automobiles including sports utility vehicles (SUV), buses, trucks, various commercial vehicles, watercraft including a variety of boats and ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles and other alternative fuel vehicles (e.g. fuels derived from resources other than petroleum). As referred to herein, a hybrid vehicle is a vehicle that has two or more sources of power, for example both gasoline-powered and electric-powered vehicles. The present invention pertains to a polyester sound absorption material, a method of manufacturing a molded product using the polyester sound absorption material, and a molded product manufactured by the method. Hereinafter, a description will be given of the method of manufacturing the molded product using the polyester sound absorption material of the present invention, and additionally, the polyester sound absorption material of the present invention and the molded product using the sound absorption material20are described. In an aspect, the method of manufacturing the molded product using the polyester sound absorption material includes manufacturing a sound absorption material20, for example, by mixing a base fiber including a polyester resin, an adhesive fiber including a low-melting-point polyester resin and a hollow fiber including a polyester resin, preheating the sound absorption material20in an oven, and manufacturing a molded product by subjecting the preheated sound absorption material20and a skin member10together to cold pressing. FIGS.1and2show flowcharts showing the process of manufacturing the molded product using the polyester sound absorption material according to the present invention. With reference thereto, individual steps are specified below. Manufacturing Sound Absorption Material20 A sound absorption material20may include, for example, by mixing a base fiber including a polyester resin, an adhesive fiber including a low-melting-point polyester resin and a hollow fiber including a polyester resin. Particularly, the sound absorption material20including a web including the base fiber, the adhesive fiber and the hollow fiber may be manufactured. 
The web may be manufactured through any one process selected from a dry process, a wet process, a spunbond process, and combinations thereof. The process of manufacturing the web is not particularly limited in the present invention, so long as it is a process typically used to manufacture a web included in a sound absorption material 20 (i.e., “mixing fibers” in the present invention may be used in the same sense as “forming a web”, to avoid confusion). The sound absorption material 20 of the present invention may include at least one web, which may be stacked. When the sound absorption material 20 includes two or more webs, the webs may be bonded and stacked through any one process selected from the group consisting of adhering, heating, needle punching, jetting, and combinations thereof. The base fiber may include one or more selected from the group consisting of polyethylene terephthalate (PET), polybutylene terephthalate (PBT), polytrimethylene terephthalic acid (PTT), polyethylene naphthalate (PEN), polyethylene terephthalate glycol (PETG), and polycyclohexane dimethylene terephthalate (PCT). The base fiber may preferably have a fineness of about 3 to 10 denier (De). The web may suitably include an amount of about 20 to 40 wt % of the base fiber based on the total weight of the web. The adhesive fiber may include a polyester resin having a low melting point. The melting point of the adhesive fiber may preferably range from about 80 to about 130° C. The adhesive fiber may preferably include one or more selected from the group consisting of polyethylene terephthalate (PET), polyethylene (PE), polypropylene (PP), and polyamide (PA). Unlike the base fiber and the hollow fiber, the adhesive fiber melts at a relatively low temperature, which may increase the bonding force between the base fiber and the hollow fiber of the present invention. Moreover, the sound absorption material 20 may be integrally attached to the skin member 10 without a problem of detachment therefrom. The web of the present invention may include an amount of about 40 to 60 wt % of the adhesive fiber based on the total weight of the web. When the amount of the adhesive fiber is less than about 40 wt %, the stiffness and durability of the sound absorption material 20 may decrease. When the amount thereof is greater than about 60 wt %, the sound absorption material 20 may become hardened. The hollow fiber may suitably include one or more pores therein, and the cross-section thereof may have a circular shape, a non-circular shape, or a combination thereof. The hollow fiber may suitably include one or more selected from the group consisting of polyethylene terephthalate (PET), polybutylene terephthalate (PBT), polytrimethylene terephthalic acid (PTT), polyethylene naphthalate (PEN), polyethylene terephthalate glycol (PETG), and polycyclohexane dimethylene terephthalate (PCT). The hollow fiber may preferably have a hollow core ratio of about 15 to 25%. The hollow fiber may include crimps, and the number of crimps may preferably be about 10 to 25/inch. When the number of crimps is less than about 10/inch, the elastic force of the fiber may be reduced and thus the shape restorability of the sound absorption portion 21 after being subjected to external pressure may decrease; moreover, the sound absorption portion 21 may become depressed and wrinkled.
When the number of crimps is greater than about 25/inch, the elastic force of the fiber may be excessively increased and thus problems in moldability of the sound absorption material20may occur, which causes a problem of blurring the boundary of the sound absorption portion21of the sound absorption material20. The hollow fiber may suitably have a fineness of about 3 to 15 denier (De). The web may suitably include an amount of about 15 to 25 wt % of the hollow fiber, or particularly an amount of about 18 to 23 wt % based on the total weight of the web. When the amount of the hollow fiber is less than about 15 wt %, the sound absorption portion21having a convex shape may not be properly formed on the sound absorption material20of the present invention. When the amount thereof is greater than about 25 wt %, the effect due to the addition of the hollow fiber may become insignificant. Preheating The heat absorption material20of the present invention including the web may be heat-treated in an oven. The heat treatment may be performed for 60 to 70 sec at a temperature of about 180 to 300° C. Through the heat treatment, the adhesive fiber of the present invention may be partially or completely melted, and the sound absorption material20may be made suitable for molding into a desired shape. The oven may be used without particular limitation, so long as it may heat-treat the entirety of the sound absorption material20. For example, the heat treatment method and device are not particularly limited. The sound absorption material20may be heat-treated alone, or may be heat-treated in contact with the separately prepared skin member10. Manufacturing Molded Product The preheated sound absorption material20and the skin member10may be pressed using a cold press to produce a molded product. The skin member10may generally include a high-density nonwoven fabric capable of being applied to interior materials for vehicles. The skin member10may include one or more selected from the group consisting of polyethylene terephthalate (PET), polybutylene terephthalate (PBT), polytrimethylene terephthalic acid (PTT), polyethylene naphthalate (PEN), polyethylene terephthalate glycol (PETG), and polycyclohexane dimethylene terephthalate (PCT). The skin member10may be preferably composed of the same substance as any one of substances included in the sound absorption material20in view of recycling. For example, the skin member10may include a substance that is the same as the base fiber of the sound absorption material20. Manufacturing the molded product may preferably include preparation of a mold30, applying and compression. Preparation of Mold30(S1) A mold30having the shape of a molded product may be prepared. The cold press of the present invention may further include the mold30. The mold30may be prepared as a frame for manufacturing a part into a desired shape. The mold30may include a concave portion31that is able to form a sound absorption portion21having a convex shape on the sound absorption material20. Preferably, the molded product of the present invention is configured such that the skin member10and the sound absorption material20are integrally molded. A portion of the sound absorption material20that requires more sound absorption performance may be needed, depending on the location of the part. 
For example, the inner shape of the mold30may be designed so that pressing is not performed at all, or so that only slight pressing may be performed such that a portion of the sound absorption material20requiring more sound absorption performance is formed thicker. As shown inFIG.2, a portion of the mold30in a direction contacting the sound absorption material20may be formed with a concave portion31. Seating (S2) The sound absorption material20and the skin member10may be placed (seated) in the mold30. The sound absorption material20and the skin member10may be provided separately or may be provided in the form of being laminated together, and may then be disposed or positioned. As such, the mold30, which applies pressure by contacting the sound absorption material20, preferably includes a concave portion31. Compression (S3) A molded product may be manufactured by pressing the sound absorption material20and the skin member10. The pressing of the present invention may be performed through cold pressing. For example, the cold pressing may preferably be conducted at a temperature of about 10 to 15° C. for about 40 to 50 sec. The skin member10and the sound absorption material20may be laminated through pressing and molded in the shape of a mold30. The skin member10and the sound absorption material20may be laminated and integrated into a single molded product. The sound absorption material20included in the molded product may include a sound absorption portion21. The sound absorption portion21may be formed by the concave portion31of the mold30, and may have a specific embossed shape. Typically, the sound absorption portion21, which is formed so as to have a predetermined thickness in a convex shape, may not maintain the initial thickness of the sound absorption portion21a predetermined time after pressing. This is because it is difficult to maintain the shape thereof due to the characteristics of general fiber substances included in the sound absorption material20. As shown inFIG.2, when using a typical fiber substance S3′, the sound absorption portion21may shrink and become distorted over time. In the sound absorption material20including the base fiber, the adhesive fiber and the hollow fiber in the predetermined amounts, the thickness change of the sound absorption portion21may be very small. The sound absorption portion21included in the sound absorption material20may have a thickness of about 8 mm to 30 mm, and the sound absorption portion21may exhibit a thickness change of less than about 3%, or particularly less than about 1%. When the thickness of the sound absorption material20is about 4 mm or less, the thickness change may preferably be less than about 1.5-2%, and when the thickness of the sound absorption material20is about 1 to 3 mm, the thickness change may preferably be less than 1%. The time taken for the above thickness change to occur is not particularly limited, and the thickness change means the thickness difference starting immediately after molding until the thickness of the sound absorption portion21, molded in the shape of the concave portion31through cold pressing, no longer changes. EXAMPLE A better understanding of the present invention will be given through the following examples, which are not to be construed as limiting the present invention. 
Example 1 A felt, including 30 wt % of a base fiber including polyethylene terephthalate, 50 wt % of an adhesive fiber including low-melting-point polyethylene terephthalate and 20 wt % of a hollow fiber including polyethylene terephthalate and having a hollow core ratio of 18 wt % and a circular cross-section, was prepared, and the felt was temporarily bonded to a skin member10including polyethylene terephthalate fiber having a fineness of 7 denier, placed in an oven, and preheated at a temperature of 200° C. for 60 sec. The preheated felt and skin member10were disposed or positioned in a mold30of a cold press. Here, the concave portion31is formed in the mold30located in a direction contacting the felt. The bonded felt and skin member10were subjected to cold pressing at a temperature of 15° C. for 50 sec with the mold30to produce a molded product. Here, the thickness of the sound absorption portion21formed immediately after the pressing was 10 mm. The molded product manufactured in Example 1 is shown inFIG.3B. Comparative Example 1 A felt, including 50 wt % of a base fiber including polyethylene terephthalate and 50 wt % of an adhesive fiber including low-melting-point polyethylene terephthalate, was prepared, and the felt was temporarily bonded to a skin member10, placed in an oven, and preheated at a temperature of about 320° C. (the inner atmosphere temperature of the oven was 190° C.) for 60 sec. The preheated felt and skin member10were applied and seated in a mold30of a cold press. Here, the concave portion31is formed in the mold30located in a direction contacting the felt. The bonded felt and skin member10were subjected to cold pressing at a temperature of 15° C. for 50 sec with the mold30to produce a molded product. Here, the thickness of the sound absorption portion21formed immediately after the pressing was 10 mm. Comparative Example 2 An extrusion sheet, including 70 wt % of polypropylene and 30 wt % of talc, was prepared, and a skin member10was temporarily bonded to the upper and lower surfaces of the extrusion sheet, placed in an oven, and preheated at a temperature of about 320° C. (the inner atmosphere temperature of the oven was 190° C.) for 60 sec. The preheated extrusion sheet and skin member10were disposed and positioned in a mold30of a cold press. The bonded extrusion sheet and skin member10were subjected to cold pressing at a temperature of 15° C. for 50 sec with the mold30to produce a molded product. A sound absorption pad60having a thickness of 10 mm was attached to the surface of the manufactured molded product using an adhesive at the same position as the sound absorption portion21of Example 1. Here, the sound absorption pad60was composed of a nonwoven fabric (3M Thinsulate) made of microfiber including polypropylene. The molded product manufactured in Comparative Example 2 is shown inFIG.3A. Example 2 The sound absorption pad60of Comparative Example 2 was attached to a portion of the sound absorption portion21of the molded product manufactured in Example 1. The molded product manufactured in Example 2 is shown inFIG.3C. Example 3 The sound absorption pad60of Comparative Example 2 was attached to the entire sound absorption portion21of the molded product manufactured in Example 1. The molded product manufactured in Example 3 is shown inFIG.3D. 
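Because the description above gives explicit windows for the blend and the process (base fiber about 20 to 40 wt %, adhesive fiber about 40 to 60 wt %, hollow fiber about 15 to 25 wt %, preheating at about 180 to 300° C. for 60 to 70 sec, cold pressing at about 10 to 15° C. for 40 to 50 sec) and states that the thickness change of the sound absorption portion 21 is preferably less than about 3%, those numbers can be gathered into a small range check. The sketch below is the editor's illustration rather than tooling from the disclosure; the key names, the helper functions, and the 9.8 mm settled thickness used in the example are assumptions.

```python
# Illustrative check of a felt recipe and process settings against the windows
# stated in this description. The range table, key names, and helper functions
# are the editor's assumptions, not part of the disclosure.

RANGES = {
    "base_wt_pct":     (20, 40),    # base fiber, wt % of the web
    "adhesive_wt_pct": (40, 60),    # low-melting-point adhesive fiber, wt %
    "hollow_wt_pct":   (15, 25),    # hollow fiber, wt %
    "preheat_temp_c":  (180, 300),  # oven preheating temperature, deg C
    "preheat_time_s":  (60, 70),    # oven preheating time, s
    "press_temp_c":    (10, 15),    # cold-press temperature, deg C
    "press_time_s":    (40, 50),    # cold-press time, s
}

def out_of_range(settings: dict) -> list:
    """Parameters of `settings` that fall outside the stated windows."""
    return [name for name, (lo, hi) in RANGES.items()
            if name in settings and not (lo <= settings[name] <= hi)]

def thickness_change_pct(t_after_molding_mm: float, t_settled_mm: float) -> float:
    """Thickness change of the sound absorption portion 21, in percent
    (difference between the thickness right after molding and the settled thickness)."""
    return abs(t_after_molding_mm - t_settled_mm) / t_after_molding_mm * 100.0

# Example 1 above: 30/50/20 wt %, preheated at 200 deg C for 60 s, cold pressed
# at 15 deg C for 50 s, sound absorption portion 10 mm right after pressing.
example_1 = {"base_wt_pct": 30, "adhesive_wt_pct": 50, "hollow_wt_pct": 20,
             "preheat_temp_c": 200, "preheat_time_s": 60,
             "press_temp_c": 15, "press_time_s": 50}
print(out_of_range(example_1))          # -> [] (all within the stated windows)
print(thickness_change_pct(10.0, 9.8))  # -> 2.0 (9.8 mm is a hypothetical settled value)
```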
Comparative Examples 3 and 4

The molded products of Comparative Examples 3 and 4 were manufactured in the same manner as in Example 1, with the exception that the composition of the felt was adjusted as shown in Table 1 below.

TABLE 1
                              Example 1    Comparative Example 3    Comparative Example 4
Felt  Base fiber (wt %)           30                 35                       20
      Adhesive fiber (wt %)       50                 55                       50
      Hollow fiber (wt %)         20                 10                       30

Test Example 1

After waiting until the thickness of the sound absorption portion 21 of the molded product manufactured in each of Example 1 and Comparative Example 1 no longer changed, the final molded products were compared, and are shown in FIGS. 4A and 4B. FIG. 4A shows the molded product manufactured in Example 1 and FIG. 4B shows the molded product manufactured in Comparative Example 1. As shown in FIG. 4A, it can be seen that the sound absorption portion 21 maintained its shape without shrinking or distortion, whereas the molded product of FIG. 4B shows that portions of the sound absorption portion 21 shrank and became distorted.

Test Example 2

The molded product manufactured in each of Comparative Example 2 and Examples 1 to 3 was measured for sound absorption performance in the frequency range of 400 to 10,000 Hz in a simple reverberation chamber. The results thereof are shown in FIG. 5. With reference to the graph, even when the sound absorption pad was not attached, the sound absorption performance of Example 1 was superior to that of Comparative Example 2. Moreover, when the sound absorption pad 60 was attached in Example 1, as in Examples 2 and 3, it can be confirmed that a molded product having further improved sound absorption performance was obtained.

Test Example 3

The molded product manufactured in each of Comparative Example 2 and Example 1 was mounted to an actual vehicle, and noise in the rear seat during driving (FIG. 6A) and trunk indoor proximity noise (FIG. 6B) were measured. With reference thereto, it can be confirmed that there was no difference between the effect of Example 1 of the present invention and the effect of Comparative Example 2.

Test Example 4

The molded products of Example 1 and Comparative Examples 3 and 4 were evaluated for outer appearance of the sound absorption portion 21, part stiffness and sound absorption performance. The results thereof are shown in Table 2 below.
TABLE 2

                                              Example 1   Comparative Example 3   Comparative Example 4
Formability of sound absorption portion 21        10                6                      10
Strength of molded product   Warping               8                9                       5
                             Shaking               8                9                       4
Sound absorption performance                     100%              60%                    100%

* Formability of sound absorption portion 21: The outer appearance of the sound absorption portion 21, such as distortion, was evaluated by 10 persons, and an average of scores thereof was calculated (out of 10).
* Strength of molded product: Based on the result in which the extent of warping/shaking of the molded product of Comparative Example 2 was determined to be 10, the strength of each molded product (warping, shaking) was evaluated.
* Warping: The extent of endurance of the molded product was measured when the molded product was fixed on the floor and force was applied in a direction perpendicular thereto.
* Shaking: The extent of endurance of the molded product was measured when the molded product was fixed on the floor and force was applied in a direction parallel thereto.
* Sound absorption performance: Based on the arithmetic mean value of the AC (Absorption Coefficient) values obtained at specific frequencies of Example 1 in Test Example 2, the percentage of the AC arithmetic mean value of each of Comparative Example 3 and Comparative Example 4, determined at the same frequencies as Example 1, was calculated.

As shown in Table 2, in the molded product of Comparative Example 3, the formability of the sound absorption portion 21 was very poor, and the sound absorbing performance was also significantly lower than that of Example 1. Also, in Comparative Example 4, the sound absorption portion 21 was well formed, but the strength of the molded product was very low.

Although the exemplary embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.
11862136
Wherein,1—acoustic metamaterial structural unit,2—frame,3—the perforated constraint,4—the hole perforated on the constraint,5—connection rod,6—the perforated flexible membrane,7—the hole perforated on the membrane,8—the frame of the basic type of the acoustic metamaterial plate in Example 1,9—the whole piece of the perforated flexible membrane in Example 1,10—the hole perforated on the membrane in Example 1,11—the perforated constraint in Example 1,12—the hole perforated on the constraint in Example 1,13—double-arm connection rod in Example 1,14—the acoustic metamaterial structural unit in Example 1,15—the basic type of the acoustic metamaterial plate in Example 1,16—heat resource room,17—heat delivery room,18—heat resource,19—the air inflow direction,20—the routine perforated plate unit with the same area density and the same size of holes,21—the routine micro-perforated plate unit with the same area density and the same perforation rate,22—the incident acoustic chamber,23—the transmission acoustic chamber,24—acoustic source of the acoustic impedance tube,25—the incident acoustic tube of the acoustic impedance tube,26—the transmission acoustic tube of the acoustic impedance tube,27—the absorption sound wedge on the end of the acoustic impedance tube,28—terminals for fixing the microphone,29—microphone,30—the tested sample,31—the incident soundwave,32—the frame of the light and thin types of the acoustic metamaterial plate in Example 2,33—the whole piece of the perforated flexible membrane in Example 2,34—the hole perforated on the membrane in Example 2,35—the perforated constraint in Example 2,36—the hole perforated on the constraint in Example 2,37—connection rod with double arms in Example 2,38—the acoustic metamaterial structural unit in Example 2,39—the acoustic metamaterial plate in Example 2,39—frame of the acoustic metamaterial plate comprising units in different parameters in Example 3,40—the whole piece of the perforated flexible membrane in Example 3,41—the hole perforated on the membrane in Example 3,42—the perforated constraint in Example 3,43—the hole perforated on the constraint in Example 3,44—connection rod with double arms in Example 3,45—the acoustic metamaterial structural unit in Example 3,46—frame of the acoustic metamaterial plate with large size of hole in Example 4,47—the large size of the hole on the constraint in Example 4,48—the constraint with the small size of hole in Example 4,49—the small size of the hole on the constraint in Example 4,50—connection rod with double arms in Example 4,51—the acoustic metamaterial structural unit in Example 3,51-the basic acoustic metamaterial structural unit in Example 4,52—the whole piece of the perforated flexible membrane in Example 4,53—the small size of hole on the membrane in Example 4,54—the large size of hole on the membrane in Example 4,55—the frame of the general acoustic metamaterial structural unit in Example 4,56—the perforated constraint of the general acoustic metamaterial structural unit in Example 4,57—the connection rod of the general acoustic metamaterial structural unit in Example 4,58—the general acoustic metamaterial structural unit in Example 4,59—the frame of the general acoustic metamaterial structural unit deriving from Example 4,60—the perforated constraint of the general acoustic metamaterial structural unit deriving from Example 4,61—the holes perforated on the constraint of the general acoustic metamaterial structural unit deriving from Example,62—the first type of the connection rod of the 
general acoustic metamaterial structural unit deriving from Example 4,63—the perforated flexible membrane of the general acoustic metamaterial structural unit deriving from Example 4,64—the holes on the membrane of the general acoustic metamaterial structural unit deriving from Example 4,65—the second type of the connection rod of the general acoustic metamaterial structural unit deriving from Example 4,66—round frame in Example 5,67—the hole perforated on the constraint in Example 5,68—the constraint in Example 5,69—the connection rod with double arms in Example 5,70—the regular hexagon frame in Example 5,71—the connection rod with a single arm in Example 5,72—the rectangle frame in Example 5,73—the frame of the acoustic metamaterial structural unit covered the membrane on both surfaces in Example 6,74—the first layer of the perforated flexible membrane in Example 6,75—the second layer of the perforated flexible membrane in Example 6,76—the hole of the first layer of the perforated flexible membrane in Example 6,77—the hole of the second layer of the perforated flexible membrane in Example 6,78—the perforated constraint in Example 6,79—the connection rod with double arms in Example 6,80—the space of the chambers,81—the hole on the constraint in Example 6,82—the porous material in Example 6,83—the hole on the porous material in Example 6,84—the frame of the acoustic metamaterial structural unit with the function of heat transferring enforcement in Example 7,85—the first layer of the perforated flexible membrane in Example 7,86—the second layer of the perforated flexible membrane in Example 7,87—the hole of the second layer of the perforated flexible membrane in Example 7,88—the additional round hole on the second layer of the perforated flexible membrane in Example 7,89—the hole on the first layer of the perforated flexible membrane in Example 7,90—the perforated constraint in Example 7,91—the hole on the constraint in Example 7,92—the connection rod with double arms in Example 7,93—the additional holes with different sizes and shapes on the second layer of the perforated flexible membrane in Example 7,94—the elastic diaphragm in Example 7,95—the framework of the acoustic metamaterial plate in Example 8,96—the whole piece of the perforated membrane of the acoustic metamaterial plate in Example 8,97—the routine acoustic material in Example 8,98—the framework of the first layer of the acoustic metamaterial plates in Example 9,99—the whole piece of the perforated membrane of the first layer of the acoustic metamaterial plate in Example 9,100—the framework of the second layer of the acoustic metamaterial plates in Example 9,101—the whole piece of the perforated membrane of the second layer of the acoustic metamaterial plate in Example,102—the air gap between the two layers of the routine acoustic material plates,103—the porous material in Example 9,104—the acoustic metamaterial structural unit with the curved surface in Example 10,105—the wedge connector in Example 10. EMBODIMENTS In order to sufficiently describe the technical solutions for solving the present technical problem, the description are detailed as follows, combining the examples and the drawings. But, the technical solutions, the embodiment and the protection scope is not limited as shown herein. 
The "acoustic metamaterial" described herein is generally defined as follows: it is an artificially designed microstructure that possesses acoustic properties that natural and routine materials cannot realize, including the characteristics of "negative mass" and "negative bulk modulus" that are necessary for controlling low-frequency sound waves. In the present field, an acoustic metamaterial is a type of structure, and the material of which it is constructed is itself a routine material. The term "acoustic metamaterial" is commonly known to a person skilled in the art.

The present invention provides an acoustic metamaterial structural unit with the functions of soundproofing, gas permeability and heat-transfer enhancement. The acoustic metamaterial structural unit comprises a frame, at least one perforated constraint and a perforated flexible membrane covering at least one side. More than one acoustic metamaterial structural unit can be combined and spliced in the in-plane direction to form an acoustic metamaterial plate. Preferably, the size and material parameters of the combined acoustic metamaterial structural units are different. The acoustic metamaterial plate can be composited with a routine material plate to form a composite material structure. More than one layer of acoustic metamaterial plates can be stacked in the out-of-plane (vertical) direction to form an acoustic metamaterial composite plate. Preferably, the size and material parameters of the multiple layers of acoustic metamaterial plates are different.

The frame is connected with the perforated constraint by a rigid connection rod. The shapes and numbers of the rigid connection rods are not limited. The perforated membrane covers the frame and is constrained by the profile of the constraint. Preferably, the perforated constraint is flush with at least one surface of the frame. The shape of the frame is not limited. Regular shapes such as squares or hexagons are preferable, since they can realize the maximum area ratio of the structural unit for periodic extension. The perforated constraint contacts the perforated flexible membrane by line contact or surface contact. Preferably, the shape formed by the contact is a regular symmetric geometry; more preferably, the geometric shape is circular, square or hexagonal. The number of perforated constraints is not limited. At least one perforated constraint is placed, and the perforated constraint is generally placed in the frame near the area of maximum amplitude produced by the resonant vibration of a structural unit in which no perforated constraint is placed. For example, when the first resonant vibration mode is produced by a structural unit whose frame has a symmetric geometric shape and in which no constraint is placed, the amplitude of the central area is maximum. In the present technical solution, the constraint rigidly connected with the frame is used to adjust the flexural rigidity of the flexible membrane and can thereby be used to change the vibration frequency of the whole unit. In other words, the introduction of the constraint can selectively inhibit or create specific vibration modes of the flexible membrane, which may increase the design degrees of freedom in the out-of-plane direction of the acoustic metamaterial structural unit surface. The shape of the holes on the constraint is a regular symmetric geometry.
Preferably, the geometric shape of the holes is circular, which is chosen in consideration of ease of processing on the one hand and of the flow speed of the fluid on the other hand. The size of the hole is determined by both the flow rate passing through the hole and the soundproofing operating frequency band. For example, in occasions where the requirement for flow-passing efficiency is high, the hole should be large enough to reduce the loss of flow rate and the influence of the pressure drop. In occasions where the soundproofing operating frequency band approaches the low-frequency range, on the precondition that the geometric size and the material parameters of the membrane are not changed, a small hole may make the soundproofing operating frequency band approach the low-frequency range.

The materials of the frame and the perforated constraint are respectively selected from aluminum, steel, wood, rubber, plastic, glass, gypsum, cement, high-molecular polymers and composite fiber, which can satisfy the supporting strength of the structure itself and the requirement of structural rigidity in the operating frequency band. The material of the perforated flexible membrane can be any soft material, for example an elastic material with properties similar to rubber, or a high-molecular polymer membrane material such as polyvinyl chloride, polyethylene or polyetherimide. When the perforated flexible membrane is connected with the frame and the perforated constraint, no pretension is exerted and the flexible membrane is assembled in a freely spread condition. The holes on the flexible membrane can be pre-processed or can be perforated after the flexible membrane is applied.

The operating frequency of the acoustic metamaterial structural unit can be accurately designed by adjusting the structural sizes or the material parameters of the frame, the constraint, the holes on the constraint, the flexible membrane and the holes on the perforated membrane, with the result that the flow rate of the fluid and the operating frequency for soundproofing can be specified before production. For example, when the acoustic metamaterial structural unit needs to work at low frequency, small holes on the constraint and the membrane, a large frame size, a small constraint diameter, a thinner flexible membrane or a flexible membrane with a lower bending Young's modulus can be chosen. On the contrary, when the acoustic metamaterial structural unit needs to work at high frequency, large holes on the constraint and the membrane, a small frame size, a large constraint diameter, a thicker flexible membrane or a flexible membrane with a larger bending Young's modulus can be chosen. A qualitative sketch of these design heuristics is given after this passage.

In order to fully use the space of the structural unit and to increase the noise-reduction effect, for an acoustic metamaterial structural unit whose frame is relatively thick, both side surfaces of the frame can be covered with the perforated membrane. Both the thickness and the material parameters of the two layers of membrane can be different, so that two different main operating frequencies can be realized at the same time. Besides, porous materials such as glass fiber or open-cell and closed-cell foam can be filled into the space that is naturally formed between the two layers of membrane, so that the sound absorption and energy dissipation properties of the whole structure are further improved.
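The qualitative tuning rules above can be captured in a small helper. The following is a minimal, non-normative sketch; the function name `suggest_adjustments` and its parameter list are illustrative assumptions rather than anything defined by the patent, and the code simply encodes the stated directions of adjustment for shifting the operating frequency of a unit.

```python
# Minimal sketch of the qualitative design heuristics described above.
# The interface is an illustrative assumption; the patent does not define one.

def suggest_adjustments(target: str) -> dict:
    """Return the direction in which each design parameter should move
    to shift the soundproofing operating frequency of a structural unit.

    target: "lower" to move the operating band toward low frequency,
            "higher" to move it toward high frequency.
    """
    if target not in ("lower", "higher"):
        raise ValueError("target must be 'lower' or 'higher'")

    toward_low = {
        "hole_diameter_on_constraint_and_membrane": "decrease",
        "frame_size": "increase",
        "constraint_diameter": "decrease",
        "membrane_thickness": "decrease",
        "membrane_bending_youngs_modulus": "decrease",
    }
    if target == "lower":
        return toward_low
    # For a higher operating frequency, reverse every adjustment.
    flip = {"decrease": "increase", "increase": "decrease"}
    return {param: flip[direction] for param, direction in toward_low.items()}


if __name__ == "__main__":
    for param, direction in suggest_adjustments("lower").items():
        print(f"{param}: {direction}")
```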
The present invention also provides an acoustic metamaterial structural unit with a heat-transfer enhancement function. On the one hand, the diffusion efficiency of heat energy between the media on both sides of the hole is increased by the vibration of the structure itself under the excitation of the sound wave. On the other hand, when flow passes through, the vibration of the units may prevent the formation of the thermal boundary layer and the velocity boundary layer, and can further increase the turbulence intensity of the fluid at the wall of the heat source and accelerate the efficiency of heat exchange. Besides, the turbulence intensity of the near flow field is further increased by covering the other side of the acoustic metamaterial structural unit with a perforated flexible membrane or several layers of flexible membrane, in which the multiple holes on the membrane are either of the same size and round in shape, or of different sizes and shapes.

The present invention also provides an acoustic metamaterial plate that is combined and spliced in the in-plane direction from the said acoustic metamaterial structural units. The geometric size and the material parameters of the acoustic metamaterial structural units are not strictly limited to being the same. The frames of the acoustic metamaterial structural units are connected rigidly or flexibly. They can also be combined by wedge connectors to form an acoustic metamaterial plate with a certain curvature, which can satisfy the installation requirements on non-flat and non-vertical surfaces in practical engineering applications. The said acoustic metamaterial plate and a routine acoustic material plate can be constructed to form a composite structure, wherein the routine acoustic material plate is a porous material (such as glass fiber or open-cell and closed-cell foam), a routine perforated plate, a micro-perforated plate, a damping material plate, etc. The introduction of the routine acoustic material may widen the operating frequency band of the acoustic metamaterial plate to different extents.

An acoustic metamaterial composite structure is constructed by stacking multiple layers of acoustic metamaterial plates in the out-of-plane (vertical) direction. The geometric size and the material parameters of the acoustic metamaterial plates constructing the acoustic metamaterial composite structure are not strictly limited to being the same. The space formed by the multiple layers of acoustic metamaterial plates is filled with porous materials such as glass fiber or open-cell and closed-cell foam. The near-field sound waves produced by the neighboring layers of the acoustic metamaterial plates are reflected back and forth, which increases the sound energy density, and further the sound absorption of the porous material is increased. Therefore, the sound absorption coefficient of the porous material does not need to be large at low frequency, but its characteristic impedance should match that of the membrane, which avoids the sound wave failing to enter the porous material effectively. In the meanwhile, the influence of the filled porous material on the flexural vibration rigidity of the membrane should be considered, and the operating frequency of the originally designed acoustic metamaterial structural unit should be modified accordingly.

The embodiments below further describe the present invention in detail in combination with the drawings.
FIG.1is an embodiment of the present invention, and it is the acoustic metamaterial composite plate constructed with the array acoustic metamaterial structural unit in inner surface direction. The sizes of the acoustic metamaterial structural unit (1) as the basic array element should be different. Each structural unit comprises the frames (2), the perforated constraint (3), the frames connected with the perforated constraint by the double-arm connection rod (5). The perforated flexible membrane (6) covers on the top surface of the acoustic metamaterial structural unit, in which holes perforated on the membrane (7) and holes are the hole perforated on the constraint (4) are placed. FIG.2is a schematic drawing of the basic type of the acoustic metamaterial structural unit and the acoustic metamaterial composite plate constructed thereof in inner surface direction in example 1. Wherein, the geometric sizes of the acoustic metamaterial structural unit (14) as the basic array element is completely same. Each structural unit comprises the perforated constraint (11), the hole perforated on the constraint (12), the frames connected with the perforated constraint by the double-arm connection rod (13). The whole piece of the perforated flexible membrane (9) are covered on the one side of the frame (8) under the freely spreading conditions, and any pretension is not exerted on the membrane. The diameter of the hole perforated on the membrane (10) is same as the hole perforated on the constraint (12). Wherein, the shape of the frame of the acoustic metamaterial (14) is square, which the inner side length is 27 mm and the outer side length is 29 mm, the thickness is 5 mm. The diameter of the outer contour of the perforated constraint (11) is 10 mm. The diameter of the holes perforated on the constraint is 5 mm. The thickness of the perforated flexible membrane (9) is 0.05 mm, the diameter of the holes (10) on which is also 5 mm. The cross-section of the connection rod (13) connected the frame and the constraint is rectangular whose length is 4 mm and the width is 3 mm. The materials of the frame (8), the perforated constraint (11) and the double-arm connection rod (13) is FR-4 glass fiber and they are same. The material of the perforated flexible membrane (9) is polyetherimide. FIG.3is the finite element method (FEM) simulation result of the distribution of the stable temperature field of the basic type of the acoustic metamaterial plate (15) under the situation of the convection heat transfer in example 1. Wherein, in the FEM simulation mode, the white cylinder is defined as heat source (18), and the total power is 10 W. The white arrow represents the air inflow direction (19), the initial temperature of the cross-section is designed as 20° C., and the average flow rate of the air is 0.2 m/s. The model further comprises heat resource room (16) and heat delivery room (17). Except the side placed the basic type of the acoustic metamaterial plate of the two rooms, all other sides of the two rooms are designed as insulation wall. From the calculation result of the FEM, it can be seen that the higher temperature of the temperature field is 25° C. and the temperature of most area is near the room temperature (20° C.), which demonstrates that the function of ventilation and heat dissipation of the basic type of the acoustic metamaterial plate is good and the heat energy is not accumulated near the heat source. 
Therefore, when the basic type of the acoustic metamaterial plate in Example 1 is installed on one side of an insulated and closed chamber, no heat dissipation obstacle exists.

FIG. 4 is a schematic drawing of the finite element method (FEM) simulation models of the normal-incidence Sound Transmission Loss (STL) for the acoustic metamaterial structural unit (14), the routine perforated plate (20) with the same sizes of holes, and the micro-perforated plate (21) with the same area density and the same perforation rate. In the FEM simulation model, the incident acoustic chamber (22) and the transmission acoustic chamber (23) are placed on the front side and the back side of each of the three structural units. The incident sound wave from the incident acoustic chamber strikes the structural unit, and the reflected sound wave PR and the transmitted sound wave PT are produced. The Sound Transmission Loss in the normal direction is calculated by STL = 20 log10|PI/PT| (a short numerical sketch of this relation is given after this passage). In the FEM simulation model, the thickness of the routine perforated plate with the same area density and the same perforation rate is 1.2 mm; the material is 6063 aluminum alloy and the diameter of the hole is 5 mm. The thickness of the micro-perforated plate (21) with the same area density and the same perforation rate is 1.2 mm; the material is 6063 aluminum alloy and the diameter of the hole is 1 mm. The area density of the three structural units is 3.56 kg/m2 and the perforation rate of the three structural units is 2.33%.

FIG. 5 is a comparative drawing of the FEM simulation results of the Sound Transmission Loss in the normal direction for the acoustic metamaterial structural unit (14), the routine perforated plate unit (20) with the same sizes of holes, and the micro-perforated plate unit (21) with the same area density and the same perforation rate. Here, the solid line represents the acoustic metamaterial structural unit (14), the dashed line represents the routine perforated plate unit (20) with the same sizes of holes and the same area density, and the dotted line represents the micro-perforated plate unit (21) with the same area density and the same perforation rate. From the figure, it can be seen that the Sound Transmission Loss in the normal direction of the acoustic metamaterial structural unit (14) is higher than that of the routine perforated plate unit (20) with the same sizes of holes and the same area density in the frequency band below 680 Hz. The Sound Transmission Loss in the normal direction of the acoustic metamaterial structural unit (14) is higher than that of the micro-perforated plate unit (21) with the same area density and the same perforation rate in the frequency band below 880 Hz. Besides, the curve of the Sound Transmission Loss in the normal direction of the acoustic metamaterial structural unit (14) shows a spike at the frequency of 440 Hz, where the STL value reaches 17 dB. The spike STL value is higher than that of the micro-perforated plate unit (21) with the same area density and the same perforation rate by about 14 dB, and higher than that of the routine perforated plate unit (20) with the same sizes of holes and the same area density by about 15.4 dB. Besides, it can be seen that the low-frequency soundproofing performance of the micro-perforated plate unit (21) with the same area density and the same perforation rate is the worst.
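As a worked illustration of the STL relation quoted above, the following minimal sketch (an illustration only, not code from the patent or from COMSOL) computes STL = 20·log10|PI/PT| from an incident and a transmitted pressure amplitude, and also inverts the relation to show the amplitude ratio implied by the 17 dB spike reported at 440 Hz. The example pressure values are hypothetical.

```python
import math


def stl_from_pressures(p_incident: float, p_transmitted: float) -> float:
    """Normal-incidence sound transmission loss in dB,
    STL = 20*log10(|P_I / P_T|), as used in the FEM post-processing."""
    return 20.0 * math.log10(abs(p_incident / p_transmitted))


def pressure_ratio_from_stl(stl_db: float) -> float:
    """Inverse relation: |P_I / P_T| implied by a given STL value."""
    return 10.0 ** (stl_db / 20.0)


if __name__ == "__main__":
    # Hypothetical example: a 1 Pa incident plane wave and a 0.14 Pa
    # transmitted wave give roughly 17 dB of transmission loss.
    print(round(stl_from_pressures(1.0, 0.14), 1))   # ~17.1 dB
    # The 17 dB spike at 440 Hz corresponds to an amplitude ratio of ~7.1.
    print(round(pressure_ratio_from_stl(17.0), 1))   # ~7.1
```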
The reason is that the single micro-perforated plate unit lacks a back panel structure, so a Helmholtz resonant absorber cannot be formed and, further, effective chamber resonance and frictional energy dissipation cannot be realized.

FIG. 6 shows the FEM simulation results of the velocity directions of the air particles in the incident acoustic chamber and the transmission acoustic chamber when the acoustic metamaterial structural unit (14), the routine perforated plate unit (20) with the same sizes of holes and the same area density, and the micro-perforated plate unit (21) with the same area density and the same perforation rate are excited by a sound wave at a frequency of 440 Hz. FIG. 6(a) is the FEM simulation result of the acoustic metamaterial structural unit (14); FIG. 6(b) is the FEM simulation result of the routine perforated plate unit (20) with the same sizes of holes and the same area density; and FIG. 6(c) is the FEM simulation result of the micro-perforated plate unit (21) with the same area density and the same perforation rate.

FIG. 7 is a schematic drawing of the acoustic impedance tube test system for testing the normal-incidence Sound Transmission Loss of an acoustic material sample by the four-sensor method according to the standard ASTM E2611-09 (Standard test method for measurement of normal incidence sound transmission of acoustical materials based on the transfer matrix method). The acoustic impedance tube comprises the incident acoustic tube (25) and the transmission acoustic tube (26) of the acoustic impedance tube; the acoustic source (24) is placed on the incident acoustic tube (25). The broadband white-noise incident sound wave (31) produced by the acoustic source develops into a plane sound wave, whose wave-front amplitude tends to be uniform, before it reaches the tested sample. The sound absorption wedge (27) placed at the end of the transmission acoustic tube (26) can reduce the influence of multiple reflections of the sound on the test result. Besides, four terminals for fixing the microphones (28) are placed on the two sides of the tested sample. The microphones (29) (Model: 4187, Brüel & Kjær) are inserted into the terminals for fixing the microphones, two of which are used for the incident acoustic tube (25) and two for the transmission acoustic tube (26) of the acoustic impedance tube. The effective tested frequency band of the testing system is 70 Hz to 890 Hz, which covers the third-octave bands with center frequencies of 80 Hz to 800 Hz. The center line of the soundproofing curve can also factually reflect the soundproofing level of the sample at frequencies outside the said frequency band.

FIG. 8 is a comparative drawing of the FEM simulation results and testing results of the normal-incidence Sound Transmission Loss for the samples of the acoustic metamaterial structural unit, the routine perforated plate with the same area density and the same sizes of holes, and the micro-perforated plate with the same area density and the same perforation rate in Example 1.
Wherein,FIG.8(a)is the finite element method (FEM) simulation result of the acoustic metamaterial structural unit (14) in Example 1;FIG.8(b) is the finite element method (FEM) simulation result of the routine perforated plate unit (20) with the same area density and the same sizes of holes, andFIG.8(c)is the finite element method (FEM) simulation result of the micro-perforated plate unit (21) with the same area density and the same perforation rate. FIG.9is a schematic drawing of the acoustic metamaterial structural unit and the thin and light acoustic metamaterial plate constructed thereof in inner surface direction in Example 2. Wherein, the structure size of the acoustic metamaterial structural units (38) as the basic array element is same. The most difference between the present sample and the sample in Example 1 in structural types is stated as follows. The connection rod (37) connected the perforated constraint (35) and the frame (32) of the acoustic metamaterial structural units (38) is flush with the frame, so it avoids the design of the subsidence surface, which simplifies the process complexity. Further, the thickness of the whole acoustic metamaterial plate can be thinner. In Example 2, the shape of the frame of the acoustic metamaterial unit (38) is square; the inner side length is 35 mm; the width of the frame (32) is 3 mm; the thickness of the frame (32) is 1.5 mm. The diameter of the outer contour of the perforated constraint (35) is 12 mm. The diameter of the holes perforated on the constraint (36) is 7 mm. The whole piece of the perforated flexible membrane (9) that the thickness is 0.05 mm is covered on the one side of the frame (32) under the freely spreading conditions and any pretension is not exerted on the membrane. The diameter of the hole perforated on the membrane (34) is same as the hole perforated on the constraint (36), i.e., 7 mm. The cross-section of the connection rod (37) connected the frame (32) and the perforated constraint (35) is rectangular whose length is 3 mm and the width is 1.5 mm. The materials of the frame (32), the perforated constraint (35) and the double-arm connection rod (37) is common carbon steel with the grade of Q235A, and they are same. The material of the perforated flexible membrane is polyetherimide. The area density of the thin and light acoustic metamaterial plate is 4.20 kg/m2and the perforation rate is 3.48%. FIG.10is the testing result of the incident Sound Transmission Loss in normal direction for the light and thin acoustic metamaterial plate in Example 2. The sample photo is on the right of the Figure, and the outer diameter is 225 mm. FIG.11is a schematic drawing of the acoustic metamaterial structural unit and the acoustic metamaterial plate constructed the units with different parameters in inner surface direction in Example 3. Wherein, the structure sizes of the acoustic metamaterial structural units as the basic array element are different. The diameter of the inner constraint is different from the diameter of the holes perforated on the constraint. Take a certain acoustic metamaterial structural unit (45) as an example, the connection rod (44) connected the perforated constraint (42) and the frame (39) of the acoustic metamaterial structural units (45) is flush with the frame (39). The structure is similar with the acoustic metamaterial structural unit (38) in Example 2. 
FIG.12is the testing result of the incident Sound Transmission Loss in normal direction for the samples acoustic metamaterial plate constructed the units with different parameters in Example 3. The sample photo is on the right of the Figure, and the outer diameter is 225 mm. FIG.13is a schematic drawing of the acoustic metamaterial structural unit and the acoustic metamaterial plate constructed the units in inner surface direction in Example 4, and the large size of holes are placed on the acoustic metamaterial structural unit. FIG.14is the testing result of the incident Sound Transmission Loss in normal direction for the samples acoustic metamaterial plate constructed the units with the large size of holes in Example 4. The sample photo is on the left of the Figure, and the outer diameter is 225 mm. FIG.15is a schematic drawing of the two types of acoustic metamaterial structural units placed large size of holes deriving from Example 4. Wherein, the constraint perforated with large size of holes inFIG.15(a)is corresponding to the constraint in Example 4 that only the left side, right side and the frame of the whole unit connected the two sides are retained. The constraint perforated with large size of holes inFIG.15(b)is corresponding to the constraint in Example 4 that the left side, right side, the top side, the bottom side are connected with the frame of the whole unit. FIG.16is a structural schematic drawing of the acoustic metamaterial structural unit with different structural types of frames, the constraint and connection rod in Example 5. Wherein, the shape of the frame is spherical inFIG.16(a), and the perforated constraint connects with the frame by the double-arm connection rod. The shape of the frame is regular hexagon inFIG.16(b), and the perforated constraint connects with the frame by the double-arm connection rod. The shape of the frame is spherical inFIG.16(c), and the perforated constraint connects with the frame by the single-arm connection rod. The shape of the frame is regular hexagon inFIG.16(d), and the perforated constraint connects with the frame by the single-arm connection rod. InFIG.16(e), the shape of the frame is rectangular formed by combining the two adjacent square units, and the two perforated constraints respectively connects with the frame by the sing-arm connection rod. FIG.17is the testing result of the incident Sound Transmission Loss in normal direction for the acoustic metamaterial structural unit (the structure is shown inFIG.16(c)) and the samples the arrays of acoustic metamaterial plates constructed the units in inner surface direction in Example 5, and the acoustic metamaterial structural unit comprises the round frame and the single-arm constraint connection rod. FIG.18is a structural schematic drawing of the acoustic metamaterial structural unit covering the membrane on both surfaces in Example 6.FIG.18(a)is the lateral sectional view of the unit andFIG.18(b)is the exploded view of the unit. FIG.19is a structural schematic drawing of the acoustic metamaterial structural unit covering the membrane on both surfaces and the space between the first perforated flexible membrane and the second perforated flexible membrane is filled with the porous material, which is improved by the Example 6.FIG.19(a)is the lateral sectional view of the unit andFIG.19(b)is the exploded view of the unit. 
FIG.20is a comparative drawing of the testing result of the incident Sound Transmission Loss in normal direction for the sample of the array acoustic metamaterial plate constructed with the acoustic metamaterial structural units covering the membrane on the both sides in inner surface direction in example 6 and the sample of the basic acoustic metamaterial structural plate covering the membrane only on one surface in example 1. Wherein, the hollow circle represents the result of the basic acoustic metamaterial structural plate covering the membrane only on one surface in example 1; the solid line represents the result of the sample of the acoustic metamaterial structural units covering the membrane on the both surfaces in example 6. FIG.21is a comparative drawing of the testing result of the incident Sound Transmission Loss in normal direction for the sample of the array acoustic metamaterial plate constructed with the acoustic metamaterial structural units covering the membrane on the both surfaces in inner surface direction in example 6 and the sample of the array acoustic metamaterial plate constructed with the acoustic metamaterial structural units in inner surface direction covering the membrane on the both surfaces and the space between the two perforated membranes filled with the porous material in example 6. FIG.22is the first structural schematic drawing of the acoustic metamaterial structural units with the function of the heat-transferring enhancement in Example 7. The perforated flexible membrane is covered on one side of the acoustic metamaterial unit, on which several hole in different size or in same size are placed. Under the condition that the effect of soundproof of the acoustic metamaterial structural unit is not influenced, the turbulence intensity can be strengthened by increasing the number of holes perforated on the membrane.FIG.22(a)is the equiaxial lateral sectional view of the unit andFIG.22(b)is the exploded view of the unit. FIG.23is the second structural schematic drawing of the acoustic metamaterial structural units with the function of the heat-transferring enhancement in Example 7. The perforated flexible membrane is covered on the other side of the acoustic metamaterial unit, on which several hole in different size and in different shapes are placed. Under the condition that the effect of soundproof of the acoustic metamaterial structural unit is not influenced, the turbulence intensity can be strengthened by perforating different size and different shapes of holes on the membrane.FIG.23(a)is the equiaxial lateral sectional view of the unit andFIG.23(b)is the exploded view of the unit. FIG.24is the third structural schematic drawing of the acoustic metamaterial structural units with the function of the heat-transferring enhancement in Example 7. The flexible membrane is covered on the other side of the acoustic metamaterial unit, on which several hole in different size and in different shapes are placed. Under the condition that the effect of soundproof of the acoustic metamaterial structural unit is not influenced, the turbulence intensity or the flow rate of the near flow field can be strengthened by swinging or vibration produced by excitation of the incident soundwave.FIG.24(a)is the equiaxial lateral sectional view of the unit andFIG.24(b)is the exploded view of the unit. 
FIG.25is the testing result of the incident Sound Transmission Loss in normal direction for the sample of the first structural schematic drawing of the acoustic metamaterial structural units in Example 7. The sample photo is on the right of the Figure, and the outer diameter is 225 mm. FIG.26is the schematic drawing of the acoustic composite structure constructed with the acoustic metamaterial plate and the routine material plate in Example 8. The routine material plate is placed on the side of the acoustic metamaterial plate (comprises the frame95and the perforated flexible membrane96) facing the incident source. The routine material plate may be porous materials (such as glass fiber or open and closed holes of foam), routine perforated plate, micro-perforated plate, damping material plate and etc. The introduction of routine acoustic material may widen the operating frequency bond of the acoustic metamaterial plate in different extent. FIG.27is the testing result of the incident Sound Transmission Loss in normal direction for the sample of the acoustic composite structure constructed with the acoustic metamaterial plate and the porous materials plate in Example 8. The sample photo is on the right of the Figure, and the outer diameter is 225 mm. Wherein, the acoustic metamaterial plate is the basic type of the acoustic metamaterial plate in Example 1, and the structural parameters and the materials are same as the shown inFIG.7(a). The material of the routine acoustic material plate is glass fiber; the thickness is 10 mm and the nominal flow resistivity is 19000 Nsm−4. It can be shown from the figures, comparing with the basic type of acoustic metamaterial plate, the Sound Transmission Loss in normal direction of the present acoustic composite structure sample is higher than the basic acoustic metamaterial plate except near the frequency of 440 Hz corresponding to STL spike, especially in mid- or high frequency bond on the right of STL spike. The STL value of the present acoustic composite structure sample is lightly lower than basic acoustic metamaterial plate near the frequency of 440 Hz corresponding to STL spike. The reason is that the introduction of glass fiber is equivalent to increase the structural damping of the basic acoustic metamaterial plate, and the effect of the structural damping mainly embodies the amplitude on the frequency of the gentle resonance and the reflection resonance. As is mentioned above, for the acoustic metamaterial structural unit whose frame is thicker, the perforated flexible membrane can also be covered on the other side and the porous material can be filled the space between the two layers of the membrane. The soundproof function of the whole acoustic metamaterial increases and the inner space is fully used in the meanwhile. For the acoustic metamaterial structural unit whose frame is thinner, if a layer of perforated flexible membrane is also covered on the other side, the space between the two layers of the membrane is too narrow to fill the porous material. Besides, strong near-field couple produced by the two closely layers of membrane can destroy the operating conditions of the acoustic metamaterial structural unit covering one layer of flexible membrane, which results the soundproof effect becomes worse. 
In this case, following technical means may be considered: several acoustic metamaterial structural unit whose frame is thinner can be formed two layers or multi-layers of acoustic metamaterial composite plate by stack in the outer vertical direction. FIG.28is the schematic drawing of the acoustic composite plate constructed by two layers of acoustic metamaterial plates that they are pulled so as to form a certain space in Example 9. FIG.29is the schematic drawing of the acoustic composite plate constructed by two layers of acoustic metamaterial plates that they are pulled so as to form a certain space, and a layer of porous material is inserted in the space in Example 9. FIG.30is the testing result of the incident Sound Transmission Loss in normal direction for the sample of the acoustic metamaterial composite plate in Example 9. The sample photo is on the right of the figure, and the outer diameter is 225 mm. The glass fiber is filled between the two layers of the acoustic metamaterial composite plates with the same structure parameters and material parameters. FIG.31is the schematic drawing of the acoustic metamaterial plate with the curved surface in Example 10. The acoustic metamaterial structural units (104) of the present invention are connected with wedge connector (105) to form the acoustic metamaterial plate with a certain curvature. The present example is especially suitable for the shell or other installment structure that a certain curvature is required. EXAMPLES The testing method and the material resource for carrying out the present invention are stated as follow. The finite element method (FEM) simulation of the distribution of the stable temperature field of the acoustic metamaterial plate under the situation of the convection heat transfer is stated as follows. The FEM calculation model of the acoustic metamaterial plate is built based on the Acoustic-Solid Interaction, Frequency Domain Interface (Laminar Flow Conjugate Heat Transfer Interface, Stationary), a module in a finite-element analysis and solver software package, COMSOL Multiphysics 5.2. This simulation model comprises “solid physical fields for heat-transferring”, “fluid physical fields for heat-transferring” composed of the incident chamber and transmitting chamber, and “Laminar Flow field”. Heat source is placed in the incident chamber and the total power of the heat source is defined. The incident air cavity is also called as “heat source room”. The air entrance is placed on one side of the incident chamber, and the initial temperature and the average flow rate of the air are determined here. The air exit is placed on one side of the transmitting chamber, and the transmitting chamber is also called as “transmitting room”. Except the side placed the acoustic metamaterial plate of the two rooms, all other sides of the two rooms are designed as insulation wall. Then, steady calculation is carried out by the built-in steady implicit solver of the software. After the steady calculation, the temperature field distribution is visualized by the post-treatment module of the software. Calculation method for the FEM simulated STL of acoustic metamaterial units: The FEM calculation model of the acoustic metamaterial plate is built based on the Acoustic-Solid Interaction, Frequency Domain Interface (Laminar Flow Conjugate Heat Transfer Interface, Stationary), a module in a finite-element analysis and solver software package, COMSOL Multiphysics 5.2. 
This model comprises "solid physical fields" composed of the three structural units, and "the pressure acoustics physical field" composed of the incident chamber and the transmitting chamber. Coupling of the two fields is achieved by the acoustic-solid boundary condition. A Floquet periodicity boundary condition is applied to the unit cell so as to simulate the periodic extension of the unit cells in the practical fabrication. The incident sound waves are set as plane waves with a frequency range from 20 Hz to 1000 Hz in steps of 10 Hz in the incident chamber. The plane wave in the incident chamber excites the structural unit at normal incidence; a part of the sound energy is reflected, and the other part is transmitted into the transmitting chamber. The normal-incidence sound transmission loss (Sound Transmission Loss, STL for short) can be calculated from the incident and transmitted waves:

STL = 20 log10|PI/PT|

In the equation above, PI is the incident acoustic pressure amplitude and PT is the transmitted acoustic pressure amplitude. They can be obtained from the post-processing module of the COMSOL software.

Measurement method for testing the normal-incidence sound transmission loss of the sample in the acoustic impedance tube: According to the standard E2611-09 set by ASTM (American Society for Testing and Materials), "Standard test method for measurement of normal incidence sound transmission of acoustical materials based on the transfer matrix method", STL is measured by the four-microphone method in the impedance tube.

The materials used in the following examples are commercially available, for example FR-4 glass fiber, 6063 grade aluminum alloy, Q235A common carbon steel, polyvinyl chloride film, polyethylene film, polyetherimide film and like high polymers.

Example 1: The Preparation of the Basic Type of Acoustic Metamaterial Plate and the Test of Its Properties

The preparation of the basic type of acoustic metamaterial plate and the test of its properties are illustrated on the basis of FIGS. 2-8 as follows.

1. The Preparation of the Basic Type of Acoustic Metamaterial Plate Sample

The FR-4 glass fiber is milled into the frame as shown in FIG. 2. The width of the frame (8) is 2 mm, and the frame comprises a series of acoustic metamaterial structural units (14) with the same geometric shapes. The shape of each unit is square; the inner side length is 27 mm; the outer side length is 29 mm, and the thickness is 5 mm. In the same way, the FR-4 glass fiber is made into the perforated constraint (11) as shown in FIG. 2. The frame (8) is rigidly connected with the perforated constraint (11) by the double-arm rod (13); the specific connection is produced by an integral forming process (milling). The outer contour diameter of the perforated constraint (11) is 10 mm, and the diameter of the hole (12) perforated on the constraint is 5 mm. The cross-section of the double-arm connection rod (13) rigidly connecting the constraint (11) and the frame (8) is rectangular, with a length of 4 mm and a width of 3 mm. The whole piece of the perforated flexible membrane (9), whose thickness is 0.05 mm, is covered on one side of the frame (8) and the perforated constraint (11) in the freely spread condition. The diameter of the hole on the membrane is also 5 mm and it corresponds to the hole perforated on the constraint. The perforation rate implied by these unit dimensions is illustrated in the sketch following this passage.
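The 2.33% perforation rate quoted for the three compared structures is consistent with the Example 1 unit geometry, assuming one 5 mm through-hole per 29 mm × 29 mm periodic cell. The following minimal sketch is an illustration under that assumption only, not part of the patent; the 3.56 kg/m2 area density additionally depends on the frame, constraint, rod and membrane masses, which are not recomputed here.

```python
import math

# Hypothetical check of the perforation rate for the Example 1 unit cell,
# assuming one through-hole per periodically repeated 29 mm x 29 mm cell.
CELL_PITCH_MM = 29.0      # outer side length of one square unit (27 mm inner + 2 mm frame)
HOLE_DIAMETER_MM = 5.0    # hole on the constraint and on the membrane


def perforation_rate(cell_pitch_mm: float, hole_diameter_mm: float,
                     holes_per_cell: int = 1) -> float:
    """Open-hole area divided by the cell area, as a fraction."""
    hole_area = holes_per_cell * math.pi * (hole_diameter_mm / 2.0) ** 2
    cell_area = cell_pitch_mm ** 2
    return hole_area / cell_area


if __name__ == "__main__":
    rate = perforation_rate(CELL_PITCH_MM, HOLE_DIAMETER_MM)
    print(f"perforation rate ≈ {rate * 100:.2f}%")   # ≈ 2.33%, matching the value stated above
```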
During the practical operation, the hole (10) on the perforated flexible membrane (9) can be perforated by drilling, punching and digging after the perforated flexible membrane (9) is covered so as to avoid the situation that the holes on the perforated membrane and the constraint cannot be one-to-one correspondent. The material of the perforated flexible membrane (9) is polyetherimide film, and the type of covering is gluing. Finally, the basic acoustic metamaterial plate sample is obtained. 2. The Property Simulation of the Basic Acoustic Metamaterial Plate Sample FIG.3is the finite element method (FEM) simulation result of the distribution of the stable temperature field of the basic type of the acoustic metamaterial plate (15) under the situation of the convection heat transfer in example 1. Wherein, in the FEM simulation mode, the white cylinder is defined as heat source (18), and the total power is 10 W. The white arrow represents the air inflow direction (19), the initial temperature of the cross-section is designed as 20° C., and the average flow rate of the air is 0.2 m/s. The model further comprises heat resource room (16) and heat delivery room (17). Except the side placed the basic type of the acoustic metamaterial plate of the two rooms, all other sides of the two rooms are designed as insulation wall. From the calculation result of the FEM, it can be seen that the higher temperature of the temperature field is 25° C. and the temperature of most area is near the room temperature (20° C.), which demonstrates that the function of ventilation and heat dissipation of the basic type of the acoustic metamaterial plate is good and the heat energy is not accumulated near the heat source. Therefore, when the basic type of the acoustic metamaterial plate in Example 1 is installed on one side of the insulated and closed chamber, there is no heat dissipation obstacle existing. In order to reduce the calculation complexity, only one acoustic metamaterial structural unit is used in the FEM calculation model. The boundary condition of the unit is set as the Floquet periodic boundary condition, which is used for simulating the boundary installment of the whole piece acoustic metamaterial plate. As shown inFIG.4, during designing the FEM simulation mode, the front side and the back side of the structural unit respectively places the incident acoustic chamber (11) and the transmission acoustic chamber (12). In the meanwhile, both of the ends of the two acoustic chambers respectively place the acoustic absorption boundary, which avoids the calculation result is influenced by multi-reflections of soundwave. The incident soundwave from the incident acoustic chamber (11) strikes on the structural unit, and the reflection soundwave PRand the transmitter soundwave PTare produced. The Sound.Transmission Loss in normal direction is calculated by STL=20 log10|PI/PT|. 3. The Properties Test of the Basic Acoustic Metamaterial Plate Sample The incident Sound Transmission Loss of the acoustic material sample in normal direction is measured by the four-sensor method according to the standard of ASTM E2611-09.FIG.7is a schematic drawing of acoustic impedance tube test system. The acoustic impedance tube comprises the incident acoustic tube of the acoustic impedance tube (25) and the transmission acoustic tube (26) of the acoustic impedance tube (26); The acoustic source (24) placed on the acoustic impedance tube (25). 
The white noise excitation incident soundwave (31) in broad frequency produced by the acoustic source is developed to be the plane sound wave before it reaches the tested sample (30), which the wave-front amplitude tends to uniform. The soundwave vertically strikes on the front side of the tested sample (30). the absorption sound wedge (27) placed on the end of the transmitting acoustic impedance tube (26) can reduce the influence of the several times of reflection of the sound for the test result. Besides, four terminals for fixing the microphones (28) are placed on the two sides of the testing sample. The microphones (29) (Mode: 4187, Brüel & Kjær) are inserted into the terminals for fixing the microphones, each two of which respectively are used for the incident acoustic tube of the acoustic impedance tube (25) and the transmission acoustic tube of the acoustic impedance tube (26). The acoustic pressure frequency spectrum is tested by the four microphones, and further the delivery function is calculated. Finally, the incident Sound Transmission Loss of the acoustic material sample is obtained. The effective tested frequency bond is 70 Hz˜890 Hz for the testing system, which covers third octave frequency bond of the central frequency of 80 Hz˜800 Hz. The central line of the soundproof curve can also reflect the soundproof level of the sample factually in other frequency except the said frequency bond. Therefore, when the frequency bond of the testing soundproof result reaches the upper limit of 1600 Hz, it also reflects the soundproof ability of the sample truly and effectively. 4. Comparison with the Prior Art It mainly compares the Sound Transmission Loss in normal direction for the acoustic metamaterial structural unit, the routine perforated plate with the same sizes of holes, and the micro-perforated plate with the same area density and the same perforation rate here. Refer toFIG.4. The thickness of the routine perforated plate (20) with the same area density and the same perforation rate is 1.2 mm; the material is 6063 Aluminum alloy and the diameter of the hole is 5 mm. The thickness of the micro-perforated plate (21) with the same area density and the same perforation rate is 1.2 mm; the material is 6063 Aluminum alloy and the diameter of the hole is 1 mm. The area density of the three structural units is 3.56 kg/m2and the perforation rate of the three structural units is 2.33%. FIG.5is a comparative drawing of the finite element method (FEM) simulation results of the three structural units. Wherein, the solid line represents the acoustic metamaterial structural unit (14), the dashed line represents the routine perforated plate unit (20) with the same sizes of holes and the same area density, and the dotted line represents the micro-perforated plate unit (21) with the same sizes of holes and the same area density. From the figure, it can see that the Sound Transmission Loss in normal direction of the acoustic metamaterial structural unit (14) is higher than the routine perforated plate unit (20) with the same sizes of holes and the same area density in the frequency bond lower than 680 Hz. The Sound Transmission Loss in normal direction of the acoustic metamaterial structural unit (14) is higher than the micro-perforated plate unit (21) with the same area density and the same perforation rate in the frequency bond lower than 880 Hz. 
Besides, the curve of the Sound Transmission Loss in the normal direction of the acoustic metamaterial structural unit (14) shows a spike at the frequency of 440 Hz, where the STL value reaches 17 dB. The spike STL value is higher than that of the micro-perforated plate unit (21) with the same area density and the same perforation rate by about 14 dB, and higher than that of the routine perforated plate unit (20) with the same sizes of holes and the same area density by about 15.4 dB. Besides, it can be seen that the low-frequency soundproofing performance of the micro-perforated plate unit (21) with the same area density and the same perforation rate is the worst, which is directly related to the fact that a Helmholtz resonant absorber cannot be formed without a back plane structure.

In order to verify the correctness of the FEM model, FIG. 8 shows a comparative drawing of the testing results of the normal-incidence Sound Transmission Loss for the samples of the acoustic metamaterial structural unit in Example 1 and the routine perforated plate with the same area density and the same sizes of holes, compared with the FEM simulation results in FIG. 5. FIG. 8(a) is the FEM simulation result of the acoustic metamaterial structural unit (14) in Example 1. The solid line is the FEM simulation result, and the hollow circles are the testing result. The photos of the back surface and front surface of the sample are respectively shown on the left and right of the figure. The diameter of the outer circle is 225 mm, which comprises more than 40 whole acoustic metamaterial structural units, so the influence of the installation boundary condition on the whole plate is eliminated. From the STL frequency spectrogram, the two curves agree well in the frequency band of 100 Hz~1000 Hz, and they both show a spike at the frequency of 440 Hz, which proves that the FEM model used for analyzing the properties of the acoustic metamaterial structural unit is believable. FIG. 8(b) is the FEM simulation result of the routine perforated plate unit with the same area density and the same sizes of holes. The geometric size and the material parameters are the same as those of unit 20 shown in FIG. 4. The dashed line is the FEM simulation result, and the hollow circles are the testing result. The photo of the sample is shown on the left of the figure. The diameter of the outer circle is 225 mm. The two curves agree well in the frequency band of 100 Hz~1000 Hz, which proves that the FEM model used for analyzing the properties of the acoustic metamaterial structural unit is believable. FIG. 8(c) is the FEM simulation result of the micro-perforated plate unit (the geometric size and the material parameters are the same as those of unit 21 shown in FIG. 4). The dotted line is the FEM simulation result, and the hollow triangle curve is the testing result. The photo of the sample is shown on the left of the figure. The diameter of the outer circle is 225 mm. The two curves agree well in the frequency band of 100 Hz~1000 Hz, which proves that the FEM model used for analyzing the properties of the acoustic metamaterial structural unit is believable. The comparison between the testing results and the FEM simulation results of the three samples proves that the designed FEM model used for analyzing the properties of the acoustic metamaterial structural unit is correct and effective.

5.
5. Operation Mechanism Analysis FIG. 6 shows the finite element method (FEM) simulation results of the velocity directions of the air particles in the incident acoustic chamber and the transmission acoustic chamber when the acoustic metamaterial structural unit (14), the routine perforated plate unit (20) with the same hole sizes and the same area density, and the micro-perforated plate unit (21) with the same area density and the same perforation rate are excited by a soundwave of 440 Hz. FIG. 6(a) is the FEM simulation result of the acoustic metamaterial structural unit (14); FIG. 6(b) is the FEM simulation result of the routine perforated plate unit (20) with the same hole sizes and the same area density; and FIG. 6(c) is the FEM simulation result of the micro-perforated plate unit (21) with the same area density and the same perforation rate. The thick black arrow on the left represents the inflow direction of the soundwave. The soundwave is a plane wave, that is to say, the wave-front amplitude is uniform, and it is set to 1 Pa in the FEM model. The thin black arrows represent the velocity directions of the air particles. It can be seen from the figure that, when the acoustic metamaterial structural unit (14) is excited by the 440 Hz soundwave, a velocity vortex of the air particles clearly appears, and the direction of the air particle motion is perpendicular to, or even opposite to, the direction of the incident soundwave. By contrast, when the routine perforated plate unit (20) shown in FIG. 6(b) and the micro-perforated plate unit (21) shown in FIG. 6(c) are excited by the 440 Hz soundwave, the air particle directions on both sides are uniform and the same as the direction of the incident soundwave. Intuitively, this comparison shows that the velocity vortex of the air particles is what makes the corresponding normal-incidence Sound Transmission Loss curve of the acoustic metamaterial structural unit (14) exhibit the spike at the same incident frequency (compare with FIG. 5). The physical mechanism is as follows. At this frequency, the unperforated area of the flexible membrane of the acoustic metamaterial structural unit (14) vibrates in a mode opposite to that of the frame and the constraint, so that the acoustic field radiated by this area is opposite to, and cancels, the acoustic field transmitted through the holes perforated in the constraint and the flexible membrane. As a result, the transmitted acoustic pressure amplitude tends to a minimum, which is only 0.0323 Pa in the simulation model. The acoustic pressure in the incident chamber is partly reflected by the acoustic metamaterial structural unit (14) and reaches a maximum value of 1.84 Pa, which is about 1.8077 Pa higher than the minimum value. Under the same 440 Hz excitation, the other two structural units do not develop a similar velocity vortex of the air particles: the whole structural unit moves in phase, so the nearby air particles move in the same direction and the difference between the absolute values of the acoustic pressure amplitudes in the incident chamber and the transmitting chamber is small.
This reflects the fact that there is no spike on their normal-incidence Sound Transmission Loss curves and that the value is not as high as that of the acoustic metamaterial structural unit (14). Example 2: The Preparation of the Thin and Light Type of Acoustic Metamaterial Plate and the Test of the Properties 1. The Preparation of the Thin and Light Type of Acoustic Metamaterial Plate Sample As shown in FIG. 9, the frame (32) is produced by laser cutting from grade Q235A common carbon steel. The width is 3 mm and the thickness is 1.5 mm. The frame comprises a series of acoustic metamaterial structural units (38) with the same geometric shape. Each unit is square with an inner side length of 35 mm. In the same way, the perforated constraint (35) is made of grade Q235A common carbon steel. The frame (32) is rigidly connected with the perforated constraint (35) by the double-arm connection rod (37); this connection is produced by an integral forming process. The outer contour diameter of the perforated constraint (35) is 10 mm, and the diameter of the hole (36) perforated in the constraint is 5 mm. The cross-section of the double-arm connection rod (37) rigidly connecting the constraint (35) and the frame (32) is rectangular, with a length of 3 mm and a width of 1.5 mm. The whole piece of the perforated flexible membrane (33), whose thickness is 0.05 mm, is laid over one side of the frame (32) and the perforated constraint (35) in a freely spread state. The diameter of the hole in the membrane is 7 mm, and it corresponds to the hole perforated in the constraint. The hole (34) in the perforated flexible membrane (33) can be made by drilling, punching or cutting after the perforated flexible membrane (33) has been applied, so as to avoid the situation in which the holes in the perforated membrane and in the constraint do not correspond one to one. The material of the perforated flexible membrane (33) is polyetherimide film, and it is attached by gluing. Finally, the thin and light type of acoustic metamaterial plate sample is obtained as shown in FIG. 9. The main difference between the present thin and light type of acoustic metamaterial structural unit and the basic acoustic metamaterial plate sample in Example 1 is as follows: the connection rod (37) that connects the perforated constraint (35) and the frame (32) of the acoustic metamaterial structural units (38) is flush with the frame, so the recessed (subsidence) surface is not needed, which simplifies the manufacturing process, and the whole acoustic metamaterial plate can be made thinner. The area density of the present thin and light type of acoustic metamaterial plate is 4.20 kg/m2 and the perforation rate is 3.48%. 2. The Properties Test of the Basic Acoustic Metamaterial Plate Sample FIG. 10 is the test result of the normal-incidence Sound Transmission Loss for the light and thin acoustic metamaterial plate in Example 2. The sample photo is on the right of the figure, and the outer diameter is 225 mm. It comprises 21 whole acoustic metamaterial structural units. It can be seen from the figure that the spike appears at a frequency of 400 Hz and the corresponding STL value reaches about 17 dB. The frequency band in which the STL value in the normal-incidence Sound Transmission Loss spectrogram of the present acoustic metamaterial plate sample is higher than 6 dB is 300 Hz˜520 Hz.
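For readers unfamiliar with the two figures of merit quoted throughout the examples, the area density is simply the mass per unit face area of the plate and the perforation rate is the open hole area divided by the face area. The Python sketch below estimates both for a square cell with the nominal Example 2 dimensions; it is an added illustration under stated assumptions (shared frame walls, perforation counted from the 5 mm constraint hole, handbook material densities), so its output will not exactly reproduce the 4.20 kg/m2 and 3.48% reported above.

import math

RHO_STEEL = 7850.0   # kg/m^3, typical handbook value for Q235 carbon steel
RHO_PEI   = 1270.0   # kg/m^3, typical handbook value for polyetherimide film

def cell_estimates(inner=35e-3, frame_w=3e-3, frame_t=1.5e-3,
                   constraint_d=10e-3, hole_d=5e-3,
                   rod_len=12.5e-3, rod_w=3e-3, rod_t=1.5e-3,
                   membrane_t=0.05e-3):
    """Rough per-cell area density [kg/m^2] and perforation rate [-].

    Frame walls are assumed shared between neighbouring cells, so the cell
    pitch is inner + frame_w and the per-cell frame area is pitch^2 - inner^2.
    """
    pitch = inner + frame_w
    cell_area = pitch ** 2
    frame_area = cell_area - inner ** 2
    hole_area = math.pi * (hole_d / 2) ** 2
    constraint_area = math.pi * (constraint_d / 2) ** 2 - hole_area
    rod_area = 2 * rod_len * rod_w                       # double-arm rod, two arms

    steel_mass = RHO_STEEL * frame_t * (frame_area + constraint_area) + RHO_STEEL * rod_t * rod_area
    membrane_mass = RHO_PEI * membrane_t * (cell_area - hole_area)
    return (steel_mass + membrane_mass) / cell_area, hole_area / cell_area

area_density, perforation = cell_estimates()
print(f"~{area_density:.2f} kg/m^2, ~{100 * perforation:.2f}% open")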
Example 3: The Preparation of the Acoustic Metamaterial Plate Comprising Units with Different Parameters and the Test of the Properties 1. The Preparation of the Acoustic Metamaterial Plate Comprising Units with Different Parameters The schematic drawing of the acoustic metamaterial structural unit and of the acoustic metamaterial plate constructed from units with different parameters in the in-plane direction in Example 3 is shown in FIG. 11. The structural dimensions of the acoustic metamaterial structural units serving as the basic array elements are different: the diameter of the inner constraint and the diameter of the holes perforated in the constraint vary from unit to unit. Taking a certain acoustic metamaterial structural unit (45) as an example, the connection rod (44) connecting the perforated constraint (42) and the frame (39) of the acoustic metamaterial structural unit (45) is flush with the frame (39). The structure is similar to the acoustic metamaterial structural unit (38) in Example 2. The present acoustic metamaterial plate comprises four acoustic metamaterial structural units with different size parameters. The frame of each acoustic metamaterial structural unit is square; the inner side length is 35 mm, the width of the outer frame (46) is 3 mm, and the thickness is 1.5 mm. The outer contour diameters of the perforated constraints (42) comprise four sizes, from smallest to largest 5 mm, 10 mm, 12 mm, and 15 mm. The diameters of the holes perforated in the constraints comprise three sizes, from smallest to largest 3 mm, 5 mm, and 10 mm (36). The whole piece of the perforated flexible membrane (40), whose thickness is 0.05 mm, is laid over one side of the frame (39) in a freely spread state, and no pretension is exerted on the membrane. The diameter of the hole (41) perforated in the membrane is the same as that of the hole (43) perforated in the constraint (36). The cross-section of the connection rod (44) is rectangular, with a length of 3 mm and a width of 1.5 mm. The frame (39), the perforated constraint (42) and the double-arm connection rod (44) are all made of the same material, common carbon steel of grade Q235A. The material of the perforated flexible membrane is polyetherimide. The area density of the thin and light acoustic metamaterial plate is 4.40 kg/m2 and the perforation rate is 3.22%. 2. The Properties Test of the Basic Acoustic Metamaterial Plate Sample FIG. 12 is the test result of the normal-incidence Sound Transmission Loss for the sample acoustic metamaterial plate constructed from units with different parameters in Example 3. The sample photo is on the right of the figure, and the outer diameter is 225 mm. It comprises 21 whole acoustic metamaterial structural units. It can be seen from the figure that the spike appears at a frequency of 430 Hz and the corresponding STL value reaches about 21 dB. The frequency band in which the STL value in the normal-incidence Sound Transmission Loss spectrogram of the present acoustic metamaterial plate sample is higher than 6 dB is 210 Hz˜600 Hz. The reason is that different sizes of constraints and of the holes perforated in the constraints are used in the different acoustic metamaterial structural units, so several STL spikes are produced and the operating frequency band is obviously widened.
Example 4: The Preparation of the General Acoustic Metamaterial Structural Unit Provided with Large Holes, the Acoustic Metamaterial Plate Constructed from the Units in the In-Plane Direction, and the Test of the Properties 1. The Preparation of the Acoustic Metamaterial Plate Provided with Large Holes As shown in FIG. 13, the acoustic metamaterial plate is constructed from the acoustic metamaterial structural units (51) arranged in a periodic array in the in-plane direction. The perforated constraint (48) and the double-arm connection rod (50) of one structural unit are removed from each 3×3 cluster of units, so that a large hole is formed, and in this way the more general acoustic metamaterial structural unit (58) is obtained. The general acoustic metamaterial structural unit (58) comprises the frame (55), the constraint (56) perforated with large holes (47), and the connection rod (57). There are two sizes of holes perforated in the flexible membrane (52), namely the small hole (53) and the large hole (54). The frame of each acoustic metamaterial structural unit is square; the inner side length is 35 mm, the width of the outer frame (46) is 3 mm, and the thickness is 1.5 mm. The outer contour diameter of the perforated constraint (48) is 8 mm, and the diameter of the holes (49) perforated in the constraint is 3 mm. The whole piece of the perforated flexible membrane (52), whose thickness is 0.05 mm, is laid over one side of the frame (46) in a freely spread state, and no pretension is exerted on the membrane. The diameter of the small hole (53) perforated in the membrane is the same as that of the small hole (49) perforated in the constraint, both being 3 mm. The diameter of the large hole (54) perforated in the membrane is the same as that of the large hole (47) perforated in the constraint, both being 35 mm. The cross-section of the connection rod (50) connecting the constraint (48) and the frame (46) is rectangular, with a length of 3 mm and a width of 1.5 mm. The frame (46), the perforated constraint (48) and the double-arm connection rod (50) are all made of the same material, common carbon steel of grade Q235A. The material of the perforated flexible membrane is polyetherimide. The area density of the thin and light acoustic metamaterial plate is 3.66 kg/m2 and the perforation rate is 21.70%. 2. The Properties Test of the General Acoustic Metamaterial Plate Sample FIG. 14 is the test result of the normal-incidence Sound Transmission Loss for the sample acoustic metamaterial plate constructed from the units with large holes in Example 4. The sample photo is on the left of the figure, and the outer diameter is 225 mm. It can be seen from the figure that the spike appears at a frequency of 950 Hz and the corresponding STL value reaches about 23 dB. Compared with Examples 1-3, the effective operating frequency of the present acoustic metamaterial plate sample lies in a higher frequency band, and its bandwidth is also obviously narrower than that of any of Examples 1-3. In spite of this, the perforation rate of the present acoustic metamaterial plate sample reaches a remarkable 21.70%, which is very beneficial for allowing fluid to pass through freely.
3. Derived Types of the General Acoustic Metamaterial Plate Sample with Large Holes On the basis of this construction, two derived types of the general acoustic metamaterial structural unit are obtained, as shown in FIG. 15. In FIG. 15(a), the constraint (60) perforated with large holes (61) is connected to the frame (59) of the whole unit by a new connection rod (62) formed by retaining only the left and right arms of the connection rod (57) of the general acoustic metamaterial structural unit (58). In FIG. 15(b), the constraint (60) perforated with large holes (61) is connected to the frame (59) of the whole unit by a new connection rod (62) formed by retaining the four arms on the left, right, bottom and top sides of the connection rod (57) of the general acoustic metamaterial structural unit (58). Example 5: The Preparation of the Acoustic Metamaterial Structural Unit with Other Shapes of Frames, Connection Rods and Constraints, the Acoustic Metamaterial Plate Constructed from the Units in the In-Plane Direction, and the Test of the Properties 1. The Structure of the Acoustic Metamaterial Structural Unit with Other Shapes of Frames, Connection Rods and Constraints In FIG. 16(a), the frame (66) is circular, and the constraint (68) with its perforated hole (67) is connected to the frame (66) by the double-arm connection rod (69). In FIG. 16(b), the frame (70) is a regular hexagon, and the constraint (68) with its perforated hole (67) is connected to the frame (70) by the double-arm connection rod (69). In FIG. 16(c), the frame (66) is circular, and the constraint (68) with its perforated hole (67) is connected to the frame by the single-arm connection rod (71). In FIG. 16(d), the frame (70) is a regular hexagon, and the constraint (68) with its perforated hole (67) is connected to the frame (70) by the single-arm connection rod (71). In FIG. 16(e), the frame is rectangular, formed by combining two adjacent square units, and the two constraints (68) with their perforated holes (67) are respectively connected to the frame (72) by single-arm connection rods (71). It is worth noting that the single-arm connection rod is especially suited to frames of small size, since it can further reduce the weight of the whole unit on the precondition that the connection rigidity between the frame and the constraint is not changed. 2. The Preparation of the Acoustic Metamaterial Plate with the Circular Frame and the Single-Arm Connection Rods, and the Properties Test Thereof Example 5 describes the acoustic metamaterial structural unit with a circular frame and a single-arm connection rod. The inner diameter of the frame (66) is 30 mm and the thickness is 5 mm. The outer contour diameter of the perforated constraint (68) is 8 mm, and the diameter of the hole (67) perforated in the constraint is 5 mm. The whole piece of the perforated flexible membrane, whose thickness is 0.05 mm, is laid over one side of the frame (66) in a freely spread state, and no pretension is exerted on the membrane. The diameter of the hole perforated in the membrane is the same as that of the hole (67) perforated in the constraint, both being 5 mm. The cross-section of the connection rod (71) connecting the constraint (68) and the frame (66) is rectangular, with a length of 5 mm and a width of 3 mm. The frame (66), the perforated constraint (68) and the single-arm connection rod (71) are all made of the same material, FR-4 glass fiber.
The material of the perforated flexible membrane is polyetherimide. The area density of the thin and light acoustic metamaterial plate is 4.57 kg/m2 and the perforation rate is 2.78%. FIG. 17 is the test result of the normal-incidence Sound Transmission Loss for the acoustic metamaterial structural unit (the structure is shown in FIG. 16(c)) and for the sample array of acoustic metamaterial plates constructed from the units in the in-plane direction in Example 5, the acoustic metamaterial structural unit comprising the round frame and the single-arm constraint connection rod. It can be seen from FIG. 17 that the spike appears at a frequency of 630 Hz and the corresponding STL value reaches about 30 dB. The frequency band in which the STL value in the normal-incidence Sound Transmission Loss spectrogram of the present acoustic metamaterial plate sample is higher than 6 dB is 210 Hz˜600 Hz. Example 6: The Preparation of the Acoustic Metamaterial Structural Unit Covered with the Membrane on Both Sides, the Acoustic Metamaterial Plate Constructed from the Units in the In-Plane Direction, and the Test of the Properties 1. The Preparation of the Acoustic Metamaterial Structural Plate Covered with the Membrane on Both Sides FIG. 18 is a structural schematic drawing of the acoustic metamaterial structural unit covered with the membrane on both surfaces in Example 6. FIG. 18(a) is the lateral sectional view of the unit and FIG. 18(b) is the exploded view of the unit. The first perforated flexible membrane (74) and the second perforated flexible membrane (75) are respectively applied to the two sides of the same acoustic metamaterial structural unit. The diameters of the holes (76) perforated in the first perforated flexible membrane (74), of the holes (77) perforated in the second perforated flexible membrane (75), and of the holes perforated in the constraint are the same. This example is especially suited to the situation in which the thickness of the frame (73) is large. Not only is the other side of the frame put to use, but a new layer of vibration unit is also formed. The two layers of vibration units realize the superposition of multiple vibration layers, which can isolate the soundwave effectively. The present acoustic metamaterial structural unit is obtained by modifying the basic acoustic metamaterial structural unit of Example 1 so that the second perforated flexible membrane covers the other side. The material of the second perforated flexible membrane is polyetherimide and the thickness is 0.038 mm. The geometric parameters and the material parameters of the other constituent elements are the same as in Example 1. FIG. 19 is a structural schematic drawing of the acoustic metamaterial structural unit covered with the membrane on both surfaces in which the space between the first perforated flexible membrane (74) and the second perforated flexible membrane (75) is filled with a porous material (82), which is an improvement on Example 6. FIG. 19(a) is the lateral sectional view of the unit and FIG. 19(b) is the exploded view of the unit. The filled porous material (82) may be glass fiber or open-cell or closed-cell foam. Not only does it make full use of the chamber space between the two layers of perforated membrane, but it also obviously strengthens the acoustic performance of the whole acoustic metamaterial structural unit.
When the two perforated membranes are close to each other, the soundwaves between them are reflected back and forth and couple strongly, so the acoustic pressure between the two layers of membrane rises sharply and the sound energy density increases. In this case, the sound absorption efficiency of the filled porous material also increases remarkably. Thus, without increasing the thickness or the weight of the acoustic metamaterial structural unit, the transmitted acoustic energy is reduced remarkably and a better noise reduction effect is realized. It is worth noting that the characteristic impedance of the porous material should match that of the membrane, so that the soundwave can enter the porous material effectively. Meanwhile, the influence of the filled porous material on the flexural vibration rigidity of the membrane should be considered, and the operating frequency of the originally designed acoustic metamaterial structural unit should be corrected accordingly. 2. The Properties Test of the Acoustic Metamaterial Plate Sample Covered with the Membrane on Both Sides FIG. 20 is a comparative drawing of the measured normal-incidence Sound Transmission Loss for the sample of the array acoustic metamaterial plate constructed in the in-plane direction from the acoustic metamaterial structural units covered with the membrane on both sides in Example 6 and for the sample of the basic acoustic metamaterial plate covered with the membrane on one side only in Example 1. The difference between the two examples is that in Example 6 the second perforated flexible membrane covers the other side of the acoustic metamaterial structural unit. The sample photo is on the right of the figure, and the outer diameter is 225 mm. It can be seen from the figure that the spike appears at a frequency of 650 Hz, which is higher than that of the acoustic metamaterial structural unit sample in Example 1. For the acoustic metamaterial structural unit sample in Example 6, the frequency band in which the STL value in the normal-incidence Sound Transmission Loss spectrogram is higher than 6 dB is 300 Hz˜600 Hz. The reason is that covering the other side with the second perforated flexible membrane changes the system characteristics of the original basic acoustic metamaterial structural unit. In particular, on the one hand, the structure comprising two layers of membrane and the enclosed air space increases the structural rigidity of the original basic acoustic metamaterial structural unit. On the other hand, the vibrational degrees of freedom of the system increase, which gives the acoustic metamaterial structural unit both a negative-mass property (the motion response is opposite to the direction of the excitation) and a negative-bulk-modulus property (the change of volume is opposite to the direction of the excitation). The metamaterial property is thereby further strengthened.
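The impedance-matching requirement stated above is usually checked against a model of the porous layer. The Python sketch below is not taken from the disclosure; it uses the widely cited Delany-Bazley empirical relations, which estimate the characteristic impedance of a fibrous absorber from its static flow resistivity (for instance the 19000 Nsm−4 glass fiber used in the examples that follow), so that it can be compared with the impedance presented by the membranes.

import numpy as np

RHO0, C0 = 1.21, 343.0   # density of air [kg/m^3] and speed of sound [m/s]

def delany_bazley_impedance(freq_hz, sigma):
    """Characteristic impedance of a fibrous porous material (Delany-Bazley model).

    sigma is the static flow resistivity in N*s/m^4; the empirical fit is
    considered reliable roughly for 0.01 < RHO0*freq/sigma < 1.
    """
    X = RHO0 * freq_hz / sigma
    return RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)

for f in (200.0, 440.0, 1000.0):
    zc = delany_bazley_impedance(f, 19000.0)
    print(f"{f:6.0f} Hz: Zc ~ {zc.real:6.0f} {zc.imag:+6.0f}j Pa*s/m "
          f"(|Zc|/(rho0*c0) = {abs(zc) / (RHO0 * C0):.2f})")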
FIG. 21 is a comparative drawing of the measured normal-incidence Sound Transmission Loss for the sample of the array acoustic metamaterial plate constructed in the in-plane direction from the acoustic metamaterial structural units covered with the membrane on both surfaces in Example 6, and for the sample of the array acoustic metamaterial plate constructed in the in-plane direction from the acoustic metamaterial structural units covered with the membrane on both surfaces in which the space between the two perforated membranes is filled with the porous material, also in Example 6. The photo of the sample in which the space between the two perforated membranes is filled with the porous material is shown on the left of the figure, and the outer diameter is 225 mm. The filled porous material is glass fiber with a thickness of 10 mm and a nominal flow resistivity of 19000 Nsm−4. Filling with the porous material shifts the STL spike, originally at 650 Hz, to a higher frequency band, and the effective soundproofing band at high frequency is further widened. When the acoustic metamaterial structural unit itself is excited by the soundwave or by the flow field, multi-mode local resonance is produced, which improves the synergy between the velocity field and the temperature gradient field, and finally the effect of heat-transfer enhancement is realized. Moreover, adequate low-frequency soundproofing of the acoustic metamaterial structural unit is also retained; the resonance itself corresponds directly to full acoustic transmission. On the basis of the above considerations, when the operating condition of the acoustic metamaterial structural unit is unchanged or changed only slightly, for example when a layer containing several perforated flexible membranes or elastic membranes is applied to the other side of the structural unit, these modified structures can realize heat-transfer enhancement through the strong vibrations excited by the soundwave or the flow field. In this way, a family of acoustic metamaterial structural units with the function of heat-transfer enhancement is formed, together with the examples thereof. Example 7: The Preparation of the Acoustic Metamaterial Structural Unit with the Function of Heat-Transfer Enhancement, the Acoustic Metamaterial Plate Constructed from the Units in the In-Plane Direction, and the Test of the Properties 1. Three Different Structures of the Acoustic Metamaterial Structural Unit with the Function of Heat-Transfer Enhancement FIG. 22 is the first structural schematic drawing of the acoustic metamaterial structural units with the function of heat-transfer enhancement in Example 7. The perforated flexible membrane (86) is applied to one side of the acoustic metamaterial unit, and several round holes (88) of different sizes or of the same size are made in it. On the condition that the soundproofing performance of the acoustic metamaterial structural unit is not affected, the turbulence intensity can be strengthened by increasing the number of holes perforated in the membrane. FIG. 22(a) is the isometric sectional view of the unit and FIG. 22(b) is the exploded view of the unit. The size of the additional holes (88) in the perforated flexible membrane (86) may be the same as, or different from, the size of the hole (87) originally perforated in the membrane.
FIG. 23 is the second structural schematic drawing of the acoustic metamaterial structural units with the function of heat-transfer enhancement in Example 7. The perforated flexible membrane (86) is applied to the other side of the acoustic metamaterial unit, and several holes (93) of different sizes and different shapes are made in it. On the condition that the soundproofing performance of the acoustic metamaterial structural unit is not affected, the turbulence intensity can be strengthened by perforating holes of different sizes and shapes in the membrane. FIG. 23(a) is the isometric sectional view of the unit and FIG. 23(b) is the exploded view of the unit. The shape and the size of the additional holes (93) in the perforated flexible membrane (86) may be chosen arbitrarily; the shapes in the present example are round, rectangular, hexagonal and triangular, respectively. FIG. 24 is the third structural schematic drawing of the acoustic metamaterial structural units with the function of heat-transfer enhancement in Example 7. Several elastic membranes (94) are applied to the other side of the acoustic metamaterial unit, in which several holes of different sizes and shapes are made. On the condition that the soundproofing performance of the acoustic metamaterial structural unit is not affected, the turbulence intensity or the flow rate of the nearby flow field can be strengthened by the swinging or vibration produced by the excitation of the incident soundwave. FIG. 24(a) is the isometric sectional view of the unit and FIG. 24(b) is the exploded view of the unit. 2. The Preparation of the First Structure of the Acoustic Metamaterial Structural Units with the Function of Heat-Transfer Enhancement, the Acoustic Metamaterial Plate Constructed from the Units in the In-Plane Direction, and the Test of the Properties FIG. 25 is the test result of the normal-incidence Sound Transmission Loss for the sample of the first structure of the acoustic metamaterial structural units in Example 7. The sample photo is on the right of the figure, and the outer diameter is 225 mm. The present acoustic metamaterial structural unit is an improvement on that of Example 6 shown in FIG. 18: four additional holes, each 3 mm in diameter, are perforated in the first perforated flexible membrane (the thickness is 0.050 mm and the material is polyetherimide); the geometric parameters and material parameters of all other constituent elements are the same as in Example 6. It can be seen from the figure that the spike appears at a frequency of 85 Hz and the corresponding STL value reaches about 22 dB. The frequency band in which the STL value in the normal-incidence Sound Transmission Loss spectrogram is higher than 6 dB is 300 Hz˜1100 Hz. Example 8: The Preparation of the Acoustic Metamaterial Composite Structure and the Test of the Properties The acoustic metamaterial structural units of Example 1 are arranged in an array in the in-plane direction (the xy plane) to form the basic acoustic metamaterial plate. Glass fiber (97) with a thickness of 10 mm and a nominal flow resistivity of 19000 Nsm−4 is chosen as the routine acoustic material plate. The acoustic metamaterial plate and the routine acoustic material plate are combined; the different acoustic plates are in direct contact with each other and are pressed together slightly.
They can also be connected by elastic connections; for example, small rubber pads can be used to support the different acoustic material plates and isolate them from each other. Finally, the acoustic metamaterial composite structure is constructed as shown in FIG. 26. The normal-incidence Sound Transmission Loss curve measured by the acoustic impedance tube method is shown in FIG. 27, in which the circles correspond to the result of Example 1 and the dashed line is the result of the present acoustic metamaterial composite structure of Example 8. It can be seen from the figures that, compared with the basic acoustic metamaterial plate, the normal-incidence Sound Transmission Loss of the present acoustic composite structure sample is higher everywhere except near the 440 Hz frequency corresponding to the STL spike, especially in the mid- and high-frequency band to the right of the STL spike. Near the 440 Hz frequency corresponding to the STL spike, the STL value of the present acoustic composite structure sample is slightly lower than that of the basic acoustic metamaterial plate. The reason is that the introduction of the glass fiber is equivalent to increasing the structural damping of the basic acoustic metamaterial plate, and the effect of the structural damping mainly manifests itself in the amplitudes at the transmission resonance and the reflection resonance frequencies. Example 9: The Acoustic Metamaterial Composite Structure Constructed by Stacking Multiple Layers of Acoustic Metamaterial Plates in the Out-of-Plane Direction FIG. 28 is the schematic drawing of the acoustic composite plate of Example 9 constructed from two layers of acoustic metamaterial plates that are separated so as to form a certain space between them. The structure and material parameters of the two thin layers of acoustic metamaterial plates may be the same or different. They respectively comprise the framework (98) of the first layer of acoustic metamaterial plate, the whole piece of perforated membrane (99) of the first layer of acoustic metamaterial plate, the framework (100) of the second layer of acoustic metamaterial plate, and the whole piece of perforated membrane (101) of the second layer of acoustic metamaterial plate. There is an air gap between the two layers of acoustic material plates (102). FIG. 29 is the schematic drawing of the acoustic composite plate of Example 9 constructed from two layers of acoustic metamaterial plates that are separated so as to form a certain space between them, with a layer of porous material inserted in the space. They respectively comprise the framework (98) of the first layer of acoustic metamaterial plate, the whole piece of perforated membrane (99) of the first layer of acoustic metamaterial plate, the framework (100) of the second layer of acoustic metamaterial plate, and the whole piece of perforated membrane (101) of the second layer of acoustic metamaterial plate. The characteristic impedance of the porous material layer (103) should match the characteristic impedance of the two layers of membrane, so that the soundwave can enter the porous material effectively. Meanwhile, the influence of the filled porous material on the flexural vibration rigidity of the two layers of membrane (99) and (101) should be considered, and the operating frequency of the originally designed acoustic metamaterial structural unit should be corrected accordingly.
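The double-layer arrangement with an air gap described above has a well-known normal-incidence 'mass-air-mass' resonance, near which a double panel transmits more sound than a simple mass-law estimate suggests; this is one reason a porous filling between the layers is attractive. The Python sketch below is a generic textbook estimate, not the disclosure's own analysis; the plate area densities and the gap depth are illustrative values only.

import math

RHO0, C0 = 1.21, 343.0   # density of air [kg/m^3] and speed of sound [m/s]

def mass_air_mass_freq(m1, m2, gap):
    """Normal-incidence mass-air-mass resonance frequency [Hz] of a double panel.

    m1, m2: area densities of the two panels [kg/m^2]; gap: air-gap depth [m].
    """
    stiffness = RHO0 * C0 ** 2 / gap                 # acoustic stiffness of the air gap
    return math.sqrt(stiffness * (m1 + m2) / (m1 * m2)) / (2 * math.pi)

# Illustrative only: two plates like the 4.20 kg/m2 plate of Example 2 with a 10 mm gap.
print(f"~{mass_air_mass_freq(4.20, 4.20, 0.010):.0f} Hz")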
Besides, the porous material layer should be perforated so that its holes coincide with the holes perforated in the frame (98) and the constraint (100), and no obstacle to heat dissipation is created. FIG. 30 is the test result of the normal-incidence Sound Transmission Loss for the sample of the acoustic metamaterial composite plate in Example 9. The sample photo is on the right of the figure, and the outer diameter is 225 mm. Glass fiber is filled between the two layers of acoustic metamaterial composite plates, which have the same structure and material parameters. The structure parameters and material parameters of the acoustic metamaterial plate are the same as those of the thin and light type of acoustic metamaterial plate of Example 2 shown in FIG. 8. The thickness of the glass fiber is 10 mm and the nominal flow resistivity is 19000 Nsm−4. In FIG. 30, the hollow-box curve represents one layer of the thin and light type of acoustic metamaterial plate of Example 2; the dashed line represents the acoustic metamaterial composite plate constructed from two layers of the thin and light type of acoustic metamaterial plate of Example 2 with one 10 mm thick layer of glass fiber filled in the space formed between them. It can be seen that, in the frequency band of 100 Hz˜1000 Hz, the STL values of the two types of acoustic metamaterial composite plate samples are higher than that of the single layer of the thin and light type of acoustic metamaterial plate. Moreover, the increase in STL appears mainly in the mid- and high-frequency band to the right of the spike. From a comparison of the solid line and the dashed line, it can be seen that the spike frequency of the acoustic metamaterial composite plate sample filled with glass fiber moves to a higher frequency, and its effective operating frequency band is the widest of the three. Example 10: The Acoustic Metamaterial Plate with a Curved Surface Structure and the Method for Assembling It FIG. 31 is the schematic drawing of the acoustic metamaterial plate with a curved surface in Example 10. The acoustic metamaterial structural units (104) of the present invention are connected with wedge connectors (105) to form an acoustic metamaterial plate with a certain curvature. The wedge connector (105) may be rubber, acrylic or nylon; the material of the wedge connector in this Example is rubber. The present example is especially suitable for shells or other mounting structures for which a certain curvature is required. In the end, the above-mentioned examples are preferred ones only and are not intended to limit the present invention. Those skilled in the art should understand that various modifications and transformations could be made to the present invention. Every modification, equivalent alteration and improvement of the present invention within its spirit and principles should be encompassed in the protection scope of the present invention.
95,176
11862137
DETAILED DESCRIPTION OF THE EMBODIMENTS It is understood that the term “vehicle” or “vehicular” or other similar term as used herein is inclusive of motor vehicles in general such as passenger automobiles including sports utility vehicles (SUV), buses, trucks, various commercial vehicles, watercraft including a variety of boats and ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles and other alternative fuel vehicles (e.g., fuels derived from resources other than petroleum). As referred to herein, a hybrid vehicle is a vehicle that has two or more sources of power, for example, both gasoline-powered and electric-powered vehicles. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Throughout the specification, unless explicitly described to the contrary, the word “comprise” and variations such as “comprises” or “comprising” will be understood to imply the inclusion of stated elements but not the exclusion of any other elements. In addition, the terms “unit”, “-er”, “-or”, and “module” described in the specification mean units for processing at least one function and operation, and can be implemented by hardware components or software components and combinations thereof. Further, the control logic of the present disclosure may be embodied as non-transitory computer readable media on a computer readable medium containing executable program instructions executed by a processor, controller or the like. Examples of computer readable media include, but are not limited to, ROM, RAM, compact disc (CD)-ROMs, magnetic tapes, floppy disks, flash drives, smart cards and optical data storage devices. The computer readable medium can also be distributed in network coupled computer systems so that the computer readable media is stored and executed in a distributed fashion, e.g., by a telematics server or a Controller Area Network (CAN). The present disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the disclosure are shown. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present disclosure. The drawings and description are to be regarded as illustrative in nature and not restrictive, and throughout the specification, the same or similar constituent elements are explained by applying the same reference numeral. In the following description, dividing names of components into first, second, and the like is to divide the names because the names of the components are the same as each other, and an order thereof is not particularly limited. 
A vibration reduction device according to an embodiment of the present disclosure is configured to reduce noise transmitted through structures in various industrial fields such as vehicles, aircraft, home appliances, and mechanical structures. That is, noise generated from engines or motors in vehicles, aircraft, home appliances, and mechanical structures is transmitted through air or through structures. Accordingly, a vibration reduction device according to an embodiment of the present disclosure is attached to a structure and can be applied to reduce the noise transmitted through the structure. For example, the structure may be an inner panel or a support of an electronic product such as a washing machine, a refrigerator, a dishwasher, a microwave oven, an air conditioner, or a hot air fan. In addition, the structure may be a support or reinforcement for supporting a soundproof wall of a road or a rainwater drain pipe of a building, and may be a device for performing milling, cutting, extrusion, and molding. In addition, the structure may be a support or housing of rotation equipment such as a pump, compressor, and turbine of a power plant, or a support of a computer hard disk. In particular, the structure applied in the vehicle industry may be a roof panel as a part of the vehicle body, and may be a top panel disposed on the upper side of the cowl of an engine room. In addition, it can be applied not only to the part where vibration and noise are transmitted from the vehicle body, but also to all devices where vibration and noise are transmitted. In addition, the vibration reduction device according to the embodiment of the present disclosure is formed of an acoustic meta-material having an acoustic meta-structure, and the acoustic meta-material refers to a structure that is artificially designed to have a unique wave characteristic that cannot be found in nature. That is, unlike materials existing in nature, the acoustic meta-material refers to a medium having a zero or negative dielectric constant or a negative refractive index. By periodically arranging unit cells smaller than a wavelength, the acoustic meta-material can block propagation of waves by making the mass density or volumetric elastic modulus a negative value in a specific frequency band. In this case, a band in which a wavenumber corresponding to a specific frequency is empty occurs due to a local resonance effect. Such a band in which the frequency is empty is called a stop band, and theoretically, since there is no wave propagating in the stop band, the wave propagation can be completely blocked. That is, the unit cell is designed based on the stop band. FIG. 1 is a schematic diagram of a vibration reducing device according to an embodiment of the present disclosure. Referring to FIG. 1, a plurality of unit cells 10 formed of an acoustic meta-material are disposed to form a unit structure 5, and by attaching the unit structure 5 to a structure 1, noise and vibration transmitted from the structure 1 can be reduced. In this case, the unit cell 10 may be connected in plural through first bridges 20 such that a single unit structure 5 is formed. In addition, the unit structure 5 may be attached to the structure 1 to reduce vibration, and further, the unit structure 5 may be attached to other unit structure(s) by being connected to each other through second bridges 30.
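The stop-band mechanism described above is often summarized by the effective dynamic mass of a locally resonant unit cell, which becomes negative just above the resonance of the attached mass. The Python sketch below is a generic textbook-style illustration added here, not a model of the specific unit cell of this disclosure; the masses and the 500 Hz resonance are hypothetical.

import numpy as np

def effective_mass(omega, m0, m1, omega1):
    """Effective dynamic mass of a mass-in-mass (locally resonant) unit cell.

    m0: host mass, m1: attached resonator mass, omega1: resonator angular
    frequency. The result is negative just above omega1, which corresponds
    to the stop band in which waves cannot propagate.
    """
    return m0 + m1 * omega1 ** 2 / (omega1 ** 2 - omega ** 2)

f = np.array([300.0, 400.0, 480.0, 520.0, 560.0, 600.0, 700.0])   # Hz; avoids the pole at 500 Hz
m_eff = effective_mass(2 * np.pi * f, m0=0.002, m1=0.003, omega1=2 * np.pi * 500.0)
for fi, mi in zip(f, m_eff):
    print(f"{fi:5.0f} Hz   m_eff = {1000 * mi:+7.2f} g")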
FIG. 2 is a perspective view of a unit structure applied to a vibration reducing device according to the embodiment of the present disclosure, and FIG. 3 is a cross-sectional view of a unit cell applied to the vibration reducing device according to the embodiment of the present disclosure. Referring to FIG. 2, the unit structure 5 applied to a vibration reducing device 3 according to the embodiment of the present disclosure may be formed of four unit cells 10 connected to each other. The unit structure 5 may be formed by disposing four unit cells 10 symmetrically in all directions. Although the unit structure 5 has been described as an example in which four unit cells 10 are connected to each other, it is not necessarily limited thereto, and the number of unit cells 10 may be set within a range from two to eight as needed, and it is advantageous to set it to an even number. In the embodiment of the present disclosure, a reference direction is set in the left, right, front, rear, and vertical directions based on FIG. 2, and a portion facing upward is defined as an upper portion, an upper end, an upper surface, and an upper end portion, and a portion facing downward is defined as a lower portion, a lower end, a lower surface, and a lower end portion. The definition of the reference direction as described above has a relative meaning, and since the direction may vary depending on the reference position of the vibration reducing device 3 or the reference position of assembled parts, the reference direction is not necessarily limited to the reference direction of the present embodiment. Referring to FIG. 3, the unit cell 10 forming the unit structure 5 includes a mass portion 11, a base frame 15, and a support portion 19. The mass portion 11 may be formed in a rectangular block shape. For example, the mass portion 11 may have a rectangular shape. A size of the mass portion 11 may be set according to a target frequency. For example, the size of the mass portion 11 increases as the target frequency decreases. Similarly, the size of the mass portion 11 decreases as the target frequency band increases. Since the mass portion 11 can increase the amount of vibration reduction as its size increases, the size increases as the target frequency decreases. The mass portion 11 includes an engraving portion 13 for numbering on the upper surface. That is, the engraving portion 13 is for numbering each unit cell constituting a single unit assembly. In addition, the base frame 15 may be formed as a square frame. The mass portion 11 is eccentrically disposed in the base frame 15. An adhesive member 17 is formed on the lower surface of the base frame 15 and can be attached to the structure 1. The adhesive member 17 may include an adhesive or adhesive tape. It is advantageous that the base frame 15 is formed to secure a gap of at least 1 mm from the outside of the mass portion 11. In addition, the support portion 19 is disposed to connect the mass portion 11 and the base frame 15. The support portion 19 is formed with a first fixing portion 190 protruding at one end. The support portion 19 is fixed to one side of the upper surface of the base frame 15 through the first fixing portion 190. In addition, the support portion 19 is formed integrally with a connecting portion 191 at a position spaced apart from the first fixing portion 190 by a predetermined height. The support portion 19 is connected to a center of one side of the mass portion 11 through the connecting portion 191 and a second fixing portion 193 integrally formed at the opposite end.
The support portion 19 defines the portion connecting the one end and the opposite end as a length l, and the direction intersecting the length l is defined as a width w. In addition, the support portion 19 has a variable groove 195 formed in the central portion of the connecting portion 191. The support portion 19 can adjust the entire length l by changing the size of the variable groove 195 according to the target frequency. That is, when the variable groove 195 is formed to be small, the entire length l of the support portion 19 is shortened, and when the variable groove 195 is formed to be large, the entire length l of the support portion 19 is increased. The support portion 19 is formed to vibrate together with the mass portion 11, with one end fixed to the base frame 15, when the mass portion 11 is vibrating. The above-described support portion 19 has a wider width w as the target frequency band is higher. The support portion 19 connects the mass portion 11 and the base frame 15, and in each unit cell 10 forming the unit structure 5 it is advantageous for the support portions to be disposed in an inwardly facing direction, respectively. For example, a first mass portion 11a and a second mass portion 11b are eccentrically disposed on the inside of their respective base frames 15, each support portion 19 is connected to the inner sides facing each other, and each of the mass portions 11a and 11b is disposed eccentrically through the support portion 19. Similarly, a third mass portion 11c and a fourth mass portion 11d are eccentrically disposed on the inside of their respective base frames 15, each support portion 19 is connected to the inner sides facing each other, and each of the mass portions 11c and 11d is eccentrically disposed through the support portion 19. The respective positions of the mass portion 11 and the support portion 19 may vary according to the number of the unit cells 10. Meanwhile, a predetermined number of unit cells 10 are connected through the first bridges 20 to form the unit structure 5. A plurality of the first bridges 20 may be connected between the base frames 15 of the unit cells 10 forming the unit structure 5. For example, two first bridges 20 may be disposed between the base frames 15 of each pair of adjacent unit cells 10 forming the unit structure 5 to connect the unit cells 10. Such a first bridge 20 may be formed in a hemispherical ring shape. Since the first bridge 20 affects the vibration of the unit cell 10, it is advantageous to make its size as small as possible. In addition, the first bridge 20 is advantageously made of a flexible material. That is, the first bridge 20 is made of a material that can be bent such that it can be attached to a curved surface while binding the four unit cells 10 as a set. In addition, a predetermined number of unit structures 5 may be connected through the second bridges 30 and attached to the structure 1. The second bridge 30 connects one predetermined base frame 15 among the base frames 15 of the unit cells 10 that form one unit structure 5 and one predetermined base frame 15 of another adjacent unit structure 5. Such a second bridge 30 may be formed in a hemispherical ring shape. The second bridge 30 is formed to be cuttable when necessary, and serves to attach a plurality of unit structures 5 to the structure 1 at once by interconnecting the plurality of unit structures 5. In addition, the second bridge 30 connects the unit structures 5 such that the number of the unit structures 5 can be adjusted according to the area to be covered. In addition, it is advantageous that the second bridge 30 is made of a flexible material that can conform to a curved surface.
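The tuning trends just described (a longer support length l or a larger mass lowers the target frequency, while a wider width w raises it) are consistent with treating the support portion as an end-loaded cantilever. The Python sketch below is only a first-order sizing aid under that assumption; the dimensions, Young's modulus and mass are hypothetical and are not taken from the disclosure.

import math

def resonator_frequency(E, w, t, L, m):
    """Approximate tuning frequency [Hz] of a mass on a cantilever-like support.

    k = 3*E*I/L^3 with I = w*t^3/12 (bending about the thickness t);
    E: Young's modulus [Pa]; w, t, L: support width, thickness, length [m];
    m: mass of the mass portion [kg]. The real unit cell is more complex,
    so this is a rough estimate only.
    """
    I = w * t ** 3 / 12.0
    k = 3.0 * E * I / L ** 3
    return math.sqrt(k / m) / (2.0 * math.pi)

# Hypothetical numbers: a steel support 4 mm wide, 1 mm thick and 12 mm long
# carrying a 10 g mass portion.
print(f"~{resonator_frequency(200e9, 4e-3, 1e-3, 12e-3, 10e-3):.0f} Hz")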
In the vibration reducing device 3, when the frequency of the structure 1 is 500 Hz, a target frequency band of ±50 Hz is set, such that the four unit cells 10 can be set to have an effect in the band from 450 Hz to 550 Hz. For example, the vibration reducing device 3 forms one unit structure 5 by tuning the four unit cells 10 to have target frequencies of 460 Hz, 490 Hz, 520 Hz, and 540 Hz, respectively, and a predetermined number of unit structures 5 can be attached to the structure 1. In this case, when six unit cells 10 are applied to the vibration reducing device 3, one unit structure can be formed by tuning the respective unit cells 10 to have target frequencies of 450 Hz, 470 Hz, 490 Hz, 510 Hz, 530 Hz, and 550 Hz. Accordingly, the target frequency band is set according to the frequency of the structure 1 to be reduced, the number of the unit cells 10 is set, and the target frequency of each unit cell 10 can be set according to the number of unit cells 10 relative to the target frequency band. FIG. 4 is a graph illustrating a dispersion relationship between a wave vector and a frequency of the vibration reducing device according to the embodiment of the present disclosure. The vibration reducing device 3 formed as described above can be interpreted through a wave dispersion relationship, that is, a relationship between a wave number and a frequency characteristic of the wave. Referring to FIG. 4, a general structure A and a structure B to which the vibration reducing device 3 according to the embodiment of the present disclosure is attached are compared. The X axis represents a wave vector according to a position of the unit cell 10, and the Y axis represents a frequency. The dispersion relationship of a general structure has a corresponding wave number in all frequency bands (A). That is, in a general structure, waves can be transmitted in all frequency bands. However, it can be determined that the structure 1 to which the vibration reduction device 3 according to the embodiment of the present disclosure is attached generates a band (stop band) in which a wave number corresponding to a frequency is empty due to a local resonance effect (B). Since it is interpreted that theoretically no wave can propagate in such a stop band, transmission of noise and vibration can be prevented by completely blocking the wave propagation. FIG. 5 is a graph showing a vibration response of the vibration reducing device according to the embodiment of the present disclosure measured by an acceleration system. The graph of FIG. 5 shows a vibration response of the structure measured by an acceleration system while applying vibration with an impact hammer after attaching the vibration reducing device 3 according to the embodiment of the present disclosure to the structure 1. Referring to FIG. 5, compared to the general structure A, it can be determined that vibration of a structure B to which the vibration reducing device 3 according to the embodiment of the present disclosure is attached is significantly reduced in the stop band (150 Hz to 300 Hz). Therefore, the vibration reducing device 3 according to the embodiment of the present disclosure can effectively reduce vibration and noise transmitted through the structure 1. In addition, the vibration reducing device 3 according to the embodiment of the present disclosure is applicable regardless of the type and state of the structure 1 by adjusting the number of unit structures 5. For example, the vibration reducing device 3 has the benefit that it can be attached to a curved panel.
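As a small illustration of the allocation rule summarized above (choose the band around the structure's frequency, then assign one target per unit cell), the following Python sketch spreads the targets evenly across the band. It is an added example, not the disclosure's tuning procedure; the quoted four-cell values 460/490/520/540 Hz show that the actual tuning need not be strictly uniform, whereas the six-cell values are reproduced exactly.

def tuning_frequencies(center_hz, half_band_hz, n_cells):
    """Spread n_cells target frequencies evenly across center_hz +/- half_band_hz."""
    lo, hi = center_hz - half_band_hz, center_hz + half_band_hz
    step = (hi - lo) / (n_cells - 1)
    return [lo + i * step for i in range(n_cells)]

print(tuning_frequencies(500, 50, 6))   # [450.0, 470.0, 490.0, 510.0, 530.0, 550.0]
print(tuning_frequencies(500, 50, 4))   # [450.0, 483.3..., 516.6..., 550.0]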
While this disclosure has been described in connection with what is presently considered to be practical embodiments, it is to be understood that the disclosure is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
17,804
11862138
The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the disclosure, while other details are left out. Throughout, the same reference signs are used for identical or corresponding parts. Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description. DETAILED DESCRIPTION OF EMBODIMENTS The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof. The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The present disclosure relates to hearing devices, e.g. hearing aids or headsets or ear phones. The present disclosure specifically deals with Active Emission Cancellation (AEC) in hearing devices. FIG. 1 schematically illustrates the principle of Active Emission Cancellation (AEC). Traditionally, a hearing aid user (U) wears a hearing device (HD) consisting of or comprising an earpiece (EP) at or in an ear (Ear) of the user (U). The hearing aid can be arranged in different configurations (styles) such as Behind-The-Ear (BTE), Receiver-In-The-Ear (RITE), In-The-Canal (ITC), Completely-In-Canal (CIC), etc. The hearing aid may comprise the earpiece and a further part (e.g. a separate body adapted to be arranged at or behind the ear (e.g. the pinna) of the user (U)). In all cases, the amplified sounds are presented to the eardrum, and they can ‘leak’ to the outside world from the ear canal when the sound level gets too high.
This sound emission happens through the ventilation channels on the earpiece or through the leakage between the ear canal and the earpiece (see sound symbols denoted 'Sound emitted from the ear canal' in FIG. 1), similar to the acoustic feedback problem in hearing aids. However, it becomes problematic if the emitted sounds are so loud that they are audible to persons located in the surroundings, even though the hearing aid itself might be stable and unaffected by this emission sound due to its feedback control system. Another scenario may e.g. be a headset wearer (or a wearer of earphones) listening to loud music, which may be annoying to persons in the immediate surroundings. It is proposed to use an additional loudspeaker (denoted 'Additional loudspeaker for AEC' in FIG. 1) to play an anti-emission signal (denoted 'Anti-emission sound' in FIG. 1) controlled by the hearing aid (HD) to compensate for the emitted sounds to the environment, so that the resulting signal (denoted 'Resulting emission sound') has a limited amplitude and (preferably) becomes inaudible to the outside world (e.g. to persons located around the hearing aid user (U)). This would be a similar (but inverse) approach to what is known as the ANC approach. In other words, as we know what is being presented at the traditional hearing device receiver (to the eardrum), we can create an anti-emission signal (opposite phase) to be played by the additional speaker. A further modification to this idea is to also add an optional microphone inside the ear canal (denoted 'Additional microphone for AEC' in FIG. 1) controlled by the hearing aid, with the goal of measuring the actually presented sounds at the eardrum (rather than using the sounds played by the receiver), and in this way to be able to obtain an even better anti-emission signal. The AEC signal can be obtained by using a fixed filtering of the hearing aid output signal through a fixed compensation filter (a), and the filter can be determined up front based on measurement data, either for individual users or as an average for a number of users. This fixed filter (a) can be applied as a stand-alone filter, as illustrated in FIG. 4. This fixed filter (a) can also be applied in addition to the existing feedback cancellation system with the adaptive filter (h′(n)), as illustrated in FIG. 5. Alternatively, the AEC signal can be obtained by using a time-varying filter (a(n)), and an adaptive algorithm, similar and/or identical to the well-known feedback cancellation system, can be used to estimate the AEC filter (a(n)). In contrast to the traditional feedback cancellation system, which has the goal of ensuring system stability, this AEC filter typically has a somewhat less strict constraint: it only needs to create an anti-emission signal that makes the emission sound inaudible to the external world, so the estimation of this time-varying filter (a(n)) can be simpler and slower compared to the estimation of the traditional feedback cancellation filter (h′(n)). In one setup, the AEC filter (a(n)) is used without a traditional hearing aid feedback cancellation filter (h′(n)), as illustrated in FIG. 6. In another setup, the AEC filter (a(n)) is used with a modified hearing aid feedback cancellation filter (h′(n)) which would minimize the residual feedback, as illustrated in FIG. 7. An additional in-ear microphone may be used to monitor the "true" sound levels at different frequencies, and it can be used to finetune/correct the AEC signals by adjusting the filter a(n), see e.g. FIG. 8.
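As a purely illustrative sketch of the fixed-filter variant just described (cf. FIG. 4), the following Python snippet filters an assumed hearing-aid output signal u(n) through an assumed fixed compensation filter a and compares the emission power with and without the anti-emission branch. The sample rate, filter taps and leakage-path model are invented for the example and are not taken from the disclosure.

```python
# Minimal sketch of a fixed-filter AEC branch (all numbers are invented placeholders).
import numpy as np
from scipy.signal import lfilter

fs = 16_000                       # assumed sample rate
u = np.random.randn(fs)           # stand-in for the hearing-aid output signal u(n)
a = np.array([-0.8, -0.1, 0.05])  # assumed fixed compensation filter taps (phase inversion included)
h = np.array([0.7, 0.2, -0.05])   # assumed leakage path from receiver output to the environment

# Anti-emission signal played by the additional, environment-facing loudspeaker.
s_aec = lfilter(a, [1.0], u)

# Residual emission heard outside is approximately (h * u) + s_aec; a good choice of
# the fixed filter a keeps this residual small.
leaked = lfilter(h, [1.0], u)
residual = leaked + s_aec
print(f"emission power without AEC: {np.mean(leaked ** 2):.4f}")
print(f"emission power with AEC:    {np.mean(residual ** 2):.4f}")
```

With these placeholder taps the residual filter h + a is much smaller than h, which is the intended effect of the fixed compensation filter; in practice a would be fitted from measurement data as described above.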
FIG.2Ashows an embodiment of a BTE-style hearing aid (HD) comprising an active emission canceller according to the present disclosure. The hearing device (HD) comprises a BTE-part comprising a loudspeaker (HA-SPK) and an ITE-part comprising an (possibly customized) ear mould (MO). The BTE-part and the ITE-part are connected by an acoustic propagation element (e.g. a tube IC). The BTE-part (BTE) is adapted for being located at or behind an ear of a user, and the ITE-part (ITE) is adapted for being located in or at an ear canal of a user's ear. The ITE-part comprises a through-going opening providing a speaker sound outlet (SO) for the loudspeaker of the BTE-part (HA-SPK) allowing sound to be propagated via the connecting element (IC) to the ear drum (Eardrum) of the user (cf. sound field SED). The BTE-part and the ITE-part may be electrically connected by connecting element (IC) in addition to the acoustic propagation channel, e.g. a hollow tube. The loudspeaker HA-SPK of the BTE-part is configured to play into the connecting element (IC) and further into the speaker sound outlet (SO) of the ITE-part. The loudspeaker is connected by internal wiring in the BTE-part (cf. e.g. schematically illustrated as wiring Wx in the BTE-part) to relevant electronic circuitry of the hearing device, e.g. to a processor (DSP). The BTE-parts comprises first and second input transducers, e.g. microphones (MBTE1and MBTE2), respectively, which are used to pick up sounds from the environment of a user wearing the hearing device (cf. sound field S). The ITE-part comprises an ear-mould and is intended to allow a relatively large sound pressure level to be delivered to the ear drum of the user (e.g. to a user having a severe-to-profound hearing loss). Nevertheless, a part of the sound (SHA) provided by the loudspeaker (HA-SPK) of the BTE-part may leak out along the interface between the ITE-part and the ear canal tissue (sf. Sound SLEAK). Such leaked sound may lead to unwanted feedback problems if picked by microphone of the hearing aid and amplified and presented to the user via the loudspeaker (HA-SPK). Such ‘acoustic feedback’ may be controlled by a proper feedback control system (e.g. (partly) compensated by the AEC system according to the present disclosure). The leaked sound SLEAKmay however also be heard by persons around the user (and possibly by the user him- or herself). The BTE-part (e.g. the DSP) further comprises an active emission canceller configured to provide an electric sound cancelling signal fed to an environment facing loudspeaker (AEC-SPK). The environment facing loudspeaker is located in the ITE-part facing the environment (when the ITE-part is mounted in or at the ear canal (Ear canal) of the user). The environment facing loudspeaker converts the electric sound cancelling signal to an output sound (SAEC) to the environment. The ITE-part further comprises an eardrum facing input transducer (MED, e.g. a microphone) located so that it picks up sound from the speaker sound outlet (SO) of the ITE-part and provides an electric signal representative thereof. The active emission canceller is configured to determine the electric sound cancelling signal in dependence of said electric signal of the eardrum facing input transducer (MED). The output sound (SAEC) to the environment from the environment facing loudspeaker (AEC-SPK) is thereby aimed to cancel or attenuate sound (SLEAK) leaked to the environment from the speaker sound outlet of the hearing aid. 
The hearing aid (HD) (here the BTE-part) further comprises two (e.g. individually selectable) wireless receivers (WLR1, WLR2) for providing respective directly received auxiliary audio input and/or control or information signals. The wireless receivers may be configured to receive signals from another hearing device (e.g. of a binaural hearing system) or from any other communication device, e.g. telephone, such as a smartphone, or from a wireless microphone or a T-coil. The wireless receivers may be capable of receiving (and possibly also of transmitting) audio and/or control or information signals. The wireless receivers may be based on Bluetooth or similar technology or may be based on near-field communication (e.g. inductive coupling). The BTE-part comprises a substrate SUB whereon a number of electronic components (MEM, FE, DSP) are mounted. The BTE-part comprises a configurable signal processor (DSP) and memory (MEM) accessible therefrom. In an embodiment, the signal processor (DSP) form part of an integrated circuit, e.g. a (mainly) digital integrated circuit. The hearing aid (HD) exemplified inFIG.2Arepresents a portable device and further comprises a battery (BAT), e.g. a rechargeable battery, for energizing electronic components of the BTE-part and possibly the ITE-part. The hearing aid (e.g. the processor (DSP)) may be adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user. FIG.2Bshows an embodiment of an ITC (ITE) style hearing aid comprising an active emission canceller according to the present disclosure. The hearing aid (HD) comprises or consists of an ITE-part (ITC) comprising a housing (Housing), which may be a standard housing aimed at fitting a group of users, or it may be customized to a user's ear (e.g. as an ear mould, e.g. to provide an appropriate fitting to the outer ear and/or the ear canal). The housing schematically illustrated inFIG.2Bhas a symmetric form, e.g. around a longitudinal axis from the environment towards the ear drum (Eardrum) of the user (when mounted), but this need not be the case. It may be customized to the form of a particular user's ear canal. The hearing aid may be configured to be located in the outer part of the ear canal, e.g. partially visible from the outside, or it may be configured to be located completely in the ear canal (implementing a CIC-styler hearing aid), possibly deep in the ear canal, e.g. fully or partially in the bony part of the ear canal. To minimize leakage of sound (played by the hearing aid towards the ear drum of the user) from the ear canal to the environment (cf. ‘Leakage path’ inFIG.2B), a good mechanical contact between the housing of the hearing aid and the Skin/tissue of the ear canal is aimed at. In an attempt to minimize such leakage, the housing of the ITE-part may be customized to the ear of a particular user. The hearing aid (HD) comprises a at least one environment facing (forward path) microphone, here one microphone (M), e.g. located on a part of the surface of the housing that faces the environment when the hearing aid is operationally mounted in or at the ear of the user. The microphone is configured to convert sound received from a sound field (S) around the user at its location to an (analogue) electric signal (sin) representing the sound. 
The microphone is coupled an analogue to digital converter (AD) to provide (analogue) electric signal (sin) as a digitized signal (sin). The digitized signal may further be coupled to a filter bank to provide the electric input signal (time domain signal (sin)) as a frequency sub-band signal (frequency domain signal). The (digitized) electric input signal (sin) is fed to a digital signal processor (DSP) for applying one or more processing algorithms to the audio signal (sin), e.g. including one or more of noise reduction, compression (frequency and level dependent amplification/attenuation according to a user's needs, e.g. hearing impairment), spatial cue preservation/restoration, feedback control, active noise cancellation, as well as active emission control according to the present disclosure, etc. The digital signal processor (DSP) may e.g. comprise appropriate filter banks (e.g. analysis as well as synthesis filter banks) to allow processing in the frequency domain (individual processing of frequency sub-band signals). The digital signal processor (DSP) is configured to provide a processed signal soutcomprising a representation of the sound field S (e.g. including an estimate of a target signal therein). The processed signal soutis fed to an output transducer (here a forward path loudspeaker (HA-SPK), e.g. via a digital to analogue converter (DA) or a digital to digital converter, for conversion of a processed (digital electric) signal sout(or analogue version sout) to a sound signal SHA The hearing aid (HD (ITC)) may e.g. comprise a ventilation channel (Vent) configured to minimize the effect of occlusion (when the user speaks). In addition to allowing an (unintended) acoustic propagation path Sleakfrom a residual volume (cf. Res. Vol inFIG.2B) between a hearing aid housing and the ear drum to be established (cf. ‘Leakage path’ inFIG.3), the ventilation channel also provides a direct acoustic propagation path of sound from the environment to the residual volume. The directly propagated sound Sdirreaching the residual volume is mixed with the acoustic output (SHA) of the hearing aid (HD) to create a resulting sound SEDat the ear drum. In a mode of operation, active noise suppression (ANS or ANC) is activated in an attempt to cancel out the directly propagated sound Sdir. According to the present disclosure, e.g. in a specific AEC-mode of operation, the digital signal processor (DSP) comprises an active emission canceller (AEC, cf. e.g.FIG.3) configured to provide an electric sound cancelling signal (sAEC) in dependence of the processed (digital electric) signal sout. The electric sound cancelling signal (sAEC) is fed to an environment facing loudspeaker (AEC-SPK), e.g. via a digital to analogue converter (DA), as appropriate. The environment facing loudspeaker converts the electric sound cancelling signal (sAEC) to an output sound (SAEC) to the environment. The intention of the output sound (SAEC) is to cancel (or at least attenuate) the leaked sound SLEAK(cf. ‘Leakage path’ inFIG.2B). The ITE-part (ITC) further comprises an eardrum facing input transducer (MED, e.g. a microphone) located so that it picks up sound from the forward path loudspeaker (HA-SPK) and provides an electric signal (s′out) representative thereof (e.g. via an analogue to digital converter (AD), as appropriate). 
The active emission canceller (AEC) of the digital signal processor (DSP) is configured to determine the electric sound cancelling signal in dependence of the electric signal (s′out) of the eardrum facing input transducer (MED), possibly in combination with the processed (digital electric) signal sout. The output sound (SAEC) to the environment from the environment facing loudspeaker (AEC-SPK) is aimed to cancel or attenuate sound (SLEAK) leaked to the environment from the speaker sound outlet of the hearing aid (to not disturb persons around the hearing aid user's, if the amplification of the input sound provided by the hearing aid (and/or the ‘openness’ of the ITE-part) is large). The AD and DA converters may form part of the DSP, as appropriate. The hearing aid comprises an energy source, e.g. a battery (BAT), e.g. a rechargeable battery, for energizing the components of the device. FIG.2Cshows an embodiment of a RITE style hearing aid comprising an active emission canceller according to the present disclosure. The embodiment ofFIG.2Cresembles the embodiment ofFIG.2A, both comprise a BTE-part wherein the energy (battery (BAT) and main processing of the hearing aid is provided (the latter via digital signal processor DSP, memory (MEM), frontend- (FE) and radio-chips (WLR1, WLR2)). A difference is that the forward path loudspeaker (HA-SPK) of the embodiment ofFIG.2Cis located in an ITE-part located in an ear canal of the user instead of in the BTE-part. To connect the loudspeaker (HA-SPK) with the signal processor (DSP), the acoustic tube of the connecting element (IC) inFIG.2Ais dispensed with in the embodiment ofFIG.2C, so that the connection element is implemented by an electric cable (only). The electric cable is configured to comprise a multitude of electrically conducting wires or channels to allow the processor of the BTE part to communicate with the forward path loudspeaker (HA-SPK), the environment facing loudspeaker (AEC-SPK) and the eardrum facing microphone (MED, if present), and possible other electronic components of the ITE part (ITE). Further, the electric cable may also be configured to allow energising the electronic components of the ITE-part (as well as those of the BTE-part) from the battery (BAT) of the BTE-part. The partition of functional tasks between the BTE-part and the ITE-part may be different from the one mentioned in connection with the embodiments ofFIGS.2A and2C. Some of the processing, for example the processing of the active emission canceller (AEC) may be located in the ITE-part to avoid communication related to the environment facing loudspeaker (AEC-SPK) and/or the eardrum facing microphone (MED, if present) to/from the signal processor (DSP) of the BTE-part. Thereby the electric interface (IC) between the BTE- and ITE-parts may be simplified. FIG.3shows a simplified block diagram of an embodiment of a hearing aid comprising an active emission canceller according to the present disclosure. The hearing aid (HD) may be adapted for being located at or in an ear of a user. The hearing aid comprises a forward path for processing an audio input signal and providing a (preferably) improved, processed, signal intended for presentation to the user. The forward path comprises at least one forward path input transducer (e.g. microphone(s), here first and second microphones (M1, M2), configured to pick up environment sound from the environment around the user when the user is wearing the hearing aid. The two microphones provide respective (e.g. 
analogue or digitized) electric input signals (sIN1, sIN2) representative of the environment sound. The forward path further comprises (an optional) directional system (BFU) implementing one or more beamformers and providing one or more beamformed signals, here beamformed signal (sINBF). The forward path comprises a hearing aid signal processor (HLC) for processing the beamformed signal (sINBF) and providing a processed signal (sOUT), e.g. configured to compensate for a hearing impairment of the user. The forward path further comprises a loudspeaker (HA-SPK) connected to a speaker sound outlet of the hearing aid and configured to provide an output sound (SHA) to an eardrum (Eardrum) of the user in dependence of the processed signal (sOUT). The hearing aid further comprises an active emission canceller (AEC) configured to provide an electric sound cancelling signal (sAEC) and an environment facing loudspeaker (AEC-SPK) connected to the active emission canceller (AEC) and configured to provide an output sound to the environment (cf. dashed sound symbol denoted SAECinFIG.3). The active emission canceller (AEC) is connected to the environment facing loudspeaker (AEC-SPK) and the electric sound cancelling signal (sAEC) is determined in dependence of the processed signal (sOUT) or from a signal originating therefrom. The electric sound cancelling signal (sAEC) is configured to cancel or attenuate sound (SLEAK) leaked from the speaker sound outlet to the environment when played by the environment facing loudspeaker (AEC-SPK). The leakage of sound (SLEAK) around a housing and possible other parts of the hearing aid (HD) located in the ear canal (see e.g. examples of different hearing aid styles inFIG.2A,2B,2C) is symbolized by dashed bottom rectangle denoted ‘Leakage’ inFIG.3. The leakage may be due to a ventilation channel through or along the surface of the hearing aid (or an ITE-part of the hearing aid, see e.g.FIG.2A,2B or2C), or it may be due to an ‘open fitting’ e.g. comprising a body that does not fill out the cross sectional area of the ear canal, which is guided by an open dome-like element (comprising holes through which sound can leak to (and from) the environment, see e.g.FIG.2C). The environment facing loudspeaker (AEC-SPK) may be located on or having a sound outlet at an environment facing surface of the ITE-part, e.g. as close as possible to a main leakage opening (e.g. a ventilation channel), without being located in such opening (e.g. a ventilation channel). As indicated inFIG.3, the hearing aid may comprise and eardrum facing input transducer, here microphone (MED), e.g. located close to the speaker sound outlet from hearing aid loudspeaker (HA-SPK). However, the eardrum facing input transducer, here microphone (MED), may be located on or having a sound inlet at an eardrum facing surface of the ITE-part, e.g. as close as possible to a main leakage opening (e.g. a ventilation channel), without being located in such opening (e.g. a ventilation channel). The eardrum facing microphone (MED) is configured to pick up output sound from the speaker sound outlet and to provide an electric signal (s′OUT) representative thereof. The active emission canceller (AEC) is configured to provide that the electric sound cancelling signal (sAEC) is an estimate of the signal leaked from a residual volume at the eardrum to the environment at the environment facing loudspeaker in dependence of the electric signal (s′OUT) from the eardrum facing microphone (MED). 
The active emission canceller (AEC) is configured to provide that the electric sound cancelling signal (sAEC) is played by the environment facing loudspeaker to provide the output sound (SAEC) to the environment in opposite phase of the leakage of sound (SLEAK). Thereby the leaked sound will be cancelled or (at least) diminished. The environment facing loudspeaker (AEC-SPK) of a hearing aid according to the present disclosure (including the embodiments ofFIG.2A,2B,2C,3) may be directed in a preferred direction (e.g. by an acoustic outlet canal) to optimize its cancellation effect, maybe in dependence of a location of a ventilation channel opening and/or direction and/or other (intended or unintended (but possibly probable)) leakage channel. Alternatively or additionally, the ITE-part (and/or a BTE-part) may comprise one or more additional environment facing loudspeakers (AEC-SPK), e.g. depending on the application in question. e.g. directed towards each their preferred direction, or adapted to provide a resulting directional output (e.g. as a weighted combination of the individual (electric) loudspeaker outputs). FIG.4shows a simplified block diagram of an embodiment of a hearing device, e.g. a hearing aid, according to the present disclosure comprising an active emission cancelation system comprising a fixed filter (Fixed AEC Filtera, wherearepresents a transfer function for the fixed filter). The hearing aid comprises a forward path for processing (cf. block ‘Processing HLC’ inFIG.4) an audio signal y(n) picked up by a microphone (M) and for providing a processed (e.g. compensated for a user's hearing impairment) signal u(n), which is presented as sound SHAto a user via loudspeaker (HA-SPK). The hearing aid further comprises an active emission canceller, her implemented by a fixed filter (cf. block ‘Fixed AEC Filter a’ inFIG.4). The active emission canceller provides electric sound cancelling signal sAEC(n) by filtering the processed signal u(n). In addition to the active emission canceller, the active emission cancellation system further comprises a loudspeaker (AEC-SPK) facing the environment. The environment facing loudspeaker (AEC-SPK) provides output sound SAECin dependence of electric sound cancelling signal sAEC(n). The output sound SAECis aimed at cancelling sound provided by the forward path loudspeaker (HA-SPK) of the hearing aid leaked from the ear-canal to the environment, inFIG.4represented by feedback sound signal v(n) arriving via (one or more feedback paths) (represented by block ‘Feedback Pathh(n)’ inFIG.4, whereh(n) represents a (time variant) transfer function for the feedback path). The active emission cancellation (output) sound SAECis mixed with the feedback sound signal v(n) (symbolically indicated by sum unit ‘+’ inFIG.4) providing resulting emission sound SRES(denoted ‘AEC compensated signal, SRES’ inFIG.4). The resulting emission sound SRESis mixed with sound from the environment x(n) and picked up by the microphone (M). The electric input signal y(n) representative of sound may thus comprise resulting emission sound SRES(originating from the hearing aid) in addition to the (other) environment sound. FIG.5shows a simplified block diagram of an embodiment of a hearing device according to the present disclosure comprising an active emission cancelation system comprising a fixed filter as shown inFIG.4and additionally comprising an adaptive feedback control system. 
The adaptive feedback control system comprises an adaptive filter and a combination unit (sum unit '+' in the forward path of the hearing aid in FIG. 5). The adaptive filter comprises an adaptive algorithm ('Adaptive Algorithm' block in FIG. 5) and a variable filter ('Adaptive AFC Filter h′(n)' in FIG. 5). The transfer function of the variable filter is controlled by the adaptive algorithm (cf. arrow from the 'Adaptive Algorithm' block to the 'Adaptive AFC Filter h′(n)' in FIG. 5). The adaptive algorithm is configured to determine updates to the filter coefficients of the variable filter. The adaptive algorithm may be configured to calculate the filter updates using stochastic gradient algorithms, including some form of the Least Mean Square (LMS) or the Normalized LMS (NLMS) algorithms. Both have the property of minimizing an error signal in the mean square sense, with the NLMS additionally normalizing the filter update with respect to the squared Euclidean norm of some reference signal. Other adaptive algorithms known in the art may be used. The variable filter provides an estimate v′(n) of the feedback signal v(n) (or of the AEC compensated signal SRES in the presence of the fixed AEC filter a) by filtering a reference signal, here the processed signal u(n). In the embodiment of FIG. 5 the adaptive algorithm determines the updated filter coefficients of the adaptive filter by minimizing the error signal e(n) in view of the processed signal u(n) (the reference signal). The error signal e(n) is the feedback corrected signal provided by the combination unit ('+') of the forward path. The error signal e(n) is here constituted by the electric input signal y(n) minus the estimate v′(n) of the feedback signal v(n) (or of the AEC compensated signal SRES). Thereby the signal played by the loudspeaker of the forward path is (ideally) corrected for feedback from the loudspeaker (HA-SPK) to the microphone (M) of the forward path, thereby keeping the audio system stable in case the fixed AEC filter a is not sufficient for suppressing the feedback signal v(n) and the resulting emission sound SRES still imposes a high feedback risk. FIG. 6 shows a simplified block diagram of an embodiment of a hearing device according to the present disclosure comprising an active emission cancelation system comprising an active emission canceller implemented by an adaptive filter. The embodiment of FIG. 6 is equivalent to the embodiment of FIG. 4 except that the fixed filter of the active emission canceller is implemented as an adaptive filter. The adaptive filter works equivalently to the adaptive filter of the feedback control system as described in connection with FIG. 5. The adaptive filter of the active emission cancellation system comprises an adaptive algorithm ('Adaptive Algorithm' block in FIG. 6) and a variable filter ('Adaptive AEC Filter a(n)' in FIG. 6). In the adaptive filter of the active emission canceller, the adaptive algorithm receives the electric input signal y(n) as error signal and the processed signal u(n) as reference signal. Based thereon the adaptive algorithm provides updated filter coefficients a to the variable filter. The variable filter provides the electric sound cancelling signal sAEC(n) by filtering the processed signal u(n).
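To make the LMS/NLMS-style adaptation described above more tangible, the following Python sketch adapts a variable filter from a reference signal u(n) and an error signal e(n) = y(n) − v′(n), in the manner of the variable filter h′(n) (or a(n)). It is a plain system-identification toy with invented signals; it ignores the acoustic secondary path and any hearing-aid-specific constraints.

```python
# NLMS sketch of the variable-filter adaptation (invented signals and constants).
import numpy as np

rng = np.random.default_rng(0)
N, L, mu, eps = 20_000, 16, 0.05, 1e-6
true_path = rng.normal(scale=0.3, size=L)   # stand-in for the unknown (feedback/leakage) path

u = rng.normal(size=N)                      # processed forward-path signal (reference)
x_env = 0.1 * rng.normal(size=N)            # other environment sound at the microphone
w = np.zeros(L)                             # adaptive filter coefficients (h'(n) or a(n))

for n in range(L, N):
    u_vec = u[n - L:n][::-1]                # most recent L reference samples
    y = np.dot(true_path, u_vec) + x_env[n] # microphone signal: path output plus environment
    v_hat = np.dot(w, u_vec)                # estimate of the path output
    e = y - v_hat                           # error signal e(n)
    w += mu * e * u_vec / (np.dot(u_vec, u_vec) + eps)  # NLMS coefficient update

print("coefficient error:", np.linalg.norm(w - true_path))
```

The same update structure applies whether the error driving the adaptation is the feedback-corrected forward-path signal (FIG. 5) or the electric input signal y(n) used for the adaptive AEC filter a(n) (FIG. 6); only the choice of error and reference differs.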
FIG.7shows a simplified block diagram of an embodiment of a hearing device according to the present disclosure comprising an active emission cancelation system comprising an adaptive filter as shown inFIG.6, and additionally comprising an adaptive feedback control system as shown inFIG.5, where the function of the adaptive feedback control system is described. FIG.8shows a simplified block diagram of an embodiment of a hearing device according to the present disclosure comprising an active emission cancelation system comprising an adaptive filter and an adaptive feedback control system as shown inFIG.7, and wherein the active emission cancelation system additionally comprises an eardrum facing microphone (MED). The eardrum facing microphone (MED) is located in the hearing device housing to facilitate the capture of sound from the residual volume near the ear drum (e.g. output sound from the speaker sound outlet) when the hearing device is appropriately mounted in the user's ear canal. The eardrum facing microphone (MED) provides electric input signal z(n) which is fed to the adaptive algorithm of the adaptive filter and may (as shown inFIG.8) as well be fed to the variable filter of active emission canceller. The signal z(n) from the ear-drum facing microphone (MED) may be used in addition to or as an alternative to the processed signal u(n) in the adaptive algorithm of the AEC system in the determination of update filter coefficients of the variable filer for estimating the electric sound cancelling signal sAEC(n). This has the expected advantage that a correct sound shaping of the ear cavity/canal is already included in this microphone signal and there is no need to estimate that from the processed signal u(n). The ‘Adaptive AEC filter input ofFIG.8receives from the ‘output side’ the processed signal u(n) as well as the eardrum facing microphone signal z(n). The signals z(n) and u(n) are alternatives to each other. The eardrum facing microphone signal z(n) is more optimal for the adaptive AEC filter estimation, because it has a shaping of the residual volume (ear cavity)/ear canal. If the processed signal u(n) has to be used for AEC filter estimation, then it should be corrected for the residual volume (ear cavity)/ear canal. This might have been modelled and compensated by the adaptive filter. However, such modeling would certainly lead to modelling errors (e.g., how fast and how precise is the estimate), and it would increase the adaptive filter length, and a longer adaptive filter leads to undesired properties such as slower convergence rate and higher computational complexity. It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process. Embodiments of the disclosure may e.g. be useful in applications such as hearing aids, headsets, earphones, etc. As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. 
It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method is not limited to the exact order stated herein, unless expressly stated otherwise. It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
36,354
11862139
DESCRIPTION OF EMBODIMENTS FIG.1is a schematic view of a system for creating a plurality of sound zones within a car cockpit. As shown inFIG.1, the system comprises a control unit3, three actuators1in the form of loudspeakers, and five error sensors2in the form of microphones. The control unit3may comprise a processor, a DSP, a CPU. The control unit3may comprise a storage unit (not shown). FIG.1also shows an audio unit4for providing an audio data signal for generating a desired sound in a desired sound zone of the plurality of sound zones. The audio unit4may be an in-vehicle infotainment (IVI) system or an In-car entertainment (ICE) system. The IVI/ICE system may refer to a vehicle system that combines entertainment and information delivery to drivers and passengers. The IVI/ICE system may use audio/video (A/V) interfaces, touchscreens, keypads and other types of devices to provide these types of services. The control unit3and the audio unit4may be two separate units, as shown inFIG.1. Alternatively, the control unit3and the audio unit4may be combined as one unit. For example, the control unit3may be implemented in the audio unit4. InFIG.1, all the error sensors2are connected in series to the control unit3. The actuators1are connected in parallel to the control unit3. However, these ways of connections shown inFIG.1are only examples for illustration. For example, at least two of the error sensors2may be connected in parallel to the control unit3. For example, at least two of the actuators1may be connected in series to the control unit3. Any one of the connection links between the error sensors2, between the actuators1, between the control unit3to an actuator1, or to an error sensor2, may be wired or wireless. A sound zone may be a volume within an acoustic cavity. For example, a sound zone may be a volume around a head and/or an ear of a driver or a passenger. Sound zones within a vehicle cockpit may correspond to different seating positions or a group of seating positions in the vehicle. A bright sound zone may be a sound zone, in which a provided sound is desired to be heard by a person, e.g., a driver or a passenger, within the sound zone. The volume outside the bright sound zone may be one or a plurality of different dark sound zone(s), in which the provided sound is undesired and not want to be heard by a person within the dark sound zone(s). InFIG.1, there are two different sound zones A, B. The bright sound zone A is located at a front seat position and the dark sound zone B is located at a rear seat position. Examples of different bright and dark sound zones are shown inFIG.2. InFIG.2a, the bright sound zone A is located at the front seats position and the dark sound zone B is located at the rear seats position. InFIG.2b, the bright sound zone A is located at the front left seat position. The first dark sound zone B1is located at the front right seat position. The second dark sound zone B2is located at the rear seats position. InFIG.2c, the bright sound zone A is located at the rear seats position and the dark sound zone B is located at the front seats position. InFIG.2d, the bright sound zone A is located at the rear left seat position. The first dark sound zone B1is located at the front seats position. The second dark sound zone B2is located at the rear right seat position. The provided sound is only desired to be heard by the person within the bright sound zone A. 
Under ideal conditions, it is desired that the persons within the dark sound zone(s) B, B1, B2 cannot hear the provided sound. However, in implementations, it is sufficient to keep the sound pressure level of the provided sound in the dark sound zone(s) as low as possible. It is known that the sound pressure level (SPL) or acoustic pressure level is a logarithmic measure of an effective pressure of a sound relative to a reference value. The sound pressure level, or pressure level for short, is typically measured in dB. A difference in pressure level between the bright and dark sound zone can be quantified in terms of a contrast, typically expressed as: Contrast = 10 log(⟨e2⟩bright / ⟨e2⟩dark), wherein ⟨e2⟩bright and ⟨e2⟩dark represent the average squared pressure level in the bright and dark sound zone, respectively. In the present application, σb and σd are sometimes used for referring to the bright and the dark sound zone, respectively. The following description is written for an audio signal provided to generate a desired sound in a bright sound zone, and the notation σb is sometimes omitted. Various perceptual experiments have indicated that a required contrast between the bright and the dark sound zone should be between 10 to 40 dB. The experiments can be found, for example, in Francombe, J., Mason, R., Dewhirst, M., and Bech, S. (2012), "Determining the threshold of acceptability for an interfering audio programme," in Proceedings of the 132nd AES Convention, Budapest, Hungary, 26-29 Apr. 2012; and Baykaner, K., Hummersone, C., Mason, R., and Bech, S. (2013), "The prediction of the acceptability of auditory interference based on audibility," in Proceedings of the 52nd AES International Conference, Guildford, UK, 2-4 Sep. 2013. Thus, in order to achieve a larger contrast, it is desired to have as low a pressure level as possible in the dark sound zone σd. That is, the dark sound zone σd should receive as little sound leaking from the bright zone as possible. FIG. 3 is a diagram of a method for creating a plurality of sound zones within an acoustic cavity. The audio data signal x(n) = [x, x, . . . , x] is provided for generating a desired sound in the bright sound zone A. That is, the provided audio data signal is the same for each one of the actuators 1 and the adaptive filters. Actuator Generation Coefficients kgk: A set of actuator generation coefficients kgk can be used for controlling the actuators 1 to generate a desired sound in the bright sound zone A, while not generating excessive sounds in the dark sound zones B1, B2. For each of the plurality of actuators 1, a respective generation input signal may be generated based on the set of actuator generation coefficients kgk and the provided audio data signal x(n). The set of actuator generation coefficients kgk may be in the form of an actuator generation matrix Kg comprising a plurality of actuator generation coefficients kgk, wherein k = 1, . . . , K indexes the actuators and K is the number of actuators. The actuator generation matrix Kg may be a diagonal matrix. A main diagonal, also known as a principal diagonal, a primary diagonal, a leading diagonal, or a major diagonal, of a matrix M is the collection of elements Mi,j wherein i equals j (i = j). All off-diagonal elements are zero in a diagonal matrix. That is, the actuator generation matrix Kg may be a diagonal matrix, wherein the coefficients kgk outside the main diagonal are all zeros (0s).
An example of the actuator generation matrix Kg may be Kg(σb) = diag(kg1(σb), . . . , kgK(σb)), i.e. a K×K diagonal matrix with the coefficients kg1(σb), . . . , kgK(σb) on the main diagonal and zeros elsewhere. The set of actuator generation coefficients kgk are chosen so that the expression [S]Kg x can result in a desired sound in the bright sound zone A, while not generating excessive sounds in the dark sound zones B1, B2. [S] represents applying a respective secondary path from a respective actuator 1 to a respective error sensor 2. Since the generated sound is desired in the bright sound zone A and undesired in the dark sound zones B1, B2, the generated sound is a disturbance sound in the dark zones B1, B2. The actuators 1 may have low directivity at low frequencies (about 20 to 300 Hz). Thus, it is not possible to generate a low frequency sound selectively in the bright sound zone A without exciting the whole acoustic cavity, including the dark sound zones B1, B2. It is however unnecessary to generate excessive sound in the dark sound zones B1, B2. In a simple form, the actuator generation coefficients kgk may only consist of zeros (0s) and ones (1s). An actuator generation coefficient being equal to zero (0) means that the actuator does not contribute to the generation of the desired sound in the bright sound zone A. That is, this actuator is not used for generating the desired sound in the bright sound zone A. An actuator generation coefficient being equal to one (1) means that the actuator contributes fully (100%) to the generation of the desired sound in the bright sound zone A. Thus, a subset of actuators may be selected for the generation of the desired sound in the bright sound zone A, by applying the coefficients kgk to the actuators 1. In a complex form, the coefficients kgk may be any real number. Actuator Exclusion Coefficients kek: A set of actuator exclusion coefficients kek can be used for controlling the contribution of the actuators in cancelling the disturbance sound in the dark sound zones B1, B2, which is generated along with the generation of the desired sound in the bright sound zone A. The set of actuator exclusion coefficients kek can also be used for keeping the desired sound in the bright sound zone A unchanged as much as possible, in order to create a larger contrast between the bright sound zone A and the dark sound zones B1, B2. Each of the adaptive filters receives the provided audio data signal x(n) as the input signal, and generates a respective output signal y(n) based on the input signal x(n) and the filter coefficients of the filter Wk(z), wherein z refers to the z-transform. For each of the plurality of actuators 1, a respective exclusion input signal may be generated based on the set of actuator exclusion coefficients kek and the respective output signal y(n). For each of the plurality of actuators 1, a respective drive signal may be generated based on the respective generation input signal and the respective exclusion input signal, such that each of the plurality of actuators 1 may generate a respective acoustic output in response to the respective drive signal. The generated acoustic output may be transmitted within the acoustic cavity to provide an individual sound in each sound zone. The set of actuator exclusion coefficients kek may be in the form of an actuator exclusion matrix Ke comprising a plurality of coefficients kek, each for controlling one of the plurality of actuators 1. The actuator exclusion matrix Ke may be a diagonal matrix. The coefficients kek outside the main diagonal may all be zeros (0s).
The actuator exclusion coefficients kek on the main diagonal may be chosen so that a minimum squared value of a pressure level of a sound can be generated at a monitor location in the dark sound zones B1, B2, while keeping as much as possible the desired signal amplitude at a monitor location in the bright sound zone A. An example of the actuator exclusion matrix Ke may be Ke(σb) = diag(ke1(σb), . . . , keK(σb)), i.e. a K×K diagonal matrix with the coefficients ke1(σb), . . . , keK(σb) on the main diagonal and zeros elsewhere. The actuator exclusion coefficient kek may be zero (0), one (1), or any other real number. Error Signals e: The error signals e = [e1, e2, . . . , eM] are generated by the plurality of the error sensors 2, each representing a respective sound detected by the respective error sensor 2. M represents the number of the error sensors. Each error signal is a sum of two different components. The first component is a disturbance signal, also known as a primary error signal in active noise control theory. The second component is a cancelling signal, often known as a secondary error signal in active noise control theory. In the dark sound zones, the disturbance signal is to be reduced as much as possible by the cancelling signal at the error sensor. The ideal situation is that the two components totally cancel each other, resulting in an error signal of zero. However, in implementations, it is sufficient to keep an error signal as small as possible, i.e. to achieve a minimum error signal. The disturbance signal may result from the respective generation input signal, which is generated based on the set of actuator generation coefficients kgk and the provided audio data signal x(n). The disturbance signal can be represented by the expression [S]Kg x, wherein [S] represents applying the respective secondary path from the actuator 1 to the error sensor 2. For example, S32 refers to a secondary path from the third actuator to the second error sensor, as shown in FIG. 3. The cancelling signal may result from the respective exclusion input signal, which is generated based on the set of actuator exclusion coefficients kek and the respective output signal y(n). The cancelling signal can be represented by the expression [S]Ke y, wherein [S] represents applying the respective secondary path from the actuator to the error sensor. The error signal may be represented by the expression: e = [S]Kg x + [S]Ke y. Sensor Weighting Coefficients mem: A set of sensor weighting coefficients mem may be used for controlling the contribution of each error sensor in reducing the disturbance sound in the dark sound zones B1, B2, which is generated along with the generation of the desired sound in the bright sound zone A. A respective weighted error signal may be generated based on the set of sensor weighting coefficients mem and the respective error signal e. The weighted error signal can be expressed as: e′m = mem em. The set of sensor weighting coefficients mem may be in the form of a sensor weighting matrix Me comprising a plurality of sensor weighting coefficients mem. The subscript m indexes the error sensors. The sensor weighting matrix Me may be a diagonal matrix. The sensor weighting coefficients mem outside the main diagonal may all be zeros (0s). The coefficient mem may be zero (0), one (1), or any other real number.
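Before the concrete example of the sensor weighting matrix Me given next, the error-signal model e = [S]Kg x + [S]Ke y can be sketched at a single frequency as follows. The matrix sizes follow the system of FIG. 1 (three actuators, five error sensors), but all numerical values, including the diagonal entries of Kg and Ke, are invented for illustration.

```python
# Single-frequency sketch of e = [S]Kg x + [S]Ke y with diagonal Kg and Ke (invented numbers).
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 5                                                  # actuators and error sensors as in FIG. 1
S = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))   # assumed complex secondary paths S[m, k]

Kg = np.diag([1.0, 0.0, 0.0])    # illustrative choice: only actuator 1 generates the desired sound
Ke = np.diag([0.0, 1.0, 1.0])    # illustrative choice: actuators 2 and 3 contribute to cancellation

x = 1.0 + 0.0j                                               # audio data signal at this frequency
y = rng.normal(size=K) + 1j * rng.normal(size=K)             # adaptive-filter outputs (placeholder)

disturbance = S @ Kg @ (x * np.ones(K))   # primary error component [S]Kg x
cancelling = S @ Ke @ y                   # secondary error component [S]Ke y
e = disturbance + cancelling              # error signals at the M error sensors
print(np.round(np.abs(e), 3))
```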
An example of the sensor weighting matrix Me may be Me(σb) = diag(me1(σb), . . . , meM(σb)), i.e. an M×M diagonal matrix with the coefficients me1(σb), . . . , meM(σb) on the main diagonal and zeros elsewhere. The weighted error signal may also be expressed as: Me e = Me [S]Kg x + Me [S]Ke y. Update of the Filters Wk(z): The filter Wk(z) associated with the actuator k may be updated by a standard Least Mean Square (LMS) method, or a standard Filtered-x Least Mean Square (FXLMS) method, described in, e.g., Sen M. Kuo and Dennis Morgan, Active Noise Control Systems: Algorithms and DSP Implementations (1st ed.), John Wiley & Sons, Inc., New York, NY, USA, 1995. A respective updated filter may be generated in order to reduce the respective weighted error signal, or the respective error signal. The updated filter may be the filter provided with at least one updated filter coefficient. The respective updated filter coefficient may be generated based on the respective weighted error signal and a reference signal x′km(n), to reduce the respective weighted error signal. The filter Wk(z) at a time step n can be expressed as: Wk(n) = [wk,0(n) wk,1(n) . . . wk,Lw−1(n)]T, wherein wk,i are the filter coefficients of the filter Wk. Here the filter Wk(z) is provided with more than one filter coefficient; alternatively, the filter Wk(z) may be provided with only one filter coefficient. The updated filter Wk(z) at a time step n+1 can be expressed as: Wk(n+1) = Wk(n) - μ Σm=1..M x′km(n) e′m(n), wherein x′km(n) represents a reference audio signal; e′m(n) represents the weighted error signal; and μ is a step size. The weighted error signal e′m(n) may be obtained by application of the set of sensor weighting coefficients mem to the error signal em. The reference signal x′km(n) may be generated based on the provided audio data signal x(n), the set of actuator exclusion coefficients kek, and the secondary sound path model Ŝ representing acoustic transmission paths between each of the plurality of actuators 1 and each of the plurality of error sensors 2. The reference signal x′km(n) can be expressed as: x′km = kek (Ŝkm * x), wherein * denotes convolution. The step size μ may be a positive real number. The step size μ may have a small magnitude relative to the filter coefficients. The step size μ may be determined based on an amplitude of the audio data signal x(n). A typical value of μ may be between 0 and 1. If the step size μ is set to zero and the initial value of the filter coefficients of the filter Wk(z) for each adaptive filter is set to zero, then the adaptive filter is not actively involved in the system/method. Filters Vk(z): FIG. 4 is another example of a method for creating a plurality of sound zones within an acoustic cavity. Compared with the diagram of FIG. 3, a respective static filter Vk is provided for each adaptive filter. The method according to FIG. 4 further comprises providing a respective static filter for filtering the provided audio data signal x(n), and generating a respective filtered signal in response to the provided audio data signal x(n). Each of the adaptive filters receives the respective filtered signal as the input signal, and generates a respective output signal y(n) based on the input signal and the filter coefficients. The respective static filter Vk may be an independent filter outside the respective adaptive filter. As the static filter Vk is a static filter, the formulation for updating the filter coefficients of the filter Wk(z) may remain the same.
That is, even with the static filter Vk in the system, the updated filter Wk(z) at the time step n+1 can still be expressed as: Wk(n+1) = Wk(n) - μ Σm=1..M x′km(n) e′m(n). The static filters Vk(z) may be defined as a converged solution of the adaptive filter used in the method of FIG. 3 for a broadband audio data signal x(n), e.g. a broadband noise, in the frequency range of interest, wherein z refers to the z-transform. The static filter Vk may be a vector of filter coefficients [Vk,0, Vk,1, . . . ] of the same length as the filter Wk. A broadband noise, also known as a wideband noise, is a noise signal whose energy is present over a wide audible range of frequencies, as opposed to a narrowband noise. Providing the static filter Vk, as shown in FIG. 4, may make the filter coefficients of the filter Wk(z) tend towards zero (0) for the broadband audio data signal x(n). Thus, the method of FIG. 4 may be adapted for any broadband audio data signal x(n) with similar statistical characteristics as the data signal used to determine the static filters Vk(z). The static filters Vk(z) may be derived offline, e.g. during a calibration. Alternatively, the static filters Vk(z) may be derived by simulation based on the secondary path model Ŝ representing the acoustic transmission paths from each of the plurality of actuators 1 to each of the plurality of error sensors 2. During a simulated calibration, the acoustic transmission paths from each actuator to each error sensor may be simulated based on the secondary path model Ŝ. The static filters Vk(z) may be the same or different for each sound zone. If the step size μ is set to zero and the filter coefficients of the filter Wk(z) for each adaptive filter are set to zero, then the adaptive filter is not actively involved in the system/method, and only the static filters Vk(z) are involved. In FIG. 5a, the bright sound zone A is at the left front seat position, i.e. the driver's position. The two dark sound zones B1, B2 are at the front passenger's and rear passengers' positions, respectively. FIG. 5b shows an example of the arrangement of the actuators and the error sensors in the car cockpit of FIG. 5a for creating a plurality of sound zones. Six actuators 1-1, 1-2, . . . , 1-5, 1-6, and eight error sensors 2-1, 2-2, . . . , 2-7, 2-8, are arranged within the car cockpit. FIG. 5c shows a simulation result of a contrast in sound pressure levels between the bright sound zone A and the dark sound zones B1, B2, respectively, created by the method and the system using the actuators and error sensors arranged according to FIG. 5b, for a broadband audio data signal. The x-axis represents a frequency value in Hz. FIG. 5c only shows the simulation result in a frequency range of 30 to 120 Hz. The y-axis represents a contrast of a sound pressure level (SPL) between two sound zones in dBA. The short dashed line represents the contrast of sound pressure level between the sound zone A and the sound zone B1 of FIG. 5a. The dotted line represents the contrast of sound pressure level between the sound zone A and the sound zone B2 of FIG. 5a. The simulation is performed based on the measured transmission paths from the six actuators 1-1, 1-2, . . . , 1-5, 1-6, to the eight error sensors 2-1, 2-2, . . . , 2-7, 2-8, as shown in FIG. 5b. Based on the arrangement of the six actuators 1-1, 1-2, . . . , 1-5, 1-6 and the eight error sensors 2-1, 2-2, . . . , 2-7, 2-8 as shown in FIG. 5b, the set of actuator generation coefficients kgk, the set of actuator exclusion coefficients kek, and the set of sensor weighting coefficients mem used in the simulation can be expressed as the matrices Kg, Ke, and Me, respectively. The matrices Kg, Ke, and Me may be diagonal matrices comprising only elements of zeros (0s) and ones (1s). The actuator generation matrix Kg used may be Kg = diag(1, 0, 0, 0, 0, 0). The actuator exclusion matrix Ke used may be Ke = diag(0, 1, 1, 1, 1, 0). The sensor weighting matrix Me used may be Me = diag(1, 0, 0, 0, 1, 1, 0, 0). From the simulation result in FIG. 5c, it is clear that in the frequency range of about 55-105 Hz, the contrast is at least 10 dB between the bright sound zone A and either one of the dark sound zones B1, B2. Thus, the simulation result shown in FIG. 5c demonstrates that the sound zones created by the proposed method satisfy the perceptual requirement of at least 10 dB contrast in the frequency range of about 55-105 Hz. Also, the contrast of sound pressure level (SPL) between the bright sound zone A and the dark sound zone B2, i.e. the contrast of sound pressure level between the driver position and the rear passenger position, is larger than that between the bright sound zone A and the dark sound zone B1, i.e. the contrast of sound pressure level between the driver position and the front passenger position. The coefficients used in the simulation are only zeros (0s) and ones (1s) to verify the inventive concept. The simulation result of FIG. 5c can be further improved by having more finely adjusted coefficients rather than only zeros (0s) or ones (1s). At least one of the set of actuator generation coefficients kgk, the set of actuator exclusion coefficients kek, and the set of sensor weighting coefficients mem may be determined by an optimization process. The optimization process may comprise: determining a plurality of monitor locations within the acoustic cavity; and determining, for each of the acoustic transmission paths from each actuator to each monitor location, a respective transfer function; wherein at least one monitor location is arranged within each of the plurality of sound zones. The transfer function is defined as a mathematical relation between a sound source and a response, e.g. from an actuator to a monitor location, or to an error sensor. An acoustic transmission path therebetween can be fully characterised based on the transfer function. The monitor locations may be determined to be at a head or an ear position of a person within a sound zone, such as a head or an ear position of a driver or a passenger of a vehicle. A monitor sensor, e.g. a microphone, may be provided at each of the plurality of monitor locations. The respective transfer function may be determined by measuring a response at the monitor location. Said determining the respective transfer function by measuring may comprise: driving at least one of the plurality of actuators with a signal, preferably a white or pink noise signal, and measuring a sound response at at least one of the plurality of monitor locations. The monitor sensors may be used to measure a sound response at the head or the ear position of the person within the sound zone. The monitor sensors may be used only during the optimization process. That is, the monitor sensors may not be used for creating a plurality of sound zones within the acoustic cavity. Alternatively, one or more of the monitor sensors may also be used as the error sensors for creating a plurality of sound zones within the acoustic cavity.
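Gathering the reference filtering, the weighted error signals and the Wk update introduced above, a compact time-domain sketch of the adaptation loop may look as follows. It is a toy example with invented dimensions, coefficient sets and FIR secondary-path models, not the patented implementation, and the same secondary paths are reused as the model Ŝ. The determination of the coefficient sets by an optimization process continues below.

```python
# Toy multichannel FXLMS-style adaptation of the filters W_k (all values invented).
import numpy as np

rng = np.random.default_rng(2)
K, M, L, Ls, mu, N = 2, 3, 8, 4, 0.005, 4000
S = rng.normal(scale=0.2, size=(K, M, Ls))   # FIR secondary paths S_km; also used as the model S_hat
kg = np.array([1.0, 0.0])                    # actuator generation coefficients k_gk (example)
ke = np.array([0.0, 1.0])                    # actuator exclusion coefficients k_ek (example)
me = np.array([1.0, 1.0, 0.0])               # sensor weighting coefficients m_em (example)
W = np.zeros((K, L))                         # adaptive FIR filters W_k

x = rng.normal(size=N)                       # common audio data signal x(n)
# Filtered references x'_km(n) = k_ek * (S_hat_km * x)(n) and disturbances sum_k (S_km * k_gk x)(n).
x_ref = np.array([[ke[k] * np.convolve(x, S[k, m])[:N] for m in range(M)] for k in range(K)])
d = np.array([sum(np.convolve(kg[k] * x, S[k, m])[:N] for k in range(K)) for m in range(M)])

y = np.zeros((K, N))
e_w = np.zeros((M, N))
for n in range(L, N):
    y[:, n] = W @ x[n - L + 1:n + 1][::-1]   # adaptive-filter outputs y_k(n)
    cancel = [sum(np.dot(S[k, m], ke[k] * y[k, n - Ls + 1:n + 1][::-1]) for k in range(K))
              for m in range(M)]             # cancelling components (S_km * k_ek y_k)(n)
    e_w[:, n] = me * (d[:, n] + np.array(cancel))   # weighted error signals e'_m(n)
    for k in range(K):                       # W_k(n+1) = W_k(n) - mu * sum_m x'_km(n) e'_m(n)
        W[k] -= mu * sum(e_w[m, n] * x_ref[k, m, n - L + 1:n + 1][::-1] for m in range(M))

print("weighted error power, first vs last 500 steps:",
      np.mean(e_w[:, L:L + 500] ** 2), np.mean(e_w[:, -500:] ** 2))
```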
The respective transfer function may be determined by simulation. No monitor sensor is needed for simulation. The optimization process may comprise determining the set of actuator generation coefficients kgkfor generating a first sound at a first monitor location arranged within the desired sound zone, wherein a first value representing the first sound is greater than a first threshold. The first value may be a squared pressure level of the first sound, expressed as <dmonitors2>σb. The first threshold may be a value representing a minimal sound desired to be detected at a monitor location within the desire sound zone. That is, it is to determine the set of actuator generation coefficients kgkso that a big enough sound can be generated in the desired sound zone. Preferably, the set of actuator generation coefficients kgkmay be determined to maximise the first value. That is, to make the first value as great as possible. The optimization process may comprise determining the set of actuator generation coefficients kgkfor generating a second sound at a second monitor location arranged outside the desired sound zone, wherein a second value representing the second sound is smaller than a second threshold. The second value may be a squared pressure level of the second sound, expressed as <dmonitors2>σd. The second threshold may be a value representing a maximal sound to be detected at a monitor location outside the desire sound zone. Thus, it is to determine the set of actuator generation coefficients kgkso that a small enough sound can be generated outside the desired sound zone. That is, it is to generate less sound to be cancelled. Preferably, the set of actuator generation coefficients kgkmay be determined to minimise the second value. That is, to reduce the second value to a smallest possible amount. It is known that, for a low frequency sound, it is not possible to completely isolate the bright sound zones from the dark sound zones. That is, a reduction of a sound level of a low frequency sound in the dark sound zones will unavoidably affect a sound level of the low frequency sound in the bright sound zones as well. The coefficients memand/or kekmay be chosen so that by an optimal control of a sound field at the error sensors, a minimum squared value of an error signal at the monitor positions can be achieved in the dark sound zones σd, while keeping as much as possible the desired signal amplitude at the monitor positions in the bright sound zone σb. The optimization process may comprise determining a set of coefficients memand/or kekthat minimize the following function <emonitors2>σd+α<|emonitors−dmonitors|2>σb. <emonitors2>σdmay refer to a squared pressure level of a sound in the dark sound zones, which is generated by the actuators, along with the generation of the desired sound in the bright sound zone A. The sound may represent a resulting sound generated outside the desired sound zone, by the plurality of the actuators. That is, the sound is undesired, which needs to be cancelled or at least reduced. Thus, minimising <emonitors2>σdmeans to minimise the resulting sound generated outside the desired sound zone. α<|emonitors−dmonitors|2>σbmay refer to an amount of sound reduction within the bright sound zone A. α<|emonitors−dmonitors|2>σb, may represent an amount of a sound reduction in the desired sound zone, caused by a sound generated for cancelling the third sound. Thus, minimising α<|emonitors−dmonitors|2>σbmeans to keep the sound reduction in the bright zone as little as possible. 
α may be a weighting factor which weighs these two aspects. α may be used for controlling how much a bright sound zone may be affected by the method/system. α may be any positive real number.

For any determined set of actuator generation matrix K_g, actuator exclusion matrix K_e and sensor weighting matrix M_e, and for a known broadband input signal x, the following equation

M_e·e = M_e[S]K_g·x + M_e[S]K_e·y

has a solution for y, denoted y_opt, that minimizes M_e·e in a least-squares sense. Based on the solution y_opt and information on the transmission paths from each of the actuators to each of the monitor sensors, an optimal cancelling signal can be obtained at each of the monitor locations or monitor sensors. By summing the optimal cancelling signal with the disturbance signal d_monitors, the error signal e_monitors can be obtained at the monitor sensors.

FIG. 6a is an example of a process diagram for determining the set of actuator generation coefficients k_gk, the set of actuator exclusion coefficients k_ek, and the set of sensor weighting coefficients m_em, for a sound zone. In S1, a secondary path model S representing the acoustic transmission paths from all potential actuator positions to all potential error sensor positions and all monitor positions within the acoustic cavity is determined, by measurement or simulation.

FIG. 6b is an example illustrating the potential positions for actuators 1, for error sensors 2, and for monitor sensors 6 within a vehicle, as in S1. The monitor sensors may be provided at a head or ear position of a person, such as a driver or a passenger, as shown in FIG. 6b. In S2, the actuator positions and the error sensor positions are determined for an optimal control of all sound zones within the acoustic cavity, based on the determined secondary path model S. The actuators and error sensors may be provided in the acoustic cavity according to the respective determined positions.

FIG. 6c is an example illustrating the vehicle of FIG. 6b with the actuators 1 and error sensors 2 provided at the determined positions. In S3, the set of actuator generation coefficients k_gk, the set of actuator exclusion coefficients k_ek, and the set of sensor weighting coefficients m_em are determined for each sound zone. In S4, the coefficients determined in S3 are stored in a storage unit 5. The storage unit 5 may be provided within the vehicle, as shown in FIG. 6d. Alternatively, the storage unit 5 may be provided outside the vehicle, e.g. as a cloud storage unit. In S5, when a respective static filter is used in the method/system, the static filter is calibrated for each sound zone. The respective static filter calibration result may then be stored in the storage unit 5 in S4.

FIG. 7 is an example of a method for creating a plurality of sound zones within an acoustic cavity. Compared with FIG. 4, the method further comprises determining a desired sound zone, and retrieving the determined set of actuator generation coefficients k_gk, the determined set of actuator exclusion coefficients k_ek, and the determined set of sensor weighting coefficients m_em, based on the determined sound zone, from, e.g., the storage unit 5. The storage unit 5 may be provided within the vehicle or outside the vehicle.
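As a concrete illustration of the least-squares step above, the following minimal Python/NumPy sketch solves M_e·e = M_e[S]K_g·x + M_e[S]K_e·y for y_opt at a single frequency bin. The matrix sizes, the random placeholder data and the variable names are illustrative assumptions for this sketch only; in practice S, K_g, K_e and M_e would come from the measured or simulated secondary path model and the optimization described above, and the solve would be repeated per frequency bin:

import numpy as np

# Hypothetical sizes: L actuators, M error sensors, one generation input x
# and Q cancellation signals y, all at a single frequency bin.
L, M, Q = 8, 6, 2
rng = np.random.default_rng(0)

S   = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))  # secondary paths: actuators -> error sensors
K_g = rng.standard_normal((L, 1))          # actuator generation coefficients
K_e = rng.standard_normal((L, Q))          # actuator exclusion matrix: cancellation signals -> actuators
M_e = np.diag(rng.uniform(0.5, 1.0, M))    # sensor weighting matrix
x   = np.array([[1.0 + 0.0j]])             # known (broadband) input at this frequency bin

# M_e e = M_e S K_g x + M_e S K_e y: choose y so that ||M_e e|| is minimal in a least-squares sense.
A = M_e @ S @ K_e
b = -(M_e @ S @ K_g @ x)
y_opt, *_ = np.linalg.lstsq(A, b, rcond=None)

e = S @ (K_g @ x + K_e @ y_opt)            # residual error-sensor signal with the optimal cancelling signal
print(np.linalg.norm(M_e @ e))

Together with the transmission paths to the monitor sensors, y_opt would then yield the cancelling signal, and hence e_monitors, at each monitor position as described above.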
11862140
DETAILED DESCRIPTION

FIG. 1 shows a schematic view of an ANC enabled playback device in the form of a headphone HP that in this example is designed as an over-ear or circumaural headphone. Only a portion of the headphone HP is shown, corresponding to a single audio channel. However, extension to a stereo headphone will be apparent to the skilled reader for this and the following disclosure. The headphone HP comprises a housing HS carrying a speaker SP, a feedback noise microphone or error microphone FB_MIC and an ambient noise microphone or feedforward microphone FF_MIC. The error microphone FB_MIC is particularly directed or arranged such that it records both sound played over the speaker SP and ambient noise. Preferably the error microphone FB_MIC is arranged in close proximity to the speaker, for example close to an edge of the speaker SP or to the speaker's membrane, such that the speaker sound may be the predominant source for recording. The ambient noise/feedforward microphone FF_MIC is particularly directed or arranged such that it mainly records ambient noise from outside the headphone HP. Still, negligible portions of the speaker sound may reach the microphone FF_MIC. Depending on the type of ANC to be performed, the ambient noise microphone FF_MIC may be omitted if only feedback ANC is performed. The error microphone FB_MIC may be used according to the improved concept to provide an error signal that is the basis for a determination of the wearing condition, or leakage condition, of the headphone HP when the headphone HP is worn by a user. In the embodiment of FIG. 1, a sound control processor SCP is located within the headphone HP for performing various kinds of signal processing operations, examples of which will be described within the disclosure below. The sound control processor SCP may also be placed outside the headphone HP, e.g. in an external device such as a mobile handset or phone, or within a cable of the headphone HP.

FIG. 2 shows a block diagram of a generic adaptive ANC system. The system comprises the error microphone FB_MIC and the feedforward microphone FF_MIC, both providing their output signals to the sound control processor SCP. The noise signal recorded with the feedforward microphone FF_MIC is further provided to a feedforward filter for generating an anti-noise signal that is output via the speaker SP. At the error microphone FB_MIC, the sound being output from the speaker SP combines with ambient noise and is recorded as an error signal that includes the remaining portion of the ambient noise after ANC. This error signal is used by the sound control processor SCP for adjusting a filter response of the feedforward filter.

FIG. 3 shows an example representation of a "leaky" type earphone, i.e. an earphone featuring some acoustic leakage between the ambient environment and the ear canal EC. In particular, a sound path between the ambient environment and the ear canal EC exists, denoted as "acoustic leakage" in the drawing.

FIG. 4 shows an example configuration of a headphone HP worn by a user with several sound paths. The headphone HP shown in FIG. 4 stands as an example for any ear mountable playback device of a noise cancellation enabled audio system and can e.g. include in-ear headphones or earphones, on-ear headphones or over-ear headphones. Instead of a headphone, the ear mountable playback device could also be a mobile phone or a similar device.
The headphone HP in this example features a loudspeaker SP, a feedback noise microphone FB_MIC and, optionally, an ambient noise microphone FF_MIC, which e.g. is designed as a feedforward noise cancellation microphone. Internal processing details of the headphone HP are not shown here for reasons of a better overview. In the configuration shown in FIG. 4, several sound paths exist, each of which can be represented by a respective acoustic response function or acoustic transfer function. For example, a first acoustic transfer function DFBM represents a sound path between the speaker SP and the feedback noise microphone FB_MIC, and may be called a driver-to-feedback response function. The first acoustic transfer function DFBM may include the response of the speaker SP itself. A second acoustic transfer function DE represents the acoustic sound path between the headphone's speaker SP, potentially including the response of the speaker SP itself, and a user's eardrum ED being exposed to the speaker SP, and may be called a driver-to-ear response function. A third acoustic transfer function AE represents the acoustic sound path between the ambient sound source and the eardrum ED through the user's ear canal EC, and may be called an ambient-to-ear response function. A fourth acoustic transfer function AFBM represents the acoustic sound path between the ambient sound source and the feedback noise microphone FB_MIC, and may be called an ambient-to-feedback response function. If the ambient noise microphone FF_MIC is present, a fifth acoustic transfer function AFFM represents the acoustic sound path between the ambient sound source and the ambient noise microphone FF_MIC, and may be called an ambient-to-feedforward response function.

Response functions or transfer functions of the headphone HP, in particular between the microphones FB_MIC and FF_MIC and the speaker SP, can be used with a feedback filter function B and a feedforward filter function F, which may be parameterized as noise cancellation filters during operation. The headphone HP as an example of the ear-mountable playback device may be embodied with both the microphones FB_MIC and FF_MIC being active or enabled such that hybrid ANC can be performed, or as an FB ANC device, where only the feedback noise microphone FB_MIC is active and an ambient noise microphone FF_MIC is not present or at least not active. Hence, in the following, if signals or acoustic transfer functions are used that refer to the ambient noise microphone FF_MIC, this microphone is to be assumed as present, while it is otherwise assumed to be optional.

Any processing of the microphone signals or any signal transmission is left out in FIG. 4 for reasons of a better overview. However, processing of the microphone signals in order to perform ANC may be implemented in a processor located within the headphone or other ear-mountable playback device, or externally from the headphone in a dedicated processing unit. The processor or processing unit may be called a sound control processor. If the processing unit is integrated into the playback device, the playback device itself may form a noise cancellation enabled audio system. If processing is performed externally, the external device or processor together with the playback device may form the noise cancellation enabled audio system. For example, processing may be performed in a mobile device like a mobile phone or a mobile audio player, to which the headphone is connected with or without wires.
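A frequency-domain sanity check of how these response functions interact might look as follows (Python/NumPy). The complex gains below are arbitrary single-frequency placeholders, not measured data, and the check ignores causality and leakage variation; it only illustrates that a feedforward filter close to AE/(AFFM·DE), applied with opposite sign as anti-noise, nulls the ambient contribution at the eardrum in this idealized linear model (the same quotient reappears in section 1 below):

import numpy as np

# Arbitrary complex gains at one frequency for the transfer functions defined above.
AE   = 0.6 * np.exp(1j * 0.4)    # ambient -> eardrum
AFFM = 0.9 * np.exp(1j * 0.1)    # ambient -> feedforward microphone FF_MIC
DE   = 1.2 * np.exp(1j * 0.7)    # driver  -> eardrum (including the speaker response)

ambient = 1.0 + 0.0j             # ambient noise component at this frequency

# Sound at the eardrum: direct ambient path plus the anti-noise played via the driver.
F = -AE / (AFFM * DE)            # ideal feedforward filter (minus sign expresses the anti-phase output)
at_ear = AE * ambient + DE * F * (AFFM * ambient)
print(abs(at_ear))               # ~0 in this idealized model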
In the various embodiments, the FB or error microphone FB_MIC may be located in a dedicated cavity, as for example detailed in ams application EP17208972.4.

Referring now to FIG. 5, another example of a noise cancellation enabled audio system is presented. In this example implementation, the system is formed by a mobile device like a mobile phone MP that includes the playback device with speaker SP, feedback or error microphone FB_MIC, ambient noise or feedforward microphone FF_MIC and a sound control processor SCP for performing inter alia ANC and/or other signal processing during operation. In a further implementation, not shown, a headphone HP, e.g. like that shown in FIG. 1 or FIG. 4, can be connected to the mobile phone MP, wherein signals from the microphones FB_MIC, FF_MIC are transmitted from the headphone to the mobile phone MP, in particular to the mobile phone's processor PROC, for generating the audio signal to be played over the headphone's speaker. For example, depending on whether the headphone is connected to the mobile phone or not, ANC is performed with the internal components, i.e. speaker and microphones, of the mobile phone or with the speaker and microphones of the headphone, thereby using different sets of filter parameters in each case.

In the following, several implementations of the improved concept will be described in conjunction with specific use cases. It should however be apparent to the skilled person that details described for one implementation may still be applied to one or more of the other implementations. Generally, the following steps are performed, e.g. with the sound control processor SCP:
controlling and/or monitoring a playback of a detection signal or a filtered version of the detection signal via the speaker SP;
recording an error signal from the error microphone FB_MIC; and
determining whether the headphone or other playback device HP is in a first state, where the playback device HP is worn by a user, or in a second state, where the playback device HP is not worn by a user, based on processing of the error signal.

1. Adaptive Headphone with Ear Cushion

In one embodiment of this disclosure there is a headphone with a front volume which is directly acoustically coupled to the ear canal volume of a user, a driver SP which faces into the front volume and a rear volume which surrounds the rear face of the driver SP. The rear volume may have a vent with an acoustic resistor to allow some pressure relief from the rear of the driver. The front volume may also have a vent with an acoustic resistor to allow some pressure relief at the front of the driver. An error microphone FB_MIC is placed facing the front face of the driver such that it detects ambient noise and the signals from the front of the driver; and a feedforward microphone FF_MIC is placed facing out of the rear of the headphone such that it detects ambient noise, but detects negligible signals from the driver SP. An ear cushion surrounds the front face of the driver and makes up part of the front volume. In normal operation the headphone is placed on a user's head such that a complete or partial seal is made between the ear cushion and the user's head, thereby at least in part acoustically coupling the front volume to the ear canal volume. The feedforward microphone FF_MIC, the error microphone FB_MIC and the driver SP are connected to the sound control processor SCP acting as a noise cancellation processor.
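The general three-step procedure listed above can be summarized in a small structural skeleton before the concrete detection tests of the following sections are filled in. The class and method names below are hypothetical and not taken from the patent; the concrete off-ear and on-ear tests are supplied by the embodiments in sections 1 to 7:

from enum import Enum, auto

class WearState(Enum):
    ON_EAR = auto()    # first state: playback device is worn
    OFF_EAR = auto()   # second state: playback device is not worn

class WearDetector:
    """Structural sketch: playback of the detection signal is controlled elsewhere,
    the error signal is recorded block-wise, and the state is decided from it."""

    def __init__(self, state=WearState.ON_EAR):
        self.state = state

    def process_block(self, detection_block, error_block):
        # decide the state; the concrete test depends on the embodiment (regression,
        # phase difference, or ANC-performance based, see the sections below)
        if self.state is WearState.ON_EAR and self._off_ear_test(detection_block, error_block):
            self.state = WearState.OFF_EAR
        elif self.state is WearState.OFF_EAR and self._on_ear_test(detection_block, error_block):
            self.state = WearState.ON_EAR
        return self.state

    def _off_ear_test(self, detection_block, error_block):
        raise NotImplementedError  # e.g. FF-filter regression or ANC-performance check

    def _on_ear_test(self, detection_block, error_block):
        raise NotImplementedError  # e.g. phase difference between detection and error signal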
Referring to FIG. 2, a noise signal detected by the FF microphone FF_MIC is routed through an FF filter and ultimately the headphone speaker SP, producing an anti-noise signal such that FF noise cancellation occurs at the error microphone point, and consequently at the ear drum reference point (DRP). The noise signal is used as the detection signal. The error signal from the error microphone FB_MIC is routed to an adaption engine in the sound control processor SCP that in some way changes the anti-noise signal that is output from the speaker by changing at least one property of the FF filter to optimise noise cancellation at the error microphone FB_MIC.

The sound control processor SCP periodically monitors the FF filter response at at least one frequency and compares this to a predefined set of acceptable filter responses which are stored in a memory of the sound control processor SCP. If the FF filter response is judged to be beyond the acceptable filter responses, an off ear state, i.e. the second state, is triggered and the adaption engine ceases to change the FF filter in response to the error microphone signal. For instance, the FF filter is set to a low leak setting. For example, the FF filter may in some part represent the inverse of the low frequency characteristics of the driver response. The resultant FF filter response may be analysed at three low frequencies: 80 Hz, 100 Hz and 130 Hz. A different number of frequencies and a different frequency range may also be selected. For example, a lower limit of a predefined frequency range may be between 40 Hz and 100 Hz and an upper limit of the predefined frequency range may be between 100 Hz and 800 Hz. Therefore a linear regression may determine the gradient and gain of this FF filter. In this example there is one acceptable filter response stored in memory as gradient and gain scalar values which e.g. represent a linear regression of the inverse of the low frequency portion of the driver response when it is almost off the ear, that is with a high acoustic leakage between the ear cushion and the head. When the gradient of the linear regression of the FF filter becomes greater than the acceptable threshold filter gradient, or if the gain is greater than the acceptable threshold filter gain value, then an off ear state is triggered. The FF filter may be a close match of the transfer function AE / (AFFM · DE), where AE is the ambient-to-ear transfer function, AFFM is the ambient-to-FF-microphone transfer function and DE is the driver-to-ear transfer function.

When the headphone is in the off ear state, i.e. the second state, the sound control processor SCP stops running unnecessary processes such as music playback and Bluetooth connection and switches to a low power mode, which may include clocking processes at a lower rate and may include clocking the microphone ADCs at a lower rate. In this second state, the sound control processor SCP monitors the signals from the error and FF microphones and calculates a phase difference of these two signals, i.e. the detection signal and the error signal. The phase calculation may occur by taking the FFTs of the two signals and dividing them, then analysing when, e.g., the mean argument (phase) of several bins of the FFT division moves beyond a threshold. The phase detection may occur by filtering each time domain signal; the filter may be one or more DFTs or implementations of the Goertzel algorithm at at least one frequency.
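Before the phase evaluation is continued below, the regression-based off-ear test just described can be put into a short sketch. The frequencies (80, 100 and 130 Hz) come from the example above, while the filter-response values and the gradient/gain thresholds are hypothetical placeholders; in practice the acceptable response would be characterised per product and stored in memory as described:

import numpy as np

def off_ear_by_regression(freqs_hz, ff_gain_db, grad_threshold, gain_threshold):
    """Fit a line to the FF filter magnitude response at a few low frequencies and
    trigger the off-ear (second) state when gradient or gain exceeds its threshold."""
    gradient, gain = np.polyfit(freqs_hz, ff_gain_db, deg=1)  # linear regression: slope and intercept
    return gradient > grad_threshold or gain > gain_threshold

# Hypothetical numbers: FF filter response (dB) sampled at the three example frequencies.
freqs = np.array([80.0, 100.0, 130.0])
response_on_ear  = np.array([6.0, 5.5, 5.0])      # placeholder response while worn
response_off_ear = np.array([14.0, 16.5, 19.5])   # placeholder response with very high leakage

GRAD_THR, GAIN_THR = 0.05, 10.0                   # placeholder acceptable-threshold values
print(off_ear_by_regression(freqs, response_on_ear,  GRAD_THR, GAIN_THR))   # False
print(off_ear_by_regression(freqs, response_off_ear, GRAD_THR, GAIN_THR))   # True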
The division of the phase responses of these two filtered signals at each frequency can give the phase difference at each frequency. For instance, the mean of these phase differences can be compared to a threshold. The phase detection may occur entirely in the time domain. If the phase difference moves beyond the threshold, then the earphone is returned to an on ear state, i.e. the first state. The FF filter is reset to a known stable state and adaption is re-enabled, that is, the error signal from the error microphone FB_MIC continues to have an effect on the FF filter.

Referring to FIG. 6, a signal diagram displaying the phase difference between the error signal and the detection signal for different wearing states of a headphone or playback device is shown. For example, one phase difference signal corresponds to a 0 mm leak, another phase difference signal corresponds to a 28 mm leak and a third phase difference signal corresponds to an off ear state with a leakage that is larger than an acceptable maximum leakage, for example. These leakages are derived from a customised leakage adaptor, and are equivalent to a minimum and maximum realistic acoustic leakage. As can be seen from the diagram, in a frequency range from above 30 Hz to around 400 Hz, the phase difference in the off ear state is around 180°, whereas in the two other wearing states the phase difference is significantly different, in particular lower. Hence, for example, evaluation of the phase difference in the mentioned frequency range, in particular by comparing it to a phase threshold value, can give a good indication that the playback device is in or going to the on ear state.

2. Adaptive, Acoustically Leaky Earphone

Another embodiment features an earphone with a driver, a rear volume and a front volume, e.g. like that shown in FIG. 3. The rear volume has a rear vent which is damped with an acoustic resistor. The front volume has a front vent which is damped with an acoustic resistor. The physical shape of the earphone dictates that when placed into an ear there is often an acoustic leakage between the ear canal and the earphone housing. This leakage may change depending on the shape of the ear, and how the earphone is sitting in the ear. An FF microphone FF_MIC is placed on the rear of the earphone such that it detects ambient noise but does not detect a significant signal from the driver. An error microphone FB_MIC is placed in close proximity to the front face of the driver such that it detects the driver's signal and the ambient noise signal. The noise signal from the FF microphone is, controlled by the sound control processor SCP, passed through the FF filter which outputs an anti-noise signal via the driver SP such that the superposition of the anti-noise signal and the ambient noise creates at least some noise cancellation. The error signal from the error microphone FB_MIC is passed into the signal processor and controls the FF filter such that the anti-noise signal changes based on the acoustic leakage between the ear canal walls and the earphone body. In this embodiment, the resultant filter response is analysed at at least one frequency and compared with an acoustic response that is representative of the earphone being at an extremely high leak. If the resultant filter response exceeds this acoustic response, the earphone enters an off ear state. This off ear state may stop adaption and set a filter for a medium acoustic leakage.
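The phase-based on-ear test used in sections 1 and 2 can be sketched as follows. The single-bin DFT below stands in for the Goertzel filters mentioned above; the test frequencies, block length and threshold value are illustrative assumptions. The direction of the comparison follows the FIG. 6 description (a phase difference near 180° when the device is off the ear, clearly lower when worn), so the sign convention of the measured difference may differ in a concrete product:

import numpy as np

def single_bin_phase(x, f_hz, fs_hz):
    """Phase of one DFT bin of block x at frequency f_hz (Goertzel-like single-bin evaluation)."""
    n = np.arange(len(x))
    return np.angle(np.sum(x * np.exp(-2j * np.pi * f_hz * n / fs_hz)))

def on_ear_by_phase(detection_block, error_block, fs_hz, freqs_hz, phase_threshold_rad):
    """Return True (first state, on ear) when the mean phase difference between the detection
    signal and the error signal leaves the ~180 deg region observed in the off-ear case."""
    diffs = []
    for f in freqs_hz:
        d = single_bin_phase(detection_block, f, fs_hz) - single_bin_phase(error_block, f, fs_hz)
        diffs.append(np.angle(np.exp(1j * d)))       # wrap to (-pi, pi]
    return np.mean(np.abs(diffs)) < phase_threshold_rad

# Hypothetical usage with synthetic blocks at fs = 16 kHz and test frequencies of 100 Hz and 200 Hz.
fs = 16000
t = np.arange(2048) / fs
det = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 200 * t)
err_on  = 0.4 * np.sin(2 * np.pi * 100 * t - 0.6) + 0.4 * np.sin(2 * np.pi * 200 * t - 0.8)  # worn: small lag
err_off = -0.2 * det                                                                          # off ear: ~180 deg
print(on_ear_by_phase(det, err_on,  fs, [100, 200], phase_threshold_rad=np.deg2rad(120)))  # True
print(on_ear_by_phase(det, err_off, fs, [100, 200], phase_threshold_rad=np.deg2rad(120)))  # False

In a real device the two blocks would be the recorded detection and error microphone signals, evaluated periodically while the device is in the off-ear state.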
In this off ear state, the signals from both microphones are monitored again at at least one frequency, and when the phase difference exceeds a pre-defined threshold the earphone is returned to an on ear state, as described before in section 1 in conjunction with FIG. 6. In the case that voice is present, the off ear detection still runs. In the case that quiet music is played from the driver, the off ear detection can still run. In the case that the music is substantially louder than the ambient noise, an alternative off ear detection metric may run as described in section 5 below. In this embodiment, the resultant FF filter may be arranged according to ams patent application EP17189001.5.

3. Non-Adaptive Earphone

In another embodiment, the ANC headphones as previously described do not have an adaption means, i.e. they feature a constant response of the feedforward filter. The FF filter is fixed. In this embodiment, an approximation to the ANC performance is made. If the ANC performance is substantially worse than what is expected, the playback device is assumed to be off the ear. For example, the ANC performance is approximated by dividing the energy levels of the error microphone and the FF microphone. The headphone can then enter an off ear state. The on ear state can be triggered in exactly the same way as, or at least similarly to, an adaptive headphone, by monitoring the phase difference between the two microphones, as described before e.g. in section 1 in conjunction with FIG. 6. In the case that voice is present, a voice activity detector may pause the off ear detection algorithm to avoid false positives. In the case that music is present, the energy level of the music, offset by the driver response, may be subtracted from the energy level of the signal at the error microphone FB_MIC.

4. Headphone or Earphone with Hybrid ANC

In this embodiment, the headphone may be as described in previous embodiments, but also features FB ANC in addition to FF ANC. For FB ANC, the FB microphone FB_MIC is connected to the driver via an FB filter, which may or may not be adaptive. The detection approaches described previously still apply for such embodiments with hybrid ANC.

5. Triggered by Music

Another embodiment may or may not feature noise cancellation, but adapts a filter in accordance with a response of the driver SP changing due to a varying acoustic leakage between the earphone and the ear canal. This filter may be used as all or part of a music compensation filter to compensate for music being attenuated by a feedback noise cancellation system, or may be used to compensate for the driver response changing due to the leakage. FIG. 7 shows an arrangement of this filter. In this case, the filter is adapted to match the acoustic "driver to error microphone" transfer function. In this embodiment, the headphone features at least the error microphone FB_MIC, wherein the presence of the feedforward microphone FF_MIC is not excluded. Here, a known identification signal WIS (e.g. a music signal or other payload audio signal) is output from the driver SP as a reference. The identification signal WIS is also filtered with the adaptive filter. The off ear case may be triggered by monitoring the adapted filter and analysing it as previously described. In particular, a similar evaluation as done with an adaptive feedforward filter is performed with the adapted, adjustable filter, e.g. by comparing a gain and/or gradient to respective associated threshold values.
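The adjustable filter of section 5 tracks the acoustic driver-to-error-microphone response using the known identification signal WIS as a reference. The patent leaves the adaptation method open; the sketch below uses a normalized LMS (NLMS) identification loop as one plausible choice, with a made-up "true" path and a noise-like WIS purely for illustration:

import numpy as np

rng = np.random.default_rng(2)
n_taps, n_samples, mu, eps = 12, 4000, 0.5, 1e-6

true_path = rng.standard_normal(n_taps) * np.exp(-0.4 * np.arange(n_taps))  # made-up driver -> FB_MIC response
wis = rng.standard_normal(n_samples)                    # known identification signal sent to the driver
err_mic = np.convolve(wis, true_path)[:n_samples]       # what the error microphone picks up (no ambient noise here)

w = np.zeros(n_taps)                                    # adjustable filter that should converge to true_path
buf = np.zeros(n_taps)
for n in range(n_samples):
    buf = np.roll(buf, 1); buf[0] = wis[n]
    y = np.dot(w, buf)                                  # filtered identification signal
    e = err_mic[n] - y                                  # mismatch between model and error-microphone signal
    w += mu * e * buf / (np.dot(buf, buf) + eps)        # NLMS update

print(np.max(np.abs(w - true_path)))                    # small -> filter approximates the driver-to-error-mic path
# An off-ear decision could then evaluate w (e.g. gain/gradient of its response) as described above.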
In this case, the on ear case may be triggered by monitoring the phase difference between the error signal from the error microphone FB_MIC and the known identification signal WIS driving the speaker SP.

6. Quiet Ambient Noise and No Music

In this embodiment, an adaptive or non-adaptive noise cancelling earphone with an FF and an FB microphone is presented. In this case, the ambient noise may be extremely quiet, such that any useful signal from the microphones is in part masked by electronic noise from the microphones or other electronic means. That is, any signal from the microphones contains a significant portion of both useful ambient noise and random electronic noise. Furthermore, no music or only music with a low signal level is being played from the device. This case e.g. represents having the earphone in an ear but where there is negligible ambient noise and no useful sound is being played out of the driver. In this case, the previously detailed on/off ear detection methods will not be able to run reliably because the microphones cannot detect a usable signal from ambient noise or music playback. In this case, a similar approach as described above in section 5 may be used. For example, an identification signal WIS is generated by changing the filter between the FF microphone and the driver such that a small degree of noise boosting occurs at the FB microphone. Referring to FIG. 8, instead of changing the FF ANC filter, a dedicated boosting filter can be applied to the noise signal of the FF microphone FF_MIC in order to generate the identification signal WIS. This identification signal WIS can be used to adjust the adjustable filter to match the acoustic "driver to error microphone" transfer function, as described above. With this process, the FB microphone can detect a useful signal from the driver, but because the filtered noise signal WIS from the FF microphone still contains a significant portion of the quiet ambient noise, the signal from the driver is largely coherent with the quiet ambient noise and is as such less perceivable to the user than playing an uncorrelated signal from the driver. In this case, a useful identification signal WIS is played via the driver, which is barely detectable to the user, and can be used, as in section 5 where a known identification signal WIS is played from the driver, to detect if the earphone is on or off the ear.

7. Mobile Handset

Another embodiment implements a mobile handset with an FF microphone FF_MIC and an error microphone FB_MIC, e.g. as shown in FIG. 5. When the handset is placed on the ear, a partially closed air volume exists in the concha cavity with an acoustic leakage, and some ANC can take place. In this environment, the ANC would typically have some form of adaption as the acoustic leakage is liable to change significantly at each use. On and off ear detection can occur according to sections 1 or 2, for example. Where applicable, any combination of these embodiments as described in the previous sections is plausible. For example, an adaptive earphone may use off ear detection based on the FF filter and the phase difference between the two microphones, but may switch to be triggered by music if the ambient noise level is quiet or the ratio of music to ambient noise is high.

In the following text, further aspects of the present disclosure are specified. The individual aspects are enumerated in order to facilitate the reference to features of other aspects.
1. An audio system for an ear mountable playback device comprising a speaker and an error microphone that senses or predominantly senses sound being output from the speaker, the audio system comprising a sound control processor that is configured to
control and/or monitor a playback of a detection signal or a filtered version of the detection signal via the speaker;
record an error signal from the error microphone; and
determine whether the playback device is in a first state, where the playback device is worn by a user, or in a second state, where the playback device is not worn by a user, based on processing of the error signal.

2. The audio system according to aspect 1, wherein the sound control processor is configured to determine the first state based on an evaluation of a phase difference between the detection signal and the error signal.

3. The audio system according to aspect 2, wherein the sound control processor is configured to determine the first state, if the phase difference between the detection signal and the error signal exceeds a phase threshold value at one or more predefined frequencies.

4. The audio system according to aspect 2 or 3, wherein the evaluation of the phase difference is performed in the frequency domain.

5. The audio system according to one of aspects 1 to 4, which is configured to perform noise cancellation.

6. The audio system according to aspect 5, wherein the playback device further comprises a feedforward microphone that predominantly senses ambient sound and wherein the sound control processor is further configured to
record a noise signal from the feedforward microphone and use the noise signal as the detection signal;
filter the detection signal with a feedforward filter; and
control the playback of the filtered detection signal via the speaker.

7. The audio system according to aspect 6, wherein the sound control processor is configured to determine the first state based on an evaluation of a performance of the noise cancellation as a function of the error signal and the noise signal or detection signal.

8. The audio system according to aspect 6 or 7, wherein the sound control processor is configured to determine the second state based on an evaluation of a performance of the noise cancellation as a function of the error signal and the noise signal or detection signal.

9. The audio system according to aspect 7 or 8, which further comprises a voice activity detector for determining whether a voice signal is recorded with the error microphone and/or the feedforward microphone, wherein the sound control processor is configured to pause a determination of the first and/or the second state, if the voice signal is determined to be recorded.

10. The audio system according to one of aspects 7 to 9, wherein the sound control processor is configured to evaluate the performance of the noise cancellation by determining an energy ratio between the error signal and the noise signal or detection signal.

11. The audio system according to aspect 10, wherein the sound control processor is configured, if a music signal is additionally played via the speaker, to take an energy level of the music signal into account when determining the energy ratio.

12. The audio system according to one of aspects 7 to 11, wherein a filter response of the feedforward filter is constant and/or is kept constant by the sound control processor at least during the determination of the state of the playback device.
13. The audio system according to aspect 6, wherein the sound control processor is configured
to adjust a filter response of the feedforward filter based on the error signal; and
to determine the second state based on an evaluation of the filter response of the feedforward filter at at least one predetermined frequency.

14. The audio system according to aspect 13, wherein the sound control processor is configured to determine the second state if the filter response of the feedforward filter at the at least one predetermined frequency exceeds a response threshold value.

15. The audio system according to aspect 13 or 14, wherein the sound control processor is configured to determine the second state by determining a linear regression of the filter response of the feedforward filter in a predefined frequency range, the linear regression being defined by at least a filter gradient and a filter gain, and by evaluating the filter gradient and/or the filter gain.

16. The audio system according to aspect 15, wherein the sound control processor is configured to determine the second state if at least one of the following applies:
the filter gradient exceeds a threshold gradient value;
the filter gain exceeds a threshold gain value.

17. The audio system according to aspect 15 or 16, wherein a lower limit of the predefined frequency range is between 40 Hz and 100 Hz and an upper limit of the predefined frequency range is between 100 Hz and 800 Hz.

18. The audio system according to one of aspects 6 to 17, wherein the feedforward microphone senses only a negligible portion of the sound being output from the speaker.

19. The audio system according to one of aspects 1 to 18, wherein the detection signal is an identification signal, and wherein the sound control processor is configured
to control and/or monitor the playback of the identification signal via the speaker;
to filter the identification signal with an adjustable filter;
to adjust the adjustable filter based on a difference between the filtered identification signal and the error signal, in particular such that the adjustable filter approximates an acoustic transfer function between the speaker and the error microphone; and
to determine the second state based on an evaluation of a filter response of the adjustable filter at at least one further predetermined frequency.

20. The audio system according to aspect 19, wherein the identification signal is one of the following or a combination of one of the following:
a music signal;
a payload audio signal;
a filtered version of a noise signal that is recorded from a microphone predominantly sensing ambient sound.

21. The audio system according to aspect 19 or 20, wherein the sound control processor is configured to determine the second state if the filter response of the adjustable filter at the at least one further predetermined frequency exceeds an identification response threshold value.

22. The audio system according to one of aspects 19 to 21, wherein the sound control processor is configured to determine the second state by determining a linear regression of the filter response of the adjustable filter in a further predefined frequency range, the linear regression being defined by at least an identification filter gradient and an identification filter gain, and by evaluating the identification filter gradient and/or the identification filter gain.
23. The audio system according to aspect 22, wherein the sound control processor is configured to determine the second state if at least one of the following applies:
the identification filter gradient exceeds an identification threshold gradient value;
the identification filter gain exceeds an identification threshold gain value.

24. The audio system according to aspect 22 or 23, wherein a lower limit of the further predefined frequency range is between 40 Hz and 100 Hz and an upper limit of the further predefined frequency range is between 100 Hz and 800 Hz.

25. The audio system according to one of the preceding aspects, wherein the sound control processor is configured to control the audio system to a low power mode of operation, if the second state is determined, and to a regular mode of operation, if the first state is determined.

26. The audio system according to one of the preceding aspects, wherein the sound control processor is configured to determine whether the playback device is in the first state, only if the playback device is in the second state, and to determine whether the playback device is in the second state, only if the playback device is in the first state.

27. The audio system according to one of the preceding aspects, which includes the playback device.

28. The audio system according to the preceding aspect, wherein the sound control processor is included in a housing of the playback device.

29. The audio system according to one of the preceding aspects, wherein the playback device is a headphone or an earphone.

30. The audio system according to aspect 29, wherein the headphone or earphone is designed to be worn with a variable acoustic leakage between a body of the headphone or earphone and a head of a user.

31. The audio system according to one of aspects 1 to 27, wherein the playback device is a mobile phone.

32. A signal processing method for an ear mountable playback device comprising a speaker and an error microphone that senses or predominantly senses sound being output from the speaker, the method comprising
controlling and/or monitoring a playback of a detection signal or a filtered version of the detection signal via the speaker;
recording an error signal from the error microphone; and
determining whether the playback device is in a first state, where the playback device is worn by a user, or in a second state, where the playback device is not worn by a user, based on processing of the error signal.

33. The method according to aspect 32, wherein the first state is determined based on an evaluation of a phase difference between the detection signal and the error signal.

34. The method according to aspect 33, wherein the first state is determined, if the phase difference between the detection signal and the error signal exceeds a phase threshold value at one or more predefined frequencies.

35. The method according to aspect 33 or 34, wherein the evaluation of the phase difference is performed in the frequency domain.

36. The method according to one of aspects 32 to 35, further comprising performing noise cancellation.

37. The method according to aspect 36, wherein the playback device further comprises a feedforward microphone that predominantly senses ambient sound and wherein the method further comprises
recording a noise signal from the feedforward microphone and using the noise signal as the detection signal;
filtering the detection signal with a feedforward filter; and
controlling the playback of the filtered detection signal via the speaker.
38. The method according to aspect 37, further comprising
determining the first state and/or the second state based on an evaluation of a performance of the noise cancellation as a function of the error signal and the noise signal or detection signal.

39. The method according to aspect 37, further comprising
adjusting a filter response of the feedforward filter based on the error signal; and
determining the second state based on an evaluation of the filter response of the feedforward filter at at least one predetermined frequency.

40. The method according to one of aspects 32 to 39, wherein the detection signal is an identification signal, the method further comprising
controlling and/or monitoring the playback of the identification signal via the speaker;
filtering the identification signal with an adjustable filter;
adjusting the adjustable filter based on a difference between the filtered identification signal and the error signal, in particular such that the adjustable filter approximates an acoustic transfer function between the speaker and the error microphone; and
determining the second state based on an evaluation of a filter response of the adjustable filter at at least one further predetermined frequency.
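As a final illustration of the ANC-performance evaluation referred to in aspects 7 to 11 and in section 3 above, a crude energy-ratio test could look as follows. The expected ratio, the margin, the optional driver-gain offset for music and all parameter names are hypothetical placeholders; they stand in for values that would be characterised for a concrete product:

import numpy as np

def anc_performance_off_ear(error_block, noise_block, music_block=None, driver_gain=1.0,
                            expected_ratio=0.1, margin=3.0):
    """Compare the energy ratio between the error-microphone signal and the
    feedforward/detection signal with the ratio expected while the device is worn;
    a much larger ratio suggests the off-ear (second) state."""
    error_energy = np.mean(np.square(error_block))
    if music_block is not None:
        # optionally discount the music energy, offset by a (placeholder) driver response gain
        error_energy = max(error_energy - driver_gain * np.mean(np.square(music_block)), 0.0)
    noise_energy = np.mean(np.square(noise_block)) + 1e-12
    return (error_energy / noise_energy) > margin * expected_ratio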
11862141
MODE FOR CARRYING OUT THE INVENTION

Embodiments to which the present technology is applied will be described below with reference to the drawings.

First Embodiment

Present Technology

First, an outline of the present technology will be described. Here, an example will be described in which, from an input acoustic signal obtained by collecting a mixed speech uttered by a plurality of speakers at the same time or at different timings with one or a plurality of microphones, utterances (speeches), one for each of the speakers, are separated by using a single sound source separation model. In particular, here, the number of speakers included in the mixed speech based on the input acoustic signal is unknown. The present technology makes it possible to more easily separate utterances (speeches), one for each of an unspecified unknown number of speakers, from an input acoustic signal by using a single sound source separation model to recursively perform sound source separation on the input acoustic signal. Note that, in the examples described here, the sounds of the sound sources are utterances of speakers, but the sounds are not limited to this, and may be any sounds such as animal calls or instrument sounds.

The sound source separation models used in the present technology are models such as neural networks learned to separate input speeches on a speaker-to-speaker basis. That is, the sound source separation models have been learned in advance to separate, from an acoustic signal for learning of a mixed speech including an utterance of a speaker as a sound source, an acoustic signal of the utterance of the speaker. The sound source separation models perform a computation using an arithmetic coefficient in accordance with a predetermined sound source separation algorithm to separate an input acoustic signal into acoustic signals (hereinafter, also referred to as separated signals), one for each of the sound sources (speakers), and are implemented by the sound source separation algorithm and the arithmetic coefficient.

In the present technology, sound source separation using a sound source separation model is performed on an input acoustic signal of a mixed speech in which the number of speakers is unknown or known. Then, on the basis of the obtained separated signals, it is determined whether or not a predetermined end condition is satisfied. Sound source separation using the same sound source separation model is recursively performed on the separated signals until it is determined that the end condition is satisfied, and finally, separated signals, one for each of the sound sources (speakers), are obtained.

Here, as a specific example, a case will be described in which a two-speaker separation model learned to separate an acoustic signal for learning including utterances of two speakers as sound sources into a separated signal including an utterance of one speaker and a separated signal including an utterance of the other speaker is used as a sound source separation model. Such a sound source separation model can be obtained by learning by using a learning technique such as deep clustering or permutation invariant training. In the two-speaker separation model, when an input acoustic signal of a mixed speech by two speakers is input, it is expected that separated signals of utterances (speeches), one for each of the speakers, are output as a sound source separation result.
Furthermore, in the two-speaker separation model, when an input acoustic signal of a speech by one speaker is input, it is expected that a separated signal of an utterance of the one speaker and a silent separated signal are output as a sound source separation result. On the other hand, in a case where an input of the two-speaker separation model, that is, an input acoustic signal, is a signal of a mixed speech of three or more speakers, such a mixed speech is an input that has not appeared at the time of learning of the two-speaker separation model. In this case, in response to the input of the mixed speech of three speakers, sound source separation is performed such that utterances (speeches) of two speakers are included in one separated signal, as illustrated in FIG. 1, for example.

In the example illustrated in FIG. 1, a mixed speech based on an input acoustic signal includes utterances of three speakers, a speaker PS1 to a speaker PS3. As a result of sound source separation, that is, speaker separation, on such an input acoustic signal using the two-speaker separation model as indicated by an arrow Q11, the mixed speech is separated such that one separated signal includes only the utterance of the speaker PS1 and the other separated signal includes only the utterances of the speaker PS2 and the speaker PS3. Furthermore, for example, as a result of further sound source separation using the two-speaker separation model on the separated signal including only the utterance of the speaker PS1, as indicated by an arrow Q12, the speech is separated such that one separated signal includes only the utterance of the speaker PS1 and the other separated signal is a silent signal. In a similar manner, for example, as a result of further sound source separation using the two-speaker separation model on the separated signal including only the utterances of the speaker PS2 and the speaker PS3, as indicated by an arrow Q13, the mixed speech is separated such that one separated signal includes only the utterance of the speaker PS2 and the other separated signal includes only the utterance of the speaker PS3.

In this way, when sound source separation is recursively performed on an input acoustic signal by using the same two-speaker separation model, separated signals, each of which includes only the utterance of a corresponding one of the speaker PS1 to the speaker PS3, are obtained. In this example, at the time when the first sound source separation indicated by the arrow Q11 is performed, the obtained separated signals include only utterances of at most two speakers. In most cases, the input acoustic signal is not separated into a separated signal of the utterances of the three speakers and a silent separated signal. Therefore, at the time when the first sound source separation has been performed, all the separated signals are speeches that can be solved by using the two-speaker separation model, that is, signals from which separated signals, one for each of the speakers, can be obtained. Then, recursive sound source separation is performed on such separated signals as indicated by the arrow Q12 and the arrow Q13, so that separated signals, one for each of the speakers, can be obtained. Note that, even in a case where the input acoustic signal is a mixed speech of utterances of four or more speakers, the number of times sound source separation is recursively performed can be increased so that separated signals, one for each of the speakers, can finally be obtained.
Furthermore, in a case where sound source separation is recursively performed to separate an input acoustic signal into separated signals, one for each of the speakers (to extract separated signals), when the number of speakers of the mixed speech of the input acoustic signal is unknown (not known), an end condition for ending the recursive sound source separation is required. This end condition is a condition satisfied when a separated signal obtained by the sound source separation includes only an utterance of one speaker, in other words, a condition satisfied when a separated signal does not include utterances of two or more speakers. Here, as an example, in a case where one separated signal obtained by the sound source separation is a silent signal, in more detail, in a case where an average level (energy) of one separated signal is equal to or less than a predetermined threshold value, it is assumed that the end condition is satisfied, that is, separated signals, one for each of the speakers, are obtained.

According to the present technology as described above, even in a case where the number of speakers of an input acoustic signal is unknown, sound source separation can be easily performed without need for a model for estimating the number of speakers, a sound source separation model for each number of speakers, direction information indicating a direction of a sound source, or the like, and a separated signal of each sound source (speaker) can be obtained. That is, in the present technology, separated signals, one for each of the speakers, can be obtained by one sound source separation model regardless of the number of speakers of an input acoustic signal, and it is possible to simplify a system, reduce the necessary amount of memory, integrate development of the sound source separation models, and the like. Moreover, in the present technology, sound source separation is performed recursively so that a problem (task) to be solved by each sound source separation can be simplified, and as a result, separation performance can be improved.

Note that an example of using a two-speaker separation model as the sound source separation model has been described here. However, this is not restrictive, and recursive sound source separation may be performed by a speaker separation model of a plurality of speakers that separates an input acoustic signal into separated signals, one for each of three or more speakers, such as a three-speaker separation model. For example, the three-speaker separation model is a speaker separation model learned to separate an acoustic signal for learning including utterances of three speakers as sound sources into three separated signals, each of which includes a corresponding one of the utterances of the three speakers, that is, separated signals, one for each of the three speakers.

Configuration Example of Signal Processing Device

Next, a signal processing device to which the present technology is applied will be described. The signal processing device to which the present technology is applied is configured as illustrated in FIG. 2, for example. A signal processing device 11 illustrated in FIG. 2 has a sound source separation unit 21 and an end determination unit 22.
The sound source separation unit 21 receives an input acoustic signal from the outside. Furthermore, the sound source separation unit 21 retains a sound source separation model obtained in advance by learning. Note that, in this embodiment, the description will be given on the assumption that the input acoustic signal is an acoustic signal of a mixed speech in which the number of speakers, particularly the number of speakers who have made utterances at the same time, is unknown. Furthermore, here, the sound source separation model retained by the sound source separation unit 21 is a two-speaker separation model.

In accordance with a result of end determination supplied from the end determination unit 22, the sound source separation unit 21 recursively performs, on the basis of the sound source separation model that is retained, sound source separation on the supplied input acoustic signal to obtain separated signals, and supplies the resulting separated signals to the end determination unit 22. The end determination unit 22 performs end determination to determine whether or not to end the recursive sound source separation, that is, whether or not an end condition is satisfied, on the basis of the separated signals supplied from the sound source separation unit 21, and supplies the determination result to the sound source separation unit 21. Furthermore, if it is determined that the end condition is satisfied, the end determination unit 22 outputs the separated signals obtained by the sound source separation to a subsequent stage as acoustic signals of utterances, one for each of the speakers.

Description of Sound Source Separation Processing

Next, sound source separation processing performed by the signal processing device 11 will be described with reference to a flowchart in FIG. 3. In step S11, the sound source separation unit 21 performs, on the basis of the sound source separation model that is retained, sound source separation on the supplied input acoustic signal to obtain separated signals, and supplies the resulting separated signals to the end determination unit 22. Specifically, the sound source separation unit 21 performs arithmetic processing in accordance with a sound source separation algorithm corresponding to the sound source separation model on the basis of an arithmetic coefficient constituting the sound source separation model and the input acoustic signal, and obtains two separated signals, which are an output of the sound source separation model.

In step S12, on the basis of the separated signals supplied from the sound source separation unit 21, the end determination unit 22 performs end determination for each pair (set) of two separated signals obtained by one sound source separation, and determines whether or not all the pairs satisfy an end condition. Specifically, for example, the end determination unit 22 determines, for one pair, that the pair satisfies the end condition if an average level of one of the two separated signals constituting the pair is equal to or less than a predetermined threshold value. If it is determined in step S12 that not all of the pairs satisfy the end condition, the end determination unit 22 supplies the sound source separation unit 21 with information indicating the pair that does not satisfy the end condition as a result of the end determination, and then the processing proceeds to step S13.
In step S13, on the basis of the result of the end determination supplied from the end determination unit 22, the sound source separation unit 21 performs sound source separation using a sound source separation model on each of the separated signals constituting the pair that does not satisfy the end condition to obtain separated signals, and supplies the resulting separated signals to the end determination unit 22. For example, in step S13, the same sound source separation model as the one used in step S11 is used for the sound source separation. Note that the sound source separation may be recursively performed with the use of a plurality of sound source separation models that are different from each other. For example, a three-speaker separation model may be used for the sound source separation in step S11 and a two-speaker separation model may be used for the sound source separation in step S13. After the recursive sound source separation is performed in the processing of step S13, the processing returns to step S12, and the processing described above is repeated until it is determined that all the pairs satisfy the end condition.

For example, in the example illustrated in FIG. 1, since one separated signal is a silent signal in the sound source separation indicated by the arrow Q12, the pair of separated signals obtained as a result of the sound source separation indicated by the arrow Q12 satisfies the end condition. On the other hand, since a silent separated signal cannot be obtained by the sound source separation indicated by the arrow Q13 in FIG. 1, it is not determined that the end condition is satisfied, and recursive sound source separation is performed in step S13 for each of the two separated signals obtained by the sound source separation indicated by the arrow Q13.

Furthermore, if it is determined in step S12 in FIG. 3 that all the pairs satisfy the end condition, the input acoustic signal has been separated into separated signals, one for each of the speakers, and thus the processing proceeds to step S14. In step S14, the end determination unit 22 outputs, to a subsequent stage, the separated signals, one for each of the speakers, obtained by the sound source separations that have been performed, and the sound source separation processing ends. As described above, the signal processing device 11 recursively performs the sound source separation on the input acoustic signal until the end condition is satisfied, and obtains the separated signals, one for each of the speakers. In this way, sound source separation can be performed more easily and with sufficient separation performance.

Second Embodiment

Synthesis from Separation Results

Meanwhile, in a case where sound source separation is recursively performed on an input acoustic signal by using a speaker separation model as a sound source separation model, an utterance of a certain speaker may be dispersed into different separation results, that is, different separated signals. Specifically, for example, as illustrated in FIG. 1, a case is assumed in which sound source separation is performed by using a two-speaker separation model on an input acoustic signal of a mixed speech including utterances of the speaker PS1 to the speaker PS3. In this case, for example, an utterance of a certain speaker may not appear only in one separated signal as in the result of sound source separation indicated by the arrow Q11 in FIG. 1, but may appear in a dispersed manner in two separated signals as illustrated in FIG. 4.
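Before the dispersion issue of FIG. 4 is discussed in more detail, the recursive procedure of steps S11 to S14 can be sketched as follows. The two-speaker separation model is treated as a black box; the toy model and the multi-track "signal" representation below only exercise the recursion and are not a real separator, and the silence threshold is an arbitrary assumption:

import numpy as np

SILENCE_THRESHOLD = 1e-4   # placeholder for the predetermined average-level threshold

def is_silent(signal):
    """End-condition helper: an empty or low-average-energy signal counts as silent."""
    return signal.size == 0 or np.mean(np.square(signal)) <= SILENCE_THRESHOLD

def separate_recursively(signal, two_speaker_model):
    """Recursive separation as in steps S11 to S14: run the model, check the pair against
    the end condition, and recurse on any output that may still contain several speakers."""
    out_a, out_b = two_speaker_model(signal)
    if is_silent(out_a) or is_silent(out_b):            # end condition: one branch is silent
        return [out_b if is_silent(out_a) else out_a]   # keep the single-speaker signal
    return (separate_recursively(out_a, two_speaker_model) +
            separate_recursively(out_b, two_speaker_model))

# Stand-in for a learned two-speaker model, only to exercise the recursion: the "signal"
# here is a stack of per-speaker tracks and the model simply splits the stack in two.
def toy_model(tracks):
    half = max(1, tracks.shape[0] // 2)
    return tracks[:half], tracks[half:]

rng = np.random.default_rng(3)
mixture_of_three = rng.standard_normal((3, 16000))      # three hypothetical single-speaker tracks
print(len(separate_recursively(mixture_of_three, toy_model)))   # 3: one separated signal per speaker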
Note that, in FIG. 4, the same reference numerals are given to the portions corresponding to those in the case of FIG. 1, and the description thereof will be omitted as appropriate. In the example illustrated in FIG. 4, sound source separation (speaker separation) is recursively performed by using a two-speaker separation model on an input acoustic signal of a mixed speech including utterances of a speaker PS1 to a speaker PS3. Here, first, as indicated by an arrow Q21, sound source separation is performed on the input acoustic signal. As a result, a separated signal including the utterance of the speaker PS1 and a part of the utterance of the speaker PS2, and a separated signal including the utterance of the speaker PS3 and a part of the utterance of the speaker PS2, are obtained. That is, although each of the utterances of the speaker PS1 and the speaker PS3 appears only in one separated signal, the utterance of the speaker PS2 is dispersed into two separated signals.

Here, recursive sound source separation using the two-speaker separation model as indicated by an arrow Q22 is performed on the separated signal including the utterance of the speaker PS1 and a part of the utterance of the speaker PS2 obtained as a result of the sound source separation indicated by the arrow Q21, so that separated signals, one for each of the speakers, are obtained. That is, in this example, as a result of the sound source separation indicated by the arrow Q22, a separated signal including only the utterance of the speaker PS1 and a separated signal including only a part of the utterance of the speaker PS2 are obtained. In a similar manner, recursive sound source separation using the two-speaker separation model as indicated by an arrow Q23 is performed on the separated signal including the utterance of the speaker PS3 and a part of the utterance of the speaker PS2 obtained as the result of the sound source separation indicated by the arrow Q21, so that separated signals, one for each of the speakers, are obtained. That is, in this example, as a result of the sound source separation indicated by the arrow Q23, a separated signal including only the utterance of the speaker PS3 and a separated signal including only a part of the utterance of the speaker PS2 are obtained.

Even in such an example, each of the resulting separated signals includes only an utterance of one speaker. However, here, the utterance of the speaker PS2 is dispersed into two separated signals. Thus, two or more separated speeches, that is, separated speeches (utterances) of the same speaker dispersed into a plurality of separated signals, may be combined into one synthesized utterance of the speaker. In such a case, it is possible to use a speaker identification model to which separated signals are input and from which a speaker identification result is output. Specifically, for example, a neural network or the like that identifies any large number of speakers is learned in advance as a speaker identification model. Here, in a case where the number of speakers at the time of learning of the speaker identification model is large, it is not necessary that these speakers include the speakers who are actual targets of sound source separation. A speaker identification model is prepared in this way, and then the speaker identification model is used for clustering of the separated signals obtained by sound source separation, that is, of the speakers corresponding to the separated signals.
At the time of clustering, each separated signal is input to the speaker identification model, and speaker identification is performed. At this time, an output of the speaker identification model, that is, a result of the speaker identification, or an activation (output) of an intermediate layer of the speaker identification model, that is, a computation result in the middle of arithmetic processing for obtaining a speaker identification result, is obtained as a feature value (speaker embedding) representing the speaker corresponding to the input separated signal. Note that, at the time of calculation of the feature value representing the speaker, a silent section of the separated signal can be ignored in the calculation. When the feature value has been obtained for each of the separated signals (separated speeches), a distance of the feature values to each other, that is, the distance between the feature values is obtained. Separated signals in which the distance between the feature values is equal to or less than a threshold value is determined to be separated signals of the same speaker. Moreover, as a result of the clustering, one separated signal is synthesized and obtained from a plurality of separated signals determined to be of the same speaker, as a final separated signal of the speaker. Therefore, for example, in the example inFIG.4, the separated signal including only a part of the utterance of the speaker PS2obtained by the sound source separation indicated by the arrow Q22and the separated signal including only a part of the utterance of the speaker PS2obtained by the sound source separation indicated by the arrow Q23are assumed to be of the same speaker. Then, the separated signals are added so that one separated signal is synthesized, and the resulting signal is output as a final separated signal including the utterance of the speaker PS2. Configuration Example of Signal Processing Device In a case where clustering of separated signals obtained by sound source separation is performed as described above, a signal processing device is configured as illustrated inFIG.5, for example. Note that, inFIG.5, the same reference numerals are given to the portions corresponding to those in the case ofFIG.2, and the description thereof will be omitted as appropriate. A signal processing device51illustrated inFIG.5has a sound source separation unit21, an end determination unit22, and a same speaker determination unit61. The configuration of the signal processing device51is different from the configuration of the signal processing device11in that the same speaker determination unit61is newly provided, but is otherwise the same as the configuration of the signal processing device11. The same speaker determination unit61performs a same speaker determination of determining whether or not a plurality of separated signals obtained by recursive sound source separation is signals of the same speaker, and then synthesizes and generates, in accordance with a result of the determination, a final separated signal of the speaker from the plurality of separated signals of the same speaker. More specifically, the same speaker determination unit61retains a speaker identification model obtained in advance by learning, and performs clustering on the basis of the speaker identification model that is retained and separated signals, one for each of the speakers, supplied from the end determination unit22. 
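Given such embeddings, the distance-based same speaker determination and the synthesis of final separated signals described above can be sketched as follows; the Euclidean distance and the threshold value are illustrative assumptions.

import numpy as np

def merge_same_speaker(separated, embeddings, threshold=1.0):
    # Group separated signals whose embeddings lie within the threshold of each
    # other and add each group into one final separated signal per speaker.
    # separated:  list of 1-D numpy arrays of equal length
    # embeddings: list of 1-D numpy arrays, one per separated signal
    n = len(separated)
    group = list(range(n))                      # union-find over signal indices

    def find(i):
        while group[i] != i:
            group[i] = group[group[i]]
            i = group[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(embeddings[i] - embeddings[j]) <= threshold:
                group[find(j)] = find(i)        # determined to be the same speaker

    merged = {}
    for i in range(n):
        root = find(i)
        merged[root] = merged.get(root, 0) + separated[i]   # synthesize by addition
    return list(merged.values())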
That is, the same speaker determination unit61performs a same speaker determination by performing clustering. Furthermore, the same speaker determination unit61performs clustering to synthesize a final separated signal of a speaker from separated signals determined to be of the same speaker, and outputs finally obtained separated signals, one for each of the speakers, to a subsequent stage. Description of Sound Source Separation Processing Next, sound source separation processing performed by the signal processing device51will be described with reference to a flowchart inFIG.6. Note that the processing of step S41to step S43is similar to the processing of step S11to step S13inFIG.3, and the description thereof will be omitted. When recursive sound source separation is performed in step S41to step S43and separated signals, one for each of the speakers, are obtained, the separated signals are supplied from the end determination unit22to the same speaker determination unit61, and then the processing proceeds to step S44. That is, if it is determined in step S42that all the pairs satisfy the end condition, the processing proceeds to step S44. In step S44, the same speaker determination unit61calculates a feature value representing a speaker for each of the separated signals on the basis of the speaker identification model that is retained and the separated signals supplied from the end determination unit22. That is, the same speaker determination unit61calculates a feature value representing a speaker for each separated signal by performing a computation using the speaker identification model with the separated signal as an input. In step S45, the same speaker determination unit61determines whether or not there are separated signals of the same speaker on the basis of the feature values obtained in step S44. That is, a same speaker determination is performed. For example, for any two separated signals of all the separated signals, the same speaker determination unit61obtains a distance between the feature values of the two separated signals. If the distance is equal to or less than a predetermined threshold value, it is determined that the two separated signals are those (signals) of the same speaker. For all the separated signals, the same speaker determination unit61determines, for all possible combinations of two separated signals, whether or not the two separated signals are of the same speaker. Then, if a determination result indicating that the two separated signals are not of the same speaker is obtained for all the combinations, the same speaker determination unit61determines in step S45that there are no separated signals of the same speaker. The same speaker determination unit61performs the processing of step S44and step S45described above as clustering processing. If it is determined in step S45that there are separated signals of the same speaker, the same speaker determination unit61synthesizes, from a plurality of separated signals determined to be of the same speaker, a final separated signal of the speaker in step S46. After final separated signals, one for each of the speakers, are synthesized and obtained from the separated signals of the same speaker, the processing proceeds to step S47. On the other hand, if it is determined in step S45that there are no separated signals of the same speaker, separated signals, one for each of the speakers, have already been obtained, so the processing of step S46is skipped, and the processing proceeds to step S47. 
If it is determined in step S45that there are no separated signals of the same speaker, or if the processing of step S46is performed, the same speaker determination unit61outputs the finally obtained separated signals, one for each of the speakers, to a subsequent stage in step S47, and the sound source separation processing ends. As described above, the signal processing device51recursively performs sound source separation on an input acoustic signal until the end condition is satisfied, and performs clustering of separated signals to perform synthesis from separated signals of the same speaker and obtain final separated signals, one for each of the speakers. In this way, sound source separation can be performed more easily and with sufficient separation performance. In particular, the signal processing device51performs synthesis from separated signals of the same speaker, and this further improves the separation performance as compared with the case of the signal processing device11. Third Embodiment One-to-Many Speaker Separation Model Meanwhile, in the above, an example has been described in which sound source separation is performed by using an m (where m≥2)-speaker separation model learned so as to separate an acoustic signal of a mixed speech including utterances of m speakers into m separated signals, one for each of the speakers. In particular, at the time of sound source separation, there is a possibility that an utterance of a predetermined speaker appears in a dispersed manner in a plurality of separated signals. Therefore, in the second embodiment, an example has been described in which clustering is performed and separated signals are synthesized as appropriate. However, not only such a speaker separation model but also other speaker separation models such as a speaker separation model obtained by performing learning on an uncertain number of speakers (hereinafter, also referred to as a one-to-many speaker separation model) may be used for sound source separation. The one-to-many speaker separation model is a speaker separation model such as a neural network learned to separate an acoustic signal for learning of a mixed speech of any unknown (uncertain) number of speakers into a separated signal including only an utterance (speech) of a predetermined one speaker and a separated signal including utterances of remaining speakers excluding the predetermined one speaker among a plurality of speakers included in the mixed speech. Here, a separation result of sound source separation using the one-to-many speaker separation model, that is, an output of the one-to-many speaker separation model is also referred to as a head. In particular, here, a side on which a separated signal including an utterance of one speaker is output is also referred to as a head1, and a side on which a separated signal including utterances of other remaining speakers is output is also referred to as a head2. Furthermore, in a case where it is not particularly necessary to distinguish between the head1and the head2, they are simply referred to as heads. At the time of learning of the one-to-many speaker separation model, learning is performed so that a loss function L is minimized by using an acoustic signal for learning of the number of speakers m while randomly changing the number of speakers m of the acoustic signal for learning. At this time, the number of speakers m is set to be equal to or less than a maximum number of speakers M. 
Furthermore, the one-to-many speaker separation model is learned so that a separated signal including only the utterance of the one speaker with the smallest loss among the m speakers included in the mixed speech of the acoustic signal for learning is the output of the head 1, and a separated signal including the utterances of the remaining (m−1) speakers is the output of the head 2 at all times. Furthermore, the loss function L at the time of learning of the one-to-many speaker separation model is expressed by, for example, the following Formula (1).

[Math. 1]
L = \sum_{j} \min_{i} \left( L_{i}^{1j} + L_{i}^{2j} \right)   (1)

Note that, in Formula (1), j is an index indicating an acoustic signal for learning, that is, a mixed speech for learning, and i is an index indicating a speaker of an utterance included in the j-th mixed speech. Furthermore, in Formula (1), L_{i}^{1j} represents a loss function in which the output s'_{1}(x_{j}) of the head 1, obtained when sound source separation is performed on the acoustic signal for learning x_{j} of the j-th mixed speech, is compared with the acoustic signal s_{i}^{j} of the utterance of the i-th speaker. The loss function L_{i}^{1j} can be defined by, for example, the square error shown in the following Formula (2).

[Math. 2]
L_{i}^{1j} = \left\| s'_{1}(x_{j}) - s_{i}^{j} \right\|^{2}   (2)

Moreover, L_{i}^{2j} in Formula (1) represents a loss function in which the output s'_{2}(x_{j}) of the head 2, obtained when sound source separation is performed on the acoustic signal for learning x_{j} of the j-th mixed speech, is compared with the sum of the acoustic signals s_{k}^{j} of the remaining speakers k other than the i-th speaker. The loss function L_{i}^{2j} can be defined by, for example, the square error shown in the following Formula (3).

[Math. 3]
L_{i}^{2j} = \frac{1}{m-1} \left\| s'_{2}(x_{j}) - \sum_{k \neq i} s_{k}^{j} \right\|^{2}   (3)

In the one-to-many speaker separation model obtained by learning as described above, it is expected that a separated signal of only an utterance of one speaker is obtained as the output of the head 1, and a separated signal of the utterances of the remaining speakers is obtained as the output of the head 2 at all times. Therefore, for example, in a similar manner to the example illustrated in FIG. 1, it can be expected that separated signals including only utterances, one for each of the speakers, are sequentially separated only by recursively performing sound source separation on an input acoustic signal by using the one-to-many speaker separation model. In a case where the one-to-many speaker separation model is used in this way, for example, the sound source separation unit 21 of the signal processing device 11 retains the one-to-many speaker separation model obtained in advance by learning as a sound source separation model. Then, the signal processing device 11 performs the sound source separation processing described with reference to FIG. 3 to obtain separated signals, one for each of the speakers. However, in this case, in step S11 or step S13, the sound source separation unit 21 performs sound source separation on the basis of the one-to-many speaker separation model. At this time, since the output of the head 1 is a separated signal of an utterance of one speaker, the sound source separation using the one-to-many speaker separation model is recursively performed on the output (separated signal) of the head 2. Furthermore, in step S12, in a case where the average level of the output (separated signal) of the head 2 of the sound source separation performed most recently is equal to or less than a predetermined threshold value, it is determined that the end condition is satisfied, and the processing proceeds to step S14. 
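As a concrete rendering of Formulas (1) to (3), the following Python sketch computes the loss for a single training mixture x_j; the array layout and the helper name are assumptions made for illustration only.

import numpy as np

def one_to_many_loss(head1_out, head2_out, references):
    # Loss for one training mixture x_j (Formulas (1) to (3)); the sum over j
    # in Formula (1) is taken outside this function. Assumes m >= 2.
    # head1_out:  s'_1(x_j), 1-D array expected to match one speaker
    # head2_out:  s'_2(x_j), 1-D array expected to match the sum of the others
    # references: list of per-speaker signals s_i^j, i = 1, ..., m
    m = len(references)
    total = sum(references)                                   # sum over all m speakers
    candidates = []
    for i in range(m):
        l1 = np.sum((head1_out - references[i]) ** 2)                        # Formula (2)
        l2 = np.sum((head2_out - (total - references[i])) ** 2) / (m - 1)    # Formula (3)
        candidates.append(l1 + l2)
    return min(candidates)                                    # min over i, as in Formula (1)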
Note that an example of using a one-to-many speaker separation model in which two heads, that is, two outputs of the head1and the head2are obtained by using one input acoustic signal as an input has been described here. However, this is not restrictive. For example, sound source separation may be performed by using a one-to-many speaker separation model in which outputs of three heads can be obtained. In such a case, for example, learning is performed such that outputs of the head1and the head2, among the head1to a head3, are separated signals, each of which includes only an utterance of one speaker, and an output of the head3is a separated signal including utterances of other remaining speakers. Fourth Embodiment Combination of One-to-Many Speaker Separation Model and Clustering Moreover, even in a case where a one-to-many speaker separation model is used as a sound source separation model, utterances, one for each sound source, that is, one for each speaker, may not always be completely separated. That is, for example, an utterance of a speaker, which should be output to a head1, may slightly leak into an output of a head2. Therefore, in such a case, an utterance of the same speaker is dispersed in a plurality of separated signals obtained by recursive sound source separation as described with reference toFIG.4. However, in this case, the utterance of the speaker included in one separated signal is a slightly leaked component, and has a volume of sound much lower than that of the utterance of the speaker included in the other separated signal. Thus, also in a case where a one-to-many speaker separation model is used as a sound source separation model, clustering may be performed in a similar manner to the second embodiment. In such a case, for example, a sound source separation unit21of a signal processing device51retains a one-to-many speaker separation model obtained in advance by learning, as a sound source separation model. Then, the signal processing device51performs the sound source separation processing described with reference toFIG.6to obtain separated signals, one for each of the speakers. However, in this case, in step S41and step S43, the sound source separation unit21performs the sound source separation on the basis of the one-to-many speaker separation model, as in the case of the third embodiment. Furthermore, in step S44, an output of the speaker identification model or the like described above is calculated as a feature value representing a speaker, and if the distance between the feature values of two separated signals is equal to or less than a threshold value, it is determined that the two separated signals are of the same speaker. In addition, for example, in a case where a temporal energy variation of a separated signal is obtained as a feature value representing a speaker, and a correlation between feature values of two separated signals, that is, a correlation between the energy variations of the separated signals, is equal to or more than a threshold value, the two separated signals may be determined to be of the same speaker. 
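For the energy-correlation criterion just described, a small NumPy sketch follows; the frame length and the correlation threshold are illustrative values only.

import numpy as np

def frame_energy(signal, frame_len=1024):
    # Temporal energy variation: mean power per non-overlapping frame.
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)

def same_speaker_by_energy(sig_a, sig_b, corr_threshold=0.7):
    ea, eb = frame_energy(sig_a), frame_energy(sig_b)
    n = min(len(ea), len(eb))
    corr = np.corrcoef(ea[:n], eb[:n])[0, 1]
    # The two separated signals are determined to be of the same speaker when
    # the correlation between their energy variations is at or above the threshold.
    return corr >= corr_threshold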
Other Modified Example 1 Use of Single-Speaker Determination Model Meanwhile, in each of the embodiments described above, an example has been described in which it is determined that an end condition of recursive sound source separation is satisfied if an average level (energy) of a separated signal obtained by the sound source separation becomes sufficiently small, that is, if the average level becomes equal to or less than a threshold value. In this case, when sound source separation is performed on a separated signal including only an utterance of a single speaker, a silent separated signal is obtained and it is determined that the end condition is satisfied. Therefore, although a separated signal for each speaker is obtained in the first place at the time when the separated signal including only the utterance of the single speaker is obtained, sound source separation needs to be performed one more time, and thus the number of times of sound source separation processing increases accordingly. Such a situation is not preferable for an application or the like with a limited processing time, for example. Thus, an end determination may be performed by using a single-speaker determination model, which is an acoustic model that receives a separated signal as an input and determines whether the separated signal is an acoustic signal including only an utterance of a single speaker or an acoustic signal of a mixed speech including utterances of a plurality of speakers. In other words, the single-speaker determination model is an acoustic model for determining whether or not the number of speakers of the utterance included in the input separated signal is one. In such an example, for example, a single-speaker determination model obtained in advance by learning is retained in an end determination unit22of a signal processing device11or the signal processing device51. Then, for example, in step S12inFIG.3or step S42inFIG.6, the end determination unit22performs a computation based on the single-speaker determination model that is retained and a separated signal obtained by sound source separation, and determines whether or not the number of speakers of an utterance included in the separated signal is one. In other words, it is determined whether or not the separated signal includes only an utterance of a single speaker. Then, the end determination unit22determines that the end condition is satisfied if an obtained result of the determination indicates that the number of speakers of the utterance included in each of all the separated signals is one, that is, each of the separated signals includes only an utterance of a single speaker. In the determination using such a single-speaker determination model, a task is simplified as compared with estimation using a number-of-speakers estimation model for estimating the number of speakers of an utterance included in a separated signal. Therefore, there is an advantage that a more high-performance acoustic model (single-speaker determination model) can be obtained with a smaller model scale. That is, sound source separation can be performed more easily as compared with a case of using the number-of-speakers estimation model. By using a single-speaker determination model to determine whether the end condition is satisfied as described above, it is possible to reduce the overall processing amount (the number of times of processing) and the processing time of the sound source separation processing described with reference toFIGS.3and6. 
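With a single-speaker determination model, the end determination reduces to a per-signal yes/no check, as in the following sketch; the is_single_speaker callable stands in for the learned acoustic model and is hypothetical.

def end_condition_satisfied(separated_signals, is_single_speaker) -> bool:
    # The recursion ends when every separated signal is judged to contain
    # an utterance of exactly one speaker, without waiting for a silent output.
    return all(is_single_speaker(sig) for sig in separated_signals)

Because the condition is met as soon as each separated signal contains a single speaker, the additional separation pass that would otherwise be needed to produce a silent output is avoided, which reduces the number of times of sound source separation processing.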
Furthermore, for example, in a case of using a single-speaker determination model or the like to perform an end determination, in the sound source separation processing described with reference toFIGS.3and6, it is also possible to first perform an end determination, that is, whether or not the end condition is satisfied, and then perform recursive sound source separation in accordance with a result of the determination. In this case, for example, when the single-speaker determination model is used for the end determination, the recursive sound source separation is performed by using the single-speaker determination model on a separated signal determined to be not a separated signal including only an utterance of a single speaker. In addition, the sound source separation unit21may use a number-of-speakers determination model for determining a rough number of speakers to select a sound source separation model for recursive sound source separation. Specifically, for example, a case is assumed in which the sound source separation unit21retains a number-of-speakers determination model for determining whether an input acoustic signal is a signal including utterances of two or less speakers or a signal including utterances of three or more speakers, a two-speaker separation model, and a three-speaker separation model. In this case, the sound source separation unit21determines the number of speakers by using the number-of-speakers determination model on an input acoustic signal or a separated signal obtained by sound source separation, and selects either the two-speaker separation model or the three-speaker separation model as a sound source separation model to be used for sound source separation. That is, for example, for an input acoustic signal or a separated signal determined to be a signal including utterances of three or more speakers, the sound source separation unit21performs sound source separation using the three-speaker separation model. On the other hand, for an input acoustic signal or a separated signal determined to be a signal including utterances of two or less speakers, the sound source separation unit21performs sound source separation using the two-speaker separation model. In this way, an appropriate sound source separation model can be selectively used for sound source separation. Other Modified Example 2 Use of Language Information Furthermore, in the second embodiment or the fourth embodiment, a same speaker determination may be performed on the basis of language information of a plurality of separated signals. In particular, here, an example will be described in which text information indicating contents of a speech (utterance) based on a separated signal is used as the language information. In such a case, for example, a same speaker determination unit61of the signal processing device51performs speech recognition processing on separated signals, one for each of the speakers, supplied from the end determination unit22, and converts speeches of separated signals, one for each of the speakers, into texts. That is, text information indicating contents of an utterance based on the separated signal is generated by the speech recognition processing. Then, in a case where the texts, that is, the contents of the utterance, indicated by the text information of any two or more separated signals are merged (integrated) and the merged text forms a sentence, the same speaker determination unit61determines that the separated signals are of the same speaker. 
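As a further illustration of the model selection described above, and before the specific language-information criteria that follow, a rough number-of-speakers determination can choose between the two-speaker and three-speaker separation models, as sketched below; all three model callables are hypothetical placeholders.

def separate_with_model_selection(signal, has_three_or_more_speakers,
                                  separate_two, separate_three):
    # has_three_or_more_speakers: number-of-speakers determination model
    # separate_two / separate_three: two-speaker / three-speaker separation models
    if has_three_or_more_speakers(signal):
        return separate_three(signal)    # three or more speakers expected
    return separate_two(signal)          # two or fewer speakers expected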
Specifically, for example, in a case where utterances indicated by pieces of text information, one for each of two separated signals, are the same in timing and contents, the two separated signals are assumed to be of the same speaker. Furthermore, for example, in a case where utterances indicated by pieces of text information of two separated signals are different in timing, but these utterances, when integrated into one utterance, form a meaningful sentence, the two separated signals are assumed to be of the same speaker. In this way, using language information such as text information improves the accuracy of determining the same speaker, and thus the separation performance can be improved. Other Modified Example 3 Use of Same Speaker Determination Model Furthermore, in the second embodiment or the fourth embodiment, a same speaker determination may be performed on the basis of a same speaker determination model for determining whether or not each of any two separated signals includes an utterance of the same speaker, that is, whether or not the two separated signals are signals of the same speaker. Here, the same speaker determination model is an acoustic model in which two separated signals are input and a determination result as to whether the speakers of the utterances included one in each of the separated signals are the same or different is output. In such a case, for example, the same speaker determination unit61of the signal processing device51retains a same speaker determination model obtained in advance by learning. On the basis of the same speaker determination model that is retained and separated signals, one for each of the speakers, supplied from the end determination unit22, the same speaker determination unit61determines, for all possible combinations, whether or not the speakers of the utterances included one in each of the two separated signals are the same. In the same speaker determination using such a same speaker determination model, the task is simplified as compared with the case of the speaker identification model described above. Therefore, there is an advantage that a more high-performance acoustic model (same speaker determination model) can be obtained with a smaller model scale. Note that, at the time of determining the same speaker, separated signals of the same speaker may be specified by combining a plurality of optional methods such as the method using the distance between feature values, the method using language information, and the method using a same speaker determination model described above. Configuration Example of Computer Meanwhile, the series of pieces of processing described above can be executed not only by hardware but also by software. In a case where the series of pieces of processing is executed by software, a program constituting the software is installed on a computer. Here, the computer includes a computer incorporated in dedicated hardware, or a general-purpose personal computer capable of executing various functions with various programs installed therein, for example. FIG.7is a block diagram illustrating a configuration example of hardware of a computer that executes the series of pieces of processing described above in accordance with a program. In the computer, a central processing unit (CPU)501, a read only memory (ROM)502, and a random access memory (RAM)503are connected to each other by a bus504. The bus504is further connected with an input/output interface505. 
The input/output interface505is connected with an input unit506, an output unit507, a recording unit508, a communication unit509, and a drive510. The input unit506includes a keyboard, a mouse, a microphone, an imaging element, or the like. The output unit507includes a display, a speaker, or the like. The recording unit508includes a hard disk, a non-volatile memory, or the like. The communication unit509includes a network interface or the like. The drive510drives a removable recording medium511such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory. To perform the series of pieces of processing described above, the computer having a configuration as described above causes the CPU501to, for example, load a program recorded in the recording unit508into the RAM503via the input/output interface505and the bus504and then execute the program. The program to be executed by the computer (CPU501) can be provided by, for example, being recorded on the removable recording medium511as a package medium or the like. Furthermore, the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting. Inserting the removable recording medium511into the drive510allows the computer to install the program into the recording unit508via the input/output interface505. Furthermore, the program can be received by the communication unit509via a wired or wireless transmission medium and installed into the recording unit508. In addition, the program can be installed in advance in the ROM502or the recording unit508. Note that the program to be executed by the computer may be a program that performs the pieces of processing in chronological order as described in the present specification, or may be a program that performs the pieces of processing in parallel or when needed, for example, when the processing is called. Furthermore, embodiments of the present technology are not limited to the embodiments described above but can be modified in various ways within a scope of the present technology. For example, the present technology can have a cloud computing configuration in which a plurality of devices shares one function and collaborates in processing via a network. Furthermore, each step described in the flowcharts described above can be executed by one device or can be shared by a plurality of devices. Moreover, in a case where a plurality of pieces of processing is included in one step, the plurality of pieces of processing included in that step can be executed by one device or can be shared by a plurality of devices. Moreover, the present technology can also have the following configurations. (1) A signal processing device including:a sound source separation unit that recursively performs sound source separation on an input acoustic signal by using a predetermined sound source separation model learned in advance to separate a predetermined sound source from an acoustic signal for learning including the predetermined sound source. (2) The signal processing device according to (1), in whichthe sound source separation unit performs the sound source separation to separate a separated signal of an utterance of a speaker from the acoustic signal. (3) The signal processing device according to (2), in whichthe sound source separation unit performs the sound source separation on the acoustic signal in which the number of speakers is unknown. 
(4) The signal processing device according to (2) or (3), in whichthe sound source separation model is a speaker separation model learned to separate the acoustic signal for learning including utterances of two speakers into a separated signal including an utterance of one speaker and a separated signal including an utterance of another speaker. (5) The signal processing device according to (2) or (3), in whichthe sound source separation model is a speaker separation model learned to separate the acoustic signal for learning including utterances of three speakers into three separated signals, each of which includes a corresponding one of the utterances of the three speakers. (6) The signal processing device according to (2) or (3), in whichthe sound source separation model is a speaker separation model learned to separate the acoustic signal for learning including utterances of any plurality of speakers into a separated signal including an utterance of one speaker and a separated signal including utterances of remaining speakers excluding the one speaker among the plurality of speakers. (7) The signal processing device according to any one of (2) to (6), in whichthe sound source separation unit recursively performs the sound source separation by using a plurality of sound source separation models that are different one from each other as the predetermined the sound source separation model. (8) The signal processing device according to any one of (2) to (7), further including:an end determination unit that determines whether or not to end the recursive sound source separation on the basis of the separated signal obtained by the sound source separation. (9) The signal processing device according to (8), in whichthe end determination unit determines to end the recursive sound source separation in a case where one of the separated signals obtained by the sound source separation is a silent signal. (10) The signal processing device according to (8), in whichthe end determination unit determines that the recursive sound source separation is to be ended in a case where it is determined, on the basis of a single-speaker determination model for determining whether or not the number of speakers of an utterance included in the separated signal is one and the separated signal, that the number of speakers of the utterance included in the separated signal obtained by the sound source separation is one. (11) The signal processing device according to any one of (2) to (10), further including:a same speaker determination unit that performs a same speaker determination as to whether or not a plurality of the separated signals obtained by the recursive sound source separation is signals of the same speaker, and synthesizes a separated signal from a plurality of the separated signals of the same speaker. (12) The signal processing device according to (11), in whichthe same speaker determination unit performs the same speaker determination by clustering the separated signals. (13) The signal processing device according to (12), in whichthe same speaker determination unit calculates feature values of the separated signals, and determines that, in a case where a distance between the feature values of two of the separated signals is equal to or less than a threshold value, the two separated signals are signals of the same speaker. 
(14) The signal processing device according to (12), in whichthe same speaker determination unit performs the same speaker determination on the basis of a correlation between temporal energy variations of two of the separated signals. (15) The signal processing device according to (11), in whichthe same speaker determination unit performs the same speaker determination on the basis of language information of a plurality of the separated signals. (16) The signal processing device according to (11), in whichthe same speaker determination unit performs the same speaker determination on the basis of a same speaker determination model for determining whether two of the separated signals are signals of the same speaker. (17) A signal processing method including:recursively performing, by a signal processing device, sound source separation on an input acoustic signal by using a predetermined sound source separation model learned in advance to separate a predetermined sound source from an acoustic signal for learning including the predetermined sound source. (18) A program for causing a computer to execute processing including a step of:recursively performing sound source separation on an input acoustic signal by using a predetermined sound source separation model learned in advance to separate a predetermined sound source from an acoustic signal for learning including the predetermined sound source. REFERENCE SIGNS LIST 11Signal processing device21Sound source separation unit22End determination unit51Signal processing device61Same speaker determination unit
56,948
11862142
Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION FIG.1shows an example text-to-speech conversion system100. The text-to-speech conversion system100is an example of a system implemented as computer programs on one or more computers in one or more locations, in which the systems, components, and techniques described below can be implemented. The system100includes a subsystem102that is configured to receive input text104as an input and to process the input text104to generate speech120as an output. The input text104includes a sequence of characters in a particular natural language. The sequence of characters may include alphabet letters, numbers, punctuation marks, and/or other special characters. The input text104can be a sequence of characters of varying lengths. To process the input text104, the subsystem102is configured to interact with an end-to-end text-to-speech model150that includes a sequence-to-sequence recurrent neural network106(hereafter “seq2seq network106”), a post-processing neural network108, and a waveform synthesizer110. After the subsystem102receives input text104that includes a sequence of characters in a particular natural language, the subsystem102provides the sequence of characters as input to the seq2seq network106. The seq2seq network106is configured to receive the sequence of characters from the subsystem102and to process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language. In particular, the seq2seq network106processes the sequence of characters using (i) an encoder neural network112, which includes an encoder pre-net neural network114and an encoder CBHG neural network116, and (ii) an attention-based decoder recurrent neural network118. Each character in the sequence of characters can be represented as a one-hot vector and embedded into a continuous vector. That is, the subsystem102can represent each character in the sequence as a one-hot vector and then generate an embedding, i.e., a vector or other ordered collection of numeric values, of the character before providing the sequence as input to the seq2seq network106. The encoder pre-net neural network114is configured to receive a respective embedding of each character in the sequence and process the respective embedding of each character to generate a transformed embedding of the character. For example, the encoder pre-net neural network114can apply a set of non-linear transformations to each embedding to generate a transformed embedding. In some cases, the encoder pre-net neural network114includes a bottleneck neural network layer with dropout to increase convergence speed and improve generalization capability of the system during training. The encoder CBHG neural network116is configured to receive the transformed embeddings from the encoder pre-net neural network206and process the transformed embeddings to generate encoded representations of the sequence of characters. The encoder CBHG neural network112includes a CBHG neural network, which is described in more detail below with respect toFIG.2. The use of the encoder CBHG neural network112as described herein may reduce overfitting. In addition, it may result in fewer mispronunciations when compared to, for instance, a multi-layer RNN encoder. The attention-based decoder recurrent neural network118(herein referred to as “the decoder neural network118”) is configured to receive a sequence of decoder inputs. 
For each decoder input in the sequence, the decoder neural network118is configured to process the decoder input and the encoded representations generated by the encoder CBHG neural network116to generate multiple frames of the spectrogram of the sequence of characters. That is, instead of generating (predicting) one frame at each decoder step, the decoder neural network118generates r frames of the spectrogram, with r being an integer greater than one. In many cases, there is no overlap between sets of r frames. In particular, at decoder step t, at least the last frame of the r frames generated at decoder step t−1 is fed as input to the decoder neural network118at decoder step t+1. In some implementations, all of the r frames generated at the decoder step t−1 can be fed as input to the decoder neural network118at the decoder step t+1. The decoder input for the first decoder step can be an all-zero frame (i.e. a <GO> frame). Attention over the encoded representations is applied to all decoder steps, e.g., using a conventional attention mechanism. The decoder neural network118may use a fully connected neural network layer with a linear activation to simultaneously predict r frames at a given decoder step. For example, to predict 5 frames, each frame being an 80-D (80-Dimension) vector, the decoder neural network118uses the fully connected neural network layer with the linear activation to predict a 400-D vector and to reshape the 400-D vector to obtain the 5 frames. By generating r frames at each time step, the decoder neural network118divides the total number of decoder steps by r, thus reducing model size, training time, and inference time. Additionally, this technique substantially increases convergence speed, i.e., because it results in a much faster (and more stable) alignment between frames and encoded representations as learned by the attention mechanism. This is because neighboring speech frames are correlated and each character usually corresponds to multiple frames. Emitting multiple frames at a time step allows the decoder neural network118to leverage this quality to quickly learn how to, i.e., be trained to, efficiently attend to the encoded representations during training. The decoder neural network118may include one or more gated recurrent unit neural network layers. To speed up convergence, the decoder neural network118may include one or more vertical residual connections. In some implementations, the spectrogram is a compressed spectrogram such as a mel-scale spectrogram. Using a compressed spectrogram instead of, for instance, a raw spectrogram may reduce redundancy, thereby reducing the computation required during training and inference. The post-processing neural network108is configured to receive the compressed spectrogram and process the compressed spectrogram to generate a waveform synthesizer input. To process the compressed spectrogram, the post-processing neural network108includes a CBHG neural network. In particular, the CBHG neural network includes a 1-D convolutional subnetwork, followed by a highway network, and followed by a bidirectional recurrent neural network. The CBHG neural network may include one or more residual connections. The 1-D convolutional subnetwork may include a bank of 1-D convolutional filters followed by a max pooling along time layer with stride one. In some cases, the bidirectional recurrent neural network is a gated recurrent unit neural network. The CBHG neural network is described in more detail below with reference toFIG.2. 
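To make the r-frame prediction described above concrete, the following PyTorch sketch shows a decoder output layer that predicts r = 5 frames of 80-D mel features at once and reshapes the 400-D vector into frames, as in the example above; the decoder state size of 256 and the module itself are illustrative assumptions.

import torch
import torch.nn as nn

class MultiFrameOutput(nn.Module):
    # Fully connected layer with a linear activation that emits r spectrogram
    # frames per decoder step (here r = 5 frames, each an 80-D vector).
    def __init__(self, decoder_dim=256, n_mels=80, r=5):
        super().__init__()
        self.r, self.n_mels = r, n_mels
        self.proj = nn.Linear(decoder_dim, n_mels * r)   # 256 -> 400 for r = 5

    def forward(self, decoder_state: torch.Tensor) -> torch.Tensor:
        # decoder_state: (batch, decoder_dim) -> (batch, r, n_mels)
        return self.proj(decoder_state).view(-1, self.r, self.n_mels)

out_layer = MultiFrameOutput()
frames = out_layer(torch.zeros(1, 256))    # e.g. the all-zero <GO> step
next_input = frames[:, -1, :]              # at least the last frame is fed back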
In some implementations, the post-processing neural network108has been trained jointly with the sequence-to-sequence recurrent neural network106. That is, during training, the system100(or an external system) trains the post-processing neural network108and the seq2seq network106on the same training dataset using the same neural network training technique, e.g., a gradient descent-based training technique. More specifically, the system100(or an external system) can backpropagate an estimate of a gradient of a loss function to jointly adjust the current values of all network parameters of the post-processing neural network108and the seq2seq network106. Unlike conventional systems that have components that need to be separately trained or pre-trained and thus each component's errors can compound, systems that have the post-processing NN108and seq2seq network106that are jointly trained are more robust (e.g., they have smaller errors and can be trained from scratch). These advantages enable the training of the end-to-end text-to-speech model150on a very large amount of rich, expressive yet often noisy data found in the real world. The waveform synthesizer110is configured to receive the waveform synthesizer input, and process the waveform synthesizer input to generate a waveform of the verbal utterance of the input sequence of characters in the particular natural language. In some implementations, the waveform synthesizer is a Griffin-Lim synthesizer. In some other implementations, the waveform synthesizer is a vocoder. In some other implementations, the waveform synthesizer is a trainable spectrogram to waveform inverter. After the waveform synthesizer110generates the waveform, the subsystem102can generate speech120using the waveform and provide the generated speech120for playback, e.g., on a user device, or provide the generated waveform to another system to allow the other system to generate and play back the speech. FIG.2shows an example CBHG neural network200. The CBHG neural network200can be the CBHG neural network included in the encoder CBHG neural network116or the CBHG neural network included in the post-processing neural network108ofFIG.1. The CBHG neural network200includes a 1-D convolutional subnetwork208, followed by a highway network212, and followed by a bidirectional recurrent neural network214. The CBHG neural network200may include one or more residual connections, e.g., the residual connection210. The 1-D convolutional subnetwork208may include a bank of 1-D convolutional filters204followed by a max pooling along time layer with a stride of one206. The bank of 1-D convolutional filters204may include K sets of 1-D convolutional filters, in which the k-th set includes Ckfilters each having a convolution width of k. The 1-D convolutional subnetwork208is configured to receive an input sequence202, for example, transformed embeddings of a sequence of characters that are generated by an encoder pre-net neural network. The subnetwork208processes the input sequence using the bank of 1-D convolutional filters204to generate convolution outputs of the input sequence202. The subnetwork208then stacks the convolution outputs together and processes the stacked convolution outputs using the max pooling along time layer with stride one206to generate max-pooled outputs. The subnetwork208then processes the max-pooled outputs using one or more fixed-width 1-D convolutional filters to generate subnetwork outputs of the subnetwork208. 
After the subnetwork outputs are generated, the residual connection210is configured to combine the subnetwork outputs with the original input sequence202to generate convolution outputs. The highway network212and the bidirectional recurrent neural network214are then configured to process the convolution outputs to generate encoded representations of the sequence of characters. In particular, the highway network212is configured to process the convolution outputs to generate high-level feature representations of the sequence of characters. In some implementations, the highway network includes one or more fully-connected neural network layers. The bidirectional recurrent neural network214is configured to process the high-level feature representations to generate sequential feature representations of the sequence of characters. A sequential feature representation represents a local structure of the sequence of characters around a particular character. A sequential feature representation may include a sequence of feature vectors. In some implementations, the bidirectional recurrent neural network is a gated recurrent unit neural network. During training, one or more of the convolutional filters of the 1-D convolutional subnetwork208can be trained using batch normalization method, which is described in detail in S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015. In some implementations, one or more convolutional filters in the CBHG neural network200are non-causal convolutional filters, i.e., convolutional filters that, at a given time step T, can convolve with surrounding inputs in both directions (e.g., . . . , T−1, T−2 and T+1, T+2, . . . etc.). In contrast, a causal convolutional filter can only convolve with previous inputs ( . . . T−1, T−2, etc.). In some other implementations, all convolutional filters in the CBHG neural network200are non-causal convolutional filters. The use of non-causal convolutional filters, batch normalization, residual connections, and max pooling along time layer with stride one improves the generalization capability of the CBHG neural network200on the input sequence and thus enables the text-to-speech conversion system to generate high-quality speech. FIG.3is a flow diagram of an example process300for converting a sequence of characters to speech. For convenience, the process300will be described as being performed by a system of one or more computers located in one or more locations. For example, a text-to-speech conversion system (e.g., the text-to-speech conversion system100ofFIG.1) or a subsystem of a text-to-speech conversion system (e.g., the subsystem102ofFIG.1), appropriately programmed, can perform the process300. The system receives a sequence of characters in a particular natural language (step302). The system then provides the sequence of character as input to a sequence-to-sequence (seq2seq) recurrent neural network to obtain as output a spectrogram of a verbal utterance of the sequence of characters in the particular natural language (step304). In some implementations, the spectrogram is a compressed spectrogram, e.g., a mel-scale spectrogram. 
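A high-level sketch of process 300 as described so far is given below; the three model callables stand in for the trained components and the character encoding is simplified, so all of this is illustrative rather than the disclosed implementation.

import numpy as np

def text_to_speech(characters: str, seq2seq, post_net, waveform_synthesizer):
    # Outline of process 300: characters -> (compressed) spectrogram -> waveform.
    char_ids = np.array([ord(c) for c in characters])    # step 302 (stand-in for one-hot/embedding)
    mel_spectrogram = seq2seq(char_ids)                   # step 304: seq2seq network output
    linear_spectrogram = post_net(mel_spectrogram)        # post-processing network
    waveform = waveform_synthesizer(linear_spectrogram)   # e.g. a Griffin-Lim synthesizer
    return waveform                                       # step 306: speech for playback (step 308)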
In particular, after receiving the sequence of characters from the system, the seq2seq recurrent neural network processes the sequence of characters to generate a respective encoded representation of each of the characters in the sequence using an encoder neural network including an encoder pre-net neural network and an encoder CBHG neural network. More specifically, each character in the sequence of characters can be represented as a one-hot vector and embedded into a continuous vector. The encoder pre-net neural network receives a respective embedding of each character in the sequence and processes the respective embedding of each character in the sequence to generate a transformed embedding of the character using an encoder pre-net neural network. For example, the encoder pre-net neural network can apply a set of non-linear transformations to each embedding to generate a transformed embedding. The encoder CBHG neural network then receives the transformed embeddings from the encoder pre-net neural network and processes the transformed embeddings to generate the encoded representations of the sequence of characters. To generate a spectrogram of a verbal utterance of the sequence of characters, the seq2seq recurrent neural network processes the encoded representations using an attention-based decoder recurrent neural network. In particular, the attention-based decoder recurrent neural network receives a sequence of decoder inputs. The first decoder input in the sequence is a predetermined initial frame. For each decoder input in the sequence, the attention-based decoder recurrent neural network processes the decoder input and the encoded representations to generate r frames of the spectrogram, in which r is an integer greater than one. One or more of the generated r frames can be used as the next decoder input in the sequence. In other words, each other decoder input in the sequence is one or more of the r frames generated by processing a decoder input that precedes the decoder input in the sequence. The output of the attention-based decoder recurrent neural network thus includes multiple sets of frames that form the spectrogram, in which each set includes r frames. In many cases, there is no overlap between sets of r frames. By generating r frames at a time, the total number of decoder steps performed by the attention-based decoder recurrent neural network is reduced by a factor of r, thus reducing training and inference time. This technique also helps to increase convergence speed and learning rate of the attention-based decoder recurrent neural network and the system in general. The system generates speech using the spectrogram of the verbal utterance of the sequence of characters in the particular natural language (step306). In some implementations, when the spectrogram is a compressed spectrogram, the system can generate a waveform from the compressed spectrogram and generate speech using the waveform. Generating speech from a compressed spectrogram is described in more detailed below with reference toFIG.4. The system then provides the generated speech for playback (step308). For example, the system transmits the generated speech to a user device over a data communication network for playback. FIG.4is a flow diagram of an example process400for generating speech from a compressed spectrogram of a verbal utterance of the sequence of characters. For convenience, the process400will be described as being performed by a system of one or more computers located in one or more locations. 
For example, a text-to-speech conversion system (e.g., the text-to-speech conversion system100ofFIG.1) or a subsystem of a text-to-speech conversion system (e.g., the subsystem102ofFIG.1), appropriately programmed, can perform the process400. The system receives a compressed spectrogram of a verbal utterance of a sequence of characters in a particular natural language (step402). The system then provides the compressed spectrogram as input to a post-processing neural network to obtain a waveform synthesizer input (step404). In some cases, the waveform synthesizer input is a linear-scale spectrogram of the verbal utterance of the input sequence of characters in the particular natural language. After obtaining the waveform synthesizer input, the system provides the waveform synthesizer input as input to a waveform synthesizer (step406). The waveform synthesizer processes the waveform synthesizer input to generate a waveform. In some implementations, the waveform synthesizer is a Griffin-Lim synthesizer that uses Griffin-Lim algorithm to synthesize the waveform from the waveform synthesizer input such as a linear-scale spectrogram. In some other implementations, the waveform synthesizer is a vocoder. In some other implementations, the waveform synthesizer is a trainable spectrogram to waveform inverter. The system then generates speech using the waveform, i.e., generates the sounds that are represented by the waveform (step408). The system may then provide the generated speech for playback, e.g., on a user device. In some implementations, the system may provide the waveform to another system to allow the other system to generate and play back the speech. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions. Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. The computer storage medium is not, however, a propagated signal. 
The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. As used in this specification, an “engine,” or “software engine,” refers to a software implemented input/output system that provides an output that is different from the input. An engine can be an encoded block of functionality, such as a library, a platform, a software development kit (“SDK”), or an object. Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). For example, the processes and logic flows can be performed by and apparatus can also be implemented as a graphics processing unit (GPU). Computers suitable for the execution of a computer program include, by way of example, can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both. 
The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. 
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
11862143
DETAILED DESCRIPTION The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including,” when used in this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that the terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose. These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale. The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may not be implemented in order. Conversely, the operations may be implemented in inverted order or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts. The embodiments of the present disclosure can be applied to different transportation systems, for example, a taxi, a special car, a ride-hailing car, a bus, a designated driving service, etc. The terms “passenger,” “requester,” “requestor,” “service requester,” “service requestor,” and “customer” in the present disclosure are used interchangeably to refer to an individual, an entity, or a tool that may request or order a service. Also, the terms “driver,” “provider,” and “service provider” in the present disclosure are used interchangeably to refer to an individual, an entity, or a tool that may provide a service or facilitate the providing of the service.
The term “user” may refer to an individual, an entity or a tool that may request a service, order a service, provide a service, or facilitate the providing of the service. The terms “service request,” “request for a service,” “request,” and “order” in the present disclosure are used interchangeably to refer to a request that may be initiated by a passenger, a service requester, a customer, a driver, a provider, a service provider, or the like, or any combination thereof. The service request may be accepted by any one of a passenger, a service requester, a customer, a driver, a provider, or a service provider. The service request may be chargeable or free. The terms “service provider terminal,” “terminal of a service provider,” “provider terminal,” and “driver terminal” in the present disclosure are used interchangeably to refer to a mobile terminal that is used by a service provider to provide a service or facilitate the providing of the service. The terms “service requester terminal,” “terminal of a service requester,” “requester terminal,” and “passenger terminal” in the present disclosure are used interchangeably to refer to a mobile terminal that is used by a service requester to request or order a service. An aspect of the present disclosure relates to systems and methods for processing speech dialogue. According to some systems and methods of the present disclosure, a processing device may obtain target speech dialogue data. The processing device may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on the target speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. The processing device may determine a representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into a trained speech dialogue coding model. The processing device may determine a summary of the target speech dialogue data by inputting the representation vector into a classification model. According to the present disclosure, by merging text information, phonetic symbol information, and role information of target speech dialogue data, the accuracy of semantic understanding of the target speech dialogue data can be improved. In addition, a representation vector corresponding to the target speech dialogue data may be determined based on a trained speech dialogue coding model and a summary of the target speech dialogue data may be determined based on the representation vector according to a classification model, which can improve the accuracy of speech dialogue data processing. FIG.1is a schematic diagram of an exemplary speech dialogue processing system according to some embodiments of the present disclosure. The speech dialogue processing system100may be applied to various scenarios, for example, an intelligent customer service, a robot judgment, etc. Take an online transportation service scenario as an example, if a service disagreement occurs between a driver and a passenger, the speech dialogue processing system100may generate a processing result by processing a speech dialogue between the driver and the passenger and judge the responsibility of the driver and/or the passenger based on the processing result. 
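The processing flow summarized above can be read as a three-stage pipeline: embedding models turn the dialogue into parallel vector representation sequences, the trained speech dialogue coding model fuses them into a representation vector, and a classification model maps that vector to a processing result such as a summary or a responsibility category. The PyTorch sketch below is only an illustrative outline of that flow; the module choices, dimensions, and mean-pooling step are assumptions made for clarity and are not the disclosure's exact models.

```python
import torch
import torch.nn as nn

class SpeechDialogueClassifier(nn.Module):
    """Illustrative outline: embed text/phonetic/role ids, encode, classify."""

    def __init__(self, vocab=10000, phon_vocab=2000, roles=4, d=256, classes=4):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, d)       # stands in for the text embedding model
        self.phon_emb = nn.Embedding(phon_vocab, d)  # phonetic symbol embedding model
        self.role_emb = nn.Embedding(roles, d)       # role embedding model
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.coder = nn.TransformerEncoder(enc_layer, num_layers=4)  # dialogue coding model
        self.classifier = nn.Linear(d, classes)      # classification model (e.g., responsibility)

    def forward(self, text_ids, phon_ids, role_ids):
        x = self.text_emb(text_ids) + self.phon_emb(phon_ids) + self.role_emb(role_ids)
        h = self.coder(x)            # contextual representation sequence
        rep = h.mean(dim=1)          # representation vector for the whole dialogue
        return self.classifier(rep)  # processing result, e.g., a summary / judgment class

# Example: a batch of one dialogue with 12 tokens.
model = SpeechDialogueClassifier()
logits = model(torch.randint(0, 10000, (1, 12)),
               torch.randint(0, 2000, (1, 12)),
               torch.randint(0, 4, (1, 12)))
```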
In some embodiments, as shown inFIG.1, the speech dialogue processing system100may include a user terminal110, a first processing device120, and a second processing device130. The user terminal110may be a device for a user to request or provide an online to offline service. The online-to-offline service may include a transportation service (e.g., a taxi service), a shopping service, a meal ordering service, a courier service, etc. The user may use the user terminal110to send a speech request or conduct a speech dialogue with other users. For example, take a transportation service scenario as an example, the user terminal110may include a driver terminal and a passenger terminal, and a driver and a passenger may conduct a speech dialogue via the driver terminal and the passenger terminal respectively to communicate service contents (e.g., a pickup location, a departure time). In some embodiments, the user terminal110may include a mobile device110-1, a tablet computer110-2, a laptop computer110-3, or the like, or any combination thereof. In some embodiments, the mobile device110-1may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. The smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof. The wearable device may include a bracelet, footgear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof. The smart mobile device may include a mobile phone, a personal digital assistance (PDA), a gaming device, a navigation device, a point of sale (POS) device, a laptop, a desktop, or the like, or any combination thereof. The virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass™, a RiftCon™, a Fragments™, a Gear VR™, etc. The first processing device120and the second processing device130may process information and/or data to perform one or more functions described in the present disclosure. In some embodiments, the first processing device120and the second processing device130may be any devices with data processing capabilities, such as a processor, a server, etc. In some embodiments, the first processing device120and the second processing device130may be a same processing device or different processing devices. In some embodiments, the first processing device120and/or the second processing device130may include one or more processing engines (e.g., single-core processing engine(s) or multi-core processor(s)). 
Merely by way of example, the first processing device120and/or the second processing device130may include one or more hardware processors, such as a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction-set computer (RISC), a microprocessor, or the like, or any combination thereof. In some embodiments, the first processing device120and/or the second processing device130may include a storage device configured to store data and/or instructions. In some embodiments, the storage device may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. In some embodiments, the first processing device120and/or the second processing device130may include a data bus, a communication interface, etc. for an internal connection and/or an external connection. In some embodiments, the first processing device120and/or the second processing device130may include an input device (e.g., a keyboard, a mouse, a microphones), an output device (e.g., a display, a player), etc. In some embodiments, the first processing device120and/or the second processing device130may be integrated in a same processing device. In some embodiments, the first processing device120and/or the second processing device130may be executed on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the first processing device120may train a speech dialogue coding model based on sample speech dialogue data. Specifically, the first processing device120may obtain training data123and determine a trained model125(e.g., a trained speech dialogue coding model) by training a preliminary model124(e.g., a preliminary speech dialogue coding model) based on the training data123. The training data123may be historical speech dialogue data (e.g., historical speech dialogue data between drivers and passengers) among users involved in online to offline services. In some embodiments, the first processing device120may obtain the training data123from the user terminal110or a storage device (not shown inFIG.1). In some embodiments, the training data123may include data without annotation121and data with annotation122(e.g., historical speech dialogue data with annotation). In some embodiments, the annotation may be a summary of the historical speech dialogue data. In some embodiments, the annotation may be a classification result of the historical speech dialogue data. For example, take a transportation service scenario as an example, if there is a service disagreement between a driver and a passenger, a responsibility judgment may be made for the service disagreement, and the annotation may be a result of the responsibility judgment (e.g., “driver responsibility,” “passenger responsibility,” “both the driver and the passenger has no responsibility,” “responsibility cannot be judged”). More descriptions regarding training the preliminary model may be found elsewhere in the present disclosure (e.g.,FIGS.2-4and descriptions thereof). 
In some embodiments, the second processing device130may obtain target speech dialogue data and determine a processing result (e.g., a summary, an intention classification) of the target speech dialogue data based on the trained speech dialogue coding model. Specifically, the second processing device130may obtain the target speech dialogue data (e.g., a speech dialogue between a driver and a passenger) from the user terminal110or a storage device (not shown inFIG.1) and determine a representation vector of the target speech dialogue data based on the trained model125. Further, the second processing device130may input the representation vector to a classification model and determine a processing result131of the target speech dialogue data based on the classification model. More descriptions regarding determining the processing result of the target speech dialogue data may be found elsewhere in the present disclosure (e.g.,FIGS.5,6A,6B, and descriptions thereof). In some embodiments, the speech dialogue processing system100may include a network (not shown inFIG.1) to facilitate the exchange of data and/or information between various components. In some embodiments, the network may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the network may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a wide area network (WAN), a public telephone switched network (PSTN), a Bluetooth™ network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof. It should be noted that the speech dialogue processing system100is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. For example, the speech dialogue processing system100may further include a database, an information source, etc. As another example, the speech dialogue processing system100may be implemented on other devices to realize similar or different functions. However, those variations and modifications do not depart from the scope of the present disclosure. FIG.2is a block diagram illustrating an exemplary first processing device according to some embodiments of the present disclosure. In some embodiments, the first processing device120may include an acquisition module210, a determination module220, and a training module230. The acquisition module210may obtain sample speech dialogue data. In some embodiments, the acquisition module210may obtain sample speech dialogue data from one or more components (e.g., the user terminal110) of the speech dialogue processing system100or an external storage device. More descriptions for obtaining the sample speech dialogue data may be found elsewhere in the present disclosure (e.g.,FIGS.3A,3B, and descriptions thereof). The determination module220may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on sample speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. 
The determination module 220 may obtain at least one of a dialect vector representation sequence, an emotion vector representation sequence, or a background text vector representation sequence corresponding to the sample speech dialogue data. The dialect vector representation sequence may be determined by performing a vector transformation on the sample speech dialogue data based on a dialect embedding model. The emotion vector representation sequence may be determined by performing a vector transformation on the sample speech dialogue data based on an emotion embedding model. The background text vector representation sequence may be determined by performing a vector transformation on a background text of the sample speech dialogue data based on a background text embedding model. More descriptions for obtaining the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, the dialect vector representation sequence, the emotion vector representation sequence, and the background text vector representation sequence may be found elsewhere in the present disclosure (e.g., FIGS. 3A, 3B, and descriptions thereof). The training module 230 may obtain a pre-trained speech dialogue coding model by pre-training a speech dialogue coding model in a self-supervised learning manner based on a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence. The training module 230 may also obtain a pre-trained speech dialogue coding model by pre-training a speech dialogue coding model in a self-supervised learning manner based on a text vector representation sequence, a phonetic symbol vector representation sequence, a role vector representation sequence, and at least one of a dialect vector representation sequence, an emotion vector representation sequence, and a background text vector representation sequence. More descriptions for obtaining the pre-trained speech dialogue coding model may be found elsewhere in the present disclosure (e.g., FIGS. 3A, 3B, and descriptions thereof). It should be noted that the first processing device 120 may be implemented in various ways, for example, implemented by hardware, software, or a combination of software and hardware. The hardware may be implemented by using dedicated logic. The software may be stored in a memory, and implemented by a microprocessor or dedicated design hardware. For persons having ordinary skills in the art, it should be understood that the first processing device 120 and the modules may be implemented by using a computer-executable instruction and/or a control code included in a processor. For example, the code may be provided on a carrier medium such as a disk, a CD or a DVD-ROM, a programmable memory such as a read-only memory (e.g., firmware), or a data carrier such as an optical or electronic signal carrier. The first processing device 120 and the modules may not only be implemented by a hardware circuit, such as a super-large-scale integration, a gate array, a semiconductor such as a logic chip and a transistor, or a programmable hardware device such as a field-programmable gate array and a programmable logic device, but may also be implemented by software executed by various types of processors, or by a combination of the hardware circuit and the software (e.g., firmware).
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the acquisition module210and the determination module220may be integrated into a single module. As another example, each module of the first processing device120may share a single storage module, or each module of the first processing device120may include a corresponding storage unit. FIG.3Ais a schematic diagram illustrating an exemplary process for training a speech dialogue coding model according to some embodiments of the present disclosure.FIG.3Bis a schematic diagram illustrating an exemplary process for training a speech dialogue coding model according to some embodiments of the present disclosure. The process300A and/or the process300B may be executed by the speech dialogue processing system100. For example, the process300A and/or the process300B may be stored in a storage device (e.g., a ROM730, a RAM740, a storage890) as a form of instructions, and invoked and/or executed by a processing device (e.g., the first processing device120, a processor720of a computing device700illustrated inFIG.7, a CPU840of a mobile device800illustrated inFIG.8, one or more modules shown inFIG.2). The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process300A and/or the process300B may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process300A as illustrated inFIG.3Aand/or the process300B as illustrated inFIG.3Band described below is not intended to be limiting. In302, the first processing device120(e.g., the acquisition module210) may obtain sample speech dialogue data. The sample speech dialogue data may be historical dialogue data among users. For example, in a transportation service scenario, the sample speech dialogue data may include historical speech dialogue data between drivers and passengers, historical speech dialogue data between drivers (or passengers) and customer service staffs, etc. As another example, in a shopping service scenario, the sample speech dialogue data may include historical speech dialogue data between customers and online shopping service staffs. As still another example, in a daily life scenario, the sample speech dialogue data may be historical speech dialogue data among friends, speech dialogue data among relatives, etc. In some embodiments, the sample speech dialogue data may be in any form (e.g., a voice form, a video form, a picture form, a text form). For example, a voice collection mode may be activated on the user terminal(s)110to obtain voice speech dialogue data from users, which may be further used as sample speech dialogue data. As another example, a text input mode may be activated on the user terminal(s)110to obtain text speech dialogue data, which may be further used as sample speech dialogue data. In some embodiments, the first processing device120may obtain the sample speech dialogue data from one or more components (e.g., the user terminal110, a storage device) of the speech dialogue processing system100or an external storage device. 
For example, in a transportation service scenario, the user terminal110may record speech dialogue data between a passenger and a driver in real-time and store the speech dialogue data in a storage device of the speech dialogue processing system100or an external storage device. Accordingly, the first processing device120may obtain the speech dialogue data (i.e., the sample speech dialogue data) from the storage device, the user terminal110, or the external storage device. In304, the first processing device120(e.g., the determination module220) may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on the sample speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. As used herein, a vector representation sequence may refer to a sequence including a set of vectors in a vector space. In some embodiments, the sample speech dialogue data may correspond to a dialogue text including one or more words (or phrases) and/or one or more paragraphs. Accordingly, the text vector representation sequence may refer to a vector representation sequence determined by performing a vector transformation on the one or more words (or phrases) and/or the one or more paragraphs in the dialogue text. Specifically, the first processing device120may transform the sample speech dialogue data to the dialogue text according to a speech recognition technology (e.g., an automatic speech recognition (ASR)) and input the dialogue text into a text embedding model31to obtain the text vector representation sequence. In some embodiments, the text embedding model31may include a word embedding sub-model, a position embedding sub-model, a paragraph embedding sub-model, or the like, or any combination thereof. The word embedding sub-model may be configured to determine a word vector representation sequence by performing a vectorization on the one or more words (or phrases) in the dialogue text. For example, take a specific word in the dialogue text as an example, a word vector of the specific word may be obtained by performing a vector encoding on the specific word. Accordingly, the word vector representation sequence may be a comprehensive result (e.g., a splicing result) of word vectors corresponding to all words in the dialogue text. The position embedding sub-model may be configured to determine a position vector representation sequence by performing a vectorization on one or more positions of the one or more words (or phrases) in the dialogue text. For example, take a specific word in the dialogue text as an example, it is assumed that the specific word is in a first position of the dialogue text, a position vector corresponding to the specific word may be a vector representing “first position” and a length of the position vector may be equal to a length of a word vector of the specific word. Accordingly, the position vector representation sequence may be a comprehensive result (e.g., a splicing result) of position vectors corresponding to all words in the dialogue text. According to some embodiments of the present disclosure, by using the position vector representation sequence, the accuracy of the semantic understanding of the dialogue text by the model (e.g., the text embedding model31) can be improved. 
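The word, position, and paragraph embedding sub-models described above can be pictured as three lookup tables whose outputs are combined per token into the text vector representation sequence. The PyTorch snippet below is a minimal sketch under assumed vocabulary and dimension sizes, not the disclosure's exact sub-models; the three components are summed here for simplicity, though splicing (concatenation) is another merging option. The position component is what allows the downstream encoder to distinguish reorderings of the same words, as the example that follows illustrates.

```python
import torch
import torch.nn as nn

# Assumed sizes for the sketch.
VOCAB_SIZE, MAX_LEN, NUM_PARAGRAPHS, D = 30000, 512, 16, 768

word_emb = nn.Embedding(VOCAB_SIZE, D)       # word embedding sub-model
pos_emb = nn.Embedding(MAX_LEN, D)           # position embedding sub-model
para_emb = nn.Embedding(NUM_PARAGRAPHS, D)   # paragraph embedding sub-model

def text_vector_sequence(token_ids, paragraph_ids):
    """Combine word, position, and paragraph vectors into one text vector sequence."""
    positions = torch.arange(token_ids.size(1)).unsqueeze(0).expand_as(token_ids)
    return word_emb(token_ids) + pos_emb(positions) + para_emb(paragraph_ids)

tokens = torch.randint(0, VOCAB_SIZE, (1, 10))       # ids of 10 words in the dialogue text
paragraphs = torch.zeros(1, 10, dtype=torch.long)    # all from the first paragraph
sequence = text_vector_sequence(tokens, paragraphs)  # shape (1, 10, 768)
```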
For example, it is assumed that there are two dialogue texts: “he like this movie because it doesn't have an overhead history” and “he doesn't like this movie because it has an overhead history,” wherein a main difference between the two dialogue texts is that the positions of the words “like” and “doesn't” are different. If only a word vector corresponding to each word in the dialogue text is considered, the semantic difference between the two dialogue texts cannot be accurately determined; whereas if a position vector corresponding to each word in the dialogue text is also considered, the semantic difference between the two dialogue texts may be accurately determined (i.e., the emotional orientations expressed in the two dialogue texts are opposite). The paragraph embedding sub-model may be configured to determine a paragraph vector representation sequence by performing a vectorization on the one or more paragraphs in the dialogue text. For example, take a specific paragraph in the dialogue text as an example, a paragraph vector of the specific paragraph may be obtained by performing a vector encoding on the specific paragraph. Accordingly, the paragraph vector representation sequence may be a comprehensive result (e.g., a splicing result) of paragraph vectors corresponding to all paragraphs in the dialogue text. In some embodiments, the word vector representation sequence, the position vector representation sequence, and the paragraph vector representation sequence may be obtained by performing a vector transformation on the dialogue text via the word embedding sub-model, the position embedding sub-model, and the paragraph embedding sub-model according to a one-hot manner, a word2Vec manner, etc. In some embodiments, the text vector representation sequence of the sample speech dialogue data may be obtained by merging (e.g., splicing) the word vector representation sequence, the position vector representation sequence, and the paragraph vector representation sequence. In some embodiments, the word(s) (or the phrase(s)) included in the dialogue text corresponding to the sample speech dialogue data may correspond to phonetic symbol information. Accordingly, the phonetic symbol vector representation sequence may refer to a vector representation sequence determined by performing a vector transformation on the phonetic symbol information of the word(s) (or phrase(s)) in the dialogue text. In some embodiments, the phonetic symbol information may include Chinese Pinyin or phonetic symbols and/or phonograms of other languages, for example, English phonetic symbols, Japanese phonograms, Spanish letters (letters in Spanish correspond to fixed pronunciations, which may be directly used as phonetic symbols), etc. Specifically, the first processing device 120 may transform the dialogue text to a phonetic text by using a Hidden Markov Model, a conditional random field, a neural network, a transformer, or other models or statistical methods. Further, the first processing device 120 may input the phonetic text into the phonetic symbol embedding model 32 to obtain the phonetic symbol vector representation sequence. According to some embodiments of the present disclosure, by using the phonetic symbol vector representation sequence, a speech recognition error rate caused by tone or pronunciation may be effectively reduced. For example, there may be polyphonic characters in the Chinese language, and a same Chinese character may correspond to different pronunciations in different scenes.
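Where the dialogue is in Chinese, the phonetic text can be obtained with an off-the-shelf grapheme-to-pinyin tool before it is fed to the phonetic symbol embedding model 32. The sketch below uses the pypinyin package with numeric tone marks; this is an assumed, illustrative choice of tool, and the Hidden Markov Model, conditional random field, or transformer-based conversions mentioned above could be substituted. The numeric tone suffixes carry exactly the tone distinctions discussed next.

```python
# Illustrative grapheme-to-phonetic conversion using the pypinyin package
# (an assumed tool choice; any text-to-phonetic converter could be used).
from pypinyin import lazy_pinyin, Style

def to_phonetic_text(dialogue_text: str) -> list[str]:
    """Return tone-numbered pinyin syllables for a Chinese dialogue text."""
    return lazy_pinyin(dialogue_text, style=Style.TONE3)

# Example: "取消订单" (cancel the order) -> ['qu3', 'xiao1', 'ding4', 'dan1']
print(to_phonetic_text("取消订单"))
```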
As another example, Chinese Pinyin may have a level tone (i.e., a first tone), a rising tone (i.e., a second tone), a falling-rising tone (i.e., a third tone), and a falling tone (i.e., a fourth tone). Similar pronunciations (e.g., the syllable “ma” pronounced with different tones) may correspond to different meanings due to their different tones. As still another example, similar pronunciations may correspond to different English words. Accordingly, the phonetic symbol vector sequence may reflect the tone and the pronunciation of each word in the dialogue text, which may reduce the speech recognition error rate. In some embodiments, the phonetic symbol embedding model 32 may obtain the phonetic symbol vector representation sequence by performing a vector transformation on the phonetic symbol text according to a one-hot manner, a word2Vec manner, etc. In some embodiments, the sample speech dialogue data may relate to roles (e.g., a passenger, a driver) who conducted the speech dialogue. Accordingly, the role vector representation sequence may refer to a vector representation sequence determined by performing a vector transformation on role information related to the sample speech dialogue data. In some embodiments, the role information related to the sample speech dialogue data may be determined and added into the dialogue text when the sample speech dialogue data is transformed to the dialogue text. Accordingly, the first processing device 120 may input the role information into the role embedding model 33 to obtain the role vector representation sequence. Take a transportation service scenario as an example, it is assumed that a portion of the dialogue text is “hello sir, I want to cancel the order,” of which the speaker is a passenger. Accordingly, the role information of the portion of the dialogue text may be determined as “passenger.” According to some embodiments of the present disclosure, by using the role vector representation sequence, the role information of the speaker in the sample speech dialogue data can be considered, which can help to understand the logic of the speech dialogue data. In the above example, the party responsible for canceling the order may be determined as the “passenger” based on the role information. In some embodiments, the role information may be determined by performing a channel identification operation on the sample speech dialogue data. In some embodiments, one channel may correspond to one role. For example, take a transportation service scenario as an example, “driver,” “passenger,” and “customer service staff” may correspond to three different channels. In some embodiments, one channel may correspond to a plurality of roles. For example, also take the transportation service scenario as an example, “driver” and “passenger” may correspond to a same channel and “customer service staff” may correspond to another channel. In some embodiments, the role embedding model 33 may obtain the role vector representation sequence by performing a vector transformation on the role information according to a one-hot manner, etc. In some embodiments, the text embedding model, the phonetic symbol embedding model, and/or the role embedding model may be an embedding model, a Word2vec model, etc. In some embodiments, the word(s) (or the phrase(s)) included in the dialogue text corresponding to the sample speech dialogue data may correspond to dialect information. For example, the sample speech dialogue data may be conducted by a speaker using a dialect.
Accordingly, as shown in operation 308 in FIG. 3B, the first processing device 120 (e.g., the determination module 220) may also obtain a dialect vector representation sequence by performing a vector transformation on the sample speech dialogue data based on a dialect embedding model (e.g., a dialect embedding model 34). In some embodiments, the dialect information may include a type of the dialect (e.g., Cantonese, Minnan dialect, Henan dialect), a pronunciation of a word or a phrase in the dialect, a meaning of a word or a phrase in the dialect, or the like, or any combination thereof. For example, the pronunciation of a word “” in a phrase “” in Nanjing dialect is in a third tone. As another example, a phrase “” in Cantonese means “(i.e., no problem).” In some embodiments, the dialect information may be determined based on a dialect recognition model. Specifically, the sample speech dialogue data may be inputted into the dialect recognition model and the dialect information may be outputted by the dialect recognition model. In some embodiments, the dialect recognition model may be a neural network model, a logistic regression model, a support vector machine, a random forest, etc. In some embodiments, the dialect recognition model may be trained based on training data with annotations. Specifically, the training data with annotations may be inputted into a preliminary dialect recognition model and one or more parameters of the preliminary dialect recognition model may be updated iteratively until the training process is completed. In some embodiments, the preliminary dialect recognition model may be trained according to one or more model training algorithms (e.g., a gradient descent algorithm). In some embodiments, the training data may be sample speech dialogue data and the annotation may be the dialect information of the sample speech dialogue data. In some embodiments, the annotation of the sample speech dialogue data may be manually added by a user or automatically added by one or more components (e.g., the first processing device 120) of the speech dialogue processing system 100. In some embodiments, the dialect information may be determined and added in the dialogue text when the sample speech dialogue data is transformed to the dialogue text. The first processing device 120 may input the dialect information into the dialect embedding model 34 to obtain the dialect vector representation sequence. In some embodiments, the dialect embedding model 34 may obtain the dialect vector representation sequence by performing a vector transformation on the dialect information according to a one-hot manner, etc. In some embodiments, the dialect embedding model 34 may be an embedding model, a Word2vec model, etc. According to some embodiments of the present disclosure, by using the dialect vector representation sequence, the regional language characteristics of the speaker associated with the sample speech dialogue data can be incorporated, which can help understand the logic of the dialogue. In some embodiments, the sample speech dialogue data may include emotion information of speakers who conducted the speech dialogue. Accordingly, as shown in operation 309 in FIG. 3B, the first processing device 120 (e.g., the determination module 220) may also obtain an emotion vector representation sequence by performing a vector transformation on the sample speech dialogue data based on an emotion embedding model (e.g., an emotion embedding model 35).
In some embodiments, the emotion information may include a positive emotion, a negative emotion, a neutral emotion, etc. For example, take a transportation service scenario as an example, the emotion information of a speech dialogue in which a passenger expresses gratitude to a driver may be determined as the positive emotion; the emotion information of a speech dialogue in which a passenger complains about a driver may be determined as the negative emotion; and the emotion information of a speech dialogue such as “ok” or “I got it” may be determined as the neutral emotion. In some embodiments, the emotion information may be determined based on an emotion recognition model. Specifically, the sample speech dialogue data may be inputted into the emotion recognition model and the emotion information may be outputted by the emotion recognition model. In some embodiments, the emotion recognition model may be a neural network model, a logistic regression model, a support vector machine, a random forest, etc. In some embodiments, the emotion recognition model may be trained based on training data with annotations. Specifically, the training data with annotations may be inputted into a preliminary emotion recognition model and one or more parameters of the preliminary emotion recognition model may be updated iteratively until the training process is completed. In some embodiments, the preliminary emotion recognition model may be trained according to one or more model training algorithms (e.g., a gradient descent algorithm). In some embodiments, the training data may be sample speech dialogue data and the annotation may be the emotion information of the sample speech dialogue data. In some embodiments, the annotation of the sample speech dialogue data may be manually added by a user or automatically added by one or more components (e.g., the first processing device 120) of the speech dialogue processing system 100. In some embodiments, the emotion information may be determined and added in the dialogue text when the sample speech dialogue data is transformed to the dialogue text. The first processing device 120 may input the emotion information into the emotion embedding model 35 to obtain the emotion vector representation sequence. In some embodiments, the emotion embedding model 35 may obtain the emotion vector representation sequence by performing a vector transformation on the emotion information according to a one-hot manner, etc. In some embodiments, the emotion embedding model 35 may be an embedding model, a Word2vec model, etc. According to some embodiments of the present disclosure, by using the emotion vector representation sequence, the emotion information of the speakers associated with the sample speech dialogue data can be incorporated, which can help understand the logic of the dialogue. For example, the emotion information may be used to determine an attitude of a driver or a passenger, which can help to judge the responsibility of the driver and/or the passenger in a complaint case. In some embodiments, as shown in operation 310 in FIG. 3B, the first processing device 120 may also obtain a background text of the sample speech dialogue data, which may reflect background information of the sample speech dialogue data. The background information may include a location where the speech dialogue is conducted, a time when the speech dialogue data is conducted, a feature (e.g., a name, the age, the gender, an occupation) of a speaker of the speech dialogue data, or the like, or a combination thereof.
Take a transportation service scenario as an example, the background text may include a city where a transportation service corresponding to the sample speed dialogue data is provided, a pick-up time of the transportation service, a pickup location of the transportation service, a destination location of the transportation service, or the like, or a combination thereof. Further, the first processing device120may obtain a background text vector representation sequence by performing a vector transformation on the background text of the sample speech dialogue data based on a background text embedding model (e.g., a background text embedding model36). For example, the first processing device120may input the background text into the background text embedding model36to obtain the background text vector representation sequence. In some embodiments, the background text vector representation sequence may include a plurality of background text vectors corresponding to different types of background information. The plurality of background text vectors may be divided by a separator [SEP]. For example, the background text vector representation sequence may be represented as “Beijing [SEP] Haidian District [SEP] 20200721 [SEP] 5 years driving experience.” In some embodiments, the background text embedding model36may obtain the background text vector representation sequence by performing a vector transformation on the background text according to a one-hot manner, etc. In some embodiments, the background text embedding model36may be an embedding model, a Word2vec model, etc. According to some embodiments of the present disclosure, by using the background text vector representation sequence, the background information of the sample speech dialogue data can be incorporated, which can help understand the logic of the dialogue. In306, the first processing device120(e.g., the training module230) may obtain a pre-trained speech dialogue coding model by pre-training the speech dialogue coding model in a self-supervised learning manner based on the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence. In some embodiments, the first processing device120may input the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into a preliminary speech dialogue coding model, and pre-train the preliminary speech dialogue coding model in the self-supervised learning manner. In some embodiments, the first processing device120may input the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into the preliminary speech dialogue coding model, respectively. In some embodiments, the first processing device120may merge the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence and input the merged vector representation sequence into the preliminary speech dialogue coding model. 
In some embodiments, as shown in operation312inFIG.3B, the first processing device120(e.g., the training module230) may obtain the pre-trained speech dialogue coding model by pre-training the speech dialogue coding model in the self-supervised learning manner based on the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the background text vector representation sequence, the dialect vector representation sequence, or the emotion vector representation sequence. In some embodiments, the first processing device120may input the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the background text vector representation sequence, the dialect vector representation sequence, or the emotion vector representation sequence into a preliminary speech dialogue coding model, and pre-train the preliminary speech dialogue coding model in the self-supervised learning manner. In some embodiments, the first processing device120may merge the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the background text vector representation sequence, the dialect vector representation sequence, or the emotion vector representation sequence, and input the merged vector representation sequence into the preliminary speech dialogue coding model. As used herein, “merging” may refer to superposition, concatenation, weighting, transformation, or the like, or a combination thereof. For example, the first processing device120may merge the above mentioned vector representation sequences (e.g., the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, the background text vector representation sequence, the dialect vector representation sequence, and/or the emotion vector representation sequence) according to a linear transformation or a neural network transformation, thereby integrating different functions of features embodied in the vector representation sequences. In some embodiments, one or more parameters for merging the above mentioned vector sequences may be default values or determined by one or more components (e.g., the first processing device120) of the speech dialogue processing system100according to different situations. Additionally or alternatively, the one or more parameters for merging the above mentioned vector sequences may be determined by jointly training two or more of the text embedding model31, the phonetic symbol embedding model32, the role embedding model33, the background text embedding model36, the dialect embedding model34, and/or the emotion embedding model35. According to some embodiments of the present disclosure, by merging a plurality of vector representation sequences, the speech dialogue coding model can learn the dialogue text information, the phonetic symbol information, the role information, the background information, the dialect information, and/or the emotion information from the merged vector representation sequence simultaneously. The meaning of the word(s) in the speech dialogue data can be determined accurately based on the dialogue text information, the phonetic symbol information, the dialect information, and the emotion information of the speech dialogue data. 
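As a concrete illustration of the merging operation described above, the sketch below concatenates several vector representation sequences along the feature dimension and applies a learned linear transformation to produce a single merged sequence for the coding model. It is a minimal PyTorch sketch under assumed dimensions; since the disclosure leaves the exact merging parameters open (default values or jointly trained), nothing here should be read as the required implementation.

```python
import torch
import torch.nn as nn

class SequenceMerger(nn.Module):
    """Merge parallel vector representation sequences by concatenation + linear projection."""

    def __init__(self, dims, d_model=768):
        super().__init__()
        # A learned linear transformation; its weights could also be trained jointly
        # with the embedding models, as described above.
        self.proj = nn.Linear(sum(dims), d_model)

    def forward(self, sequences):
        # Each sequence has shape (batch, seq_len, dim_i); concatenate on the feature axis.
        merged = torch.cat(sequences, dim=-1)
        return self.proj(merged)

# Assumed dimensions for text, phonetic symbol, role, and emotion sequences.
merger = SequenceMerger(dims=[512, 128, 32, 32])
text = torch.randn(2, 50, 512)
phonetic = torch.randn(2, 50, 128)
role = torch.randn(2, 50, 32)
emotion = torch.randn(2, 50, 32)
merged_sequence = merger([text, phonetic, role, emotion])   # shape (2, 50, 768)
```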
In addition, the role information and the background information may help determine the logic of the speech dialogue, thereby making the understanding of the speech dialogue accurate. In some embodiments, the speech dialogue coding model may be a transformer model. The transformer model may encode contextual information of an inputted vector representation sequence and generate a contextual representation sequence. As used herein, the contextual representation sequence may refer to a vector sequence that combines the contextual information of the inputted vector representation sequence. For example, if there is a sentence “I have a dog, it is cute,” it may be determined that “it” refers to “dog” by learning the contextual information for the word “it.” In some embodiments, the transformer model may include an encoder and a decoder. The encoder may be an encoding component including a plurality of encoders and the decoder may be a decoding component including a plurality of decoders. Each of the plurality of encoders may include a self-attention layer and a feedforward neural network layer. Each of the plurality of decoders may include a self-attention layer, an encoding-decoding attention layer, and a feedforward neural network layer. The transformer model may process all elements in an inputted vector representation sequence in parallel, and merge the contextual information with one or more distant elements by using the attention layer structure. As an example, the transformer model may include a 12-layer encoder and decoder, with a hidden size of 768 and 12 attention heads, which contains about 110 M parameters. In some embodiments, the speech dialogue coding model may be a bidirectional encoder representations from transformers (BERT) model, an XLNet model, a generative pretrained transformer 2 (GPT-2), a text-to-text transfer transformer (T5) constructed based on transformer technology, a neural network model, or the like, or a combination thereof. In some embodiments, at least one or at least part of the text embedding model 31, the phonetic symbol embedding model 32, the role embedding model 33, the background text embedding model 36, the dialect embedding model 34, and/or the emotion embedding model 35 may be jointly pre-trained with the speech dialogue coding model. For example, the paragraph embedding sub-model of the text embedding model 31 may not be involved in the joint pre-training process, and the word embedding sub-model and the position embedding sub-model may be involved in the joint pre-training process. In some embodiments, after the pre-training of the speech dialogue coding model is completed, the speech dialogue coding model may be adjusted based on a downstream task model (e.g., a classification model, a summary extraction model, a translation model) and sample data with annotations (which correspond to a corresponding downstream task, for example, “category,” “summary,” “translation”), which may improve the processing effect of a downstream task. More descriptions regarding the self-supervised learning manner may be found elsewhere in the present disclosure (e.g., FIG. 4 and descriptions thereof). In some scenarios, the dialogue text obtained by performing an automatic speech recognition on speech dialogue data may not be smooth and may correspond to noisy spoken language and/or complex dialogue logic, which may cause serious interference to subsequent speech dialogue processing (e.g., classification result determination, summary extraction, machine translation).
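To make the configuration mentioned above concrete, the following PyTorch sketch instantiates a transformer encoder with 12 layers, a hidden size of 768, and 12 attention heads, which is roughly the scale described (about 110 M parameters once the token embedding table is included). It is an illustrative stand-in for the speech dialogue coding model rather than the exact architecture of the disclosure; the vocabulary size, feedforward width, and sequence length are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative scale matching the configuration described above (BERT-base-like):
# 12 layers, hidden size 768, 12 attention heads. Vocabulary size is an assumption.
VOCAB_SIZE = 30000
D_MODEL = 768

embedding = nn.Embedding(VOCAB_SIZE, D_MODEL)
encoder_layer = nn.TransformerEncoderLayer(
    d_model=D_MODEL,
    nhead=12,
    dim_feedforward=3072,
    batch_first=True,
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=12)

# Encode a (batch, seq_len) batch of token ids into a contextual representation sequence.
token_ids = torch.randint(0, VOCAB_SIZE, (2, 64))
contextual = encoder(embedding(token_ids))     # shape (2, 64, 768)

total_params = sum(p.numel() for p in encoder.parameters()) + \
               sum(p.numel() for p in embedding.parameters())
print(f"approximate parameter count: {total_params / 1e6:.0f}M")
```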
According to some embodiments of the present disclosure, a vectorization may be performed on the sample speech dialogue data using a plurality of embedding models (e.g., the phonetic symbol embedding model, the role embedding model, the text embedding model, the background text embedding model, the dialect embedding model, and the emotion embedding model). Accordingly, the role information, the phonetic symbol information, the background information, the dialect information, and/or the emotion information can be considered during the training process of the speech dialogue coding model, which may reduce errors caused by the automatic speech recognition and help understand the logic of complex dialogues. Accordingly, the performance of the trained speech dialogue coding model may be improved. Take a transportation service scenario as an example: if there is a service disagreement (e.g., responsibility for canceling an order) between a driver and a passenger, a responsibility judgment may be made based on speech dialogue data between the driver, the passenger, and a customer service staff. The speech dialogue data may be associated with a plurality of roles (e.g., the driver, the passenger, the customer service staff), which may complicate the responsibility judgment. For example, the speech dialogue data may be "the driver said: he asked me to make the cancellation; the passenger said: I did not ask him to cancel the order; the customer service said: who made the cancellation?" Pronouns appear many times in the speech dialogue data, which may make it difficult for a model to understand the dialogue logic. By considering the role information of the speech dialogue data, the logic of the speech dialogue data may be determined clearly. For example, "he" mentioned by the driver may refer to the passenger, "he" mentioned by the passenger may refer to the driver, and "who" mentioned by the customer service staff may refer to the driver or the passenger. In addition, the speech dialogue data may be related to tone and pronunciation. By considering the phonetic symbol information and/or the dialect information of the speech dialogue data, the semantics of the speech dialogue data may be determined accurately. For example, "取消" (cancel) and "取笑" (make fun of) have similar pronunciations but different tones: "消" has a first tone, and "笑" has a fourth tone. By inputting the phonetic symbol information of the speech dialogue data into the model (e.g., a speech dialogue coding model), the model can determine that the semantic of the speech dialogue data is "cancel an order" instead of "make fun of someone." As another example, "时间" (time) in Nanjing dialect has a similar pronunciation to "实践" (practice) in Mandarin. By inputting the dialect information of the speech dialogue data into the model (e.g., a speech dialogue coding model), the model can determine that the semantic of the speech dialogue data is "time" instead of "practice." In order to evaluate the performance of the speech dialogue coding model described in the present disclosure, an Express dataset and a Premier dataset were used to compare the processing effect of a CNN, a HAN, a BERT, and the speech dialogue coding model in a downstream task (e.g., classification). The experimental results are illustrated in Table 1 below. As used in Table 1, "random" refers to word embeddings randomly initialized, "W2v" refers to word embeddings initialized by word2vec, and "Elmo" refers to word embeddings initialized by Elmo.
TABLE 1
The experimental results of the Express and Premier datasets using different models

Model                                                    Express dataset (Accuracy/%)    Premier dataset (Accuracy/%)
CNN-random                                               78.8                            79.2
CNN-W2v                                                  80.1                            80.3
CNN-Elmo                                                 82.6                            82.8
HAN-rand                                                 80.1                            80.2
HAN-W2v                                                  81.3                            81.5
HAN-Elmo                                                 83.9                            84.2
BERT-base                                                81.6                            81.8
speech dialogue coding model in the present disclosure   86.5                            86.5

As shown in Table 1, it can be seen that the performance of the speech dialogue coding model is better than that of the other models in downstream tasks. Furthermore, to further validate the effectiveness of the phonetic symbol information and the role information, a plurality of ablation experiments were conducted on the two datasets. The experimental results are illustrated in Table 2. As used in Table 2, "-phonetic" indicates that the phonetic symbol embedding model is removed, that is, phonetic symbol information is not considered in the speech dialogue coding model; "-role" indicates that the role embedding model is removed in addition to the phonetic symbol embedding model, that is, neither the role information nor the phonetic symbol information is considered in the speech dialogue coding model.

TABLE 2
The effectiveness of phonetic symbol information and role information

Model                           Express dataset (accuracy/%)    Premier dataset (accuracy/%)
speech dialogue coding model    86.3                            86.5
-phonetic                       85.2                            85.4
-role                           83.6                            83.9

As shown in Table 2, it can be seen that the performance of the speech dialogue coding model in downstream tasks can be improved significantly when the role information and the phonetic symbol information are taken into consideration. It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be added or omitted. In some embodiments, one or more other optional operations (e.g., a storage operation, a preprocessing operation) may be added in the process300A and/or the process300B. For example, before operation304, the first processing device120may determine the dialogue text and the phonetic text of the sample speech dialogue data by preprocessing the sample speech dialogue data and then determine the text vector representation sequence and the phonetic symbol vector representation sequence based on the dialogue text and the phonetic text, respectively. FIG.4is a schematic diagram illustrating an exemplary process for training a speech dialogue coding model in a self-supervised learning manner according to some embodiments of the present disclosure. In some embodiments, the process400may be executed by the speech dialogue processing system100. For example, the process400may be stored in a storage device (e.g., a ROM730, a RAM740, a storage890) as a form of instructions, and invoked and/or executed by a processing device (e.g., the first processing device120, a processor720of a computing device700illustrated inFIG.7, a CPU840of a mobile device800illustrated inFIG.8, one or more modules shown inFIG.2). The operations of the illustrated process presented below are intended to be illustrative. As used herein, self-supervised learning may refer to a manner in which a model (e.g., a speech dialogue coding model) is trained based on training data without predetermined annotations.
For example, an order of sentences in sample data without annotations may be randomly disrupted, and the disrupted sentences may be used as an input of the model. Then the model may learn the order of the sentences in the self-supervised learning manner. In this case, the correct order of the sentences may be determined as an "annotation." According to the self-supervised learning manner, the dependence on training data with annotations can be effectively reduced during model training. In some embodiments, during the pre-training of the speech dialogue coding model in the self-supervised learning manner, at least a portion of at least one of the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, the background text vector representation sequence, the dialect vector representation sequence, or the emotion vector representation sequence may be designated as an annotation. In some embodiments, the annotation may include at least a portion of elements in the role vector representation sequence. For example, as shown inFIG.4,410represents a portion of the vector representation sequence (e.g., a role vector representation sequence) inputted into the speech dialogue coding model, A represents a vector representation of a role a, and B represents a vector representation of a role b. Then a portion (i.e.,420) of the vector representation sequence410may be randomly selected, a portion of elements (e.g.,402) thereof may be masked, and value(s) of the masked element(s) (e.g.,402) may be designated as annotation(s) for training. Merely by way of example, it is assumed that a predicted value of402is "Y1," the annotation of402is "A," and a loss value404may be determined based on the predicted value and the annotation. Then one or more parameters of the model may be adjusted based on the loss value404. For example, the one or more parameters may be adjusted based on the loss value404according to a gradient descent algorithm. In some embodiments, the annotation may also include one or more keywords in a text vector representation sequence. The keyword(s) may be pre-set word(s) or randomly selected word(s). Accordingly, a training task can be considered as a "keyword prediction task." Specifically, a portion of elements in the text vector representation sequence may be masked according to a preset keyword list and value(s) of the masked element(s) may be designated as annotation(s) for training. Further, a loss value may be determined based on predicted values and annotations. Then one or more parameters of the model may be adjusted based on the loss value. By masking the keyword(s), contextual information and/or phrase expressions can be learnt more effectively by the speech dialogue coding model. In some embodiments, the annotation may further include an order of sentences embodied in the text vector representation sequence. Accordingly, the training task can be considered as a "sentence order prediction task." For example, it is assumed that the dialogue text includes three sentences A, B, and C, and the order of the sentences is "sentence A is before sentence B and sentence B is after sentence A and before sentence C." Compared with a task for only predicting a sentence next to a specific sentence, the sentence order prediction task can focus on the coherence of the sentences and improve the performance of the model.
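The masked-role prediction task described above can be sketched as follows, assuming PyTorch; the role vocabulary, masking ratio, model depth, and all dimensions are illustrative assumptions only, not the disclosed training setup.

```python
# Minimal sketch of masked-role prediction: a portion of role elements is
# masked, the masked values act as annotations, and the resulting loss
# drives a gradient-descent parameter update.
import torch
import torch.nn as nn

num_roles = 4            # e.g. driver, passenger, customer service, [MASK]
mask_id = num_roles - 1
dim, batch, seq_len = 768, 2, 16

role_embedding = nn.Embedding(num_roles, dim)
coding_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True),
    num_layers=2,
)
role_head = nn.Linear(dim, num_roles)   # predicts the masked role id
params = (list(role_embedding.parameters())
          + list(coding_model.parameters())
          + list(role_head.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3)   # gradient descent algorithm

role_ids = torch.randint(0, num_roles - 1, (batch, seq_len))
mask = torch.rand(batch, seq_len) < 0.15        # randomly selected elements
masked_ids = role_ids.masked_fill(mask, mask_id)

hidden = coding_model(role_embedding(masked_ids))
logits = role_head(hidden)                      # (batch, seq_len, num_roles)
# The loss is computed only on the masked positions; their true values
# serve as the annotations.
loss = nn.functional.cross_entropy(logits[mask], role_ids[mask])
optimizer.zero_grad()
loss.backward()
optimizer.step()
```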
FIG.5is a block diagram illustrating an exemplary second processing device according to some embodiments of the present disclosure. As shown inFIG.5, the second processing device130may include an acquisition module510, a determination module520, an input module530, and a processing module540. The acquisition module510may obtain target speech dialogue data. In some embodiments, the acquisition module510may obtain target speech dialogue data from one or more components (e.g., the user terminal110) of the speech dialogue processing system100or an external storage device. More descriptions for obtaining the target speech dialogue data may be found elsewhere in the present disclosure (e.g.,FIGS.6A,6B, and descriptions thereof). The determination module520may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on target speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. The determination module520may obtain at least one of a dialect vector representation sequence, an emotion vector representation sequence, or a background text vector representation sequence corresponding to the target speech dialogue data. The dialect vector representation sequence may be determined by performing a vector transformation on the target speech dialogue data based on a dialect embedding model. The emotion vector representation sequence may be determined by performing a vector transformation on the target speech dialogue data based on an emotion embedding model. The background text vector representation sequence may be determined by performing a vector transformation on a background text of the target speech dialogue data based on a background text embedding model. More descriptions for obtaining the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, the dialect vector representation sequence, the emotion vector representation sequence, and the background text vector representation sequence may be found elsewhere in the present disclosure (e.g.,FIGS.6A,6B, and descriptions thereof). The input module530may determine a representation vector corresponding to target speech dialogue data by inputting a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence into a trained speech dialogue coding model. For example, the input module530may determine a representation vector corresponding to target speech dialogue data by inputting a text vector representation sequence, a phonetic symbol vector representation sequence, a role vector representation sequence, and at least one of a dialect vector representation sequence, an emotion vector representation sequence, and a background text vector representation sequence into a trained speech dialogue coding model. More descriptions for determining the representation vector corresponding to the target speech dialogue data may be found elsewhere in the present disclosure (e.g.,FIGS.6A,6B, and descriptions thereof). The processing module540may determine a summary of target speech dialogue data by inputting a representation vector into a classification model. More descriptions for determining the summary of the target speech dialogue data may be found elsewhere in the present disclosure (e.g.,FIGS.6A,6B, and descriptions thereof).
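A minimal sketch of how the acquisition, determination, input, and processing stages described above could be chained at inference time is given below. All class, attribute, and function names are hypothetical placeholders introduced for illustration; they are not an actual API of the second processing device130.

```python
# Minimal sketch of the module chain: acquisition -> determination
# (embedding) -> input (coding model) -> processing (downstream task).
from dataclasses import dataclass

@dataclass
class SpeechDialoguePipeline:
    text_embedder: object          # text embedding model
    phonetic_embedder: object      # phonetic symbol embedding model
    role_embedder: object          # role embedding model
    coding_model: object           # trained speech dialogue coding model
    downstream_model: object       # e.g. classification / summary model

    def run(self, target_speech_dialogue):
        # Determination module: vector transformations.
        text_seq = self.text_embedder(target_speech_dialogue)
        phonetic_seq = self.phonetic_embedder(target_speech_dialogue)
        role_seq = self.role_embedder(target_speech_dialogue)
        # Input module: merged sequences -> representation vector.
        representation = self.coding_model(text_seq, phonetic_seq, role_seq)
        # Processing module: downstream task (e.g. summary extraction).
        return self.downstream_model(representation)

# Toy usage with stand-in callables (real embedding and coding models
# would replace these lambdas).
pipeline = SpeechDialoguePipeline(
    text_embedder=lambda d: d,
    phonetic_embedder=lambda d: d,
    role_embedder=lambda d: d,
    coding_model=lambda *seqs: sum(len(s) for s in seqs),
    downstream_model=lambda rep: {"summary_score": rep},
)
print(pipeline.run("driver: hello sir, may I ask when to leave"))
```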
It should be noted that the second processing device130may be implemented in various ways, for example, implemented by hardware, software, or a combination of the software and the hardware. The hardware may be implemented by using dedicated logic. The software may be stored in a memory, and implemented by a microprocessor or specially designed hardware. For persons having ordinary skills in the art, it should be understood that the second processing device130and the modules may be implemented by using computer-executable instructions and/or a control code included in a processor. For example, a code may be provided on a carrier medium such as a disk, a CD or a DVD-ROM, a programmable memory such as a read-only memory (e.g., a firmware), or a data carrier such as an optical or electronic signal carrier. The second processing device130and the modules may not only be implemented by a hardware circuit, such as a very-large-scale integrated circuit, a gate array, a semiconductor such as a logic chip or a transistor, or a programmable hardware device such as a field-programmable gate array or a programmable logic device, etc., but may also be implemented by software executed by various types of processors, or by a combination of the hardware circuit and the software (e.g., a firmware). It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the acquisition module510and the determination module520may be integrated into a single module. As another example, each module of the second processing device130may share a single storage module, or each module of the second processing device130may have a corresponding storage module. FIG.6Ais a schematic diagram illustrating an exemplary process for extracting a summary of target speech dialogue data according to some embodiments of the present disclosure.FIG.6Bis a schematic diagram illustrating an exemplary process for extracting a summary of target speech dialogue data according to some embodiments of the present disclosure. The process600A and/or the process600B may be executed by the speech dialogue processing system100. For example, the process600A and/or the process600B may be stored in a storage device (e.g., a ROM730, a RAM740, a storage890) as a form of instructions, and invoked and/or executed by a processing device (e.g., the second processing device130, a processor720of a computing device700illustrated inFIG.7, a CPU840of a mobile device800illustrated inFIG.8, one or more modules shown inFIG.2). The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process600A and/or the process600B may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process600A as illustrated inFIG.6Aand/or the process600B as illustrated inFIG.6Bare described below is not intended to be limiting. In602, the second processing device130(e.g., the acquisition module510) may obtain target speech dialogue data.
Taking a transportation service scenario as an example, the target speech dialogue data may include speech dialogue data between a driver and a passenger, speech dialogue data between a driver (or a passenger) and a customer service staff, speech dialogue data between the driver, the passenger, and the customer service staff, etc. In some embodiments, similar to the sample speech dialogue data, the target speech dialogue data may be in any form (e.g., a voice form, a video form, a picture form, a text form). As described in connection with operation302, the second processing device130may obtain the target speech dialogue data from one or more components (e.g., the user terminal110, a storage device) of the speech dialogue processing system100or an external storage device. For example, in a transportation service scenario, the user terminal110may record speech dialogue data between a passenger and a driver in real-time and store the speech dialogue data in a storage device of the speech dialogue processing system100or an external storage device. Accordingly, the second processing device130may obtain the speech dialogue data (i.e., the target speech dialogue data) from the storage device, the user terminal110, or the external storage device. In604, the second processing device130(e.g., the determination module520) may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on the target speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. In some embodiments, as shown in operation610inFIG.6B, the second processing device130(e.g., the determination module520) may also obtain a dialect vector representation sequence by performing a vector transformation on the target speech dialogue data based on a dialect embedding model. In some embodiments, as shown in operation611inFIG.6B, the second processing device130(e.g., the determination module520) may also obtain an emotion vector representation sequence by performing a vector transformation on the target speech dialogue data based on an emotion embedding model. In some embodiments, as shown in operation612inFIG.6B, the second processing device130(e.g., the determination module520) may also obtain a background text of the target speech dialogue data and obtain a background text vector representation sequence by performing a vector transformation on the background text of the target speech dialogue data based on a background text embedding model. More descriptions regarding obtaining the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, the dialect vector representation sequence, the emotion vector representation sequence, and/or the background text vector representation sequence may be found elsewhere in the present disclosure (e.g.,FIG.3A,FIG.3B, and descriptions thereof). In606, the second processing device130(e.g., the input module530) may determine a representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into a trained speech dialogue coding model.
In some embodiments, as shown in operation613inFIG.6B, the second processing device130(e.g., the input module530) may determine the representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the dialect vector representation sequence, the emotion vector representation sequence, or the background text vector representation sequence into the trained speech dialogue coding model. In some embodiments, the second processing device130may merge the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence and input the merged vector representation sequence into the trained speech dialogue coding model. In some embodiments, the second processing device130may merge the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the dialect vector representation sequence, the emotion vector representation sequence, or the background text vector representation sequence and input the merged vector representation sequence into the trained speech dialogue coding model. More descriptions regarding merging the vector representation sequences may be found elsewhere in the present disclosure (e.g.,FIG.3A,FIG.3B, and descriptions thereof). As described elsewhere in the present disclosure, according to the trained speech dialogue coding model, the text information, the phonetic symbol information, the role information, and at least one of the dialect information, the emotion information, or the background text information of the target speech dialogue data can be comprehensively considered; accordingly, the semantic information thereof can be accurately understood, and a corresponding representation vector can be outputted. In some embodiments, the representation vector may be a plurality of vector sequences including a symbol [CLS] and a separator [SEP]. In608, the second processing device130(e.g., the processing module540) may determine a summary of the target speech dialogue data by inputting the representation vector into a classification model. As used herein, a summary of speech dialogue data may refer to content (e.g., a keyword, a key sentence, a key paragraph) reflecting key semantic information of the speech dialogue data. In some embodiments, the representation vector may include separators used to distinguish different sentences; accordingly, the representation vector may include a plurality of sub-vectors corresponding to different sentences. For each of the plurality of sub-vectors, the second processing device130may classify the sub-vector based on the classification model to determine whether a sentence corresponding to the sub-vector is the summary of the target speech dialogue data. In response to determining that the sentence corresponding to the sub-vector is the summary, the second processing device130may output "1," that is, the second processing device130may designate the sentence as the summary of the target speech dialogue data. In response to determining that the sentence corresponding to the sub-vector is not the summary, the second processing device130may output "0," that is, the second processing device130does not designate the sentence as the summary of the target speech dialogue data.
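The per-sentence summary decision described above, in which each sub-vector is classified as 1 (summary) or 0 (not summary), can be sketched as follows. This assumes PyTorch; the classifier, dimensions, and the untrained weights are purely illustrative.

```python
# Minimal sketch of per-sentence summary classification over sub-vectors.
import torch
import torch.nn as nn

dim, num_sentences = 768, 5
classifier = nn.Linear(dim, 2)                   # binary classification head

sub_vectors = torch.randn(num_sentences, dim)    # one sub-vector per sentence
flags = classifier(sub_vectors).argmax(dim=-1)   # tensor of 0s and 1s
summary_idx = [i for i, flag in enumerate(flags.tolist()) if flag == 1]
# Sentences whose index appears in summary_idx are designated as the
# summary of the target speech dialogue data.
print(summary_idx)
```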
In some embodiments, the second processing device130may input the representation vector into the classification model and generate the summary of the target speech dialogue data. In this situation, the summary of the target speech dialogue data may be content that is not the same as the original content (e.g., sentences) of the target speech dialogue data. For example, it is assumed that the representation vector of the target speech dialogue data is "[CLS]driver[SEP]hello sir[SEP] may I ask when to leave" and the summary of the target speech dialogue data may be "departure time," "start time," etc. In some embodiments, the classification model may be a neural network model, a logistic regression model, a support vector machine, a random forest, etc. In some embodiments, the classification model may be trained based on training data with annotations. In some embodiments, the training data may be sample speech dialogue data and the annotation may be a summary of the sample speech dialogue data. In some embodiments, the annotation of the sample speech dialogue data may be manually added by a user or automatically added by one or more components (e.g., the first processing device120) of the speech dialogue processing system100. Specifically, the training data with annotations may be inputted into a preliminary classification model and one or more parameters of the preliminary classification model may be updated iteratively until the training process is completed. In some embodiments, the preliminary classification model may be trained according to one or more training algorithms (e.g., a gradient descent algorithm). In some embodiments, the classification model may be jointly trained with a pre-trained speech dialogue coding model (e.g., the pre-trained speech dialogue coding model described inFIG.3AandFIG.3B). In the joint training process, the pre-trained speech dialogue coding model may be further adjusted and/or updated (e.g., fine-tuned). For example, training data with annotations may be inputted into the pre-trained speech dialogue coding model, a representation vector outputted from the pre-trained speech dialogue coding model may be inputted into the classification model, and the classification model may output a classification result. Further, both the parameters of the pre-trained speech dialogue coding model and the parameters of the classification model may be updated based on the classification result until the joint training process is completed. In some embodiments, the second processing device130may obtain a sentence text of the summary and perform a grammatical correction operation on the sentence text. As used herein, the grammatical correction operation may refer to correcting spelling errors and/or grammatical errors (e.g., lack of a subject, a mismatch between a predicate and a subject) in the sentence text of the summary. For example, "" in the summary sentence may be corrected to "". As another example, in a transportation service scenario, it is assumed that a speech dialogue between a driver and a passenger is as follows: the passenger said: "sir, when will you arrive?" and the driver said: "right now." The summary of the speech dialogue may be determined as "right now." In this case, the second processing device130may correct the summary as "I will pick you up right now." In some embodiments, the second processing device130may perform the grammatical correction operation based on a grammar correction model.
In some embodiments, the grammar correction model may include a neural network model, an N-gram model, or the like, or a combination thereof. In some embodiments, the present disclosure may also provide a method for classifying a speech dialogue. Specifically, the second processing device130(e.g., the acquisition module510) may obtain target speech dialogue data. More descriptions regarding obtaining the target speech dialogue data may be found elsewhere in the present disclosure (e.g., operation602and the description thereof). The second processing device130(e.g., the determination module520) may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on the target speech dialogue data based on the text embedding model, the phonetic symbol embedding model, and the role embedding model, respectively. The second processing device130(e.g., the determination module520) may also obtain at least one of a dialect vector representation sequence, an emotion vector representation sequence, or a background text vector representation sequence corresponding to the target speech dialogue data. More descriptions regarding determining the vector representation sequences may be found elsewhere in the present disclosure (e.g., operations604,610,611, and612and the descriptions thereof). The second processing device130(e.g., the input module530) may determine a representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into a trained speech dialogue coding model. In some embodiments, the second processing device130(e.g., the input module530) may determine the representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the dialect vector representation sequence, the emotion vector representation sequence, or the background text vector representation sequence into the trained speech dialogue coding model. More descriptions regarding determining the representation vector corresponding to the target speech dialogue data may be found elsewhere in the present disclosure (e.g., operations606and613and the descriptions thereof). Further, the second processing device130(e.g., the processing module540) may determine an intention classification result of the target speech dialogue data by inputting the representation vector into a classification model (e.g., an intention classification model). As used herein, an intention classification result of speech dialogue data may refer to a classification of thoughts and/or semantics of users (e.g., a passenger, a driver) associated with the speech dialogue data. Take a transportation service scenario as an example: if there is a service disagreement between a driver and a passenger, a responsibility judgment may be made based on the target speech dialogue data. In this case, the intention classification result may be "driver responsibility," "passenger responsibility," "neither the driver nor the passenger has responsibility," "responsibility cannot be judged," etc.
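As one possible sketch of an intention classification model operating on representation vectors, the snippet below uses a logistic regression (one of the model types contemplated in this disclosure) with the example labels above. The training data shown is synthetic and purely illustrative; a real model would be trained on pooled representation vectors of annotated sample speech dialogue data.

```python
# Minimal sketch of intention classification over representation vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

labels = [
    "driver responsibility",
    "passenger responsibility",
    "neither the driver nor the passenger has responsibility",
    "responsibility cannot be judged",
]
# Hypothetical annotated training set: one pooled representation vector
# per sample speech dialogue, with an intention label index.
X_train = np.random.randn(40, 768)
y_train = np.random.randint(0, len(labels), size=40)

intent_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

x_target = np.random.randn(1, 768)   # representation vector of target data
print(labels[int(intent_model.predict(x_target)[0])])
```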
In some embodiments, the intention classification model may be a neural network model, a logistic regression model, a support vector machine, a random forest, or the like, or a combination thereof. In some embodiments, the intention classification model may be trained based on training data with annotations. In some embodiments, the training data may be sample speech dialogue data and the annotation may be an intention classification result of the sample speech dialogue data. In some embodiments, the annotation of the training data may be manually added by a user or automatically added by one or more components (e.g., the first processing device120) of the speech dialogue processing system100. In some embodiments, the training process of the intention classification model may be similar to the training process of the classification model and details are not repeated here. In some embodiments, the present disclosure may also provide a method for determining an answer (e.g., an answer to a question) in a speech dialogue. Specifically, the second processing device130(e.g., the acquisition module510) may obtain target speech dialogue data (e.g., a question). More descriptions regarding obtaining the target speech dialogue data may be found elsewhere in the present disclosure (e.g., operation602and the description thereof). The second processing device130(e.g., the determination module520) may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on the target speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. The second processing device130(e.g., the determination module520) may also obtain at least one of a dialect vector representation sequence, an emotion vector representation sequence, or a background text vector representation sequence corresponding to the target speech dialogue data. More descriptions regarding determining the vector representation sequences may be found elsewhere in the present disclosure (e.g., operations604,610,611, and612and the descriptions thereof). The second processing device130(e.g., the input module530) may determine a representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into a trained speech dialogue coding model. In some embodiments, the second processing device130(e.g., the input module530) may determine the representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the dialect vector representation sequence, the emotion vector representation sequence, or the background text vector representation sequence into the trained speech dialogue coding model. More descriptions regarding determining the representation vector corresponding to the target speech dialogue data may be found elsewhere in the present disclosure (e.g., operations606and613and the descriptions thereof). 
Further, the second processing device130(e.g., the processing module540) may determine an answer for the target speech dialogue data (e.g., a question in the target speech dialogue data) by inputting the representation vector into a question-answer (QA) model. In some embodiments, the QA model may include a retrieval sub-model (e.g., a BM25 model) and an answer determination sub-model. Specifically, the second processing device130may determine a plurality of candidate answers for the target speech dialogue data based on the retrieval sub-model and identify a target answer from the plurality of candidate answers based on the answer determination sub-model. According to some embodiments of the present disclosure, various information (e.g., the dialogue text information, the phonetic symbol information, the role information, the background information, the dialect information, and/or the emotion information) is taken into consideration; accordingly, the QA model can provide answers in various expressions in response to different speech dialogue data. For example, if a user asks a question in Cantonese dialect, the QA model may output an answer in Cantonese dialect. As another example, if the emotion of the user who asks the question is relatively low, the QA model may output the answer using comforting language. In some embodiments, the QA model may be a text matching model. For example, the QA model may be a BERT model. In some embodiments, the QA model may be trained based on training data with annotations. In some embodiments, the training data may be sample speech dialogue data (e.g., a sample question) and the annotation may be a sample answer for the sample speech dialogue data. In some embodiments, the annotation of the training data may be manually added by a user or automatically added by one or more components (e.g., the first processing device120) of the speech dialogue processing system100. In some embodiments, the training process of the QA model may be similar to the training process of the classification model and details are not repeated here. In some embodiments, the present disclosure may also provide a method for translating a speech dialogue. Specifically, the second processing device130(e.g., the acquisition module510) may obtain target speech dialogue data. More descriptions regarding obtaining the target speech dialogue data may be found elsewhere in the present disclosure (e.g., operation602and the description thereof). The second processing device130(e.g., the determination module520) may obtain a text vector representation sequence, a phonetic symbol vector representation sequence, and a role vector representation sequence by performing a vector transformation on the target speech dialogue data based on a text embedding model, a phonetic symbol embedding model, and a role embedding model, respectively. The second processing device130(e.g., the determination module520) may also obtain at least one of a dialect vector representation sequence, an emotion vector representation sequence, or a background text vector representation sequence corresponding to the target speech dialogue data. More descriptions regarding determining the vector representation sequences may be found elsewhere in the present disclosure (e.g., operations604,610,611, and612and the descriptions thereof).
The second processing device130(e.g., the input module530) may determine a representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, and the role vector representation sequence into a trained speech dialogue coding model. In some embodiments, the second processing device130(e.g., the input module530) may determine the representation vector corresponding to the target speech dialogue data by inputting the text vector representation sequence, the phonetic symbol vector representation sequence, the role vector representation sequence, and at least one of the dialect vector representation sequence, the emotion vector representation sequence, or the background text vector representation sequence into the trained speech dialogue coding model. More descriptions regarding determining the representation vector corresponding to the target speech dialogue data may be found elsewhere in the present disclosure (e.g., operations606and613and the descriptions thereof). Further, the second processing device130(e.g., the processing module540) may determine a translation result of the target speech dialogue data by inputting the representation vector into a translation model. For example, if the target speech dialogue data is in Chinese, the translation model may output a translation result of the target speech dialogue data in English. According to some embodiments of the present disclosure, various information (e.g., the dialogue text information, the phonetic symbol information, the role information, the background information, the dialect information, and/or the emotion information) is taken into consideration; accordingly, the logic of the target speech dialogue data can be understood accurately and the accuracy of the translation result can be improved. In some embodiments, the translation model may be a transformer model, a long short-term memory (LSTM) model, etc. In some embodiments, the translation model may be trained based on training data with annotations. In some embodiments, the training data may be sample speech dialogue data and the annotation may be a sample translation result of the sample speech dialogue data. In some embodiments, the annotation of the training data may be manually added by a user or automatically added by one or more components (e.g., the first processing device120) of the speech dialogue processing system100. In some embodiments, the training process of the translation model may be similar to the training process of the classification model and details are not repeated here. It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be added or omitted. FIG.7is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure. In some embodiments, the first processing device120, the second processing device130, and/or the user terminal110may be implemented on the computing device700.
For example, the second processing device130may be implemented on the computing device700and configured to perform functions of the second processing device130disclosed in this disclosure. The computing device700may be used to implement any component of the speech dialogue processing system100as described herein. For example, the second processing device130may be implemented on the computing device700, via its hardware, software program, firmware, or a combination thereof. Although only one such computer is shown, for convenience, the computer functions relating to the online service as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load. The computing device700may include COM ports750connected to and from a network connected thereto to facilitate data communications. The computing device700may also include a processor720, in the form of one or more processors (e.g., logic circuits), for executing program instructions. For example, the processor720may include interface circuits and processing circuits therein. The interface circuits may be configured to receive electronic signals from a bus710, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process. The processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus710. The computing device700may further include program storage and data storage of different forms including, for example, a disk770, a read only memory (ROM)730, or a random access memory (RAM)740, for storing various data files to be processed and/or transmitted by the computing device700. The computing device700may also include program instructions stored in the ROM730, RAM740, and/or another type of non-transitory storage medium to be executed by the processor720. The methods and/or processes of the present disclosure may be implemented as the program instructions. The computing device700may also include an I/O component760, supporting input/output between the computer and other components. The computing device700may also receive programming and data via network communications. Merely for illustration, only one processor is described inFIG.7. Multiple processors are also contemplated; thus, operations and/or steps performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. For example, if in the present disclosure the processor of the computing device700executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two different CPUs and/or processors jointly or separately in the computing device700(e.g., the first processor executes operation A and the second processor executes operation B, or the first and second processors jointly execute operations A and B). FIG.8is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure. In some embodiments, the user terminal110may be implemented on the mobile device800.
As illustrated inFIG.8, the mobile device800may include a communication platform810, a display820, a graphic processing unit (GPU)830, a central processing unit (CPU)840, an I/O850, a memory860, a mobile operating system (OS)870, and a storage890. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device800. In some embodiments, the mobile operating system870(e.g., iOS™, Android™, Windows Phone™) and one or more applications880may be loaded into the memory860from the storage890in order to be executed by the CPU840. The applications880may include a browser or any other suitable mobile app for receiving and rendering information in the speech dialogue processing system100. User interactions with the information stream may be achieved via the I/O850and provided to the first processing device120, the second processing device130, and/or other components of the speech dialogue processing system100. To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed. The beneficial effects of the present disclosure may include but are not limited to: (1) by merging phonetic symbol information, role information, dialect information, and/or emotion information of speech dialogue data, the accuracy of semantic understanding of the speech dialogue data can be improved; (2) when a plurality of vector representation sequences (e.g., a text vector representation sequence, a phonetic symbol vector representation sequence, a role vector representation sequence, a dialect vector representation sequence, an emotion vector representation sequence, a background text vector representation sequence) are inputted into a speech dialogue coding model, the plurality of vector sequences may be merged and transformed in various manners, which can integrate different functions of features embodied by the plurality of vector representation sequences; (3) in a training process of a speech dialogue coding model, the performance (e.g., the accuracy) of the speech dialogue coding model can be improved by a role prediction task, a keyword prediction task, and/or a sentence order prediction task; (4) a representation vector corresponding to the target speech dialogue data may be determined based on a trained speech dialogue coding model, and then a summary, an intention result, an answer, and/or a translation result of the target speech dialogue data may be determined based on the representation vector according to a corresponding model (e.g., a classification model, an intention classification model, a QA model, a translation model), which can improve the accuracy of speech dialogue data processing. It should be noted that different embodiments may have different beneficial effects. In different embodiments, the beneficial effects may be any one or a combination of the above beneficial effects, or any other beneficial effects that may be obtained.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to and are intended for those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented as entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware implementation that may all generally be referred to herein as a "block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), or in a cloud computing environment, or offered as a service such as a software as a service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations, therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.
105,278
11862144
DETAILED DESCRIPTION
E2E ASR, with the goal of directly mapping input speech features to output token sequences, has achieved state-of-the-art performance on a variety of tasks. Recently, E2E ASR has become increasingly used as an alternative to conventional hybrid speech recognition solutions. These hybrid solutions typically include a plurality of separate models, such as, for example, a lexicon model, an acoustic model, and a language model. Hybrid ASR solutions use these models in combination to perform ASR. In contrast, in an E2E AI model for ASR, these models are merged into one, i.e., only one model is trained to map an input audio signal directly to a word sequence that represents the speech content of the input audio signal. E2E AI models have the advantage of simplified training and decoding processes compared to hybrid approaches. However, E2E approaches to ASR have several potential challenges. Firstly, adaptation of an E2E model to a new domain may be difficult or expensive compared to a traditional hybrid ASR approach. For example, the training data for adapting an E2E model to a new domain potentially requires paired audio and text data. In contrast, a hybrid approach may potentially be adapted to a new domain using only text data for the new domain to train the language model. Gathering audio data that is paired with text data for a new domain is typically more expensive compared to gathering only text data. As a large amount of in-domain (audio-text) paired data is typically used to train an E2E AI model for ASR, adapting the E2E AI model to a new domain may potentially be time-consuming and expensive. As used herein, the term "domain" may be defined as a logical group of utterances that share common characteristics. Several example domains may include application-specific domains such as a video conferencing application domain, an email dictation application domain, etc. Each of these domains may share common speech patterns, similar words, and grammar. These domains may also share other types of audio characteristics such as background noise, codec used to encode a waveform, etc. Another challenge of the E2E approach to ASR is that E2E AI models for ASR trained by conventional means are typically not as robust to diverse environments and scenarios as the hybrid approach. For example, for an E2E model that is trained with data that contains clean audio with minimal background noise for a specific word in a first domain, and that is also trained with noisy audio for the same word in a second domain, the trained E2E model may potentially only accurately learn how to map a speech input to that word with clean speech input for the first domain and noisy speech input for the second domain. Thus, the E2E model may potentially be inaccurate in mapping noisy speech input for the first domain and clean speech input for the second domain. To address these issues,FIG.1illustrates an example computer system10that implements an audio concatenation technique to generate paired audio and text data for training an E2E model. The paired audio and text data generated by the techniques described herein may potentially reduce the high resource costs and time consumption typically required to gather large amounts of paired audio and text data for training an E2E AI model.
The audio training data generated by the audio concatenation, which will be described in more detail below, may be used to train any suitable type of E2E AI model, such as, for example, recurrent neural network transducer (RNN-T) and attention-based encoder-decoder (AED) models. However, it should be appreciated that the systems and processes described herein may also be implemented with other types of E2E models, and are not limited to the specific examples described herein. The computer system10may include a computer device12, and in some examples, another computer device14which may take the form of a user computer device configured to communicate with the computer device12. The computer device12may take the form of a personal computer, a server computer, or another suitable type of computer device. In one example, the computer device12may take the form of a plurality of server computer devices configured to operate in a cloud computing configuration. The computer device12may include one or more processors16, memory devices18, input devices20, and other suitable computer components. The memory devices18may include volatile and non-volatile storage devices. The processor16of the computer device12may be configured to execute an augmented data generation module22that implements an audio concatenation technique to generate new audio training data by manipulating audio segments for words or other speech units, like phrases, word pieces, syllables, etc., from existing audio data. For example, in order to generate audio data for an elevator domain text statement such as “elevator, open door”, the augmented data generation module22may be configured to extract the audio segments of each word in the statement (“elevator”, “open”, and “door”) from existing audio training data stored on the computer device12. The augmented data generation module22may be configured to concatenate those extracted audio segments together to generate a new audio segment that may be paired with the elevator domain text statement. As will be described in more specific detail below, the audio segments associated with each of the words or phrases may not necessarily be recorded from the same speaker or in the same acoustic environment. While the concatenated audio may sound odd to a human ear, the concatenated audio does not cause substantive inaccuracies in the training of an E2E AI model. By using the augmented data generation techniques described herein, the computer device12may generate a large amount of new paired audio and text training data that may be used to provide potential improvements in the training of E2E AI models, such as RNN-T and AED models for ASR. In one improvement, a robustness of a general domain for the E2E AI model may be improved by replacing the audio or acoustic feature of one word or other speech unit in one utterance with the audio of the same word from another utterance in the training data. That is, even though the general domain includes training data that has paired audio and text data for the statement “Please open the door” that is uttered by a single speaker, the robustness of the general domain may be improved by concatenating audio from multiple people to generate different variations of audio that may be paired with the same text statement. The augmented training data may provide the potential benefit of causing the E2E AI model to learn the real discriminative information for the word content, and ignore the variations that may be caused by different speakers or acoustic environments. 
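As a concrete illustration of the concatenation idea for the “elevator, open door” example above, the following is a minimal Python sketch under the assumption that word-level recordings are available as sample arrays; it is not the disclosed module itself, and the array names are hypothetical stand-ins.

```python
# Minimal sketch only: concatenate stored word-level recordings to synthesize
# audio for the text "elevator open door". The random arrays stand in for
# real recordings of the individual words.
import numpy as np

elevator = np.random.randn(8000)   # hypothetical recording of "elevator"
open_ = np.random.randn(6000)      # hypothetical recording of "open"
door = np.random.randn(5000)       # hypothetical recording of "door"

# New paired training example: (text statement, concatenated audio signal).
statement = "elevator open door"
audio = np.concatenate([elevator, open_, door])
```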
In a second improvement, an E2E AI model may be adapted to a target domain that does not have paired audio and text training data, or a target domain where gathering paired audio and text training data would be cost or time prohibitive. For example, large amounts of text training data may be gathered for the target domain. Even though the computer device12does not include audio data that matches the content of the text training data, the computer device12may generate new audio data that does match the text training data in the target domain by concatenated suitable audio segments from the general audio data stored on the computer device12. As illustrated inFIG.1, the computer device12may be configured to store a set of audio training data24that is general audio data for the computer device12. The set of audio training data24is recognized audio, and may be paired with corresponding text training data. The set of audio training data24may include one or more subsets of audio training data26. The subsets of audio training data26may cover multiple domains, different acoustic environments, different speakers, etc. In one example, each subset of audio training data26may be recorded for different acoustic parameters28, which may include, for example, a background noise parameter, an audio quality parameter, and a speech accent parameter. The background noise parameter may indicate whether the recorded audio for the subset of audio training data26is “clean” or “noisy”. As a specific example, audio from an audio book recorded may potentially be recorded in a studio with minimal amounts of background noise, and would be “clean” audio. On the other hand, audio from an online meeting application may potentially include large amounts of background noise and audio artifacts from users in the meeting, and would be “noisy” audio. The audio quality parameter may indicate whether the recorded audio for the subset of audio training data26is “poor” or “excellent” audio quality. For example, audio recorded by a studio grade microphone and enterprise audio applications may have less distortion and other audio artifacts. On the other hand, audio recorded by a user's webcam microphone for an online meeting or a cellphone, or audio that has been compressed, may potentially include a large amount of audio artifacts. The speech accent parameter may indicate an accent or geographical location for a speaker that was recorded for the audio segment of the subset of audio training data26. The audio training data24may include a plurality of subsets of audio training data26that cover a range of different accents for a particular language. The subsets of audio training data26may also cover a plurality of different speakers that may have different accents or speech styles. By covering a wide range of speakers, the set of audio training data24may be used to train a more robust E2E AI model that is more accurate when processing input audio for users that may cover a range of accents and speech styles. As illustrated inFIG.1, each subset of audio training data26includes a plurality of audio segments30. Each audio segment30includes an audio signal for a particular word or phrase. In one example, the set of audio training data24is recognized audio, and the word content of the audio is already known. From example, the set of audio training data26may be paired with corresponding text training data. As another example, the audio segments30may be manually recognized and labeled by a user of the computer device12. 
In either example, each audio segment30may include or be associated with metadata32that indicates a word or phrase associated with that audio segment30. That is, an audio segment30for an utterance of “open” may include metadata32indicating the word “open” for that audio segment30. The metadata32is indexed on the computer device12and searchable by the augmented data generation module22. The plurality of subsets of audio training data26may include overlap in the word content of the audio segments30recorded with different acoustic parameters28. For example, the plurality of subsets of audio data26include a plurality of audio segments30that are associated with a same word or phrase and different acoustic parameters28. As a specific example, the plurality of subsets of audio training data26may include audio segments30for the word “open” that is uttered by a plurality of different people, or recorded in a plurality of different acoustic environments, or otherwise having different acoustic parameters28. In this manner, the set of audio training data24may include audio segments30for the same word or phrase, but for a range of different domains, speakers, acoustic environments, and other parameters. The computer device12may be configured to receive a set of structured text data34that includes one or more target training statements36. It will be appreciated that the structured text data may include meta data such as word boundaries, sentence boundaries, pronunciations, and part of speech tags. The computer device12may be configured to use the metadata of the structured text data34to parse the text content and extract the one or more target training statements36. Each of the target training statements36may include a plurality of text segments38comprising a word or phrase. However, it should be appreciated that the text segments38may comprise other linguistic units such as a sub-word, a phoneme, etc. The structured text data34is structured such that the training statements36are machine readable and include boundaries between each training statement36. Each training statement36of the structured text data34may be processed to generate a paired concatenated audio corresponding to that training statement36, as will be discussed in more detail below. In the example illustrated inFIG.1, the structured text data34is sent from another computer device14to the computer device12via a computer network, such as a wide area network (WAN). In this example, a user of the other computer device14may be developing or updating an ASR application39that uses an E2E AI model. It may be valuable to the user to update the E2E AI model used by the ASR application to be trained with a target set of text training data or a target domain. Thus, the user may generate the structured text data34to target a suitable training regime for an E2E AI model. For example, the user may collect and gather the structured text data34in the target domain, and send the structured text data34to the computer device12. However, it should be appreciated that in other examples, the structured text data34is generated on the computer device12. In one example, in order to improve a robustness of an E2E AI model40being trained by the computer device12, the computer device12may use stored structured text data34from the paired audio and text training data in the general or source domain stored on the computer device12. 
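Because the metadata32is described as indexed and searchable by word, a small sketch of one way such an index could be organized may be helpful. The `AudioSegment` class and its field names below are illustrative assumptions for the sketch, not structures named in the disclosure.

```python
# Illustrative sketch only: a hypothetical in-memory index of audio segments
# keyed by the word recorded in each segment's metadata.
from collections import defaultdict
from dataclasses import dataclass, field

import numpy as np


@dataclass
class AudioSegment:
    word: str                    # word or phrase from the segment metadata
    samples: np.ndarray          # raw audio signal for the word
    acoustic_params: dict = field(default_factory=dict)  # e.g. {"noise": "clean"}


def build_word_index(segments):
    """Map each metadata word (lowercased) to every segment that utters it."""
    index = defaultdict(list)
    for seg in segments:
        index[seg.word.lower()].append(seg)
    return index
```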
It should be appreciated that other examples of structured text data34not specifically described herein may also be used by the computer device12. The structured text data34is sent to the augmented data generation module22executed by the processor16of the computer device12. The augmented data generation module22may be configured to process each training statement36of the structured text data34to generate a suitable paired audio signal. Specifically, in one example, for a target training statement36of the set of structured text data34, the processor16may be configured to generate a concatenated audio signal42that matches a word content of the target training statement36. Generating the concatenated audio signal42may include several steps. At step (1), the processor16may be configured to compare the words or phrases of the plurality of text segments38of the target training statement36to respective words or phrases of audio segments30of the stored set of audio training data24. For example, the processor16may be configured to extract the letters for the text segments38from the target training statement36, and perform a string comparison, or another suitable type of comparison, with the metadata32associated with each audio segment30of the set of audio training data24. Based on the comparison, the processor16may be configured to determine whether there are any matches between the text segments38and the audio segments30. At step (2), the processor16may be configured to select a plurality of audio segments44from the set of audio training data24based on a match in the words or phrases between the plurality of text segments38of the target training statement36and the selected plurality of audio segments44. The processor16may be configured to also consider the acoustic parameters28when selecting the plurality of audio segments. For example, in order to improve a robustness of the trained E2E model40, the processor16may be configured to select audio segments30that have a same word content, but different acoustic parameters28. At step (3), the processor16may be configured to concatenate the selected plurality of audio segments44into the concatenated audio signal42. Concatenating the selected plurality of audio segments44may include extracting the audio signal from each of the plurality of audio segments, and merging the extracted audio signals into a single audio signal. In some example, the processor16may also be configured to perform further audio post-processing on the concatenated audio signal42, such as volume equalization, audio smoothing, etc. The processor16may be configured to perform steps (1)-(3) for each training statement36in the structured text data34until a corresponding concatenated audio signal42is generated for each training statement36. The processor16may then be configured to generate an augmented set of training data46that includes the set of structured text data34paired with respective concatenated audio signals42. It should be appreciated that the augmented set of training data46may be generated according to the process described above without requiring new audio data to be collected, thus providing the potential benefit of reduced costs in resources and time. In one example, the computer device12may be configured to send the augmented set of training data46to the other computer device14. 
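A hedged sketch of steps (1)-(3) above, reusing the hypothetical word index from the previous snippet: the text segments of a training statement are compared to the word metadata, one matching segment is selected per word, and the selected signals are concatenated into a paired example. Selection policy and audio post-processing are deliberately simplified here.

```python
# Sketch of steps (1)-(3): compare, select, concatenate. Assumes the
# hypothetical AudioSegment/build_word_index helpers sketched earlier.
import numpy as np


def generate_concatenated_signal(statement, word_index):
    words = statement.lower().split()          # text segments of the statement
    selected = []
    for w in words:
        candidates = word_index.get(w, [])     # step (1): compare by word
        if not candidates:
            return None                        # no matching audio segment
        selected.append(candidates[0])         # step (2): pick a matching segment
    # Step (3): merge the extracted audio signals into a single signal.
    return np.concatenate([seg.samples for seg in selected])


def build_augmented_set(training_statements, word_index):
    """Pair each training statement with a concatenated audio signal."""
    return [(s, a) for s in training_statements
            if (a := generate_concatenated_signal(s, word_index)) is not None]
```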
In this manner, a user of the other computer device14may send the structured text data34to the computer device12, and receive the augmented set of training data46that includes paired concatenated audio for the uploaded structured text data34. The user of the other computer device14may then use the received augmented set of training data46to train an in-house E2E AI model for their application. In some examples, the augmented set of training data46may be used in addition to other training data, such as in-house training data of the user, to train the E2E AI model. In another example, the processor16of the computer device12may be configured to execute a training module48configured to train the E2E AI model for ASR40using the augmented set of training data46. In one example, the training module48may be configured to use the augmented set of training data46together with all or a portion of the set of audio training data24and corresponding text training data, or other sources of general training data stored by the computer device12to train the E2E AI model for ASR40. In this manner, the augmented set of training data46may augment other sources of training data to improve the robustness of the E2E AI model for ASR. It should be appreciated that the augmented set of training data46includes paired audio and text training data, and may thus be used to train any suitable E2E AI model, such as an RNN-T or AED model, using any suitable training process. In some examples, an updated E2E AI model41that has been trained using the augmented set of training data46may be sent to the other computer device14. The other computer device14, which may be a computer device for a user that is developing or updating an ASR application39, may receive the updated E2E AI model41that has been trained using the augmented set of training data46, and thus updated based on the structured text data34gathered by the other computer device14. The other computer device14may then cause the ASR application39to execute using the updated E2E AI model41. FIG.2illustrates an example of concatenating an audio signal for target domain adaptation of an E2E AI model. In the example ofFIG.2, the set of audio training data24includes audio segments30from one or more domains including a first domain, second domain, and third domain. It should be appreciated that in other examples, the audio segments30may be selected from the same domain, or any number of domains. The set of structured text data34that includes the target training statement36is from a target domain that is different than the one or more domains of the set of audio training data24. As the target domain is different, in this example the target training statement36includes a sequence of words that is new relative to the stored training data on the computer device12. That is, the stored audio and text training data on the computer device12does not include paired audio and text data for the target training statement36. As discussed above, collecting new audio to be paired with the target training statement36may potentially be expensive. To address this issue, the computer device12may be configured to concatenate selected audio segments30from the set of audio training data24to generate the concatenated audio signal42that corresponds to the target training statement36. In the specific example illustrated inFIG.2, the target training statement36is “Cortana open door”. 
While the set of audio training data24does not include this full statement, the set of audio training data24does include audio segments30for the phrases “Cortana turn on the radio for me”, “Please open window”, and “Door close”. By performing the comparison at step (1) discussed above, the computer device12may select audio segments44associated with the words “Cortana”, “Open”, and “Door” from the set of audio training data24. The computer device12may then concatenate the selected audio segments44to generate the concatenated audio signal42. This process may be performed for each training statement36in the structured text data34, and compiled into the augmented set of training data46. The augmented set of training data46generated using this process may be used for adapting an E2E AI model to the target domain of the structured text data34. FIG.3illustrates an example of concatenating audio segments having different acoustic parameters. In this example, the set of audio training data24and the set of structured text data34may be for the same domain. In the illustrated example, the target training statement36is the same phrase as the clean audio segments of “Please open window”. That is, the target training statement36already exists in the general training data and has paired audio data. However, as discussed above, the computer device12may be configured to generate an augmented set of training data46that includes concatenated audio segments for different acoustic parameters. In the specific example ofFIG.3, audio segments30from a subset of audio having a “clean audio” acoustic parameter are being concatenated with audio segments30from a subset of audio having a “noisy audio” acoustic parameter. To generate the concatenated audio signal42, the processor16may be configured to match the word or phrase of the target training statement36(e.g. Please open window) to the plurality of audio segments30across the plurality of subsets of audio data, and select the plurality of audio segments44from the set of audio training data24to include at least two audio segments30that are selected from different subsets of audio data26for different acoustic parameters28. In the example ofFIG.3, even though the subset of audio data26for the “clean audio” acoustic parameter includes an audio segment30corresponding to the word “open”, the computer device12may be configured to instead select the audio segment30from the subset of audio data26for the “noisy audio” acoustic parameter corresponding to the word “open”. The computer device12may then concatenate the “noisy audio” audio segment30for “open” with the “clean audio” audio segments30for “please” and “window”. The result is a concatenated audio signal42that matches the target training statement36and has intermixed acoustic parameters28. It should be appreciated that a similar process may be used to generate a concatenated audio signal for a target training statement36of “Keep door open” that also includes audio segments with different acoustic parameters28. Additionally, it should be appreciated that audio for more than two types of acoustic parameters28may be concatenated. In one example, the processor16may be further configured to select the plurality of audio segments30from the set of audio training data24based on a distribution parameter that biases the selection for a target acoustic parameter28. 
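One way to picture such a distribution parameter is as weights over candidate segments grouped by an acoustic parameter value, so that the per-word selection is biased toward a target mix and the resulting statement naturally intermixes acoustic parameters. The sketch below is an illustrative interpretation only; the weights, parameter names, and segment structure reuse the hypothetical helpers from the earlier snippets rather than details from the disclosure.

```python
# Sketch: per-word segment selection biased by a target distribution over an
# acoustic parameter, e.g. 25% "clean" vs. 75% "noisy". Reuses the
# hypothetical AudioSegment / word-index structures sketched earlier.
import random

import numpy as np

TARGET_DISTRIBUTION = {"clean": 0.25, "noisy": 0.75}   # hypothetical bias


def biased_pick(candidates, param="noise", distribution=TARGET_DISTRIBUTION):
    """Pick one candidate segment, weighted by the target distribution."""
    weights = [distribution.get(seg.acoustic_params.get(param), 1.0)
               for seg in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]


def concatenate_with_bias(words, word_index):
    """Assumes every word has at least one candidate in word_index."""
    picks = [biased_pick(word_index[w]) for w in words]
    return np.concatenate([seg.samples for seg in picks])
```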
In the example illustrated inFIG.3, the distribution parameter may be set to bias the concatenated audio signal42to include more audio segments30from the “clean audio” than the “noisy audio”. It should be appreciated that the distribution parameter may bias the selection to more than one type of target acoustic parameter28, such as, for example, to a clean audio acoustic parameter and an American English audio acoustic parameter simultaneously. These example distributions for the distribution parameter are merely exemplary, and it should be appreciated that the selection of the audio segments30may be biased toward different acoustic parameters28using other distributions. In this manner, a user may build an augmented set of training data46from concatenated audio signals that include audio segments30of the specified distribution for each of the specified acoustic parameters28. FIG.4illustrates an example graphical user interface (GUI)100of the system ofFIG.1. The GUI100includes a first selector102configured to receive user input specifying a path to the set of audio training data24, a second selector104configured to receive a path to the training statements36, a third selector106configured to receive a user input of a language recognition module that may be used to convert the training statements from text to structured text data34, and a fourth selector108configured to receive user input of specified acoustic parameters28that are to be included in the augmented set of training data46, and a fifth selector110by which a user may raise or lower affordances110A to adjust the relative percentage of a particular acoustic28parameter to be included in the augmented data set of training data46. In the illustrated example GUI100, the user has selected to include 25% clear audio segments and 75% noisy audio segments, 50% loud and 50% soft voice level audio segments, 25% child and 75% adult age speakers in the audio segments, and 25% British English, 25% American English, and 50% Non-Native English in the audio segments used to form the augmented set of training data42. This example GUI100is provided as an example, and it should be considered that numerous variations are possible. FIG.5shows a flowchart for an example method400of concatenating audio segments from a set of audio data for generating augmented training data. The following description of method400is provided with reference to the software and hardware components described above and shown inFIG.1. It should be appreciated that method400also can be performed in other contexts using other suitable hardware and software components. At402, the method400may include storing a set of audio training data that includes a plurality of audio segments and metadata indicating a word or phrase associated with each audio segment. The set of audio training data may include audio data for one or more domains, which may be referred to herein as a general domain or source domain of the computer system. The set of audio training data is recognized audio, and the metadata data indicates the word or phrase that corresponds to each audio segment. The metadata is indexed and computer searchable. In one example, set of audio data includes a plurality of subsets of audio data for different acoustic parameters which may include a background noise parameter, an audio quality parameter, and a speech accent parameter. 
The plurality of subsets of audio data include a plurality of audio segments that are associated with a same word or phrase and different acoustic parameters. In another example, the plurality of subsets of audio data are recorded from a plurality of different speakers. At404, the method400may include receiving a set of structured text data that includes one or more target training statements that each include a plurality of text segments comprising a word or phrase. In one example, the set of structured text data may be for a target domain that is different than the one or more domains of the set of audio training data. As used herein, the term “domain” may be defined as a logical group of utterances that share common characteristics. Several example domains may include application specific domains such as a video conferencing application domain, an email dictation application domain, etc. Each of these domains may share common speech patterns, similar words, and grammar. These domains may also share other types of audio characteristics such as background noise, codec used to encode a waveform, etc. In another example, the set of audio training data and the set of structured text data are for a same domain. The set of structured text data may include boundary data indicating a separation between the one or more included training statements. Words and phrases form the structured text data are extractable and comparable to the metadata associated with the set of audio data. At406, the method400may include, for a target training statement of the set of structured text data, generating a concatenated audio signal that matches a word content of the target training statement. Step406may include steps408-412. At408, the method400may include comparing the words or phrases of the plurality of text segments of the target training statement to respective words or phrases of audio segments of the stored set of audio training data. At410, the method400may include selecting a plurality of audio segments from the set of audio training data based on a match in the words or phrases between the plurality of text segments of the target training statement and the selected plurality of audio segments. In one example, step410may include selecting the plurality of audio segments from the set of audio training data to include at least two audio segments that are selected from different subsets of audio data for different acoustic parameters. Step410may also include selecting the plurality of audio segments from the set of audio training data based on a distribution parameter that biases the selection for a target acoustic parameter. At412, the method400may include concatenating the selected plurality of audio segments into the concatenated audio signal. Steps408-412may be completed for each target training statement of the set of structured text data. At414, the method400may include generating an augmented set of training data that includes the set of structured text data paired with respective concatenated audio signals. The set of audio training data used to generate the concatenated audio signals may be collected from a multitude of sources. As one example, the set of audio training data may be collected from audio book repositories that include both a speaker reading a book and the text of the book. As yet another example, the set of audio training data may be collected from sources of closed captioning or subtitles that are paired with audio of a video. 
As yet another example, the set of audio training data may be collected from dictation software that converts a user's speech input into text that is subsequently implicitly or explicitly verified by the user. As yet another example, the user's commands to a speech recognition enabled device, which are implicitly or explicitly verified by the user, could be used to collect the set of audio training data. It should be appreciated that other sources of audio and text data may be used to collect the set of audio training data. Prior to such data collection, participant's prior authorization for collection and use of the data is obtained, after informing the participants of the purposes to which the data will be used. The data is collected in a manner that does not associate personally identifiable information with the participants. Additionally, the training statements of the training data may also be collected from a multitude of text sources. In one example, the training statements may be collected from a text source associated with a target domain, such as, text content from emails for an email dictation domain if data collection of those emails is allowed by the user via a data sharing agreement. In another example, the training statements may be manually written by a user to target specific words or phrases. As a specific example, elevator control applications may target specific phrases such as “Floor one”, “Please close the door”, etc. As another example, the training statements may be collected from the paired audio and text data of the already existing training data for E2E AI models. In this example, to increase the robustness of the E2E AI model, a multitude of variations of concatenated audio signals may be generated using the techniques described herein for each training statement. That is, rather than a single pairing of a single audio signal with a single text statement, a multitude of variations of the audio signal may be generated by concatenating appropriate audio segments from different domains that have different acoustic parameters. In this manner, thousands of variations of audio signals that correspond to the same text statement may be synthetically generated and paired with that text statement. All of those variations may be collected into the augmented set of training data and used optionally or additionally to other training data such as the set of audio training data24and corresponding text training data to train an E2E AI model at step416, which may result in a trained model that is more robust to variations in acoustic conditions and speech patterns than an E2E AI model that is trained using conventional training data. At416, the method400may include training an end-to-end artificial intelligence model for automatic speech recognition using the generated augmented set of training data. The above described systems and method can be used to reduce the cost in resources and time typically required to achieve an increased variety in the paired audio and text training data set. The augmented training data generated by the described systems and method may be used to train a classifier that can recognize run time input speech under a variety of anticipated circumstances, such as containing audio that is uttered by non-native speakers, uttered by children vs. adults, uttered in a loud or soft voice, etc. 
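Since the selection step can be randomized, re-running it repeatedly yields many distinct concatenated signals for the same text statement, along the lines described above. A small sketch of that idea, reusing the hypothetical helpers from the earlier snippets (not code from the disclosure):

```python
# Sketch: generate several synthetic audio variations for one text statement
# by repeating a randomized, biased selection. Depends on the hypothetical
# concatenate_with_bias helper sketched earlier.
def variations_for_statement(statement, word_index, n_variations=5):
    words = statement.lower().split()
    return [(statement, concatenate_with_bias(words, word_index))
            for _ in range(n_variations)]
```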
This enables an ASR system trained using the augmented training data to more reliably understand the voice commands given to it, and properly process those commands, in an efficient manner that is pleasing to the user. Additionally, the augmented set of training data may be generated to improve different aspects of the E2E AI model. In one improvement, a robustness of a general domain for the E2E AI model may be improved by replacing the audio or acoustic feature of one word or other speech unit in one utterance with the audio of the same word from other utterance for the training data. The augmented training data may provide the potential benefit of causing the E2E AI model to learn the real discriminative information for the word content, and ignore the variations that may be caused by different speakers or acoustic environments. In a second improvement, an E2E AI model may be adapted to a target domain that does not have paired audio and text training data, or a target domain where gathering paired audio and text training data would be cost or time prohibitive. In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product. FIG.6schematically shows a non-limiting embodiment of a computing system500that can enact one or more of the methods and processes described above. Computing system500is shown in simplified form. Computing system500may embody the computer system10described above and illustrated inFIG.1. Computing system500may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phone), and/or other computing devices, and wearable computing devices such as smart wristwatches and head mounted augmented reality devices. Computing system500includes a logic processor502volatile memory504, and a non-volatile storage device506. Computing system500may optionally include a display subsystem508, input subsystem510, communication subsystem512, and/or other components not shown inFIG.6. Logic processor502includes one or more physical devices configured to execute instructions. For example, the logic processor may be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result. The logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the logic processor502may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. 
Aspects of the logic processor may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. In such a case, these virtualized aspects are run on different physical logic processors of various different machines, it will be understood. Non-volatile storage device506includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device506may be transformed—e.g., to hold different data. Non-volatile storage device506may include physical devices that are removable and/or built-in. Non-volatile storage device506may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology. Non-volatile storage device506may include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device506is configured to hold instructions even when power is cut to the non-volatile storage device506. Volatile memory504may include physical devices that include random access memory. Volatile memory504is typically utilized by logic processor502to temporarily store information during processing of software instructions. It will be appreciated that volatile memory504typically does not continue to store instructions when power is cut to the volatile memory504. Aspects of logic processor502, volatile memory504, and non-volatile storage device506may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example. The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system500typically implemented in software by a processor to perform a particular function using portions of volatile memory, which function involves transformative processing that specially configures the processor to perform the function. Thus, a module, program, or engine may be instantiated via logic processor502executing instructions held by non-volatile storage device506, using portions of volatile memory504. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc. When included, display subsystem508may be used to present a visual representation of data held by non-volatile storage device506. The visual representation may take the form of a graphical user interface (GUI). 
As the herein described methods and processes change the data held by the non-volatile storage device, and thus transform the state of the non-volatile storage device, the state of display subsystem508may likewise be transformed to visually represent changes in the underlying data. Display subsystem508may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic processor502, volatile memory504, and/or non-volatile storage device506in a shared enclosure, or such display devices may be peripheral display devices. When included, input subsystem510may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity; and/or any other suitable sensor. When included, communication subsystem512may be configured to communicatively couple various computing devices described herein with each other, and with other devices. Communication subsystem512may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network, such as a HDMI over Wi-Fi connection. In some embodiments, the communication subsystem may allow computing system500to send and/or receive messages to and/or from other devices via a network such as the Internet. The following paragraphs provide additional support for the claims of the subject application. One aspect provides a computer system comprising a processor configured to store a set of audio training data that includes a plurality of audio segments and metadata indicating a word or phrase associated with each audio segment, and receive a set of structured text data that includes one or more target training statements that each include a plurality of text segments comprising a word or phrase. For a target training statement of the set of structured text data, the processor is configured to generate a concatenated audio signal that matches a word content of the target training statement by: comparing the words or phrases of the plurality of text segments of the target training statement to respective words or phrases of audio segments of the stored set of audio training data, selecting a plurality of audio segments from the set of audio training data based on a match in the words or phrases between the plurality of text segments of the target training statement and the selected plurality of audio segments, and concatenating the selected plurality of audio segments into the concatenated audio signal. The processor is further configured to generate an augmented set of training data that includes the set of structured text data paired with respective concatenated audio signals. 
In this aspect, additionally or alternatively, the processor may be further configured to train an end-to-end artificial intelligence model for automatic speech recognition using the generated augmented set of training data. In this aspect, additionally or alternatively, the set of audio training data may be for one or more domains, and the set of structured text data may be for a target domain that is different than the one or more domains of the set of audio training data. In this aspect, additionally or alternatively, the set of audio training data and the set of structured text data may be for a same domain. In this aspect, additionally or alternatively, the set of audio data may include a plurality of subsets of audio data for different acoustic parameters, wherein the plurality of subsets of audio data may include a plurality of audio segments that are associated with a same word or phrase and different acoustic parameters. In this aspect, additionally or alternatively, the different acoustic parameters may be selected from the group consisting of a background noise parameter, an audio quality parameter, and a speech accent parameter. In this aspect, additionally or alternatively, the plurality of subsets of audio data may be recorded from a plurality of different speakers. In this aspect, additionally or alternatively, the processor may be further configured to train the end-to-end artificial intelligence model using the generated augmented set of training data that includes concatenated audio signals comprising concatenated audio segments recorded from the plurality of different speakers. In this aspect, additionally or alternatively, the processor may be further configured to generate the concatenated audio signal that matches the word or phrase of the target training statement by selecting the plurality of audio segments from the set of audio training data to include at least two audio segments that are selected from different subsets of audio data for different acoustic parameters. In this aspect, additionally or alternatively, the processor may be further configured to select the plurality of audio segments from the set of audio training data based on a distribution parameter that biases the selection for a target acoustic parameter. Another aspect provides a method comprising, at a processor of a computer device, storing a set of audio training data that includes a plurality of audio segments and metadata indicating a word or phrase associated with each audio segment, and receiving a set of structured text data that includes one or more target training statements that each include a plurality of text segments comprising a word or phrase. For a target training statement of the set of structured text data, the method includes generating a concatenated audio signal that matches a word content of the target training statement by: comparing the words or phrases of the plurality of text segments of the target training statement to respective words or phrases of audio segments of the stored set of audio training data, selecting a plurality of audio segments from the set of audio training data based on a match in the words or phrases between the plurality of text segments of the target training statement and the selected plurality of audio segments, and concatenating the selected plurality of audio segments into the concatenated audio signal. 
The method further includes generating an augmented set of training data that includes the set of structured text data paired with respective concatenated audio signals. In this aspect, additionally or alternatively, the method may further comprise training an end-to-end artificial intelligence model for automatic speech recognition using the generated augmented set of training data. In this aspect, additionally or alternatively, the set of audio training data may be for one or more domains, and the set of structured text data may be for a target domain that is different than the one or more domains of the set of audio training data. In this aspect, additionally or alternatively, the set of audio training data and the set of structured text data may be for a same domain. In this aspect, additionally or alternatively, the set of audio data may include a plurality of subsets of audio data for different acoustic parameters, wherein the plurality of subsets of audio data may include a plurality of audio segments that are associated with a same word or phrase and different acoustic parameters. In this aspect, additionally or alternatively, the different acoustic parameters may be selected from the group consisting of a background noise parameter, an audio quality parameter, and a speech accent parameter. In this aspect, additionally or alternatively, the plurality of subsets of audio data may be recorded from a plurality of different speakers. In this aspect, additionally or alternatively, the method may further comprise generating the concatenated audio signal that matches the word or phrase of the target training statement by selecting the plurality of audio segments from the set of audio training data to include at least two audio segments that are selected from different subsets of audio data for different acoustic parameters. In this aspect, additionally or alternatively, the method may further comprise selecting the plurality of audio segments from the set of audio training data based on a distribution parameter that biases the selection for a target acoustic parameter. Another aspect provides a computer device comprising a processor configured to determine a set of structured text data for training an end-to-end artificial intelligence model that is used by an automatic speech recognition application. The set of structured text data includes one or more target training states that each include a plurality of text segments comprising a word or phrase. The processor is configured to send the set of structured text data to a server device to cause the server device to generate an augmented set of training data that includes the set of structured text data paired with respective concatenated audio signals. A concatenated audio signal that matches a word content of a target training statement of the set of structured text data is generated by comparing the words or phrases of the plurality of text segments of the target training statement to respective words or phrases of audio segments of a stored set of audio training data that includes audio segments and metadata indicating a word or phrase associated with each audio segment, selecting a plurality of audio segments from the set of audio training data based on a match in the words or phrases between the plurality of text segments of the target training statement and the selected plurality of audio segments, and concatenating the selected plurality of audio segments into the concatenated audio signal. 
In this aspect, additionally or alternatively, the processor may be further configured to receive an updated end-to-end artificial intelligence model that has been trained using the augmented set of training data, and cause the automatic speech recognition application to execute using the updated end-to-end artificial intelligence model. It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed. The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
52,997
11862145
DETAILED DESCRIPTION
1 Overview
Referring toFIG.1, a deep fusion system100has inputs in two or more modes, represented in the figure as a representative mode 1 and mode 2. For example, the input in mode 1 is text input, for example, represented by words, or alternatively tokenized representations derived from the words or subword units, with the text in some examples being derived by speech recognition from an audio input. Mode 2 is speech input, represented as an acoustic waveform, spectrogram, or other signal analysis of the audio signal. A first processing path processes the mode 1 input in a series of stages, represented as processing stage 1 (111), stage 2 (112), up to stage N (119). For example, the stages may correspond to different time scales, such as syllables, words, sentences, conversational turns, and the like, or deep representation at the same time scale, with each stage processing the input at a more aggregated level. Similarly, a second mode is processed by the system, as represented by the input in mode 2 in the figure. The processing stages 1 to N (121,122,129) process the audio input at corresponding levels to the processing of the mode 1 text input. Rather than each of the modes being processed to reach a corresponding mode-specific decision, and then forming some sort of fused decision from those mode-specific decisions, multiple (e.g., two or more, all) levels of processing in multiple modes pass their state or other output to corresponding fusion modules (191,192,199), shown in the figure with each level having a corresponding fusion module. Each fusion module at one level passes its output to the fusion module at the next level, with the final fusion module providing the overall decision output. In one example, the first mode is a text mode in which processing stage 1 (111) operates at a word level. In particular, the stage implements a bidirectional long short-term memory (BiLSTM) neural network, a special case of a Recurrent Neural Network (RNN). The values provided from the mode 1 stage 1 processor (111) to the fused stage 1 processor (191) include the state maintained in the LSTM structure. The second mode is an audio mode such that mode 2 stage 1 (121) processes audio samples, or alternatively spectrogram representations or other features extracted from the audio samples, in a manner aligned with the text tokens processed in the mode 1 stage 1 processor. That is, the operation of the stage 1 processors is synchronized. For example, the mode 2 stage 1 processor may implement an RNN/BiLSTM, and again the state of the LSTM is used in the signal passed from the mode 2 level 1 stage to the fused level 1 stage. The fused level 1 stage also implements an LSTM structure. The utterance stage 2 and the high-level stage 3 have similar structures. The structure shown inFIG.1is trained using any of a variety of neural network weight estimation techniques. In some examples, each of the modes is first pretrained independently, before being combined to perform fused training.
2 Preferred Embodiment
As shown inFIG.2, the proposed architecture consists of three parts: 1) a text encoder, 2) an audio encoder, and 3) a Deep Hierarchical Fusion (DHF) network. The two independent modal encoders supply the DHF network with features at each neural layer, shown as vertical arrows inFIG.2. The DHF network fuses the information in multiple interconnected levels and finally feeds its output to a classifier that performs sentiment analysis. 
There are two directions of the flow of the information in the architecture. The first one, illustrated by the vertical arrows, has already been described and depicts the different level representations which are supplied to the DHF. The second one, denoted by the horizontal arrows, simulates the forward propagation of the information through the deep network. For the specific task of performing sentiment analysis on spoken sentences, the fusion of textual and acoustic information is performed in three stages. The word-level accepts as inputs two independent modality representations from the encoders. The derived fused representation is then fed-forward to the sentence level which exploits not only the prior fused information, but also re-uses audio and text features, introducing multiple learning paths to the overall architecture. Our DHF network ends up with the high level fusion representation that resides in a (more abstract) multimodal representation space.
2.1 Text Encoder
To extract text representations, we use bidirectional LSTM layers [1], which process an input sequentially and are able to capture time-dependencies of language representations. Bidirectional stands for processing an input both forward and backwards. The hidden state $g_i$ of the BiLSTM at each timestep can be viewed as:

$g_i = \overrightarrow{g_i} \parallel \overleftarrow{g_i}, \quad i = 1, \ldots, N$   (1)

where $N$ is the sequence length, $\parallel$ denotes concatenation and $\overrightarrow{g_i}, \overleftarrow{g_i} \in \mathbb{R}^D$ are the forward and backward hidden state representations for the i-th word in the sequence. Since elements of the input sequence do not contribute equally to the expression of the sentiment in a message, we use an attention mechanism that aggregates all hidden states $g_i$, using their relative importance $b_i$, by putting emphasis on the impactful components of the sequence [2]. This structure is described as follows:

$e_i = \tanh(W_g g_i + b_g), \quad e_i \in [-1, 1]$   (2)

$b_i = \frac{\exp(e_i)}{\sum_{t=1}^{N} \exp(e_t)}, \quad \sum_{i=1}^{N} b_i = 1$   (3)

$g = \sum_{i=1}^{N} b_i g_i, \quad g \in \mathbb{R}^{2D}$   (4)

where the attention weights $W_g$, $b_g$ adapt during training. Formally, the attention mechanism feeds every hidden state $g_i$ to a nonlinear network that assigns an energy value $e_i$ to every element (2). These values are then normalized via (3), to form a probability distribution, and a weight $b_i$ is attached to each hidden representation. We compute the representation $g$ of the whole message as the sum (4) of the weighted representations. Since the sequential information is modeled, a fully connected network is applied to perform the classification task. The high-level representation $\tilde{g} \in \mathbb{R}^{2D}$ extracted by the fully connected layers can be described as:

$\tilde{g} = W_t g + b_t$   (5)

where $W_t$ and $b_t$ are the trainable parameters. After the training procedure we strip the output layer off and we use the text subnetwork as the text encoder, as it can be seen inFIG.2. This encoder provides the DHF network with three different high-level representations, namely word-level features $b_{1:N}$, $g_{1:N}$, sentence-level representations $g$ and high-level features $\tilde{g}$. 
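Since the implementation framework is later stated to be PyTorch, a compact sketch of a BiLSTM encoder with the additive attention of Eqs. (1)-(4) may make the structure concrete. The dimensions and names below are illustrative assumptions, not the authors' code, and the same pattern applies to both the text and audio encoders.

```python
# Sketch of a BiLSTM encoder with the additive attention of Eqs. (1)-(4):
# hidden states g_i are scored, softmax-normalized, and summed.
import torch
import torch.nn as nn


class AttentiveBiLSTMEncoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)      # W_g, b_g of Eq. (2)

    def forward(self, x):                              # x: (batch, N, input_dim)
        g, _ = self.bilstm(x)                          # g_i: (batch, N, 2*hidden)
        e = torch.tanh(self.attn(g))                   # Eq. (2): energies e_i
        b = torch.softmax(e, dim=1)                    # Eq. (3): weights b_i
        sentence = (b * g).sum(dim=1)                  # Eq. (4): weighted sum
        return g, b.squeeze(-1), sentence              # word- and sentence-level


# Example: 300-d GloVe inputs, D = 32 as in the experiments reported below.
encoder = AttentiveBiLSTMEncoder(input_dim=300, hidden_dim=32)
words = torch.randn(4, 20, 300)                        # batch of 4, N = 20
g_1N, b_1N, g = encoder(words)
```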
2.2 Audio Encoder
A similar approach is followed regarding the acoustic module, since speech features are aligned in word-level and then averaged, resulting in an audio representation for each word. We use a BiLSTM (6):

$h_i = \overrightarrow{h_i} \parallel \overleftarrow{h_i}, \quad i = 1, \ldots, N$   (6)

where $h_i \in \mathbb{R}^{2H}$ describes the hidden unit of the i-th timestep. An attention mechanism (2), (3), (7) is also applied:

$h = \sum_{i=1}^{N} a_i h_i, \quad h \in \mathbb{R}^{2H}$   (7)

with the respective attention layer parameters denoted as $W_h$ and $b_h$. Similarly to the text encoder 2.1, a high-level audio representation $\tilde{h} \in \mathbb{R}^{2H}$ is learned, via a fully connected network with trainable weight parameters $W_a$, $b_a$. This representation is, in turn, given to an output softmax layer which performs the classification. After the learning process, the softmax layer of the speech classifier is no longer considered as part of the network. The remaining sub-modules form the audio encoder ofFIG.2and the word-level $a_{1:N}$, $h_{1:N}$, sentence-level $h$ and high-representation-level $\tilde{h}$ features are fed to the DHF.
2.3 DHF
As shown inFIG.2, the DHF network is made up of three hierarchical levels, which are described in the following subsections.
2.3.1 Word-Level Fusion Module
The word-level is the first fusion stage and aims to capture the time-dependent cross-modal correlations. This subnetwork accepts as inputs the word-level features $a_{1:N}$, $h_{1:N}$ and $b_{1:N}$, $g_{1:N}$ from audio and text encoder respectively. At every i-th timestep, we apply the following fusion-rule:

$c_i = a_i h_i \parallel b_i g_i \parallel h_i \odot g_i$   (8)

where $\odot$ denotes the Hadamard product and $c_i \in \mathbb{R}^{2(2H+D)}$ is the fused time-step representation. These representations form a sequence of length $N$ and are passed to a BiLSTM network with an attention mechanism (2), (3), (9), which outputs the word-level fused representations:

$f_W = \sum_{i=1}^{N} k_i f_i, \quad f_W \in \mathbb{R}^{2W}$   (9)

where $k_i$ is the fused attention weight at i-th timestep and $f_i$ is the concatenation of hidden states $\overrightarrow{f_i}, \overleftarrow{f_i}$, which belong to a W-dimensional space. We consider $W_f$ and $b_f$ as the respective attention trainable parameters.
2.3.2 Sentence-Level Fusion Module
This is the second level in the fusion hierarchy and as stated by its name, it fuses sentence-level representations. This module accepts as inputs three information flows: 1) sentence-level representation $g$ from the text encoder, 2) sentence-level representation $h$ from the audio encoder and 3) the previous-level fused representation $f_W$. The architecture of the network consists of three fully connected layers. Instead of directly fusing $g$ with $h$, we apply two fully connected networks which learn some intermediate representations which are then fused with $f_W$ through a third network and produce a new fused representation $f_U \in \mathbb{R}^{2W}$.
2.3.3 High-Level Fusion Module
The last fusion hierarchy level combines the high-level representations of the textual and acoustic modalities, $\tilde{g}$ and $\tilde{h}$, with the sentence-level fused representation $f_U$. This high-dimensional representation is passed through a Deep Neural Network (DNN), which outputs the sentiment level representation $f_S \in \mathbb{R}^{M}$. The goal of this module is to project this concatenated representation to a common multimodal space.
2.4 Output Layer
After the multimodal information is propagated through the DHF network, we get a high-level representation $f_S$ for every spoken sentence. The role of the linear output layer is to transform this representation to a sentiment prediction. Consequently, this module varies according to the task. For binary classification, we use a single sigmoid function with binary cross entropy loss, whereas a softmax function with a cross entropy loss is applied in the multi-class case.
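Before the experimental details, a brief PyTorch sketch of the word-level fusion rule of Eq. (8) may help. It assumes, for simplicity, that both modalities use the same hidden size so the Hadamard term is well-defined, and it is an illustration rather than the authors' implementation.

```python
# Sketch of Eq. (8): c_i = a_i*h_i || b_i*g_i || h_i ⊙ g_i. The fused
# sequence would then feed the BiLSTM-with-attention of Eq. (9).
import torch


def word_level_fusion(a, h, b, g):
    """a, b: (batch, N) attention weights; h, g: (batch, N, d) hidden states."""
    c = torch.cat([a.unsqueeze(-1) * h,        # attention-weighted audio states
                   b.unsqueeze(-1) * g,        # attention-weighted text states
                   h * g],                     # Hadamard interaction term
                  dim=-1)
    return c                                   # (batch, N, 3*d)


# Example with shapes matching the hypothetical encoder sketched earlier.
batch, N, d = 4, 20, 64
a = torch.softmax(torch.randn(batch, N), dim=1)
b = torch.softmax(torch.randn(batch, N), dim=1)
h, g = torch.randn(batch, N, d), torch.randn(batch, N, d)
c = word_level_fusion(a, h, b, g)
```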
The MOSI database contains 2,199 opinion segments, each with a unique continuous sentiment label in the interval [−3, +3]. We make use of binary, five-class, and seven-class labels.

3.1 Data Preprocessing

We preprocess our data with the CMU-Multimodal SDK (mmsdk) [4] tool, which provides an easy way to download, preprocess, align, and extract acoustic and textual features. For the text input we use GloVe embeddings [5]. Specifically, each spoken sentence is represented as a sequence of 300-dimensional vectors. As for the acoustic input, useful features such as MFCCs, pitch tracking, and voiced/unvoiced segmenting [6] are used. All acoustic features (72-dimensional vectors) are provided by the mmsdk tool, which uses the COVAREP [7] framework. Word alignment is also performed with the mmsdk tool through P2FA [8] to get the exact timestamp for every word. The alignment is completed by obtaining the average acoustic vector over every spoken word.

3.2 Baseline Models

We briefly describe the baseline models to which our proposed approach is compared.
C-MKL [9]: uses a CNN structure to capture high-level features and feeds them to a multiple kernel learning classifier.
TFN [6]: uses Kronecker products to capture unimodal, bimodal and trimodal feature interactions. The authors use the same feature set as the one described in subsection 3.1.
FAF [10]: uses hierarchical attention with bidirectional gated recurrent units at the word level and a fine-tuning attention mechanism at each extracted representation. The extracted feature vector is passed to a CNN which performs the final decision.
MFM [11]: is a GAN which defines a joint distribution over multimodal data. It takes into account both the generative and the discriminative aspect and aims to generate missing modality values while projecting them into a common learned space. The feature set in this study is the same as the one we describe in 3.1.

3.3 Experimental Setup

The hidden state hyperparameters H, D, W are chosen as 128, 32, 256, respectively. A 0.25 dropout rate is picked for all attention layers. Furthermore, the fully connected layers in both encoders use Rectified Linear Units (ReLU), and a dropout of 0.5 is applied to the audio encoder. The DHF hyperparameter M is chosen as 64, and all its fully connected layers use ReLU activation functions and a 0.15 dropout probability. Moreover, a gradient clipping value of 5 is applied as a safety measure against exploding gradients [12]. Our architecture's trainable parameters are optimized using Adam [13] with a 1e-3 learning rate and 1e-5 as the weight decay regularization value. For all models, the same 80-20 training-testing split is used, and we further separate 20% of the training dataset for validation. A 5-fold cross validation is used. All models are implemented using the PyTorch [14] framework.

4 Results

As shown in Table 1, the proposed method consistently outperforms other well-known approaches. Specifically, in the binary classification task, which is the most well-studied, the proposed architecture outperforms all other models by a small 0.5% margin. As for the five- and seven-class tasks, we outperform other approaches by 5.87% and 2.14%, respectively, which implies the efficacy of the DHF model. Missing values indicate a performance measure not reported in the corresponding paper. Table 2 illustrates a comparison between the text, the audio, and the fusion classifier within the proposed model. Every column describes a unique approach.
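The optimization settings listed in subsection 3.3 above (Adam with a 1e-3 learning rate, 1e-5 weight decay, and gradient clipping at 5) can be summarized in a short sketch. The model and data below are toy placeholders, and BCEWithLogitsLoss is used here only to fold the sigmoid into the binary cross-entropy loss.

```python
import torch
import torch.nn as nn

# Toy stand-ins so the snippet runs; only the optimization settings mirror subsection 3.3.
model = nn.Linear(16, 1)                                   # placeholder for the DHF network
inputs = torch.randn(8, 16)                                # placeholder fused features
labels = torch.randint(0, 2, (8, 1)).float()               # binary sentiment labels

criterion = nn.BCEWithLogitsLoss()                         # sigmoid + binary cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)   # clip gradients at 5
optimizer.step()
```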
The most interesting part of our experiments is that the proposed method achieves larger performance gains, ΔFusion, than the other proposed approaches, as can be seen in Table 2. Even though the unimodal classifiers for the binary task are not as accurate as in other approaches (FAF, TFN), the DHF boosts the performance enough to outperform them in the multimodal classification. Specifically, the results indicate that our method improves the performance by 3.1%, whereas the state-of-the-art approach FAF shows a relative improvement of 1.4%.

TABLE 1
Task     Binary Acc (%)   Binary F1   5-class Acc (%)   7-class Acc (%)
C-MKL    73.6             75.2        —                 —
TFN      75.2             76.0        39.6              —
FAF      76.4             76.8        —                 —
MFM      76.4             76.3        —                 35.0
DHF      76.9             76.9        45.47             37.14

TABLE 2
Model      FAF Acc (%)   TFN Acc (%)   DHF Acc (%)
Text       75.0          74.8          73.8
Audio      60.2          65.1          63.3
Fusion     76.4          75.2          76.9
ΔFusion    ↑1.4          ↑0.4          ↑3.1

Table 3 shows the results of an ablation study regarding the contribution of the different DHF modules. Three experiments are carried out and in each one a level of the hierarchy is removed. Specifically, the first row corresponds to a DHF architecture without the High-Level Fusion module (see FIG. 2); the Sentence-Level representation is fed to a softmax classifier in this case. The next two rows describe the DHF without the Sentence-Level and Word-Level Fusion modules, respectively. We notice that higher hierarchy levels are more important for the model performance. This demonstrates that the impact of the earlier levels of the hierarchy decreases as new representations are extracted in the following levels, denoting that the model deepens its learning of feature representations.

TABLE 3
Model                 Accuracy (%)   F1
DHF (no High-Level)   75.0           74.8
DHF (no Sent-Level)   75.5           75.4
DHF (no Word-Level)   75.7           75.6
DHF                   76.9           76.9

Finally, we tested the robustness of the proposed model by adding Gaussian noise to the input data. The first two columns of Table 4 detail the noise deviations T_std and A_std on the text and audio data, respectively. The next three columns describe each classifier's accuracy. We notice that a 4.7% performance decay in the text classifier yields a 4.1% decay in the fusion method. This is expected, since the input noise affects both the text and the multimodal classifier. Additionally, the third row shows a 4% and 8.3% reduction in text and audio performance, respectively, while the fusion model only shows a 6.5% decay. It can be observed that, for reasonable amounts of input data noise, the DHF outperforms the textual classifier.

TABLE 4
Noise T_std   Noise A_std   Text Acc (%)   Audio Acc (%)   DHF Acc (%)
0.0           0.0           73.81          63.33           76.91
0.3           0.0           69.05          63.33           72.86
0.3           0.01          69.32          55               70.48

5 Alternatives and Implementations

Generalizations of the data fusion can include the following:
The method and system can be applied to a variety of machine inference tasks including, but not limited to:
Estimating and tracking human emotional states and behaviors from human-generated signals (e.g., speech, language, physical movements, eye gaze, physiological signals, etc.).
Estimation of human traits such as identity, age, gender, and personality aspects from human-generated signals (e.g., speech, language, physical movements, eye gaze, physiological signals, etc.).
Event prediction and tracking in multimedia signals such as automated scene analysis, advertisement identification, character tracking, and demographic information (gender, race, age, etc.)
from signals generated within the multimedia (e.g., audio including speech and musical scores; video signal; and closed captions/scripts).
Security applications.
Fusion can produce classification results at different time-scales (this problem is known as tracking or recognition, e.g., speech recognition at the word-level time-scale); for example, for fusion of text and speech modalities one can produce results at the phone, syllable, word, sentence, paragraph or document level. For example, to do emotion recognition at the word level one can do away with the utterance fusion layers (for both speech and text) in the proposed architecture.
The information signals that are fused could be features extracted from the same signals (multiple feature streams), from multiple measurements of the same signal (e.g., multiple microphones or biosensors), from two different modalities that are not dependent or fully synchronous (crossmodal), or from the same signal that contains multiple modalities (multimodal).
Unimodal applications of this method include, but are not limited to, a system that exploits multiple views of a single signal from multiple measurement sources, for example a system which identifies a speaker from multiple microphones. Another example is fusing different feature streams extracted from the same signal via our proposed method.
Cross-modal applications of this method include, but are not limited to, fusion of two or more information streams that are independent of each other or not totally in synchrony, for example audio recordings and biological signals, or a video signal and the associated emotional reaction it induces in a human subject.
Multimodal applications of this method include, but are not limited to, fusion of two or more multimodal information streams such as audio and text, or audio, text, and visual signals, e.g., in a video clip.
The models associated with each information signal and the fused representation can have arbitrary depth.
In general, the depth of the deep hierarchical fusion system could relate to the desired (or required) levels of abstraction. For example, a system fusing speech and text information streams for a classification task performed on a spoken document could fuse information at the word level, at the utterance level, and then at the document level.
Following the fusion layer that corresponds to the desired output time-scale (e.g., the word level for emotion tracking, the document level for document classification, the utterance level for sentiment analysis), an arbitrary number of layers can be added to both the information signal and fused models. These layers can be trained in an unsupervised manner to learn hierarchical representations, as is customary in deep learning models.
An arbitrary number of layers can also be added at each time scale, e.g., the word level, to further improve the ability of the method to hierarchically represent the information and fused signals.
The fusion at each level of abstraction between various information streams can be achieved in a variety of ways including, but not limited to, concatenation, averaging, pooling, conditioning, product, or transformation by forward or recursive neural network layers.
The fusion of the representation at each level of abstraction may also include a cross-modal attention module that takes as input the information signal representations and weighs their contribution towards the fused representation.
This can be achieved by using attention information from one information stream to provide greater weight to certain segments from another information stream; such an attention mechanism could also perform the function of synchronizing information streams, e.g., synchronization of speech and physiological signals, which vary and are measured at different rates.

Furthermore, in another alternative, the single-modal encoders are not frozen but instead allow for weight adaptation, potentially by using different optimizers for each modality. In some examples, different neural architectures for the single-modality encoders are used, such as pretrained CNNs that are able to extract high-level audio features. In some examples, different synchronization of the different layers of the single-modality encoders is used, as are deeper architectures.

The approaches described above may be implemented in software, with processor instructions being stored on a non-transitory machine-readable medium and executed by one or more processing systems. The processing systems may include general-purpose processors, array processors, graphical processing units (GPUs), and the like. Certain modules may be implemented in hardware, for example, using application-specific integrated circuits (ASICs). For instance, a runtime implementation may use a hardware or partially hardware implementation, while a training implementation may use a software implementation using general-purpose processors and/or GPUs.

REFERENCES

[1] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[2] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[3] A. Zadeh, R. Zellers, E. Pincus, and L.-P. Morency, "MOSI: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos," arXiv preprint arXiv:1606.06259, 2016.
[4] A. Zadeh, P. P. Liang, S. Poria, P. Vij, E. Cambria, and L.-P. Morency, "Multi-attention recurrent network for human communication comprehension," in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[5] J. Pennington, R. Socher, and C. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532-1543.
[6] A. Zadeh, M. Chen, S. Poria, E. Cambria, and L.-P. Morency, "Tensor fusion network for multimodal sentiment analysis," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 1103-1114.
[7] G. Degottex, J. Kane, T. Drugman, T. Raitio, and S. Scherer, "COVAREP: a collaborative voice analysis repository for speech technologies," in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 960-964.
[8] J. Yuan and M. Liberman, "Speaker identification on the SCOTUS corpus," Journal of the Acoustical Society of America, vol. 123, no. 5, p. 3878, 2008.
[9] S. Poria, E. Cambria, and A. Gelbukh, "Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 2539-2544.
[10] Y. Gu, K. Yang, S. Fu, S. Chen, X. Li, and I. Marsic, "Multimodal affective analysis using hierarchical attention strategy with word-level alignment," in Proceedings of the Conference.
Association for Computational Linguistics Meeting, vol. 2018, 2018, p. 2225.
[11] Y.-H. H. Tsai, P. P. Liang, A. Zadeh, L.-P. Morency, and R. Salakhutdinov, "Learning factorized multimodal representations," arXiv preprint arXiv:1806.06176, 2018.
[12] R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," in International Conference on Machine Learning, 2013, pp. 1310-1318.
[13] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[14] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," 2017.
24,992
11862146
DETAILED DESCRIPTION An acoustic model may process an audio signal (or features computed from an audio signal) and provide information about speech units (e.g., phonemes or other linguistic units) that are present in the audio signal and correspond to speech of a person. For example, in some implementations, an acoustic model may output a vector of scores where elements of the vector correspond to speech units and the score indicates a likelihood or probability that a corresponding speech unit is present in a portion of the audio signal. The methods and applications described herein may be implemented using any of the techniques described herein. FIG.1is an example system100for implementing an application of processing an audio signal with an acoustic model. InFIG.1, the example application is speech recognition, but the techniques described herein are not limited to this application and may be used with any appropriate application. As used herein, an audio signal will refer to a digital audio signal as opposed to an analog audio signal. The process of converting analog signals to digital signals is well known to one of skill in the art. An audio signal may be represented as a sequence of digital samples, such as samples obtained at a sampling rate of 48 kHz. The audio signal may be processed by feature computation component110to produce a sequence of feature vectors that represent the audio signal. Any appropriate feature vectors may be used, such as Mel-frequency cepstral coefficients, linear prediction coefficients, linear prediction cepstral coefficients, line spectral frequencies, discrete wavelet transform coefficients, or perceptual linear prediction coefficients. The rate of the feature vectors may be the same or different from the sampling rate. For example, feature computation component110may compute a feature vector for each 10 milliseconds of the audio signal. In some implementations, feature computation component110may compute features using unsupervised pretraining, such as using the wav2vec algorithm. In some implementations, feature computation component110may perform additional processing in computing the sequence of feature vectors. For example, feature computation component110may process any of the quantities described above with one or more layers of a neural network (such as a convolutional neural network (CNN)). For example, feature computation component110may compute features by processing a sequence of Mel-frequency cepstral coefficient vectors with one or more layers of a convolutional neural network. Acoustic model component120may receive the sequence of feature vectors from feature computation component110and compute a sequence of speech unit score vectors from the feature vectors. Each speech unit score vector may indicate speech units that are likely present in a portion of the audio signal. The rate of the speech unit score vectors may be the same or different from the rate of the feature vectors. As used herein, a speech unit represents any appropriate portion of an acoustic aspect of a speech signal. For example, a speech unit may correspond to a linguistic unit (e.g., a phoneme, triphone, quinphone, or syllable) or a portion of a linguistic unit (e.g., a first state of a triphone modelled by a sequence of states). A speech unit may also represent a group or cluster of linguistic units or portions of linguistic units, such as tied states. 
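A minimal sketch of the pipeline of FIG. 1 up to the acoustic model is shown below: a sequence of frame-level feature vectors (for example, one vector per 10 milliseconds of audio) is mapped to a per-frame vector of speech-unit scores. The layer sizes and the speech-unit inventory size are placeholders, not values taken from the description.

```python
import torch
import torch.nn as nn

NUM_SPEECH_UNITS = 500          # placeholder size of the speech-unit inventory
FEATURE_DIM = 40                # e.g., 40 MFCC-like coefficients per 10 ms frame

class ToyAcousticModel(nn.Module):
    """Maps a sequence of feature vectors to per-frame speech-unit score vectors."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(FEATURE_DIM, 256, kernel_size=3, padding=1)
        self.out = nn.Linear(256, NUM_SPEECH_UNITS)

    def forward(self, feats):                        # feats: (batch, frames, FEATURE_DIM)
        x = self.conv(feats.transpose(1, 2)).relu()  # (batch, 256, frames)
        scores = self.out(x.transpose(1, 2))         # (batch, frames, NUM_SPEECH_UNITS)
        return scores.log_softmax(dim=-1)            # per-frame speech-unit scores

features = torch.randn(1, 300, FEATURE_DIM)          # 300 frames, roughly 3 s at 10 ms/frame
scores = ToyAcousticModel()(features)
print(scores.shape)                                  # torch.Size([1, 300, 500])
```

In the full system, these score vectors would then be consumed by a language model component to produce text.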
Language model component130may receive the sequence of speech unit score vectors from acoustic model component120, determine words that are present in speech of the audio signal, and output text corresponding to speech. Language model component130may use any appropriate language model, such as a neural network language model or an n-gram language model. The text may then be used for any appropriate application, such as a transcription application to determine words spoken in the audio signal. FIG.2is an example system200for implementing an acoustic model by processing audio features using multiple streams. InFIG.2, the input to system200may be a sequence of feature vectors and the output may be a sequence of speech unit score vectors. The processing may be performed using any number of streams, such as first stream210, second stream220, and third stream230. Each of the streams may process the sequence of feature vectors in parallel with each other. Each stream may process the feature vectors using a different dilation rate. For example, first stream210may process the feature vectors with a first dilation rate, second stream220may process the feature vectors with a second dilation rate, and third stream230may process the feature vectors with a third dilation rate. In some implementations, the second dilation rate may be different from the first dilation rate, and the third dilation rate may be different from both first dilation rate and the second dilation rate. In some implementations, the dilation rates may be related to each other or to other aspects of the processing. For example, the second and third dilations rates may be multiples of the first dilation rate. For another example, the dilation rates may be multiples of subsampling rates that are used when computing feature vectors. A stream may use any appropriate techniques to process feature vectors with a corresponding dilation rate. In some implementations, a stream may process the feature vectors with one or more instances of a neural network layer having the corresponding dilation rate. Any appropriate neural network layer may be used, such as a convolutional neural network layer. In some implementations, each stream may include one or more instances of a convolutional neural network layer having a corresponding dilation rate. For example, first stream210may have one or more instances of a first CNN layer with a first dilation rate, such as first CNN layer211, first CNN layer212, and first CNN layer213. Second stream220may have one or more instances of a second CNN layer with a second dilation rate, such as second CNN layer221, second CNN layer222, and second CNN layer223. Third stream230may have one or more instances of a third CNN layer with a third dilation rate, such as third CNN layer231, third CNN layer232, and third CNN layer233. In some implementations, the dilation rate of each stream may be different from the other streams. For example, the first dilation rate, the second dilation rate, and the third dilation rate may each be different from the others. The different instances of a CNN layer may have different parameters from each other. For example, first CNN layer211, first CNN layer212, and first CNN layer213may all have a first dilation rate, but the parameters of each instance of first CNN layer may be different from each other. FIGS.3A-Cillustrate processing a sequence of inputs with different dilation rates. 
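A hedged sketch of the parallel streams of FIG. 2 follows: each stream stacks several Conv1d instances that share a stream-specific dilation rate but have independent parameters. The layer counts, channel sizes, and the choice of plain Conv1d layers are illustrative assumptions of this sketch.

```python
import torch
import torch.nn as nn

def make_stream(channels, dilation, num_instances=3):
    """One stream: several Conv1d instances sharing a dilation rate but not parameters."""
    layers = []
    for _ in range(num_instances):
        layers += [nn.Conv1d(channels, channels, kernel_size=3,
                             dilation=dilation, padding=dilation),
                   nn.ReLU()]
    return nn.Sequential(*layers)

channels, frames = 64, 200
# e.g., first, second, and third streams with dilation rates 1, 2, and 3
streams = nn.ModuleList([make_stream(channels, d) for d in (1, 2, 3)])

feats = torch.randn(1, channels, frames)             # (batch, feature channels, frames)
stream_vectors = [s(feats) for s in streams]          # processed in parallel, same length
print([tuple(v.shape) for v in stream_vectors])
```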
InFIGS.3A-C, the inputs may be the sequence of feature vectors or the output of a previous neural network layer and the outputs are the output of a neural network layer. In the example ofFIGS.3A-C, each output is computed using 7 inputs, but any appropriate number of inputs may be used to compute an output. InFIG.3A, the inputs are processed with a dilation rate of 1. Accordingly, 7 consecutive inputs are used to compute an output. The solid lines indicate the 7 inputs used to compute a first output, and the dashed lines indicate the 7 inputs used to compute a second output. Other outputs may be computed in a similar manner. InFIG.3B, the inputs are processed with a dilation rate of 2. Accordingly, 7 inputs are used to compute an output, but since every other input is used, the span of time corresponding to the inputs is increased. InFIG.3C, the inputs are processed with a dilation rate of 3. Accordingly, 7 inputs are used to compute an output, but since every third input is used, the span of time corresponding to the inputs is increased even further. Any appropriate dilation rates may be used in addition to or instead of the dilation rates illustrated inFIGS.3A-C. The different dilation rates of the streams may improve the performance of the acoustic model. Different aspects of speech may occur over different time frames, and the different dilation rates may allow the processing to better capture information at different time frames. Some aspects of speech may occur over short time frames, such as stop consonants, and smaller dilation rates may allow for improved processing of these aspects. Some aspects of speech may occur over longer time frames, such as diphthongs, and longer dilation rates may allow for improved processing of these aspects. The combination of different dilation rates may thus provide improved performance over acoustic models that don't include multiple streams with different dilation rates in different streams. In some implementations, the streams may include other processing in addition to the processing described above. For example, in some implementations, a stream may include processing with a self-attention component.FIG.4illustrates an example of a stream that includes a self-attention component. InFIG.4, the sequence of feature vectors may be processed by one or more convolutional neural network layers, as described above. The output of the final convolutional neural network layer (e.g., first CNN layer213) may then be processed by self-attention component410. Self-attention component410may perform any appropriate processing such as one or more of the following operations: computing self-attention heads, concatenating self-attention heads, performing layer normalization, processing with a factorized feed forward neural network layer, a skip connection, or dropout. In some implementations, streams may be implemented using other types of neural networks. For example, a stream may be implemented using a sequence of one or more or a time-delay neural network layer and/or a factorized time-delay neural network layer. Returning toFIG.2, stream combination component240may process the stream vectors output of the streams to compute a sequence of speech unit score vectors. For example, stream combination component240may process a first stream vector computed by first stream210, a second stream vector computed by second stream220, and a third stream vector computed by third stream230. 
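The widening time span described above can be checked with a small worked example: with 7 inputs per output, dilation rates of 1, 2, and 3 cover spans of 7, 13, and 19 consecutive frames, respectively.

```python
def input_span(kernel_size: int, dilation: int) -> int:
    """Number of consecutive input frames covered when computing one output."""
    return (kernel_size - 1) * dilation + 1

for d in (1, 2, 3):
    print(f"dilation {d}: 7 inputs span {input_span(7, d)} frames")   # 7, 13, 19
```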
Stream combination component240may use any appropriate techniques when processing stream vectors. In some implementations, stream combination component240may perform one or more of the following operations: concatenating the stream vectors; performing batch normalization; or processing with dropout. The sequence of speech unit score vectors may then be used for any appropriate application of processing audio signals, such as performing speech recognition. FIG.5is a flowchart of an example method for implementing an acoustic model by processing audio features using multiple streams. At step510, a sequence of feature vectors is computed from an audio signal. Any appropriate techniques may be used, such as any of the techniques described herein. For example, the sequence of feature vectors may be computed by processing a sequence of Mel-frequency cepstral coefficient vectors with one or more convolutional neural network layers. At step520, a first stream vector is computed by processing the sequence of feature vectors with a first stream having a first dilation rate. Any appropriate techniques may be used, such as any of the techniques described herein. In some implementations, the first stream may process the sequence of feature vectors with one or more instances of a first convolutional neural network layer having a first dilation rate. In some implementations, the first stream may process the sequence of feature vectors with three or more instances of a first convolutional neural network layer having a first dilation rate. In some implementations, the first stream may include additional processing, such as a self-attention component. At step530, a second stream vector is computed by processing the sequence of feature vectors with a second stream having a second dilation rate. In some implementations, the second dilation rate may be different from the first dilation rate. Any appropriate techniques may be used, such as any of the techniques described herein. In some implementations, the second stream may process the sequence of feature vectors with one or more instances of a second convolutional neural network layer having a second dilation rate. In some implementations, the second stream may process the sequence of feature vectors with three or more instances of a second convolutional neural network layer having a second dilation rate. In some implementations, the second stream may include additional processing, such as a self-attention component. In some implementations, additional stream vectors may be computed, such as a third stream vector or a fourth stream vector. Any appropriate number of stream vectors may be computed. In some implementations, each of the stream vectors may be computed using a different dilation rate. At step540, a speech unit score vector is computed by processing the first stream vector and the second stream vector (and optionally other stream vectors). Any appropriate techniques may be used, such as any of the techniques described herein. Each element of the speech unit score vector may indicate a likelihood or probability of a corresponding speech unit being present in a portion of the audio signal. At step550, the speech unit score vector is used for a speech application, such as automatic speech recognition. Any appropriate speech application may be used. In some implementations, step520may compute a sequence of first stream vectors, step530may compute a sequence of second stream vectors, and step540may compute a sequence of speech unit score vectors. 
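Putting the steps of FIG. 5 together, a compact, hedged sketch of the method follows: feature vectors are processed by streams with different dilation rates, the stream vectors are combined by concatenation, batch normalization, and dropout, and the result is projected to speech-unit scores. All sizes, layer counts, and dropout values here are illustrative.

```python
import torch
import torch.nn as nn

class MultistreamAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, channels=64, num_units=500, dilations=(1, 2, 3)):
        super().__init__()
        self.streams = nn.ModuleList([
            nn.Conv1d(feat_dim, channels, kernel_size=3, dilation=d, padding=d)
            for d in dilations])                              # steps 520/530: one stream per dilation
        self.combine = nn.Sequential(                         # step 540: concat -> batch norm -> dropout
            nn.BatchNorm1d(channels * len(dilations)),
            nn.Dropout(0.1))
        self.out = nn.Conv1d(channels * len(dilations), num_units, kernel_size=1)

    def forward(self, feats):                  # feats: (batch, frames, feat_dim), the step 510 output
        x = feats.transpose(1, 2)
        stream_vecs = [s(x).relu() for s in self.streams]
        z = self.combine(torch.cat(stream_vecs, dim=1))
        return self.out(z).transpose(1, 2)     # (batch, frames, num_units): speech-unit scores

scores = MultistreamAcousticModel()(torch.randn(2, 100, 40))
print(scores.shape)                            # step 550 would pass these scores to a speech application
```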
The sequence of speech unit score vectors may then be used in a speech application. The acoustic models described herein may be trained using any appropriate techniques, such as supervised training or unsupervised training. In some implementations, the acoustic models may be trained as part of a speech application (e.g., speech recognition). In some implementations, the acoustic models may be directly trained outside of a speech processing application. In some implementations, the acoustic models may be trained using a labelled corpus of speech training data (e.g., transcribed speech). The training process may include performing a forward pass to compute the output of the acoustic model (or a larger neural network that contains the acoustic model), an error may be computed using a label from the training data, and back propagation may be performed with stochastic gradient descent to update the model parameters. This process may be performed iteratively over a corpus of training data until the model parameters have converged. FIG.6illustrates components of one implementation of a computing device600for implementing any of the techniques described herein. InFIG.6, the components are shown as being on a single computing device, but the components may be distributed among multiple computing devices, such as a system of computing devices, including, for example, an end-user computing device (e.g., a smart phone or a tablet) and/or a server computing device (e.g., cloud computing). Computing device600may include any components typical of a computing device, such as volatile or nonvolatile memory610, one or more processors611, and one or more network interfaces612. Computing device600may also include any input and output components, such as displays, keyboards, and touch screens. Computing device600may also include a variety of components or modules providing specific functionality, and these components or modules may be implemented in software, hardware, or a combination thereof. Below, several examples of components are described for one example implementation, and other implementations may include additional components or exclude some of the components described below. Computing device600may have a feature computation component620that process an audio signal to compute feature vectors using any of the techniques described herein. Computing device600may have an acoustic model component621that processes feature vectors to compute speech unit scores using any of the techniques described herein. Computing device600may have a language model component622that processes speech unit scores to determine words in an audio signal using any of the techniques described herein. Computing device600may have a stream component623that processes feature vectors to compute a stream vector using any of the techniques described herein. Computing device600may have a stream combination component624that computes a speech unit score vector from stream vectors using any of the techniques described herein. Computing device600may have a speech application component625that implements a speech application with an acoustic model using any of the techniques described herein. Computing device600may include or have access to various data stores. Data stores may use any known storage technology such as files, relational databases, non-relational databases, or any non-transitory computer-readable media. 
Computing device600may have a speech data store640that stores speech data, such as speech data that may be used to train an acoustic model. Multistream CNN for Robust Acoustic Modeling Multistream CNN is a neural network architecture for robust acoustic modeling in speech recognition tasks. The proposed architecture accommodates diverse temporal resolutions in multiple streams to achieve the robustness. For the diversity of temporal resolution in embedding processing, consider a dilation on factorized time-delay neural network (TDNN-F), a variant of 1D-CNN. Each stream may stack narrower TDNN-F layers whose kernel has a stream-specific dilation rate when processing input speech frames in parallel. It may better represent acoustic events without the increase of model complexity. The effectiveness of the proposed multistream CNN architecture is validated by showing improvement across various data sets. Trained with the data augmentation methods, multistream CNN improves the word error rate (WER) of the test-other set in the LibriSpeech corpus by 12% (relative). On the custom data from a production system for a call center, it records the relative WER improvement of 11% from the customer channel audios (10% on average for the agent and customer channel recordings) to show the superiority of the proposed model architecture even in the wild. In terms of Real-Time Factor (RTF), multistream CNN outperforms the normal TDNN-F by 15%, which also suggests its practicality on production systems or applications. Automatic speech recognition (ASR) with processing speech inputs in multiple streams, namely multistream ASR, has been researched mostly for robust speech recognition tasks in noisy environments. The multistream ASR framework was proposed based on the analysis of human perception and decoding of speech, where acoustic signals enter into the cochlea and are broken into multiple frequency bands such that the information in each band can be processed in parallel in the human brain. This approach worked quite well in the form of multi-band ASR where band-limited noises dominate signal corruption. Later, further development was made in regards with multistream ASR in the areas of spectrum modulation and multi-resolution based feature processing and stream fusion or combination. With the advent of deep learning, another branch of research activities has been stretched out from the framework of multistream ASR, where multiple streams of encoders process embedding vectors in parallel in deep neural network (DNN) architectures. Although some forms of artificial neural networks like multilayer perceptron (MLP) had already been utilized in the literature, they were shallow and their usage was limited to fusing posterior outputs from a classifier in each stream. The recent DNN architectures for multistream ASR instead perform more unified functions, not only processing information in parallel but combining the stream information to classify at once. The multistream architecture may be simplified into one neural network where a binary switch was randomly applied to each feature stream when concatenating the multistream features as the neural network input. In decoding, the tree search algorithm was utilized to find the best stream combination. The stream attention mechanism inspired by the hierarchical attention network was proposed to the multi-encoder neural networks that can accommodate diverse viewpoints when processing embeddings. 
These multi-encoder architectures were successful on data sets recorded with multiple microphones. As multihead self-attention became more popular, multistream self-attention architectures were also investigated to further enhance the diversity of the embedding processes inside the networks by applying unique strides or dilation rates to the neural layers of the streams. The state-of-the-art result was reported on the test-clean set of the LibriSpeech corpus using this structure.

The proposed architecture accommodates diverse temporal resolutions in multiple streams to achieve robustness. For the diversity of temporal resolution in embedding processing, dilation on TDNN-F, a variant of 1D-CNN, is considered. TDNN-F stands for factorized time-delay neural network. The convolution matrix in TDNN-F is decomposed into two factors with the orthonormal constraint, followed by the skip connection, batch normalization, and dropout layer, in that order. Each stream may stack narrower TDNN-F layers whose kernel has a stream-specific dilation rate when processing input speech frames in parallel. Features of multistream CNN include: it was inspired by the multistream self-attention architecture, but without the multi-head self-attention layers; the dilation rate for the TDNN-F layers in each stream is chosen from a multiple of the default subsampling rate (3 frames); and it can offer seamless integration with the training and decoding process, where subsampling is applied to input speech frames. With the spectral augmentation (SpecAug) method, multistream CNN can provide more robustness against challenging audio. Its relative WER improvement is 11% on the test-other set of the LibriSpeech corpus.

The LibriSpeech corpus is a collection of approximately 1,000 hours of read speech (16 kHz) from audiobooks that are part of the LibriVox project. The training data is split into 3 partitions of 100 hrs, 360 hrs, and 500 hrs, while the dev and test data are split into the 'clean' and 'other' categories, respectively. Each of the dev and test sets is around 5 hrs in audio length. This corpus provides the n-gram language models trained on the 800M-token texts. The Switchboard-1 Release 2 (LDC97S62) and Fisher English Training Part 1 and 2 (LDC2004S13, LDC2004T19, LDC2005S13, LDC2005T19) corpora total 2,000 hrs of 8 kHz telephony speech and are used to train the seed acoustic model and language model, which are further updated with custom telephony data collected from a call center. The seed telephony model evaluation is conducted on the HUB5 eval2000 set (LDC2002S09, LDC2002T43). Roughly 500 hrs of 8 kHz audio were collected from a call center production ASR system and transcribed. The audio streams from the agent and customer channels are separately recorded. The eval set, a 10 hr audio collection with a balanced distribution of agent and customer channel recordings, was made in the same way.

For the LibriSpeech model training, the default Kaldi recipe was followed. The neural networks for the acoustic model were trained on the total 960 hr training set with the learning rate decaying from 10^-3 to 10^-5 over the span of 6 epochs. The minibatch size was 64. The n-gram language models provided by the LibriSpeech corpus were used for the first-pass decoding and rescoring, and a neural network language model for further rescoring was not applied. Regarding the telephony seed model training, the Kaldi recipe was also used. The models were trained with the 2,000 hr training data.
For the neural network acoustic models, the learning rate was exponentially decayed from 10^-3 to 10^-4 over 6 epochs. The minibatch size was 128. The default language models produced by the recipe were used for the seed model evaluation. To fine-tune the seed telephony models with the ASAPP custom data, the learning rate decay was adjusted to range from 10^-1 to 10^-7 for 6 epochs with the same minibatch size. The PocoLM tool was used to train the 4-gram language model for the first-pass decoding in the evaluation.

The proposed multistream CNN architecture branches multiple streams out of the given input speech frames after they go through a few initial CNN layers in a single stream, where the CNNs could be TDNN-F or 2D CNN (in the case of applying the SpecAug layer). After being branched out, the stacked TDNN-F layers in each stream may process the output of the single-stream CNNs with a dilation rate. Consider the embedding vector x_i that comes out of the single-stream CNN layers at a given time step i. The output vector y_i^m from stream m, going through the stack of TDNN-F layers with dilation rate r_m, can be written as:

y_i^m = \text{Stacked-TDNN-F}^{m}\left(x_i \mid [-r_m, r_m]\right)

where [-r_m, r_m] denotes the 3×1 kernel given the dilation rate r_m. The output embeddings from the multiple streams may then be concatenated and followed by a batch normalization and dropout layer:

z_i = \text{Dropout}\left[\text{BatchNorm}\left\{\text{Concat}\left(y_i^1, y_i^2, \ldots, y_i^M\right)\right\}\right]

which is projected to the output layer via a couple of fully connected layers at the end of the network.

In the subsequent subsections, the effect of our design choices in the proposed multistream CNN architecture is analyzed in terms of WER on the LibriSpeech dev and test sets. Neural network language models for rescoring were not considered, and the original n-gram language models provided by the LibriSpeech corpus were used. Unless specified, the complexity of the compared models is in a similar range of 20M parameters for fair comparison. The baseline is based on the 17-layer TDNN-F model in the LibriSpeech recipe of the Kaldi toolkit (egs/librispeech/s5/local/chain/runtdnn.sh). The 5 layers of TDNN-F were positioned in a single stream to process input MFCC features before splitting to multiple streams. In each stream, after being branched out from the single-stream TDNN-F layers, a ReLU, batch normalization and dropout layer are applied and then 17 TDNN-F layers are stacked, bringing the total number of layers in the entire network to 23. To constrain the model complexity to around 20M parameters, the more streams are considered, the smaller the embedding size used correspondingly.

The multistream CNNs were compared against the baseline model as the number of streams was increased while incrementing the dilation rate by 1. For example, 1-2-3-4-5 indicates that the dilation rates of 1, 2, 3, 4, and 5 are applied to the respective streams over the total of 5 streams in the multistream CNN model. As noted, the dimension of the embedding vectors for TDNN-F was adjusted to constrain the model complexity to the range of 20M parameters. With similar model complexity, the proposed multistream CNN architecture is shown to improve the WERs of the 'other' data sets more noticeably as the number of streams is increased to 9. Careful selection of dilation rates was shown to achieve lower WERs across evaluation sets even with smaller numbers of streams.
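A hedged sketch of the combination expressed by the two displayed equations above follows: each stream m applies stacked convolutions with its dilation rate r_m to the single-stream output, and the stream outputs are concatenated and passed through batch normalization and dropout. A plain pair of dilated Conv1d layers stands in for a factorized TDNN-F layer here, the semi-orthogonal constraint is omitted, and all dimensions and depths are illustrative.

```python
import torch
import torch.nn as nn

class StreamTDNNF(nn.Module):
    """Stand-in for stacked TDNN-F layers with a 3x1 kernel and stream-specific dilation r_m."""
    def __init__(self, dim, dilation, depth=2, bottleneck=32):
        super().__init__()
        blocks = []
        for _ in range(depth):
            blocks += [nn.Conv1d(dim, bottleneck, 3, dilation=dilation, padding=dilation),
                       nn.Conv1d(bottleneck, dim, 1),   # factored pair (orthonormal constraint omitted)
                       nn.ReLU(), nn.BatchNorm1d(dim)]
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)

dim, frames, dilations = 64, 120, (6, 9, 12)            # multiples of the 3-frame subsampling rate
streams = nn.ModuleList([StreamTDNNF(dim, r) for r in dilations])
post = nn.Sequential(nn.BatchNorm1d(dim * len(dilations)), nn.Dropout(0.1))

x = torch.randn(1, dim, frames)                          # output of the single-stream CNN layers
z = post(torch.cat([s(x) for s in streams], dim=1))      # z = Dropout[BatchNorm{Concat(y^1..y^M)}]
print(z.shape)
```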
Multiples of the default subsampling rate (3 frames) were chosen as dilation rates in order to make the selection better streamlined with the training and decoding process, where input speech frames are subsampled. Juxtaposing the baseline WERs and the corresponding performances of the multistream CNN models with sets of dilation rates chosen from multiples of the subsampling rate over 3 streams, the 6-9-12 configuration was observed to present relative WER improvements of 3.8% and 5.7% on the test-clean and test-other sets, respectively. Considering the overall degradation of the multistream CNN model with the 1-2-3 configuration over the same number of streams, the suggested selection policy for dilation rates is a factor for the proposed model architecture to perform properly.

The diversity of streams in terms of temporal resolution seems to be another factor for the multistream CNN architecture to achieve the expected performance. Comparing the WERs of the models with the 1-3-6-9-12-15 configuration and the (1-3-6)×2 configuration, which is the same as 1-3-6-1-3-6, the model with a unique dilation rate in each stream is shown to be superior to the model with overlapped temporal resolutions in the streams. A similar observation can be made by considering the WERs of the model with the 1-2-3-4-5-6-7-8-9 configuration and those with the configurations of (1-3-6)×3, (3-6-9)×3 and (6-9-12)×3, all of which have a multistream architecture with 9 streams in total. It is shown that, without stream diversity in terms of temporal resolution, even models whose TDNN-F dilation rates are configured as multiples of the subsampling rate would not outperform models not configured that way.

Since its introduction, the SpecAug data augmentation method of masking some random bands from input speech spectra in both frequency and time has been adopted by both hybrid and end-to-end ASR systems, in order to prevent the neural network model training from overfitting, thus enabling the trained model to become more robust to unseen testing data. To apply this method on top of the proposed multistream CNN architecture, the 5 layers of TDNN-F in the single stream are replaced with 5 layers of 2D-CNNs to accommodate log-mel spectra as the input of the model rather than MFCCs. For the 2D CNN layers, 3×3 kernels are used with a filter size of 256, except for the first layer with a filter size of 128. Every other layer applies frequency-band subsampling with a rate of 2. The WERs of the original multistream CNN model with the 6-9-12 configuration and the equally configured model with the SpecAug layer are compared. Not much difference is seen overall between TDNN-F and 2D-CNN in the single-stream part of the proposed architecture (i.e., the 1st and 2nd rows of the multistream CNN section in the table). It is apparent that the SpecAug layer can enhance the robustness of the multistream CNN model in the tough acoustic conditions of the 'other' sets. The relative WER improvement of 11.8% on the test-other set against the baseline performance demonstrates the superiority of the proposed multistream CNN architecture.

The feasibility of the proposed architecture in real-world scenarios is now considered. Custom training data (500 hrs) from a call center client is used to update the seed models that were trained on the SWBD/Fisher corpora mentioned above.
The seed model performances on the HUB5 eval2000 set consisting of SWBD and CH (i.e., CallHome) exhibit the mixed results where for SWBD the baseline model performs better while the multistream CNN model appears to outmatch in CH. A noteworthy observation is that the proposed model architecture continues to excel in the challenging data set. This is further highlighted where the two models (baseline and multistream CNN) are evaluated on the custom eval set of 10 hrs described above after being fine-tuned with the aforementioned custom training set. The relative WER improvement of 11.4% on the customer channel recordings declares the robustness of the multistream CNN architecture in the wild. The customer channel audio are by far challenging for ASR systems due to various contributing factors including noisier acoustic environments, non-native & accented speech, multiple talkers, etc. In addition, the relative RTF improvement of 15.1% as compared to the baseline TDNN-F model shows the practicality of the proposed model architecture in real-world applications, especially where online inference is critical. The novel neural network architecture of multistream CNN for robust speech recognition are proposed. To validate the proposed idea, ablation evaluations were conducted for the design choices of the model architecture. Also, the multistream CNN models were tested against various data sets including custom data collected from the call center domain to demonstrate the strength of the models in both robustness to challenging acoustics and RTF. Trained with the SpecAug method, multistream CNN improved the WER of the test-other set in the LibriSpeech corpus by 12% (relative). On the custom data, it achieved relative WER improvement of 11% from the customer channel audios. Multistream CNN also outperformed the baseline TDNN-F model in the custom data evaluation by 15% (relative) in terms of RTF. All the results suggest the superiority of the proposed model architecture. Given its robustness and feasibility, multistream CNN is promising in a number of ASR applications and frameworks. State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention with Dilated 1D Convolutions Self-attention has been a huge success for many downstream tasks in natural language processing (NLP), which led to exploration of applying self-attention to speech problems as well. The efficacy of self-attention in speech applications, however, seems not fully known since it is challenging to handle highly correlated speech frames in the context of self-attention. A new neural network model architecture is proposed, namely multi-stream self-attention, to address the issue thus make the self-attention mechanism more effective for speech recognition. The proposed model architecture consists of parallel streams of self-attention encoders, and each stream has layers of 1D convolutions with dilated kernels whose dilation rates may be different, followed by a self-attention layer. The self-attention mechanism in each stream pays attention to only one resolution of input speech frames and the attentive computation can be more efficient. In a later stage, outputs from all the streams are concatenated then linearly projected to the final embedding. By stacking the proposed multi-stream self-attention encoder blocks and rescoring the resultant lattices with neural network language models, the word error rate of 2.2% is achieved on the test-clean dataset of the LibriSpeech corpus, the best number reported thus far on the dataset. 
Self-attention is the core component of the neural network architectures recently proposed in NLP to achieve the state-of-the art performances in a number of downstream tasks. Transformer successfully replaced recurrent neural networks such as LSTMs with sinusoidal positional encoding and the self-attention mechanism to be context-aware on input word embeddings. BERT took the benefit from the success of Transformer to extend it to the autoencoding based pretraining model, which can be fine-tuned to reach the state-of-the-art performances for various downstream tasks. XLNet, as the very latest state-of-the-art pretraining model, outperformed BERT in a number of downstream tasks from question answering to document ranking, thanks to model training with targets being aware and relative positional encoding like its ancestor of Transformer-XL. With the huge success in NLP, self-attention has been actively investigated for speech recognition as well. Time-restricted self-attention has been introduced with a one-hot vector representation being exploited as relative positional encoding for given restricted contexts of speech frames. The well-known Listen, Attend and Spell (LAS) ASR model has employed the multi-head approach to the attention mechanism to further improve its already state-of-the-art accuracy on the large-scale voice search data. Several approaches have been explored for better application of self-attention to speech recognition in the LAS framework, e.g., speech frame handling strategies or attention biasing to restrict the locality of the self-attention mechanism. The CTC loss have been applied to optimize the Transformer encoder structure for ASR. The entire encoder-decoder structure of the original Transformer has been examined in the context of Mandarin Chinese speech recognition tasks. The challenge in terms of applying self-attention to speech recognition is that individual speech frames are not like lexical units such as words. Speech frames do not convey distinct meanings or perform unique functions, which makes it hard for the self-attention mechanism to compute proper attentive weights on speech frames. Considering that adjacent speech frames could form a chunk to represent more meaningful units like phonemes, some sort of pre-processing mechanisms such as convolutions to capture an embedding for a group of nearby speech frames would be helpful for self-attention. In addition, a multi-resolution approach could be beneficial as well since boundaries for such meaningful chunks of speech frames are dependent of many factors, e.g., the type of phonemes (vowel vs. consonant) and the way they are pronounced, affected by gender, speaker, co-articulation and so on. Based on this reasoning, a new neural network model architecture is proposed for better self-attention, namely multi-stream self-attention. The proposed architecture consists of parallel streams of self-attention. In each stream, input speech frames are processed with a distinct resolution by multiple layers of 1D convolutions with a unique dilation rate, and the convoluted embeddings are fed to a subsequent multi-head self-attention layer. In a later stage, the attentive embeddings from all the streams are concatenated then linearly projected to the final embedding. State-of-the-art performance is achieved on the LibriSpeech corpus by stacking up these multi-stream self-attention blocks and rescoring the resultant lattices with powerful neural language models. 
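A minimal sketch of one multi-stream self-attention block as described above follows: each stream applies dilated 1D convolutions with its own rate and then multi-head self-attention, and the stream outputs are concatenated and linearly projected to the final embedding. The use of nn.MultiheadAttention, the head count of 4 per stream (the per-stream n_h/S split with custom projection sizes is not reproduced here), and the layer counts are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class SelfAttentionStream(nn.Module):
    def __init__(self, d_model=256, dilation=1, n_convs=2, heads=4):
        super().__init__()
        convs = []
        for _ in range(n_convs):
            convs += [nn.Conv1d(d_model, d_model, 3, dilation=dilation, padding=dilation),
                      nn.ReLU()]
        self.convs = nn.Sequential(*convs)
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                # x: (batch, frames, d_model)
        c = self.convs(x.transpose(1, 2)).transpose(1, 2)
        a, _ = self.attn(c, c, c)                        # self-attention at this stream's resolution
        return self.norm(a + c)                          # residual connection + layer norm

class MultiStreamSelfAttentionBlock(nn.Module):
    def __init__(self, d_model=256, dilations=(1, 2, 3, 4, 5)):
        super().__init__()
        self.streams = nn.ModuleList([SelfAttentionStream(d_model, d) for d in dilations])
        self.proj = nn.Linear(d_model * len(dilations), d_model)   # final linear projection

    def forward(self, x):
        return self.proj(torch.cat([s(x) for s in self.streams], dim=-1))

y = MultiStreamSelfAttentionBlock()(torch.randn(2, 50, 256))
print(y.shape)                                           # (2, 50, 256); such blocks can be stacked
```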
WERs of 1.8% and 2.2% were obtained on the dev-clean and test-clean sets, respectively, which appear to be the best reported numbers thus far on these datasets.

In the proposed multi-stream self-attention block, each stream may consist of multiple layers of 1D convolutions, one layer of multi-head self-attention, and a feed-forward layer sandwiched between two layer normalizations. Each layer normalization may have a skip connection with the input of its previous layer. The embeddings from all the streams are projected to the final embedding in a later stage. Each component of the multi-stream self-attention architecture is now described.

Time delay neural networks (TDNNs) have been one of the most popular neural network models for speech recognition. They were introduced to capture the long-range temporal dependencies of acoustic events in speech signals by exploiting a modular and incremental design from subcomponents. A modified version was recently proposed for better efficiency using layer-wise subsampling methods. A TDNN is basically a 1D convolution. This convolution layer and its kernels with various dilation rates are used to control the resolution of the input speech frames being processed in the parallel streams. In each stream, layers of 1D convolutions with a dilation rate process speech frames at the specified resolution. This can reduce the burden on the self-attention mechanism and enable the attentive computation to focus on only one resolution of speech frames in the given stream. Examples of the 1D convolutions with 3×1 kernels may include dilation rates of 1, 2, and 3.

In order to make the convolution layers more efficient, the factorized TDNN may be used. Singular Value Decomposition (SVD) may be used to factorize a learned weight matrix into two low-rank factors and reduce the model complexity of neural networks. The factorized TDNN or factorized 1D convolution (1D Conv-F) layers also utilize SVD to factorize a 1D convolution parameter matrix into two low-rank matrices. The kernel for each factorized 1D convolution is set to 2×1. One of the two factorized convolutions is constrained by the semi-orthogonal condition during training. Consider U as one of the factors of the original parameter matrix W after SVD. The semi-orthogonal constraint imposes the condition of minimizing a function f:

f = \operatorname{Trace}(Q Q^{T}), \qquad Q = P - I, \qquad P = U U^{T}

where I is the identity matrix. This way of factorization with the semi-orthogonal constraint leads to less model complexity overall and also results in better modeling power. After the factorization, the rectified linear unit (ReLU), batch normalization, and dropout are followed by a skip connection between the scaled input embedding and the output of the dropout layer. The scale value is a hyper-parameter.

In a given stream s, the time-restricted self-attention mechanism may be formulated as follows. An input embedding matrix to the stream s may be defined as X^s \in \mathbb{R}^{N \times d_{model}}, where N is the total number of input embeddings restricted by the left and right context and d_{model} is the dimension of the embeddings used inside the self-attention mechanism. Note that downsampling is applied to the input embeddings, and the sampling rate is matched to the specified dilation rate (r_s) of the 1D Conv-F layers in the stream.
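As a brief aside before the attention formulation that follows, the semi-orthogonal penalty f = Trace(QQ^T) with Q = UU^T - I described above can be sketched numerically. The matrix shapes are illustrative, and adding the penalty to a training loss is only one possible way to impose the constraint; the actual update scheme used by a toolkit may differ.

```python
import torch

def semi_orthogonal_penalty(U):
    """f = Trace(Q Q^T) with Q = U U^T - I; zero when the rows of U are orthonormal."""
    P = U @ U.t()
    Q = P - torch.eye(U.shape[0])
    return torch.trace(Q @ Q.t())

U = torch.randn(128, 256, requires_grad=True)     # one factor of a factorized 1D convolution
penalty = semi_orthogonal_penalty(U)
penalty.backward()                                 # could be scaled and added to the main loss
print(float(penalty))
```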
For the projected query, key and value matrices Q_i^s, K_i^s and V_i^s in stream s, the output of the i-th head is computed as follows:

\text{Head}_i^s = \text{Softmax}\!\left(\frac{Q_i^s (K_i^s)^{T}}{\sqrt{d_k}}\right) V_i^s

where Q_i^s = X^s W_i^{s,Q} with W_i^{s,Q} \in \mathbb{R}^{d_{model} \times d_q}, K_i^s = X^s W_i^{s,K} with W_i^{s,K} \in \mathbb{R}^{d_{model} \times d_k}, V_i^s = X^s W_i^{s,V} with W_i^{s,V} \in \mathbb{R}^{d_{model} \times d_v}, and d_q, d_k, and d_v are the dimensions of the query, key and value embeddings, respectively. The multi-head outputs are concatenated and linearly projected; then layer normalization is applied to the projected embedding, which is skip-connected with the input embedding:

\text{MultiHeadProj}^s = \text{Concat}\left(\text{Head}_1^s, \ldots, \text{Head}_{n_h^s}^s\right) W^{s,O}

\text{MidLayer}^s = \text{LayerNorm}\left(\text{MultiHeadProj}^s + X^s\right)

where n_h^s is the number of heads in stream s and W^{s,O} \in \mathbb{R}^{(n_h^s \times d_v) \times d_{model}}. The value n_h^s = n_h / S, where n_h is a fixed value for the total number of attention heads across the self-attention components of all the streams.

The multiple streams in the proposed architecture could increase the model complexity significantly as opposed to a single-stream approach. To keep the model complexity at a reasonable level as well as to avoid a loss of modeling power, factorized feed-forward networks with a bottleneck layer may be used in a given stream. The semi-orthogonal constraint discussed above is also applied here to one of the factorized matrices during training. After the skip connection, the encoder output of stream s can be written as below:

\text{Factorized}^s = \text{Factorized-FF}\left(\text{MidLayer}^s\right)

\text{Encoder}^s = \text{LayerNorm}\left(\text{Factorized}^s + \text{MidLayer}^s\right)

Note that the dimension d_{ff} of the embedding layer between the feed-forward networks of the original Transformer encoder is either 1,024 or 2,048. The model complexity of the feed-forward component may be reduced by a factor of 8 or 16 if the bottleneck layer in the factorized version is chosen as 128.

The final embedding layer concatenates the encoder output from each stream and linearly projects the concatenated vector to the final embedding. A ReLU non-linear activation, batch normalization, and dropout follow before feeding out the final embedding as the output:

\text{MultiEncProj} = \text{Concat}\left(\text{Encoder}^1, \ldots, \text{Encoder}^S\right) W^{O}

\text{Final} = \text{Dropout}\left(\text{BatchNorm}\left(\text{ReLU}\left(\text{MultiEncProj}\right)\right)\right)

where W^{O} \in \mathbb{R}^{(S \times d_{model}) \times d_{model}}.

For the experiments, the LibriSpeech corpus is used as the main training and testing dataset. The LibriSpeech corpus is a collection of approximately 1,000 hours of audiobooks that are part of the LibriVox project. Most of the audiobooks come from Project Gutenberg. The training data is split into 3 partitions of 100 hr, 360 hr, and 500 hr sets, while the dev and test data are split into the 'clean' and 'other' categories, respectively, depending upon how challenging they are for ASR systems. Each of the dev and test sets is around 5 hrs in audio length. This corpus also provides the n-gram language models and the corresponding texts excerpted from the Project Gutenberg books, which contain 803M tokens and 977K unique words. To prepare a lexicon, 522K words were selected among the 977K unique words that occur more than once in the LibriSpeech texts. Using the base lexicon of the CMUdict, which covered 81K of the selected 522K words, a G2P model was trained using the Sequitur tool to cover the out-of-vocabulary words. The SRILM toolkit was used to train n-gram language models (LMs). The 4-gram LM was trained initially on the entire available texts with modified Kneser-Ney smoothing, then pruned to a 3-gram LM. The first-pass decoding was conducted with the 3-gram LM and the resultant lattices were rescored with the 4-gram LM later in the second pass.
The lattices were further rescored with neural network LMs of three TDNN layers and two LSTM layers interleaved, trained with the Kaldi toolkit. The Kaldi toolkit was used for the acoustic modeling as well, mostly following the LibriSpeech recipe up to the stage of Speaker Adaptive Training (SAT). The training data size was gradually increased from 100 hrs to 960 hrs over the course of the GMM training stages, while the neural network training used the entire 960 hr of data. The GMMs were first trained within the framework of 3-state Hidden Markov Models (HMMs). The conventional 39-dimensional MFCC features were spliced over 9 frames, and LDA was applied to project the spliced features onto a 40-dimensional sub-space. Further projection was conducted through MLLT for better orthogonality. SAT was applied with feature-space MLLR (fMLLR) to further refine the mixture parameters of the GMMs. For the neural network acoustic models, 40-dimensional higher-resolution MFCCs were appended with 100-dimensional i-vectors, and the models were trained using the lattice alignments given by the SAT-ed GMMs as soft targets. The LF-MMI objective was used to optimize the parameters, with the three regularization methods of cross-entropy, L2, and leaky HMM. An exponential decrease of the learning rate from 10^−3 to 10^−5 was applied to make the entire training procedure stable and achieve better convergence. The number of nodes in the final layer was determined by the number of tri-phone states in the HMM, which is 6K after the phonetic tree clustering. The trainings were conducted on Nvidia V100 servers with 8 GPUs.

The dimensions of the query, key, and value embeddings in self-attention are set to d_q = d_k = 40 and d_v = 80, with d_model = 256. The bottleneck dimension for the factorized convolutions and the factorized feed-forwards is 128. The number of streams and the number of 1D convolution layers in each stream range from 1 to 5 and from 0 to 7, respectively, depending on the experiment type for the ablation tests. To test the validity of multiple streams in the proposed architecture, the number of multi-heads in the self-attention layer of each stream was controlled such that n_h^s = n_h/S while fixing n_h = 15. For example, if the number of streams is 3 (i.e., S = 3), then the number of multi-heads in the self-attention layer of each stream would be 15/3 = 5, thus n_h^1 = n_h^2 = n_h^3 = 5. This is to rule out the possibility that any performance improvement would come from a significant increase in model complexity.

The effect of having multiple streams in the proposed architecture is shown. The three entries for the system configurations correspond to S = 1, 3, and 5, respectively, while fixing the dilation rate of 1 across the streams. (For example, 1-1-1 means three streams with a fixed dilation rate of 1 for the factorized convolutions of all the streams.) It is noticeable that better accuracy is obtained with more streams, but without a diverse selection of dilation rates across the streams the improvement is limited. The effect of having diverse dilation rates across the multiple streams of the proposed architecture is also shown. The various dilation rates across the streams are shown to help improve WERs by clear margins. However, simply mixing arbitrary values does not guarantee a performance improvement. For example, the system configuration of 1-3-5 yields a worse WER on the dev-other set than the 1-1-1 configuration does.
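The per-stream head allocation (n_h^s = n_h/S) and the dilation-rate notation used in the ablation discussion above can be illustrated with a short sketch; the configuration strings mirror the 1-1-1 and 1-2-3-4-5 notation, and the helper name make_stream_configs is hypothetical.

```python
def make_stream_configs(dilation_spec: str, total_heads: int = 15):
    """Build one configuration entry per stream from a spec such as '1-2-3-4-5'.

    Each stream gets its own dilation rate for the factorized 1D convolutions
    and an equal share of the fixed total number of heads (n_h^s = n_h / S)."""
    dilations = [int(d) for d in dilation_spec.split("-")]
    heads_per_stream = total_heads // len(dilations)
    return [{"stream": s, "dilation": d, "heads": heads_per_stream}
            for s, d in enumerate(dilations, start=1)]

# Three streams with a fixed dilation rate of 1 ("1-1-1"): 15 / 3 = 5 heads per stream.
print(make_stream_configs("1-1-1"))
# The best-performing five-stream setup ("1-2-3-4-5"): 15 / 5 = 3 heads per stream.
print(make_stream_configs("1-2-3-4-5"))
```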
It seems that a careful mix of the dilation rates is critical for the proposed model. The best configuration was the 1-2-3-4-5 setup, which uses 5 different dilation rates (differing by 1) for the 1D convolutions across the streams, marking 7.69% and 6.75% WERR on dev-clean and dev-other, respectively, as compared to the single-stream baseline. This validates the efficacy of the proposed multi-stream strategy of having 1D convolutions with a unique dilation rate in each stream. The proposed architecture seemingly helps the self-attention mechanism better process the embeddings in each stream and leads to more accurate results overall.

The main purpose of having the factorized feed-forward networks in the proposed architecture is to keep the model complexity within a reasonable boundary even as more streams are added; the proposed multi-stream model architecture would otherwise increase the model complexity very quickly with each added stream. With the same configuration of 1-2-3-4-5, it is shown that the factorization works as expected. The factorized feed-forward networks not only keep the model complexity under 10M parameters but also keep the performance at a similar level to the control group that uses normal (i.e., non-factorized) feed-forward networks with the wider bottleneck layer of 1,024 or 2,048 dimensions.

The effect of having the 1D convolutions in the proposed multi-stream self-attention model architecture is also shown, presenting the importance of the 1D convolutions preceding the self-attention mechanism in the multi-stream framework. The 7-layer Conv-F leads to roughly 15% and 20% WERR on the dev-clean and dev-other datasets, respectively, against the case of having no convolutions. The pattern suggests that more convolution layers yield more performance gain, but beyond 7 layers no significant performance boost was observed.

The model for the LibriSpeech speech recognition task is configured by stacking the proposed multi-stream self-attention blocks. The chosen configuration of the ASR system is 3 layers of multi-stream self-attention blocks, 5 streams of self-attention encoders in each block, 7 layers of 1D Conv-F's in each stream, and a dilation configuration of 1-2-3-4-5 for the 1D Conv-F's across the streams. The total number of parameters for this setup is 23M. The lattice-rescored result of this system with the 4-gram LM is presented. As for the neural network language models, models of 3 TDNN layers and 2 (unidirectional) LSTM layers were trained interleaved. The averaged (relative) error rate reduction by the best neural network language model (with a dimension of 4,096) against the 4-gram LM case ranges from 15% to 20%. The embedding dimension seems to matter in terms of performance improvement; dimensions bigger than 4,096 were not tried due to the prohibitive model training time. The best model in terms of WER improvement is the one with the 4-layered LSTMs, trained with around 10K word pieces. In the compared prior work, the hybrid DNN/HMM approach was used with different neural network acoustic models, namely CNN-biLSTM, pyramidal Feedforward Sequential Memory Network (FSMN), and BiLSTM, and the Transformer LM was exploited as the best rescoring LM. The end-to-end framework has also been considered: Time Depth Separable (TDS) convolutions were introduced in prior work, with spectrogram cut-out applied on the fly during training to enhance noise robustness.
In further prior work, the full Transformer model was studied comparatively across various speech tasks. Compared with the other state-of-the-art system performances, the proposed system is shown to achieve the best performance on both the dev-clean and test-clean sets, while the lowest WERs on the dev-other and test-other sets are reported by other systems in the literature. The WERs of 1.8% and 2.2% obtained on those datasets by the proposed system are the best numbers thus far reported in the literature.

In summary, the multi-stream self-attention model architecture, preceded by layers of factorized 1D convolutions with a unique dilation rate in each stream, is proposed. This architecture allows input speech frames to be efficiently processed and effectively self-attended. The proposed ideas were validated through ablation tests, and a state-of-the-art ASR system was configured by stacking the multi-stream self-attention model blocks together with strong neural network language models. The WERs of 1.8% and 2.2% on the dev-clean and test-clean sets are the best numbers reported in the literature. Note that the proposed system has only 23M parameters, whereas the other systems have higher model complexity, for example, 100M for CNN-biLSTM or 200M for LAS. This could make the proposed system more practical and appealing to speech engineers or practitioners who would like to deploy ASR models as a service on devices with limited computing power. Of course, the practicality of the proposed model for on-device ASR would rely upon the use of much lighter LM models.

The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. "Processor" as used herein is meant to include at least one processor, and unless context clearly indicates otherwise, the plural and the singular should be understood to be interchangeable. Any aspects of the present disclosure may be implemented as a computer-implemented method on the machine, as a system or apparatus as part of or in relation to the machine, or as a computer program product embodied in a computer readable medium executing on one or more of the machines. The processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere.
The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the process may be a dual core processor, quad core processors, other chip-level multiprocessor and the like that combine two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the disclosure. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. 
The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the disclosure. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may either be frequency division multiple access (FDMA) network or code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other networks types. The methods, programs codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic books readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer-to-peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station. 
The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. 
The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general-purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine-readable medium. The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. While the invention has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present invention is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law. All documents referenced herein are hereby incorporated by reference in the entirety.
68,846
11862147
DESCRIPTION OF THE PREFERRED EMBODIMENTS The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention. 1. Overview As shown inFIG.1, a system100for providing information to a user includes and/or interfaces with a set of models and/or algorithms. Additionally or alternatively, the system can include and/or interface with any or all of: a processing subsystem; a sensory output device; a user device; an audio input device; and/or any other components. Further additionally or alternatively, the system can include and/or interface with any or all of the components as described in U.S. application Ser. No. 14/750,626, filed 25 Jun. 2015, U.S. application Ser. No. 15/661,934, filed 27 Jul. 2017, U.S. application Ser. No. 15/696,997, filed 6 Sep. 2017, U.S. application Ser. No. 15/795,054, filed 26 Oct. 2017, U.S. application Ser. No. 15/959,042, filed 20 Apr. 2018, U.S. application Ser. No. 17/033,433, filed 25 Sep. 2020, and U.S. application Ser. No. 17/144,076, filed 7 Jan. 2021, each of which is incorporated in its entirety by this reference. As shown inFIG.2, a method200for providing information to a user includes and/or interfaces with: receiving a set of inputs S210; processing the set of inputs to determine a set of sensory outputs S220; and providing the set of sensory outputs S230. Additionally or alternatively, the method200can include and/or interface with any other processes. Further additionally or alternatively, the method200can include and/or interface with any or all of the processes described in U.S. application Ser. No. 14/750,626, filed 25 Jun. 2015, U.S. application Ser. No. 15/661,934, filed 27 Jul. 2017, U.S. application Ser. No. 15/696,997, filed 6 Sep. 2017, U.S. application Ser. No. 15/795,054, filed 26 Oct. 2017, U.S. application Ser. No. 15/959,042, filed 20 Apr. 2018, U.S. application Ser. No. 17/033,433, filed 25 Sep. 2020, and U.S. application Ser. No. 17/144,076, filed 7 Jan. 2021, each of which is incorporated in its entirety by this reference, or any other suitable processes performed in any suitable order. The method200can be performed with a system100as described above and/or any other suitable system. 2. Benefits The system and method for providing information to a user can confer several benefits over current systems and methods. In a first variation, the technology confers the benefit of helping convey information to individuals with high frequency hearing loss, such as that which often occurs in age-related hearing loss or other hearing conditions. In specific examples, for instance, the system and/or method provide haptic stimulation (e.g., contemporaneously with the occurrence of the corresponding audio) to convey the occurrence of high frequency phonemes in an audio environment of the user. Additionally or alternatively, the haptic stimulation can convey other high frequency information (e.g., words including high frequency phonemes, non-phoneme information, etc.). In another set of specific examples, the system and/or method adjust sound parameters (e.g., frequency, pitch, volume, etc.) of high frequency phonemes in audio (e.g., recorded audio) prior to playing it to the user. In a second variation, additional or alternative to the first, the technology confers the benefit of developing and/or utilizing signal processing techniques (e.g., algorithms, trained models, etc.) 
which can robustly distinguish high frequency information (e.g., high frequency phonemes) from noise. Additionally or alternatively, the signal processing techniques can be configured for any or all of: working with audio from far field conditions, working on a device with a single microphone, working on a device with limited processing and/or computing (e.g., a wearable device, a wearable wristband device, etc.), working with low latency, preventing false positives and/or false negatives, and/or otherwise enabling performance of the device. In specific examples, for instance, the system includes a wearable tactile device with a single microphone and onboard processing system, wherein processing of audio information is performed which optimizes a tradeoff between low latency and high performance (e.g., accurate and robust identification of high frequency phonemes). In other specific examples, for instance, the system interfaces with a processing system onboard a user device (e.g., smartphone, mobile user device, etc.), where audio received at and/or onboard the user device (e.g., from a telephone conversation, from an application such as a podcast application, etc.) is processed and altered prior to being played for the recipient user (e.g., at a set of speakers onboard the user device, at a set of speakers offboard the user device, etc.). In a third variation, additional or alternative to those described above, the technology confers the benefit of minimizing and/or enforcing a maximum allowable latency in the processing of audio information, such that the outputs provided to the user are intelligible and comprehensible with respect to the original information from which the sensory outputs are derived. In some examples, for instance, a set of models and/or algorithms used in feature (e.g., phoneme, high frequency phoneme, etc.) detection are trained and/or otherwise configured to produce outputs within a predetermined time threshold (e.g., 50 milliseconds, between 10 and 100 milliseconds, between 40 and 60 milliseconds, etc.) such that the outputs are intelligible and do not cause confusion to the user (e.g., by being out of sync with the corresponding environmental information, by delaying one side of a two-sided conversation, etc.). In a particular example, this is accomplished through the development and use of a constrained loss function (e.g., constrained Connectionist Temporal Classification [CTC] loss function), which specifies and/or otherwise configures the models and/or algorithms to be performed within a predetermined latency. In a fourth variation, additional or alternative to those described above, the technology confers the benefit of dynamically (e.g., with negligible delay, with minimal delay, etc.) enhancing parts of an audio signal which a user has difficulty interpreting, thereby increasing an intelligibility of the audio to the user. In a set of specific examples, for instance, audio presented to a user having high frequency hearing loss is enhanced to make high frequency phonemes (or other parts of the speech) more noticeable and/or distinguishable to the user (e.g., by increasing their energy/volume, by decreasing their pitch, etc.). In a fifth variation, additional or alternative to those described above, the technology confers the benefit of enhancing audio and/or other sensory outputs (e.g., tactile outputs) in a way which is specifically targeted (e.g., personalized) to the hearing abilities of the user (e.g., frequencies of hearing loss). 
In specific examples, for instance, information associated with the user (e.g., audiogram results) and his or her hearing loss is used to determine which audio is enhanced and/or how the audio is enhanced. In a particular specific example, one or more models are trained specifically for the user, wherein the one or more models are used to process incoming audio signals. In a sixth variation, additional or alternative to any or all of those described above, the technology confers the benefit of performing any or all of the processing locally and onboard a user device and/or sensory output device, which can function to promote privacy of the user and/or decrease latency in the audio and/or other outputs being presented to the user. Additionally or alternatively, the system and method can confer any other benefit.

3. System 100

As shown in FIG. 1, a system 100 for providing information to a user includes and/or interfaces with a set of models and/or algorithms. Additionally or alternatively, the system can include and/or interface with any or all of: a processing subsystem; a sensory output device; a user device; an audio input device; and/or any other components. Further additionally or alternatively, the system can include and/or interface with any or all of the components as described in U.S. application Ser. No. 14/750,626, filed 25 Jun. 2015, U.S. application Ser. No. 15/661,934, filed 27 Jul. 2017, U.S. application Ser. No. 15/696,997, filed 6 Sep. 2017, U.S. application Ser. No. 15/795,054, filed 26 Oct. 2017, U.S. application Ser. No. 15/959,042, filed 20 Apr. 2018, U.S. application Ser. No. 17/033,433, filed 25 Sep. 2020, and U.S. application Ser. No. 17/144,076, filed 7 Jan. 2021, each of which is incorporated in its entirety by this reference.

The system 100 functions to provide enhanced information and/or sensory outputs to a user which increase the intelligibility and/or interpretability of the user's understanding of audio information occurring in his or her environment. In a first set of variations of the system 100, the system functions to supplement audio conveyed to a user through the provision of tactile information at a body region (e.g., wrist, arm, leg, torso, etc.) of the user. In a second set of variations of the system 100, the system functions to detect and enhance particular features (e.g., phonemes, words and/or sounds that a recipient user has trouble hearing and/or interpreting, etc.) in an audio stream such that the enhanced audio features are conveyed to the user as part of the naturally occurring audio. Additionally or alternatively, the system 100 can function to process audio information and/or any other inputs, provide tactile stimulation which is optimal for a user and/or an environment of the user, provide other information (e.g., audio information, optical information, etc.) to the user, and/or otherwise provide information to the user.

3.1 System—Audio Input Device 110

The system 100 can optionally include and/or interface with an audio input device 110, which functions to receive audio information (e.g., dialogue) with which to perform the method 200. Additionally or alternatively, the audio input device 110 can perform any other functions. Further additionally or alternatively, audio information can be received from other sources, such as databases, libraries, and/or any other sources. The audio input device preferably includes one or more microphones, but can additionally or alternatively include any other audio input devices.
The microphones can optionally be any or all of: monodirectional, bidirectional, omnidirectional, associated with other directionalities and/or no directionality, and/or can be otherwise configured. The audio input device(s) can be any or all of: onboard one or more devices (e.g., sensory output device, user device, headset and/or set of headphones, etc.), in an environment of one or more users, at any combination of devices and/or locations, and/or from any other sources. In a first set of variations, a set of one or more audio input devices are arranged onboard a wearable device of the user, such as a device configured to provide tactile stimulation to the user. In a first set of examples, for instance, the system includes a set of one or more microphones arranged onboard a tactile stimulation wristband (e.g., as shown inFIG.3A, as shown inFIG.5, etc.), where the microphones record audio information from an environment of the user. In a second set of examples, the system includes a set of one or more microphones arranged onboard a tactile stimulation vest (e.g., as shown inFIG.6) or other garment, wherein the microphones record audio information from an environment of the user. In a second set of variations, a set of one or more audio input devices are arranged onboard a user device (e.g., mobile user device), such that audio from users can be recorded and altered prior to provision to other users (e.g., recipient users having high frequency hearing loss). Additionally or alternatively, audio information from an environment of the user can be recorded from the audio input devices onboard the user device. In a third set of variations, a standalone microphone and/or microphone integrated within a headset and/or set of headphones is used to record audio information from a user (e.g., individual providing dialogue to a user with high frequency hearing loss) and/or an environment of the user. In a fourth set of variations, in addition or alternative to collecting dynamic (e.g., real-time) audio information, pre-recorded audio information can be received (e.g., retrieved from a library, database, application, audio file, etc.) and processed in the method200. In a set of specific examples, for instance, audio retrieved at a user device (e.g., via a podcast application, via an audio book application, via an audio file, etc.) and/or any other device can be processed in the method200. Further additionally or alternatively, non-audio information (e.g., text, messages, visual information, etc.) can be received and processed in the method200. The system100can additionally or alternatively include any other input devices, such as other sensors configured to receive information with which to determine and/or provide outputs to the user. These can include, for instance, any or all of: optical sensors, location sensors, temperature sensors, motion sensors, orientation sensors, and/or any other sensors. 3.2 System—Sensory Output Device120 The system100can optionally include a sensory output device120, which functions to provide sensory outputs to the user. The sensory outputs preferably function to enhance (e.g., supplement) any or all of the audio information provided to a user, but can additionally or alternatively function to enhance particular types and/or subsets of audio (e.g., dialogue), replace certain audio of the audio information (e.g., replace audio phonemes with tactile representations), and/or can perform any other functions. 
The sensory output device can optionally include a tactile stimulation device (equivalently referred to herein as a tactile device and/or haptic device) (e.g., as shown in FIGS. 3A-3C, FIG. 5, FIG. 6, FIG. 7, FIG. 10, etc.), which functions to provide tactile stimulation to a user, thereby conveying tactile information (e.g., representing audio information, representing particular features of the audio information, representing high frequency phonemes, etc.) to the user. The tactile device is preferably a wearable device configured to be reversibly coupled to (e.g., fastened to, worn by, held by, etc.) the user, but can additionally or alternatively include a device irreversibly coupled to the user. Further additionally or alternatively, the tactile device can be a non-wearable device such as a tabletop device, handheld device, and/or any other suitable device.

The tactile device preferably includes an actuation subsystem, which functions to apply the haptic (e.g., vibratory) stimulation to the user. The actuation subsystem preferably includes a set of actuators, which individually and/or collectively function to provide the haptic stimulation to a body region of the user. Additionally or alternatively, the haptic stimulation can include electric pulses, heat, and/or any other tactilely perceivable stimulation. In a preferred set of variations, the body region includes a partial or full circumference of one or more wrists of the user, but can additionally or alternatively include any or all of: a hand, arm, finger, leg, torso, neck, head, ankle, and/or any other suitable body part or body region of the user. The set of actuators can include one or more of: a motor (e.g., brushless motor, brushed motor, direct current (DC) motor, alternating current (AC) motor, eccentric rotating mass (ERM), etc.), an actuator (e.g., linear resonant actuator (LRA), electroactive polymer (EAP) actuator, electromechanical polymer (EMP) actuator, etc.), a piezoelectric device, and/or any other form of vibratory element. In a set of actuators including multiple actuators, the actuators can be arranged in an array (e.g., 1-dimensional array, 2-dimensional array, 3-dimensional array, etc.), arranged at least partially circumferentially around the body part (e.g., around a wrist, around half of the circumference of the wrist, etc.), arranged along the body part (e.g., up and down an arm), arranged over a body region (e.g., over the user's trunk, stomach, etc.), arranged among different body parts of a user (e.g., arranged around both wrists), and/or arranged in any other suitable way. The vibratory elements can be directly coupled to the skin of a user, separated from a user by an element of the housing (e.g., the wristband), placed over a user's clothing, and/or coupled to the user in any other way. In variations of the system configured to apply haptic stimulation to a wrist of the user, the system preferably includes 4 LRA actuators arranged around a portion of the circumference (e.g., half the circumference) of the wrist. Additionally or alternatively, the system can include actuators circumscribing the entire wrist (e.g., 8 LRA actuators), and/or any other suitable number and arrangement of actuators.
The actuation subsystem is preferably operated in accordance with a set of stimulation patterns (e.g., series of stimulation patterns), wherein the stimulation patterns prescribe any or all of the following to the set of actuators (e.g., individually, collectively, etc.): amplitude of vibration, timing of vibration (e.g., when to start, duration, when to end, duration of time between vibrations, etc.), sequence of vibration, identification of which of the set of actuators to vibrate, frequency of vibration, and/or any other parameter(s) of stimulation. In preferred variations, the stimulation pattern prescribes an amplitude of vibration and a duration of vibration to one or more actuators of the set of actuators, wherein each of the set of actuators is configured to vibrate at a fixed frequency. Additionally or alternatively, the stimulation pattern can prescribe a frequency of vibration, a dynamic pattern of vibration (e.g., alternating between actuators), and/or any other suitable characteristic or parameter(s) of vibration. The set of stimulation patterns is preferably determined, at least in part, with a processing subsystem as described below, but can additionally or alternatively be predetermined, prescribed, and/or otherwise determined and/or assigned. Additionally or alternatively, the actuation subsystem can be operable in any number of modes, wherein the method200, for instance, can be performed in accordance with a particular operation mode. Additionally or alternatively, the tactile device can be otherwise operated. The actuation subsystem can include a haptic driver (e.g., LRA driver) configured to actuate the set of actuators according to the stimulation pattern. Additionally or alternatively, the actuators can be actuated in any suitable way with any other suitable component(s). The tactile device can optionally include a housing, which functions to support the set of actuators. The housing can additionally or alternatively function to: suspend the set of actuators, maintain a separation distance between the set of actuators, maintain an offset (e.g., minimize, maintain a constant offset, etc.) of the set of actuators from a skin surface of the user, conform to a variety of users (e.g., conform to a variety of user wrist sizes, flex to wrap around a user's wrist, etc.), house other components of the system (e.g., sensor subsystem, control module, etc.), be comfortable to a user, enhance a vibration of the actuators (e.g., minimize a dampening of the haptic output), reduce direct sound transmission from the set of actuators to a microphone, maintain an orientation of the system on a user (e.g., prevent rotation of the support subsystem on the wrist of a user), assist in alignment of the support subsystem, and/or perform any other suitable function. The tactile device preferably includes and/or interfaces with (e.g., as another component of the system100) an audio input device (e.g., as part of a greater sensor subsystem), which functions to receive information from an environment of the user and/or any other information sources (e.g., recorded message, written message, recording, etc.). This can include, for instance, audio information from a set of microphones (e.g., unidirectional microphones, bidirectional microphones, omnidirectional microphones, etc.) or other audio sensors with which to determine and/or trigger haptic stimulation to be applied to the user (e.g., in organic exposure therapy for tinnitus). 
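To make the stimulation-pattern parameters described above concrete, a minimal sketch of how such a pattern might be represented and handed to a haptic driver is shown below. The class and field names are hypothetical, the fixed-frequency assumption follows the preferred variation described above, and the driver interface is a placeholder rather than an actual hardware API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ActuatorEvent:
    actuator_index: int   # which actuator in the array to vibrate (e.g., 0-3 for 4 LRAs)
    amplitude: float      # normalized vibration amplitude, 0.0-1.0
    start_ms: int         # when to start vibrating, relative to pattern onset
    duration_ms: int      # how long to vibrate

@dataclass
class StimulationPattern:
    """A stimulation pattern prescribing amplitude, timing, and actuator selection.
    Each actuator is assumed to vibrate at a fixed resonant frequency, so frequency
    is not a per-event parameter in this sketch."""
    events: List[ActuatorEvent]

# Example: a brief two-actuator pattern that might represent a detected /s/ phoneme.
s_pattern = StimulationPattern(events=[
    ActuatorEvent(actuator_index=0, amplitude=0.8, start_ms=0, duration_ms=60),
    ActuatorEvent(actuator_index=1, amplitude=0.5, start_ms=20, duration_ms=60),
])

def play(pattern: StimulationPattern, driver) -> None:
    """Hand each event to a haptic driver (e.g., an LRA driver); 'driver' stands in
    for whatever hardware interface the tactile device exposes."""
    for event in sorted(pattern.events, key=lambda e: e.start_ms):
        driver.schedule(event.actuator_index, event.amplitude,
                        event.start_ms, event.duration_ms)
```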
Additionally or alternatively, the sensor subsystem can include any other suitable sensors (e.g., camera or other optical sensor(s), GPS system or other location sensor(s), accelerometer and/or gyroscope and/or other motion sensor(s), etc.) configured to receive any other information. The sensor subsystem can be arranged onboard the tactile device, offboard the tactile device (e.g., remote from the tactile device, onboard a user device in communication with the tactile device, in an environment of the user, etc.), or any combination. The system can optionally include any number of output devices, such as, but not limited to, any or all of: speakers (e.g., to provide audio outputs), optical components (e.g., lights, light emitting diodes [LEDs], etc.), and/or any other output devices. Additionally or alternatively, the tactile device can include and/or interface with any other components. Further additionally or alternatively, the sensory output device can be configured to provide sensory outputs other than and/or additional to tactile stimulation, such as, but not limited to: audio information, optical information, and/or any other information.

Additionally or alternatively to a tactile device, the sensory output device can include and/or interface with a user device. Examples of the user device include a tablet, smartphone, mobile phone, laptop, watch, or any other suitable user device. The user device can include power storage (e.g., a battery), processing systems (e.g., CPU, GPU, memory, etc.), user outputs (e.g., display, speaker, vibration mechanism, etc.), user inputs (e.g., a keyboard, touchscreen, microphone, etc.), a location system (e.g., a GPS system), sensors (e.g., optical sensors, such as light sensors and cameras, orientation sensors, such as accelerometers, gyroscopes, and altimeters, audio sensors, such as microphones, etc.), a data communication system (e.g., a WiFi module, BLE, cellular module, etc.), or any other suitable component. In some variations, for instance, the sensory output device includes a set of one or more audio output devices (e.g., speakers), which function to provide enhanced and/or altered audio (e.g., a modified version of the audio information) to the user. In some examples, for instance, the sensory output device includes and/or interfaces with a user device, which provides enhanced audio to a user through a set of speakers. Additionally or alternatively, any or all of the sensory outputs can be provided with a wearable device (e.g., a tactile device which also provides audio outputs), a combination of devices (e.g., tactile device and user device, etc.), any other devices, and/or any combination of devices.

3.3 System—Processing Subsystem 140

The system 100 preferably includes a processing subsystem 140, which functions to process information (e.g., audio information as described above) to determine a set of sensory outputs (e.g., tactile outputs, audio outputs, etc.) to provide at a body region (e.g., skin surface) of the user (e.g., at a tactile device). Additionally or alternatively, the processing subsystem can function to process any other inputs (e.g., text inputs) or information, and/or to provide any other outputs. The processing subsystem can include and/or interface with any or all of: one or more processors (e.g., CPU or other microprocessor, control circuit, relay system, etc.), computer memory modules (e.g., RAM), computer storage modules (e.g., hard disk drive, flash memory, etc.), and/or any other suitable elements.
At least a portion of the processing subsystem is preferably arranged onboard one or more sensory output devices (e.g., tactile device, user device, etc.). Additionally or alternatively, any or all of the processing subsystem can be arranged and/or implemented remote/offboard from the sensory output device, such as at any or all of: a remote computing system (e.g., cloud computing system), another device (e.g., a mobile computing device in communication with the tactile device, another tactile device, etc.), and/or any other processing and/or computing subsystems. In a preferred set of variations, the processing subsystem is arranged fully onboard a tactile device. In an alternative set of variations, a portion of the processing subsystem is arranged onboard the tactile device and in communication with a second portion of the processing subsystem arranged remote from the tactile device. In yet another alternative variation, the entire processing subsystem is arranged offboard the tactile device (e.g., at a user device [e.g., mobile computing device]). Additionally or alternatively, the processing subsystem can be otherwise suitably arranged and/or distributed.

The processing subsystem preferably implements one or more trained models and/or algorithms 150, which function to process audio information (e.g., as described below in S210) and identify features of the audio information. The features preferably include high frequency information, further preferably high frequency phonemes (e.g., /th/, /f/, /s/, /h/, /k/, /z/, /b/, /dh/, /t/, /d/, /v/, etc.), but can additionally or alternatively include low frequency information (e.g., low frequency phonemes), multiple phonemes and/or combinations of phonemes (e.g., an n-gram of phonemes, a bigram of phonemes, a trigram of phonemes, multiple phonemes occurring in succession such as an /s/ phoneme followed by a /t/ phoneme, etc.), non-phoneme information (e.g., other acoustic features or sounds, acoustic features or sounds which sound like phonemes, etc.), and/or any other information. Additionally or alternatively, the set of trained models can be configured for any or all of: distinguishing audio features (e.g., high frequency audio features) from noise, detecting audio features in both near and far field conditions, detecting audio features with only a single microphone, detecting audio features with multiple microphones, robustly detecting audio features with limited computing (e.g., onboard a wearable device), detecting audio features with low latency, optimizing for a tradeoff between low latency and high performance, and/or the set of trained models can be otherwise suitably configured.

In a preferred set of variations, the set of trained models and/or algorithms 150 is configured to detect high frequency phonemes from audio information collected in an environment of the user. In specific examples, the set of trained models and/or algorithms is further specifically configured to robustly (e.g., repeatedly, reliably, with an occurrence of false positives below a predetermined threshold, with an occurrence of false negatives below a predetermined threshold, etc.) detect high frequency phonemes in the presence of noise, which is conventionally difficult to distinguish from high frequency phonemes due to their similarities. Additionally or alternatively, the set of trained models and/or algorithms can be otherwise configured, such as for any or all of the outcomes described above.
The trained models and/or algorithms preferably include one or more deep learning models and/or algorithms, further preferably one or more neural networks. Additionally or alternatively, the trained models and/or algorithms can include other machine learning models and/or machine learning algorithms. Further additionally or alternatively, the processing subsystem can implement any number of non-trained models or other tools, such as, but not limited to, any or all of: a set of algorithms, a set of equations, a set of programmed rules, and/or any other tools.

In preferred variations including one or more neural networks, the neural networks preferably include a set of recurrent neural net layers which function to detect a set of high frequency phonemes in audio in the environment of the user. The neural network further preferably includes one or more convolutional layers, which function to improve the detection of time and/or frequency features in the collected audio, thereby providing additional information which the neural network can use to make decisions (e.g., deciding between two different phonemes). The convolutional layer preferably includes a lookahead mechanism (e.g., with a future context size of at least one frame) to provide this benefit, but can additionally or alternatively be otherwise configured and/or designed. Additionally or alternatively, the neural networks can be any or all of: absent of recurrent neural net layers, absent of convolutional layers, and/or can include any other architecture.

In a specific example, the set of trained models includes a neural network including multiple (e.g., 3, 2, 4, greater than 4, etc.) recurrent neural network (RNN) layers and one or more (e.g., 1, 2, 3, more than 3, etc.) convolutional neural network (CNN) layers. In this specific example, RNNs are chosen (e.g., over long short-term memory [LSTM] layers) specifically due to their ability to be implemented with low latency (e.g., less than 50 milliseconds [ms], less than 40 ms, less than 30 ms, less than 20 ms, less than 10 ms, less than 100 ms, etc.) for small networks, which can be deployed on embedded devices (e.g., in the case of a wearable device as described above). In this specific example, for instance, the inventors have discovered that RNNs can be used rather than LSTMs because their performance on small networks is similar to that of LSTMs given the same or similar numbers of trainable parameters, and the RNNs can be evaluated approximately (and/or at least) twice as fast. Additionally or alternatively, other recurrent architectures with built-in state management (e.g., LSTMs, Gated Recurrent Units [GRUs], etc.) can be used, LSTMs can be used in place of RNNs, LSTMs can be used together with RNNs, and/or any other neural network architecture can be implemented on and/or off the device.

The trained model(s) (e.g., a neural network with an RNN and CNN architecture) is preferably trained with one or more loss functions. Additionally, the trained model(s) can optionally be trained with force-aligned labels, which confers benefits in low latency (e.g., as compared to other loss functions and/or processes). In some variations, for instance, the trained model is trained with a cross-entropy loss function and force-aligned labels, which confers low-latency advantages.
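As an illustration of the preferred architecture described above (a small stack of recurrent layers preceded by a convolutional layer with a one-frame lookahead), a minimal PyTorch sketch follows. The layer sizes, the class name PhonemeDetector, and the number of output classes are assumptions for illustration, not the specific trained models of the system.

```python
import torch
import torch.nn as nn

class PhonemeDetector(nn.Module):
    """Small CNN + RNN network for frame-level high-frequency phoneme detection."""
    def __init__(self, n_features=40, hidden=96, n_classes=12):
        super().__init__()
        # A kernel of 3 with symmetric padding gives each frame a context of one
        # past and one future frame, i.e., a one-frame lookahead.
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        # Plain RNN layers instead of LSTMs to keep per-frame latency low on an
        # embedded device.
        self.rnn = nn.RNN(hidden, hidden, num_layers=3, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)  # phoneme classes plus a "none" class

    def forward(self, features):                 # features: (batch, time, n_features)
        x = self.conv(features.transpose(1, 2)).transpose(1, 2)
        x, _ = self.rnn(torch.relu(x))
        return self.out(x)                       # per-frame phoneme logits

# Example: 100 frames of 40-dimensional acoustic features.
logits = PhonemeDetector()(torch.randn(1, 100, 40))
print(logits.shape)   # torch.Size([1, 100, 12])
```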
The trained model is further preferably trained with a constrained loss function, which functions to ensure that the outputs of the trained model can be produced (e.g., and sensory outputs correspondingly provided to the user) within a predetermined time threshold. This can function to enable the intelligibility and/or interpretability of the audio information to be maintained. For instance, in an event that enhanced audio is provided to a user with hearing loss while he or she is conversing with another person, the audio can be enhanced and played back to the user in such a way that intelligibility of the conversation is not sacrificed. In another instance, in an event that tactile stimulation is applied corresponding to naturally occurring audio, the constrained loss function enables the tactile stimulation to be applied contemporaneously with (e.g., overlapping with, partially overlapping, in quick succession with, etc.) the corresponding audio such that the user can interpret it as being applied to the correct audio portions.

In some variations, for instance, the trained model can be trained with a sequence-based loss function (e.g., a Connectionist Temporal Classification [CTC] loss function, a Recurrent-Neural-Network-Transducer [RNN-T] loss function, etc.), which does not require forced alignment and typically has higher and unconstrained latency as compared with forced alignment, but can result in better performance and higher accuracy. In some variations, a reworked CTC loss function (equivalently referred to herein as a latency-constrained CTC loss function) is used which has been designed to prescribe an upper bound on the latency by enforcing certain constraints within the loss function algorithm and prohibiting even a theoretical possibility of delayed phoneme emission. Additionally or alternatively, any other CTC loss function can be used and/or the CTC loss function can be otherwise altered. The latency is preferably less than and/or at most equal to 150 milliseconds, but can additionally or alternatively be any or all of: greater than 150 milliseconds (e.g., 200 milliseconds, between 150-200 milliseconds, between 150 and 300 milliseconds, greater than 300 milliseconds, etc.), less than 150 milliseconds (e.g., 100 milliseconds or less, 50 milliseconds or less, 25 milliseconds or less, between 0 and 100 milliseconds, etc.), and/or any other value. For some use cases and individuals, for instance, a latency greater than 150 milliseconds can result in the brain having difficulty properly interpreting and/or correlating the two incoming signals. Additionally or alternatively, any other loss functions and/or training processes can be used.

The trained model can additionally or alternatively be trained with curriculum learning, which can function to improve the detection of audio features by training the model and/or models with the simplest subset of data first (e.g., the easiest data to learn from, the least complex audio data, the least noisy audio data, the most clear audio data, the most intelligible audio data, etc.) and then increasing the complexity (e.g., adding noise, incorporating complex room configurations, etc.) of the training data in subsequent training processes. In a set of examples, for instance, the set of trained models is first trained with clean audio data, then trained with audio data incorporating noise, and then trained with augmented audio data (e.g., including multiple speakers, including multiple speakers and noise, including audio artifacts, etc.).
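The curriculum-learning idea described above (training on the cleanest data first and then progressively adding noise and augmentation) might be organized as in the following sketch; the stage ordering follows the example above, while the function names and epoch counts are hypothetical.

```python
def train_with_curriculum(model, stages, train_one_epoch, epochs_per_stage=5):
    """Train `model` on progressively harder datasets.

    `stages` is an ordered list of (name, dataset) pairs, easiest first
    (e.g., clean speech, then speech mixed with noise, then fully augmented
    audio with multiple speakers and artifacts). `train_one_epoch` is whatever
    supervised training step the system uses."""
    for stage_name, dataset in stages:
        for _ in range(epochs_per_stage):
            train_one_epoch(model, dataset)

# Example usage with placeholder datasets:
# stages = [("clean", clean_ds), ("noisy", noisy_ds), ("augmented", augmented_ds)]
# train_with_curriculum(model, stages, train_one_epoch)
```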
The training data for the trained model preferably includes phoneme time step data obtained through a ground truth identification process (e.g., a forced alignment process, as shown in FIG. 8, etc.), wherein audio training data is divided into frames and labeled with any phonemes present. In specific examples, for instance, a Hidden Markov Model [HMM] based forced aligner is used to annotate speech data on a phonetic level, and as a result of the forced alignment process, timestamps indicating start and end positions of each phoneme in the audio file are produced. Additionally or alternatively, any or all of the training data can be hand labeled, any or all of the training data can be labeled (e.g., with ground truth labels) using a larger non-constrained neural network, and/or any or all data can be otherwise labeled. In some variations, a majority of the training data used to train models for high frequency phoneme detection and/or other high frequency features is audio speech data from women and children rather than audio speech data from men, as the inventors have discovered that this can improve performance of one or more trained models (e.g., due to the higher occurrence of high frequency phonemes in speech from women and children). Additionally or alternatively, any other training data and/or composition of training data can be used. The trained model(s) is further preferably trained to be robust for high frequency audio (e.g., high frequency phoneme) detection. This preferably includes, for instance, training the model to be robust to far field high frequency audio features. In some variations, for instance, this includes performing a set of simulations as part of the training process, wherein the set of simulations is configured to simulate various room conditions and arrangements of audio sources within a room. For instance, in some variations, the simulations place a virtual sound source at a first random location within a virtual room and place a microphone at a second random location within the virtual room. As the virtual sound source is played, the audio received at the microphone can take into account reflection, reverberation, and/or any other properties arising from the room and the locations of the virtual sound source and virtual microphone. Additionally or alternatively, the room properties can be adjusted (e.g., size of room, height of room, shape of room, number of walls in room, etc.), obstacles can be arranged between the speaker and the microphone (e.g., a partially shut door, a curtain, furniture, etc.), and/or any other properties can be adjusted. In a set of specific examples, room acoustics and propagation of sound waves within the room are simulated using an impulse response augmentation method. In this method, multiple (e.g., hundreds, thousands, etc.) rooms with random but realistic shapes are generated and their responses captured based on random placements of audio source(s) and microphone(s). In preferred specific examples, the impulse response augmentation method is performed with zero-phase impulse response augmentation, which confers the benefit of not introducing a time delay to the signal, thereby making it easier to synchronize processed audio with the associated force-aligned labels in the training process. Additionally or alternatively, a non-zero phase can be implemented in the room impulse response process.
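The following sketch illustrates the general idea of impulse-response augmentation with a zero-phase response; the exponentially decaying noise stands in for a simulated room response (a real pipeline would use a room-acoustics simulator), and the sampling rate and decay time are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def synthetic_rir(fs=16000, rt60=0.4):
    """Stand-in for a simulated room impulse response: exponentially decaying
    noise reaching roughly -60 dB after rt60 seconds."""
    n = int(rt60 * fs)
    decay = np.exp(-6.9 * np.arange(n) / n)
    return rng.standard_normal(n) * decay

def zero_phase(rir):
    """Zero-phase version of the impulse response: keep the magnitude response
    and drop the phase, so the augmentation adds no time delay to the signal."""
    mag = np.abs(np.fft.rfft(rir))
    return np.fft.irfft(mag, n=len(rir))

def augment(speech, fs=16000):
    rir = zero_phase(synthetic_rir(fs))
    wet = fftconvolve(speech, rir, mode="full")[: len(speech)]
    return wet / (np.max(np.abs(wet)) + 1e-9)          # keeps labels time-aligned

speech = rng.standard_normal(16000)                    # 1 s of placeholder audio
augmented = augment(speech)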
Additional or alternative to room simulations, one or more simulations can include mixing in noise with randomized signal-to-noise ratio (SNR) values. The noise can include any or all of: artificially generated white noise, recorded real-world background noise, and/or any other noise. In specific examples, to reduce false positives of the method due to noise corresponding to aggressive environmental sounds (e.g., a door closing, a hair dryer, a barking dog, etc.), a mixture of artificially synthesized impact noises can be used in simulations. Training one or more models can additionally or alternatively include one or more transfer learning processes, wherein the transfer learning process uses the training of a larger neural network, which achieves superior results due to its larger size, to train a smaller neural network which is ultimately implemented (e.g., due to computing constraints on an embedded device). In specific examples, this is performed with a student-teacher approach, but can additionally or alternatively include any other transfer learning approaches. Additionally or alternatively, any or all of the models can be otherwise trained, additionally trained, untrained, and/or otherwise configured. In variations in which the trained models are processed onboard a device (e.g., embedded device, wearable, mobile device, etc.) with limited processing/computing resources, the training process can include one or more size reduction processes, which function to enable the model(s) to be executed on this device with its available (e.g., limited) compute and/or with a minimized latency. The size reduction process(es) can optionally include a quantization process, which functions to reduce model size (e.g., by about 3×, thereby enabling 4 models to fit in the same storage size as 1 before quantization) and/or memory consumption by representing model parameters and/or weights with more compact data types of lower, yet sufficient, numerical precision. Additionally or alternatively, the quantization process can function to enable the model(s) to run faster on the device and/or result in improved (e.g., smaller) inference latency. The quantization process can, however, result in a decrease in prediction accuracy of the trained model(s). To prevent and/or minimize this, the method 200 can include designing and/or applying quantization-aware techniques (e.g., during training) configured to constrain (e.g., squeeze) the dynamic range of neural network weights and activations, such as, but not limited to, a distribution reshaping process. Additionally, one or more penalties (e.g., an infinity norm penalty) can optionally be applied during quantization-aware training. The size reduction process can further additionally or alternatively include a shrinking process, which functions to minimize the size (e.g., computational size, computational requirements, size of integers in the model, etc.) of the trained models, and/or any other size reduction processes. The shrinking process is preferably performed such that it prevents and/or minimizes performance degradation, but can otherwise be suitably performed. Additionally or alternatively, the processing subsystem can include and/or implement any other models, trained in any suitable way and/or untrained.
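As a simple illustration of the quantization idea (not the quantization-aware training itself), the sketch below applies symmetric per-tensor int8 quantization to one layer's float32 weights, showing the roughly 4x storage reduction and the bounded rounding error; a deployed system would typically rely on a framework's quantization toolchain.

import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization of a float32 weight array."""
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(96, 96).astype(np.float32)         # one layer's weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes, "->", q.nbytes, "bytes")                # 36864 -> 9216 (about 4x smaller)
print("max abs error:", float(np.max(np.abs(w - w_hat))))   # on the order of scale/2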
The processing subsystem can additionally or alternatively include and/or implement one or more models and/or algorithms (e.g., trained models and/or algorithms, same as any or all of those described above, separate and distinct from any or all of those described above, etc.) which function to pre-process the audio information (e.g., audio signal). Additionally or alternatively, pre-processing the audio information can be performed with one or more rule-based tools (e.g., algorithm, natural language processing tool, decision tree, lookup table, etc.) and/or any combination of tools. In a preferred set of variations, for instance, the processing subsystem implements one or more classifiers (e.g., as shown inFIG.13), which function to inform the selection of one or more models and/or algorithms described above (e.g., and used later in the method200). The classifiers can include for instance, an accent-detection classifier, which functions to detect a particular accent associated with one or more users (e.g., user speaking to a user with a hearing impairment, user with the hearing impairment, etc.), where the particular accent can be used to: inform the selection of one or more downstream models for audio processing (e.g., models specifically trained based on that accent); determine how audio should be adjusted (and/or the associated parameters for adjusting audio) for a user (e.g., based on the particular intonations associated with that accent); and/or be otherwise used in processing and/or providing the adjusted audio to the user. The set of classifiers can additionally or alternatively be configured to determine any or all of: a sex associated with a user (e.g., such that a pitch of high frequency phonemes is adjusted more for female speakers than male speakers); a number of users participating in conversation; a topic of the conversation; a level of noise present in the environment; a distance of the speaker(s) from the user; and/or any other features of the audio. The processing subsystem can further additionally or alternatively include and/or implement one or more models and/or algorithms configured to determine and/or produce a transformation for providing sensory outputs to the user. In variations, for instance, in which altered audio is provided to a user, the audio is preferably altered with a transformation, which specifies the alteration of the original audio to be performed for its enhancement. The enhancement is preferably configured specifically for the user's particular hearing difficulties (e.g., high frequency hearing loss or specific to their hearing loss characteristics, possible defined by an audiogram of their hearing loss), but can additionally or alternatively be configured to optimize for any number of objectives, such as, but not limited to, any or all of: an intelligibility or other quality of the enhanced audio (e.g., according to a Perceptual Evaluation of Speech Quality [PESQ] metric, according to a short-time objective intelligibility [STOI] measure, according to a Hearing Aid Speech Perception Index [HASPI] metric, etc.), one or more phenomena associated with auditory perception (e.g., loudness recruitment effect), preferences of the user, and/or otherwise configured. The transformations can be nonlinear, linear, or any combination. The transformation(s) used to alter the audio signal are preferably determined with a set of one or more trained models (e.g., machine learning models, deep learning models such as neural networks, trained classifiers, etc.). 
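A minimal sketch of how classifier outputs might gate model selection is shown below; the classifier interfaces, registry keys, and model names are hypothetical and stand in for whatever accent and noise classifiers a given variation uses.

# Hypothetical registry of downstream models keyed by detected accent and
# noise condition; the classifiers themselves are assumed to exist already.
MODEL_REGISTRY = {
    ("en-US", "quiet"): "phoneme_detector_enus_clean",
    ("en-US", "noisy"): "phoneme_detector_enus_noise",
    ("en-GB", "quiet"): "phoneme_detector_engb_clean",
}

def select_model(audio_frames, accent_classifier, noise_classifier):
    accent = accent_classifier(audio_frames)            # e.g. "en-GB"
    noise = "noisy" if noise_classifier(audio_frames) > 0.5 else "quiet"
    # Fall back to a default model when no specialized one is available.
    return MODEL_REGISTRY.get((accent, noise), "phoneme_detector_default")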
The one or more trained models can be trained based on data associated with the particular user (e.g., one or more audiograms of the user), trained based on data aggregated from users having a similar audio perception as the user (e.g., other users having high frequency hearing loss, where a user is assigned a model based on similarity of his or her audiogram with other users used to train the model, etc.), trained based on predicted (e.g., simulated) data (e.g., a predicted profile of a user's hearing loss based on an audiogram or other data from the user), trained based on any other data, and/or trained based on any combination of data. The trained models can optionally further be trained to penalize outcomes which would sound abnormal to the user. In some cases, for instance, in which a pitch of an audio feature is adjusted (e.g., decreased for high frequency phonemes), if the pitch is altered too much, the resulting audio can sound abnormal and/or unintelligible. To prevent this, the transformation(s) can optionally be trained or designed with any or all of a word recognizer program, a phoneme recognizer problem, a speaker identification model, or any model that minimizes distance in embeddings learned through self-supervised, semi-supervised, or fully-supervised machine learning, which penalizes particular adjustments of the audio which would or could be unintelligible to the user. Additionally or alternatively, any or all of the transformations can be determined in absence of a learned model (e.g., through static/predetermined adjustments of parameters), with one or more rule-based tools and/or processes (e.g., rule-based decision trees, lookup tables, etc.), and/or any combination. In a first set of variants, the transformation includes a filter (e.g., inverse filter) configured to enhance the audio in a way which effectively reverses the effects of hearing loss associated with the user, where the filter is preferably determined with a trained neural network, but can additionally or alternatively be otherwise suitably determined. In specific examples, the filter is determined in accordance with a loudness recruitment effect, but can additionally or alternatively be determined in accordance with any other auditory phenomena, and/or in absence of any of these phenomena. In a specific example of the first set of variants, an audiogram or other hearing assessment of the user is used to simulate (e.g., with a hearing loss simulator) a hearing loss (e.g., type of hearing loss, parameters associated with user's hearing loss, severity of user's hearing loss, etc.) of the user, which is then used for any or all of: selecting a filter for the user, designing a filter for the user, training a model specific to the user which is used to produce the filter, and/or otherwise used to determine a filter for the user and/or other similar users. In a second set of variants, additional or alternative to the first, for users experiencing high frequency hearing loss, a neural network can be trained to find the optimal (with respect to one or more underlying metrics and/or heuristic functions) frequency transposition or any other suitable frequency transformation which represents the adjustment to the detected audio features (e.g., adjustment to the energy of high frequency phonemes). Additionally or alternatively, the processing subsystem can implement and/or interface with any other tools (e.g., models, algorithms, etc.) configured to perform any or all of the method200, and/or any other processes. 
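For illustration, the sketch below derives per-band gains from an audiogram using a simple half-gain-style rule capped at a maximum gain; this rule and the example thresholds are stand-ins for the learned filter or frequency transformation described above, not the method itself.

import numpy as np

# Audiogram: hearing threshold in dB HL per frequency (illustrative values
# for a sloping high-frequency loss).
audiogram = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 55, 8000: 70}

def band_gains(audiogram, max_gain_db=30.0):
    """Half-gain-style rule: amplify each band by half its threshold, capped
    to avoid discomfort. A learned filter would replace this heuristic."""
    return {f: min(thr / 2.0, max_gain_db) for f, thr in audiogram.items()}

def apply_gains(band_levels_db, gains):
    # band_levels_db: per-band levels (dB) from an analysis filterbank.
    return {f: band_levels_db[f] + gains.get(f, 0.0) for f in band_levels_db}

gains = band_gains(audiogram)        # e.g. {250: 5.0, ..., 8000: 30.0}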
3.4 System—Variations In a first variation of the system 100, the system includes a tactile device including a microphone and a set of tactile actuators configured to be reversibly coupled to a body region of the user, wherein the tactile device includes a processing subsystem arranged at least partially onboard the tactile device, wherein the processing subsystem processes audio information received at the microphone to determine a set of stimulation patterns to be provided through the set of tactile actuators. In a first specific example (e.g., as shown in FIG. 5), the tactile device is configured to be worn on a wrist region of the user. In a second specific example (e.g., as shown in FIG. 6), the tactile device is configured to be worn as a vest at the torso of the user. Additionally or alternatively, the tactile device can be otherwise coupled to the user and/or remotely arranged from the user. In a second variation of the system 100, the system includes a tactile device including a microphone and a set of tactile actuators configured to be reversibly coupled to a body region of the user, wherein the tactile device includes a processing subsystem arranged at least partially offboard the tactile device and optionally partially onboard the tactile device, wherein the processing subsystem processes audio information received at the microphone to determine a set of stimulation patterns to be provided through the set of tactile actuators. In a third variation of the system 100, the system includes and/or interfaces with a user device which records audio information from an environment of the user (e.g., with a set of microphones), receives audio information from another user (e.g., during a phone call), and/or retrieves audio information from an audio data source, and processes the audio with a processing subsystem (e.g., onboard the user device, remote from the user device, etc.) implementing a set of models and/or algorithms, wherein the processing subsystem processes the audio information to determine altered audio information to provide to any or all users. Additionally or alternatively, the system 100 can include any other suitable components. 4. Method As shown in FIG. 2, a method 200 for providing information to a user includes and/or interfaces with any or all of: receiving a set of inputs S210; processing the set of inputs to determine a set of sensory outputs S220; and providing the set of sensory outputs S230. Additionally or alternatively, the method 200 can include and/or interface with any other processes. Further additionally or alternatively, the method 200 can include and/or interface with any or all of the processes described in U.S. application Ser. No. 14/750,626, filed 25 Jun. 2015, U.S. application Ser. No. 15/661,934, filed 27 Jul. 2017, U.S. application Ser. No. 15/696,997, filed 6 Sep. 2017, U.S. application Ser. No. 15/795,054, filed 26 Oct. 2017, U.S. application Ser. No. 15/959,042, filed 20 Apr. 2018, U.S. application Ser. No. 17/033,433, filed 25 Sep. 2020, and U.S. application Ser. No. 17/144,076, filed 7 Jan. 2021, each of which is incorporated in its entirety by this reference, or any other suitable processes performed in any suitable order. The method 200 can be performed with a system 100 as described above and/or any other suitable system.
4.1 Method—Receiving a Set of Inputs S210 The method200can optionally include receiving a set of inputs S210, which functions to receive information with which to determine any or all of: a set of tactile outputs (e.g., to be conveyed at a tactile device); an altered audio signal to be provided to a user to increase an intelligibility associated with the original audio signal; any other sensory outputs; and/or can perform any other suitable functions. Additionally or alternatively, S210can function to receive inputs with which to determine one or more operation modes of the system, determine one or more parameters associated with the system, and/or can perform any other function(s). S210is preferably performed initially in the method200, but can additionally or alternatively be performed multiple times during the method200(e.g., at a predetermined frequency), in response to another process of the method200, during another process of the method200, and/or at any other times. Alternatively, the method200can be performed in absence of S210. The set of inputs are preferably received at an audio input device, such as that arranged at any or all of: a tactile device, a user device, any other supplementary device, any combination, and/or at any other devices. Additionally or alternatively, inputs can be received at any or all of: one or more sensors, a client application, one or more input components (e.g., buttons, switches, etc.) of a device, a database and/or library, and/or at any other components. The inputs can be received at any or all of: continuously, at a predetermined frequency, at a predetermined set of intervals, at a random set of intervals, in response to a trigger, and/or at any other times. Alternatively, the method200can be performed in absence of receiving a set of inputs S210. The set of inputs received in S210preferably includes audio information from an environment of the user and/or from an audio source (e.g., another user speaking to the user locally and/or remotely, an audio file, etc.) which a user is listening to, wherein the audio inputs received (e.g., at a microphone of the tactile device, at a microphone of a user device in communication with the tactile device, etc.) are used to determine and/or trigger sensory outputs (e.g., haptic stimulation, altered audio, etc.), such as contemporaneously with (e.g., in quick succession after, overlapping with, partially overlapping with, in real time, in near real time, with negligible delay, etc.) the occurrence of the audio information. The audio input is preferably in the form of an audio signal, further preferably an audio waveform (e.g., single-channel audio signal, dual-channel audio signals, multi-channel audio signals in the case of multiple microphones, etc.), wherein the audio waveform can be processed in accordance with any or all of the subsequent processes of the method. Additionally or alternatively, any other suitable audio input(s) can be received. The audio input is preferably received at a microphone of a sensor subsystem of the system, such as a microphone (e.g., single microphone, multiple microphones, etc.) onboard a housing of a wearable tactile device, but can additionally or alternatively be received from a microphone of a separate sensor subsystem (e.g., onboard a user device), a remote computing system, and/or any other suitable sensor or information source. 
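A minimal capture loop of the kind implied above is sketched below, assuming a single microphone, a 16 kHz sample rate, 20 ms blocks, and the python-sounddevice package; the block size and queue-based hand-off are illustrative choices rather than requirements of the method.

import queue
import sounddevice as sd             # assumes the python-sounddevice package

SAMPLE_RATE = 16000
FRAME_SAMPLES = 320                  # 20 ms blocks at 16 kHz
frames = queue.Queue()

def on_audio(indata, n_frames, time_info, status):
    # indata: (n_frames, channels) float32 samples; push mono chunks downstream.
    frames.put(indata[:, 0].copy())

stream = sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                        blocksize=FRAME_SAMPLES, callback=on_audio)
with stream:
    chunk = frames.get()             # hand each 20 ms chunk to the processing step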
The audio signal can optionally include speech information from one or more users, such as speech occurring during a call (e.g., phone call, video call, etc.), speech naturally occurring in an environment of a user (e.g., collected at a user device of the user, collected at a hearing aid of the user, conveyed through a speaker in an environment of the user such as at an event and/or concert, etc.), speech occurring in recorded and/or pre-recorded audio (e.g., podcast, audio book, television show, etc.), and/or any other speech. The audio signal can additionally or alternatively include non-speech audio, such as, but not limited to, any or all of: music, environmental sounds (e.g., crosswalk indicators, sirens, alarms, etc.), and/or any other sounds. The audio signal can be collected dynamically and continuously, such as while a user is participating in a call with a user, while the user is conversing with a friend, while the user is detecting speech in his or her environment, and/or during any other dynamic interactions. Additionally or alternatively, an audio signal can be received as a whole and/or at a single time prior to the user's listening, such as in an event of pre-recorded audio like an audiobook, podcast, and/or any other audio. Further additionally or alternatively, pre-recorded audio can be processed dynamically and continuously, such as during playback by the user. The audio signal can be received from any or all of: a user device of the user (e.g., via a calling application and/or platform executing on the user device, via a 3rdparty client application executing on the user device, etc.); a microphone (e.g., onboard a user device of the user, remote from the user device, in an environment of the user, etc.); retrieved from a database and/or server (e.g., cloud); and/or received from any other suitable sources. In a first set of variants, an audio signal is collected from a user device (e.g., smartphone) while a user is on a phone call with a second user, wherein the speech from the second user is processed and enhanced for easier perception by the user. In a second set of variants, recorded audio (e.g., from a podcast, audio book, movie, etc.) is processed to determine enhanced audio, which can then be played back to the user (e.g., at a later time, upon initiation by the user, etc.). In a third set of variants, an audio signal which is dynamically received at a hearing aid of the user is processed and enhanced for easier perception by the user. The inputs can additionally or alternatively include any or all of: user preferences and/or other user inputs (e.g., indicating a user condition, loudness preferences, tinnitus frequency associated with the user's condition, operation mode selection, etc.), and/or any other inputs. 4.2 Method—Processing the Set of Inputs to Determine a Set of Sensory Outputs S220 The method200can include processing the set of inputs to determine a set of sensory outputs S220, which functions to process the information received in S210in order to determine a set of outputs (e.g., tactile outputs, enhanced and/or otherwise altered audio outputs, etc.) to be provided to the user. Additionally or alternatively, S220can perform any other suitable functions. 
S220 is preferably performed based on and in response to S210, but can alternatively be performed in absence of S210, multiple times during the method 200, in response to and/or during another process of the method 200, and/or at any other suitable times or in response to any other information and/or triggers. S220 is preferably performed with a processing subsystem (e.g., as described above), and further preferably with a set of models and/or algorithms (e.g., as described above), but can additionally or alternatively be performed with any suitable processors, computers, and/or combination of devices. In preferred variations, S220 is at least partially performed at the edge with embedded processing, such as at a sensory output device (e.g., a user device of the user, a tactile device, etc.), where the audio signal for processing is received at the sensory output device (e.g., through a calling application, through a microphone of the device, etc.). This can function to promote privacy of the user and his or her information, reduce latency associated with processing and providing audio signals to the user, and/or enable energy consumption in executing the method to be minimized. Additionally or alternatively, any or all of S220 can be performed remote from the sensory output device (e.g., at a cloud computing system), at another device, at multiple devices, at a combination of locations, and/or at any other locations. S220 can optionally include pre-processing the audio signal (e.g., as shown in FIG. 12), which functions to prepare the audio for further processing. Pre-processing the audio signal can include windowing a time series audio signal into a set (e.g., series) of frames (e.g., overlapping frames, partially overlapping frames, etc.), which can be individually processed, processed in parallel, processed partially in parallel, and/or otherwise processed. Alternatively, an audio signal can be processed in absence of windowing, such as in an event where an entire audio signal has been recorded and will be played back to the user (e.g., in the form of an audio book). Additionally or alternatively, pre-processing the audio signal can include buffering any or all of the audio signal. Pre-processing the audio signal can optionally include running one or more classifiers (e.g., as shown in FIG. 13), which functions to inform the selection of one or more models and/or algorithms used later in the method 200. The classifiers can include, for instance, an accent-detection classifier, which functions to detect a particular accent associated with one or more users (e.g., a user speaking to a user with a hearing impairment, the user with the hearing impairment, etc.), where the particular accent can be used to: inform the selection of one or more downstream models for audio processing (e.g., models specifically trained based on that accent); determine how audio should be adjusted (and/or the associated parameters for adjusting audio) for a user (e.g., based on the particular intonations associated with that accent); and/or be otherwise used in processing and/or providing the adjusted audio to the user.
The set of classifiers can additionally or alternatively be configured to determine any or all of: a sex associated with a user (e.g., such that a pitch of high frequency phonemes is adjusted more for female speakers than male speakers); a number of users participating in conversation; a topic of the conversation; a level of noise present in the environment; a distance of the speaker(s) from the user; and/or any other features of the audio. In a set of variations, for instance, S220includes pre-processing the audio signal with a set of classifiers to select and/or otherwise inform (e.g., select weights for) the trained models and/or algorithms used to process the audio signal (e.g., to detect a set of high frequency phonemes as described below) to produce an altered audio signal, which functions to alter the audio signal for enhanced perception by the listening user. In another set of variations, pre-processing the audio signal with a set of classifiers enables selection of a set of trained models for use in providing tactile outputs to the user. S220preferably includes detecting a set of audio features associated with the set of audio signals S220(e.g., as shown inFIG.2), which functions to determine which parts of the audio should be enhanced, provided through other sensory outputs (e.g., tactile outputs), and/or otherwise further processed. Alternatively, all of the audio signal can be further processed and/or the audio signal can be processed in absence of detecting features. In a set of preferred variations, the set of audio features includes phonemes, further preferably high frequency phonemes, but can additionally or alternatively include any or all of: particular words, particular sounds (e.g., sirens, alerts, etc.), particular phrases/expressions, speech associated with a detected context, and/or any other features. The set of audio features can further additionally or alternatively include the absence of audio, such as a detection that a portion of audio is missing (e.g., portion of user's speech was cut off, portion of user's speech was uninterpretable/unintelligible [e.g., due to the simultaneous occurrence of a loud sound in his or her environment], etc.) from the audio signal and should be replaced with intelligible audio (e.g., based on a prediction of what the speech contained and/or should have contained, based on a prediction of which phoneme was most likely to have occurred, etc.), such as replacement audio, enhanced replacement audio (e.g., if the missing audio corresponds to a high frequency phoneme), and/or any other audio. Additionally or alternatively, any other audio features can be detected. The audio features are preferably detected with one or more trained models (e.g., as described above, a trained classifier for phoneme detection, etc.), which function to automatically detect audio features in the audio signal. Additionally or alternatively, the set of trained models can be configured for any or all of: distinguishing audio features (e.g., high frequency audio features) from noise, detecting audio features in both near and far field conditions, detecting audio features with only a single microphone, detecting audio features with multiple microphones, robustly detecting audio features with limited computing (e.g., onboard a wearable device), detecting audio features with low latency, optimizing for a tradeoff between low latency and high performance, and/or any the set of trained models can be otherwise suitably configured. 
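The sketch below shows the overall shape of streaming feature detection as described above: the incoming signal is windowed into overlapping frames and each frame is passed to a per-frame classifier. The detect() function here is a random placeholder standing in for the trained model, and the frame length, hop, phoneme set, and threshold are assumptions.

import numpy as np

FRAME_LEN, HOP = 400, 160            # 25 ms frames, 10 ms hop at 16 kHz
HIGH_FREQ_PHONEMES = ["s", "z", "f", "th", "t", "d"]   # illustrative set

def frame_stream(signal):
    """Window the incoming signal into overlapping, Hann-weighted frames."""
    for start in range(0, len(signal) - FRAME_LEN + 1, HOP):
        yield signal[start:start + FRAME_LEN] * np.hanning(FRAME_LEN)

def detect(frame, threshold=0.7):
    """Placeholder for the trained per-frame phoneme classifier; a real system
    would return calibrated probabilities from the neural network."""
    probs = np.random.dirichlet(np.ones(len(HIGH_FREQ_PHONEMES) + 1))
    best = int(np.argmax(probs))
    if best < len(HIGH_FREQ_PHONEMES) and probs[best] > threshold:
        return HIGH_FREQ_PHONEMES[best]
    return None                      # last class or low confidence = no event

audio = np.random.randn(16000)       # 1 s of placeholder audio
events = [(i * HOP / 16000, p) for i, frame in enumerate(frame_stream(audio))
          if (p := detect(frame)) is not None]         # (time_s, phoneme) pairs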
Further additionally or alternatively, detecting audio features can include one or more manual processes (e.g., rule-based algorithms, rule-based decision trees, etc.), any combination of learned and manual processes, any other processes, and/or any or all of the processes described in U.S. application Ser. No. 17/144,076, filed 7 Jan. 2021, and U.S. patent application Ser. No. 15/696,997, filed 6 Sep. 2017, each of which is incorporated herein in its entirety by this reference. In a preferred set of variations involving phoneme detection (e.g., high frequency phonemes), S220includes implementing one or more trained models (e.g., neural networks) to process audio information and identify high frequency phonemes (e.g., /th/, /f/, /s/, /h/, /k/, /z/, /b/, /dh/, /t/, /d/, /v/, etc.) present in the audio (e.g., from a particular user, from a user speaking, etc.). In examples of this preferred set of variations, the set of trained models includes one or more neural networks, the neural networks including a set of recurrent neural net layers which function to detect a set of high frequency phonemes (e.g., specific to the user's hearing loss, fixed for all users with high frequency hearing loss, etc.). The neural network further preferably includes one or more convolutional layers, which function to improve the detection of time and/or frequency features in the collected audio, thereby providing additional information with which the neural network can use to make decisions (e.g., deciding between two different phonemes). The convolutional layer preferably includes a lookahead mechanism to provide this benefit, but can additionally or alternatively be otherwise configured and/or designed. Additionally or alternatively, the neural networks can be any or all of: absent of recurrent neural net layers, absent of convolutional layers, and/or can include any other architecture. Further additionally or alternatively, any other audio features can be detected in the audio signal(s) in any suitable way(s). In a first set of variations, S220includes determining a set of tactile outputs based on the audio information received in S210. Determining the set of tactile outputs preferably includes determining a set of stimulation patterns (e.g., as described above) based on audio information received in S210and a set of models (e.g., set of trained models as described above). In variations including an actuation subsystem having a set of one or more tactile actuators, determining tactile outputs can include determining any or all of: which actuators to actuate, which parameters to actuate the actuators with (e.g., frequency, amplitude, temporal parameters, etc.), and/or any other information. Additionally or alternatively, the tactile outputs can include any other features and/or parameters. In some examples, for instance, any or all of the features and/or parameters associated with the tactile output (e.g., stimulation patterns) are determined based on the audio input received at a microphone of the tactile device, such as which (if any) high frequency phonemes are present in the audio input. In preferred variations, for instance, at least a location or set of locations of tactile stimulation is determined based on the detection of a particular high frequency phonemes and optionally one or more mappings between the high frequency phonemes and the set of actuators (e.g., as shown inFIG.9). 
The phonemes can be mapped to a location in a 1:1 fashion, multiple phonemes can be mapped to a single actuator (e.g., collectively forming a "super phoneme" for phonemes which sound alike, such as /d/ and /t/ or others), a single phoneme can be mapped to multiple actuators, and/or any combination. Additionally or alternatively, phonemes or other features can be distinguished based on any of the other actuation parameters, such as intensity of vibration, frequency of vibration, temporal parameters of vibration (e.g., pulse duration), spatial parameters (e.g., single actuator vs. multiple actuators), other features of vibration (e.g., sweeping pulses), textural features (e.g., an actuator taking on different intensity values over time), an increasing or decreasing intensity over time for an actuator, and/or any other features. In some specific examples, for instance, any or all of the actuation parameters can take into account a priority associated with a detected phoneme, such as whether or not the phoneme is classified as coming from speech or coming from the environment. The mappings are preferably predetermined (e.g., such that the user can learn these mappings and thereby interpret the tactile information), but can additionally or alternatively be dynamically determined. In specific examples, the mappings include any or all of those described in U.S. patent application Ser. No. 15/696,997, filed 6 Sep. 2017, which is incorporated herein in its entirety by this reference. The location can include any or all of: an assignment of a particular tactile actuator in a set of multiple tactile actuators, a location at or between actuators (e.g., for illusion-based tactile stimulation), any combination, and/or any other suitable locations. Additionally or alternatively, any other parameters (e.g., amplitude and/or intensity of vibration) can be determined based on the detected high frequency phoneme(s). Further additionally or alternatively, any other features of the audio input (e.g., loudness) can be used to determine the tactile outputs. Additionally or alternatively, any or all of the features and/or parameters of the tactile output can be determined based on information from the audio input (e.g., high frequency phonemes, loudness, distance of sound source to device, etc.), predetermined (e.g., based on a product specification of the tactile actuator, based on a fixed frequency of an LRA, based on a user preference, etc.), determined based on other information, and/or otherwise determined. Additionally or alternatively, any or all of the tactile output features and/or parameters can be determined with one or more algorithms, such as any or all of the algorithms described in U.S. application Ser. No. 17/144,076, filed 7 Jan. 2021, which is incorporated herein in its entirety by this reference. In a second set of variations, S220 includes altering the audio signal based on the set of audio features to produce an altered audio signal S230 (e.g., as shown in FIG. 12). Additionally or alternatively, the audio signal can be altered based on any other information (e.g., user input, specifics of the user's hearing loss, etc.), in absence of a set of detected features, and/or otherwise suitably processed. Further additionally or alternatively, a new audio signal can be produced, and/or any other audio signal can be determined.
The audio features can be altered (equivalently referred to herein as conditioned) through the adjustment of one or more audio parameters, such as, but not limited to, any or all of: frequency and/or pitch (e.g., decreasing the pitch of a high frequency phoneme, increasing the pitch of a low frequency phoneme, compressing a frequency range such as through dynamic range compression, etc.), volume/energy (e.g., increasing the volume of a high frequency phoneme, increasing the volume of any audio feature which the user has difficulty perceiving, decreasing the volume of noise present in the signal, etc.), duration (e.g., increasing duration of high frequency phonemes or other auditory features which the user has difficulty perceiving), and/or any other adjustments. Additionally or alternatively, parts of the audio signal other than those associated with audio features to be enhanced can be altered. In some variations, for instance, in an event that a certain audio feature will be lengthened in duration, a less crucial or more easily interpretable part of the audio signal (e.g., gap, low frequency phoneme, etc.) can be correspondingly shortened such that the total length of the audio signal remains the same and/or close to its original duration. This can be critical in audio signals received from dynamic conversations, where a repeated increase in duration of a particular user's audio could cause compounded delays as the users try to converse. The audio is preferably altered with a transformation, which specifies the alteration of the original audio to be performed for its enhancement. The enhancement is preferably configured specifically for the user's particular hearing difficulties (e.g., high frequency hearing loss or specific to their hearing loss characteristics, possible defined by an audiogram of their hearing loss), but can additionally or alternatively be configured to optimize for any number of objectives, such as, but not limited to, any or all of: an intelligibility or other quality of the enhanced audio (e.g., according to a Perceptual Evaluation of Speech Quality [PESQ] metric, according to a short-time objective intelligibility [STOI] measure, according to a Hearing Aid Speech Perception Index [HASPI] metric, etc.), one or more phenomena associated with auditory perception (e.g., loudness recruitment effect), and/or otherwise configured. The transformations can be nonlinear, linear, or any combination. The transformation(s) used to alter the audio signal are preferably determined with a set of one or more trained models (e.g., machine learning models, deep learning models such as neural networks, trained classifiers, etc.). The one or more trained models can be trained based on data associated with the particular user (e.g., one or more audiograms of the user), trained based on data aggregated from users having a similar audio perception as the user (e.g., other users having high frequency hearing loss, where a user is assigned a model based on similarity of his or her audiogram with other users used to train the model, etc.), trained based on predicted (e.g., simulated) data (e.g., a predicted profile of a user's hearing loss based on an audiogram or other data from the user), trained based on any other data, and/or trained based on any combination of data. The trained models can optionally further be trained to penalize outcomes which would sound abnormal to the user. 
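As one hedged illustration of conditioning a detected segment, the snippet below lowers the pitch slightly and lengthens the segment using librosa; the two-semitone shift and the stretch factor are arbitrary example values, and keeping the adjustments small reflects the concern, discussed next, that overly aggressive alteration can make the audio sound abnormal.

import numpy as np
import librosa                        # assumes librosa is available

def condition_segment(y, sr=16000, semitones=-2.0, stretch_rate=0.9):
    """Lower the pitch of a detected segment slightly and lengthen it
    (rate < 1 slows playback, i.e. increases duration). Illustrative only."""
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    return librosa.effects.time_stretch(shifted, rate=stretch_rate)

segment = np.random.randn(4000).astype(np.float32)    # 250 ms placeholder segment
conditioned = condition_segment(segment)              # slightly longer, lower pitched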
In some cases, for instance, in which a pitch of an audio feature is adjusted (e.g., decreased for high frequency phonemes), if the pitch is altered too much, the resulting audio can sound abnormal and/or unintelligible. To prevent this, the transformation(s) can optionally be trained or designed with any or all of a word recognizer program, a phoneme recognizer problem, a speaker identification model, or any model that minimizes distance in embeddings learned through self-supervised, semi-supervised, or fully-supervised machine learning, which penalizes particular adjustments of the audio which would or could be unintelligible to the user. Additionally or alternatively, any or all of the transformations can be determined in absence of a learned model (e.g., through static/predetermined adjustments of parameters), with one or more rule-based tools and/or processes (e.g., rule-based decision trees, lookup tables, etc.), and/or any combination. In some examples, the transformation includes an inverse filter configured to enhance the audio in a way which effectively counteracts the effects of hearing loss associated with the user, where the inverse filter is preferably determined with a trained neural network, but can additionally or alternatively be otherwise suitably determined. In specific examples, the inverse filter is determined in accordance with a loudness recruitment effect, but can additionally or alternatively be determined in accordance with any other auditory phenomena, and/or in absence of any of these phenomena. In a specific example, an audiogram or other hearing assessment of the user is used to simulate (e.g., with a hearing loss simulator) a hearing loss (e.g., type of hearing loss, parameters associated with user's hearing loss, severity of user's hearing loss, etc.) of the user, which is then used for any or all of: selecting an inverse filter for the user, designing an inverse filter for the user, training a model specific to the user which is used to produce the inverse filter (e.g., for audio enhancement methods), and/or otherwise used to determine an inverse filter for the user and/or other similar users. In another specific example, additional or alternative to the first, for users experiencing high frequency hearing loss, a neural network can be trained to find the optimal (with respect to one or more underlying metrics and/or heuristic functions) frequency transposition or any other suitable frequency transformation which represents the adjustment to the detected audio features (e.g., adjustment to the energy of high frequency phonemes). S220can additionally or alternatively include determining any other outputs (e.g., visual), and/or any other processes. In a first variation, S220includes processing audio received at a set of one or more microphones with a set of trained models (e.g., as described above) to detect if a high frequency phoneme is present in the audio (e.g., in a most recent frame of audio), wherein in an event that a high frequency phoneme is present, the phoneme is mapped to a particular location and/or set of actuation parameters to be provided at the tactile device. 
In a second set of variations, S220includes optionally pre-processing the audio signal received in S210; detecting a set of features present in the audio with a trained classifier or classifiers; processing the set of features with a learned inverse filter to determine a set of adjustments to be made to the detected features to enhance the audio for the user; and optionally altering other portions of the audio signal (e.g., shortening less important portions of the audio signal to maintain a constant length of the audio signal in an event that some of the detected audio features will be lengthened) in response to determining the set of adjustments. In a specific example of the second set of variations, the inverse filter is selected or built for the user based on information associated with the user's hearing (e.g., an audiogram) and optionally one or more simulations of the user's hearing loss based on the information. Additionally or alternatively, an inverse filter can be determined (e.g., selected, tuned, etc.) for a user according to a user's assignment to a particular subgroup of users based on similarities between the user's hearing loss and the hearing loss of users in that subgroup (e.g., subgroup of users having a similar high frequency hearing loss as determined based on the user's audiogram and/or simulated hearing loss). In a second specific example, additional or alternative to the first, the inverse filter is determined with a model specifically trained for the particular user. In a third set of variations, additional or alternative to those described above, S220includes optionally pre-processing the audio signal received in S210; detecting a set of features present in the audio with a trained classifier or classifiers; processing the set of features with a trained neural network to determine an optimal set of adjustments (e.g., frequency transposition and/or any other suitable frequency transformation) to be made to the detected features to enhance the audio for the user; and optionally altering other portions of the audio signal (e.g., shortening less important portions of the audio signal to maintain a constant length of the audio signal in an event that some of the detected audio features will be lengthened) in response to determining the set of adjustments. In a fourth set of variations, additional or alternative to those above, S220includes optionally pre-processing the audio signal received in S210; detecting a set of features present in the audio with a trained classifier or classifiers; applying a set of predetermined adjustments (e.g., predetermined pitch adjustment for each high frequency phoneme) to the detected audio features to enhance the audio for the user; and optionally altering other portions of the audio signal (e.g., shortening less important portions of the audio signal to maintain a constant length of the audio signal in an event that some of the detected audio features will be lengthened) in response to the set of adjustments. 
In a fifth set of variations, additional or alternative to those above, S220includes optionally pre-processing the audio signal received in S210; detecting a set of features present in the audio with a trained classifier or classifiers; replacing the detected features with predetermined audio; and optionally altering other portions of the audio signal (e.g., shortening less important portions of the audio signal to maintain a constant length of the audio signal in an event that some of the detected audio features will be lengthened) in response to the replaced audio. 4.3 Method—Providing the Set of Sensory Outputs S230 The method200can include providing the set of sensory outputs S230, which functions to provide information to the user which enhances the interpretability and/or intelligibility associated with the audio information. Additionally or alternatively, S230can perform any other functions. The sensory outputs can optionally include tactile outputs, which are preferably provided at an actuation subsystem of a tactile stimulation device, but can additionally or alternatively be elsewhere and/or otherwise provided. The tactile outputs are further preferably provided with a haptic driver (e.g., as described above), but can additionally or alternatively be provided with any other components. In preferred variations, for instance, S230includes, at the distribution of haptic actuators, cooperatively producing a haptic output representative of at least a portion of the input signal through executing control signals at the haptic driver, thereby providing information to the user. In the variations and examples described above, phoneme outputs generated in S220can be encoded, mapped, and delivered through the array of tactile interface devices in a manner similar to the natural timing of speech. In more detail, stimulus provision preferably occurs in real-time or near real-time (e.g., within a time threshold, such as 100 ms, 90 ms, 75 ms, 50 ms, 110 ms, 125 ms, 150 ms, 200 ms, 300 ms, any range between these values, etc.), such that the user perceives tactile feedback substantially simultaneously with reception of input signals in S210(e.g., is unable to discern a delay between the input signal and tactile feedback), with minimal delay. However, delivery of haptic stimuli can alternatively be implemented in a manner that does not mimic natural speech timing. As such, delivery of haptic stimuli can be implemented with any suitable speed, frequency, cadence, pauses (e.g., associated with grammatical components of language), gain (e.g., amplitude of stimulation corresponding to “loudness” or punctuation), pattern (e.g., spatiotemporal pattern played using subarrays of the array of tactile interface devices, etc.), and any other suitable output component. The sensory outputs can additionally or alternatively include audio outputs, such as an altered audio signal provided to the user at a set of audio output devices, which functions to enhance the user's listening experience and interpretation of the audio. The altered audio signal is preferably played back with minimal delay relative to when the audio was recorded, which can function, for instance, to enable natural conversation to occur between users despite implementation of the method100. The delay can optionally be configured to be less than, equal to, or not significantly more than (e.g., less than 2×, within an order of magnitude larger, etc.) the lag typically experienced in phone calls. 
Additionally or alternatively, the delay can be otherwise configured. This minimal delay is preferably less than 100 milliseconds (ms) (e.g., 100 ms or less, 90 ms or less, 80 ms or less, 75 ms or less, 70 ms or less, 60 ms or less, 50 ms or less, 40 ms or less, 30 ms or less, 20 ms or less, 10 ms or less, between 50-100 ms, less than 50 ms), but can alternatively be greater than 100 ms (e.g., 150 ms, between 100 and 150 ms, 200 ms, between 150 and 200 ms, greater than 200 ms, etc.). The altered audio signal can optionally be provided at/through any or all of: a physical speaker (e.g., of a user device), a virtual speaker (e.g., of a user device), a 3rdparty platform (e.g., a teleconferencing platform and/or client application), and/or the altered audio signal can be otherwise suitably provided. Sensory output provision in S230can, however, include any other processes and/or be implemented in any other suitable manner. The method200can additionally or alternatively include any other processes, such as, but not limited to, any or all of: training any or all of the set of models (e.g., as described above) (e.g., as shown inFIG.12), retraining any or all of the set of models (e.g., in response to collecting data from users), preprocessing the set of inputs (e.g., to determine a set of frames as shown inFIG.4), and/or any other processes. 4.4 Method: Variations In a first variation of the method200(e.g., as shown inFIG.7), the method is configured to allow users with high-frequency hearing loss receive tactile information corresponding to high frequency phonemes (e.g., /th/, /f/, /s/, /h/, /k/, /z/, /b/, /dh/, /t/, /d/, /v/, etc.), wherein the method includes: receiving audio information at a set of one or more microphones, processing the audio information with a set of trained models to detect a set of high frequency phonemes, mapping the set of high frequency phonemes to a set of tactile actuators of a tactile device and/or a set of actuation parameters, and actuating the tactile actuators based on the mapping. Additionally or alternatively, the method200can include any other processes. In a specific example, the audio information is received at a single microphone onboard a wearable device configured to be coupled (e.g., reversibly coupled) to the user, such as to a wrist region of the user. In a second variation of the method200(e.g., as shown inFIG.11, as shown inFIG.13, etc.), the method includes receiving audio information from a speaking user; optionally transmitting the audio information (e.g., to a device of a listening user); optionally pre-processing the audio signal to select a set of trained models and/or algorithms; detecting a set of features present in the audio with the selected models and/or algorithms; processing the set of features with a transformation (e.g., learned inverse filter) to determine a set of adjustments to be made to the detected features to enhance the audio for the listening user; and optionally altering other portions of the audio signal (e.g., shortening less important portions of the audio signal to maintain a constant length of the audio signal in an event that some of the detected audio features will be lengthened) in response to determining the set of adjustments; and providing the altered audio signal to the user. 
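To make the first variation above more concrete, a hypothetical phoneme-to-actuator mapping is sketched below; the phoneme set, actuator indices, and drive parameters are invented for the example, since the actual mappings and "super phoneme" groupings are device- and user-specific.

# Hypothetical mapping from detected high frequency phonemes to tactile
# actuator indices and drive parameters.
PHONEME_TO_ACTUATOR = {
    "s":  {"actuator": 0, "amplitude": 0.8, "duration_ms": 60},
    "z":  {"actuator": 0, "amplitude": 0.6, "duration_ms": 60},   # shares /s/ location
    "f":  {"actuator": 1, "amplitude": 0.8, "duration_ms": 60},
    "th": {"actuator": 2, "amplitude": 0.7, "duration_ms": 80},
    "t":  {"actuator": 3, "amplitude": 0.9, "duration_ms": 40},   # /t/ and /d/ form a
    "d":  {"actuator": 3, "amplitude": 0.9, "duration_ms": 40},   # "super phoneme"
}

def stimulation_command(phoneme, loudness=1.0):
    """Return a drive command for a detected phoneme, scaled by loudness."""
    params = PHONEME_TO_ACTUATOR.get(phoneme)
    if params is None:
        return None                            # phoneme not conveyed tactilely
    cmd = dict(params)
    cmd["amplitude"] = min(1.0, cmd["amplitude"] * loudness)
    return cmd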
Although omitted for conciseness, the preferred embodiments include every combination and permutation of the various system components and the various method processes, wherein the method processes can be performed in any suitable order, sequentially or concurrently. Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), contemporaneously (e.g., concurrently, in parallel, etc.), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein. Components and/or processes of the foregoing system and/or method can be used with, in addition to, in lieu of, or otherwise integrated with all or a portion of the systems and/or methods disclosed in the applications mentioned above, each of which is incorporated in its entirety by this reference. Additional or alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer-readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUs, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
90,848
11862148
DETAILED DESCRIPTION Techniques described herein may be utilized to implement systems and methods to analyze contacts data. Contacts data may refer to various types of communications that occur within the context of a contact center. A contact center may refer to a physical or logical unit of an organization that manages customer interactions. A contact center may handle inbound and outbound customer communication over multiple channels such as telephone, web, chat, email, messaging apps, social media, text, fax, traditional mail, and more. Contact centers can make use of various types of advanced technology to help resolve customer issues quickly, to track customer engagements, and to capture interaction and performance data. Contacts analytics service may refer to a service or component of a service, such as a contact center service, that addresses a broad set of core speech analytics use cases without requiring technical expertise of users of the contact center service. In many cases, the users of a contact center service (supervisors and agents) may be trained to use the contact center service but lack technical training to understand how to build and deploy computing infrastructure to perform data analytics. By providing an out-of-the-box experience directly within a contact center service, contacts analytics service can be used by supervisors and agents without requiring additional manual work, configuration, or technical training by employees of an organization that uses a contact center service solution. A computing resource service provider may include various backend services such as data storage services, compute services, serverless compute services, and more. A computing resource service provider may include a backend contact center service, which may be used to offer customers of the computing resource service provider powerful analytics capabilities that enable businesses to improve the quality of their customer interactions without requiring technical expertise. A contact center service may have one or more self-service graphical interfaces that make it easy for non-technical users to design contact flows, manage agents, and track performance metrics, etc. without requiring specialized technical skills. In at least some embodiments, a computing resource service provider configures and manages computing resources that provide the infrastructure for running a contact center service so that businesses do not need to make costly up-front investments into computer servers, information technology infrastructure, etc. In at least one embodiment, contacts analytics service refers to a set of analytics capabilities powered by artificial intelligence and/or machine learning in a contact center service that make it easy for customers (e.g., organizations that use a computing resource service provider to support contact center capabilities) to offer a better customer experience and improve the operational efficiency of organizations' contact centers by extracting actionable insights from customer conversations. In at least one embodiment, contacts analytics service is integrated into a contact center service console and allows supervisors to conduct fast, full-text search on call and chat transcripts, discover themes and emerging trends from customer contacts, and improve agent performance with analytics-based coaching tools.
Contacts analytics service may provide real-time analytics for both supervisors and agents during live calls which can provide actionable insights and suggestions to deliver improved customer support. Supervisors can use contacts analytics service's visual dashboard with call scoring to track all in-progress calls and intervene when customers are having a poor experience. Agents can use contacts analytics service's suggested answers to address live customer queries more effectively. Contacts analytics service does not require technical expertise and can be easily used, taking just a few clicks in the contact center service. A contact center may refer to a service that a business or organization provides to its customers to provide support for those customers. For example, an organization may provide its customers access to a contact center to provide technical support, troubleshoot issues, manage products and services, and more. A contact center service may be one of the only personal connections—or even the only one—an organization's customer has with the organization, and this experience may have a big impact on customer trust and loyalty. A contact center service can be utilized by an organization to field large volumes of customer conversations every day which results in millions of hours of recorded calls. In at least some embodiments, a contact center service provides services to obtain accurate transcripts of calls and uses call data to perform data analytics, identify issues, common themes, opportunities for agent coaching, and various combinations thereof. In some cases, traditional call centers have various shortcomings, such as difficulties in making some or all of the aforementioned functionality available to their non-technical staff, which may result in a need for data scientists and programmers to apply machine learning techniques and manage custom applications over time. As an alternative, such call centers can use existing contact center analytics offerings, but those offerings are expensive, slow in providing call transcripts, and lack the required transcription accuracy. This makes it difficult to quickly detect customer issues and provide objective performance feedback to their agents. The inability of existing tools to provide real-time analytics also prevents supervisors from identifying and helping frustrated customers on in-progress calls before they hang up. Similarly, agents struggle to quickly resolve customers' complex issues, and often put them on hold because finding answers scattered across their enterprise's knowledge base takes a lot of time. As a result of these challenges, many contact centers don't have analytics capabilities that they could use to reduce customer churn, long hold times, agent turnover, and even regulatory fines. Techniques described herein can be utilized to solve some or all of the technical challenges briefly described above. Contacts analytics service may be utilized in the context of a contact center service to allow users of the service to address complex problems with AI-powered analytics capabilities that are available within the contact center service offerings and do not require any coding or ML experience to use.
In various embodiments, contacts analytics service uses highly accurate speech transcription technology to transcribe calls and automatically indexes call transcripts and chat-based interactions so that they are searchable in the contact center service console, which may be a graphical user interface that can be used by non-technical supervisors, and the supervisors can use the console to easily search contacts based on content and filter by sentiment to identify issues such as customers wanting to cancel services, return products, and other issues which may be pertinent to the supervisor's organization. In at least some embodiments, contacts analytics service implements a theme detection feature that allows supervisors to analyze multiple customer conversations and presents a set of themes that are causing increased call volumes, dissatisfied customers, and recurring issues. In at least some embodiments, contacts analytics service presents these themes in an easy-to-understand visual format that helps supervisors quickly respond to customer feedback and to perform remediations, if appropriate. In at least some embodiments, contacts analytics service includes agent coaching capabilities that enable supervisors to find opportunities to increase their agents' effectiveness—for example, contacts analytics service may generate a graphical illustration for past calls that makes it easy for supervisors to spot issues and share feedback with agents by commenting on specific portions of the conversation. Supervisors can track agent compliance with defined categorization rules that provide parameters for how agents interact with customers—for example, a supervisor may review call transcripts to determine how often an agent greets the customer in a call, which may be part of an agent handbook that guides agent behavior to provide a more pleasant and uniform customer experience. Supervisors can also track agent performance by defining categories that organize customer contacts based on content and characteristics such as silence duration, sentiment, talk speed, and interruptions. In at least some embodiments, contacts analytics service provides real-time assistance to supervisors and/or agents. In at least some embodiments, real-time supervisor assistance allows a supervisor to monitor call center analytics data in real-time, which may be aggregated across an entire call center, across specific product or service lines, or even as a view onto a specific agent. In at least some embodiments, contacts analytics service provides a dashboard that shows analysis of all live calls happening in the call center and scores them based on customized criteria such as repeated requests to speak to a manager, yelling, or long silences. In at least some embodiments, a contacts analytics service dashboard allows supervisors to look across live calls and see where they may need to engage and assist in de-escalating a situation. In at least some embodiments, contacts analytics service provides real-time assistance to agents which can provide assistance to agents during live calls by automatically searching vast amounts of content contained in manuals, documents, and wikis and giving agents answers to customer questions as they are being asked, or surfacing the most relevant documents.
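As a minimal sketch of how a greeting-compliance categorization rule of the kind described above might be evaluated against a turn-based transcript, the following Python example checks whether an agent uses a required greeting near the start of a call. The transcript layout, function name, and greeting phrases are illustrative assumptions rather than the service's actual rule format.

# Minimal sketch: evaluating a greeting-compliance categorization rule
# against a turn-based transcript. All names and rule formats here are
# illustrative assumptions, not the actual contacts analytics service API.

GREETING_PHRASES = [
    "thanks for being a valued customer",
    "thanks for being a subscriber",
]

def agent_greeted_customer(transcript, within_first_n_turns=2):
    """Return True if an agent turn near the start of the call
    contains one of the required greeting phrases."""
    agent_turns = [t for t in transcript if t["speaker"] == "AGENT"]
    for turn in agent_turns[:within_first_n_turns]:
        text = turn["text"].lower()
        if any(phrase in text for phrase in GREETING_PHRASES):
            return True
    return False

# Example transcript organized by turns (speaker + text), as a
# speech-to-text service might produce.
transcript = [
    {"speaker": "AGENT", "text": "Thanks for being a valued customer, how can I help?"},
    {"speaker": "CUSTOMER", "text": "I'd like to return a product."},
]

print(agent_greeted_customer(transcript))  # True -> call matches the greeting category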
Organizations may interface with a service frontend that abstracts the use of one or more backend services that utilize machine learning and data analytics, and coordinate the use of various computing resources hosted or otherwise utilized by a computing resource service provider. In at least some embodiments, contacts analytics service is an out-of-the-box experience within a contact center service that enables the customer (e.g., an organization, or supervisors and/or agents thereof) to deliver better outcomes for their end users without requiring technical expertise to write code, build custom solutions, machine learning models, etc. In at least one embodiment, data analytics capabilities can be enabled through a computing resource service provider management console for a contact center service provided by said computing resource service provider. Contacts analytics service may provide capabilities for agents and supervisors that are integrated into a contact center service's user experience (e.g., graphics-based consoles and interfaces). In at least some embodiments, supervisors have access to new and enhanced existing user interface elements within a contact center service that allow them to categorize conversations, set up call scoring, search historical contacts, derive themes, provide post-call agent coaching, and various suitable combinations thereof. In at least one embodiment, contacts analytics service provides a real-time agent assistance interface (e.g., as a widget) which provides an agent with guidance as to the next best action. In at least one embodiment, real-time interfaces can be embedded within existing tools or delivered to agents in a custom UI by using APIs supported by contacts analytics service. Contact trace records (CTR) in a contact center can be enriched with metadata from contacts analytics service which may include the following non-limiting examples: transcriptions, sentiment, and categorization tags. In at least some embodiments, businesses can easily export this information and use business intelligence or data visualization tools to perform further analysis by combining with their data from other sources. Contacts analytics service may be a component or sub-service of a contact center service that provides an organization 100% visibility into customer interactions. Contacts analytics service may be configured to automatically transcribe calls and use machine learning to extract intelligence and insights from them. Contacts analytics service can be used by organizations to identify customer experience issues and agent training gaps. In at least one embodiment, contacts analytics service includes a console that a supervisor can use to filter conversations by characteristics such as sentiment and silence duration. In at least one embodiment, contacts analytics service can be used by an organization to use quality and performance management features such as call categorization and scoring, and theme detection directly within a contact center service. Contacts analytics service may be implemented as a scalable service of a computing resource service provider that provides real-time agent assistance that scales to thousands of agents handling millions of calls. In at least one embodiment, contacts analytics service may be used by an organization to provide answers to customers on a wide range of questions in a rapid manner.
In at least some embodiments, contacts analytics service provides efficient access to large volumes of data such as call transcripts, which provides benefits to supervisors by making it easier for them to analyze past interactions and provide timely feedback to agents. In at least some embodiments, supervisors are able to get real-time or near real-time visibility into live interactions between agents and an organization's customer. In at least one embodiment, reducing delays for supervisors makes it easier for supervisors to analyze past interactions and provide timely input and feedback to agents. In at least one embodiment, supervisors receive real-time visibility into live interactions and agents get in-call recommendations with answers and relevant articles from knowledge bases that help them provide quick and helpful responses to customers' questions. In at least some embodiments, contacts analytics service can be used to provide real-time agent assistance that reduces the amount of time agents spend researching customer issues and/or increases the rate at which customer issues are resolved on the first call. In at least some embodiments, contacts analytics service is a component of a customer contact service. Contacts analytics service may, in accordance with at least one embodiment, deliver post-call analysis features, real-time AI-powered assistance for supervisors (e.g., real-time supervisor assist), real-time AI-powered assistance for agents (e.g., real-time agent assist), and combinations thereof. In at least some embodiments, post-call analytics features refers to a set of features that are provided in a post hoc manner, providing analytics and insights to data after calls, chats, and other customer interactions occur. In some cases, call data is collected in a central repository and is aggregated and analyzed to determine insights that can be used by supervisors. In some embodiments, customer calls are automatically transcribed and indexed, and can be accessed within a customer contact service UI. Call audio and transcripts may be provided together with additional metadata associated with the call, such as sentiment scoring for different segments of a call. A contact search page may be used for conducting fast full-text search on call transcripts. In at least some embodiments, users can filter by entities (e.g., product names), sentiment, and other call characteristics. In some cases, calls are analyzed to extract different call characteristics which may include one or more of the following non-limiting examples: talk speed, interruptions, silence (e.g., gaps in speech), speaker energy, pitch, tone, and other voice characteristics. In at least some embodiments, a rich set of filtering parameters can be leveraged by users based on criteria such as silence duration and number of interruptions to identify potential areas for improvement. Contacts analytics service may be used to implement various use cases. For example, for past calls, contacts analytics service may record the call audio, transcribe the audio, and index the transcript to provide fast full-text search that can be used by supervisors to diagnose problems such as customer churn by searching for past conversations where customers have expressed frustration with the company's products or mentioned cancelling their services. Organizations may use this capability to investigate the magnitude of known issues by searching through transcripts of past customer conversations and categorizing the calls to identify common issues.
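The filtering by call characteristics described above can be illustrated with a small sketch; the record fields and thresholds below are assumptions chosen for the example, not the service's actual schema.

# Minimal sketch: filtering analyzed contacts by call characteristics such
# as silence duration and number of interruptions. The record layout is an
# assumption for illustration only.

contacts = [
    {"id": "c-1", "silence_seconds": 95, "interruptions": 4, "sentiment": "NEGATIVE"},
    {"id": "c-2", "silence_seconds": 10, "interruptions": 0, "sentiment": "POSITIVE"},
]

def filter_contacts(records, max_silence=60, max_interruptions=2):
    """Return contacts that exceed either threshold, i.e. potential
    areas for improvement a supervisor may want to review."""
    return [
        r for r in records
        if r["silence_seconds"] > max_silence or r["interruptions"] > max_interruptions
    ]

for contact in filter_contacts(contacts):
    print(contact["id"], contact["sentiment"])  # prints only the flagged contact c-1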
Contacts analytics service may be used to search through specific segments of a call to see whether agents are following protocols set by the organization. For example, an organization may have a protocol for how customer interactions should be handled at the start and end of a call. An organization may specify that an agent should greet customers a certain way at the beginning of a call (e.g., "Thanks for being a valued customer" or "Thanks for being a subscriber" based on the customer's relationship to the company). An organization may specify that agents should, prior to the end of a call, check with the customer that all of his/her questions were resolved as part of the call. Calls may be analyzed against a set of categorization rules that define rules for customer interactions (e.g., customer greeting rules) and calls which fail to meet various rules may be flagged to help supervisors ensure compliance. Contacts analytics service may be used for theme and trend detection, which can be used to flag potential issues to the attention of a supervisor. While search may be effective at diagnosing known issues, theme detection may be used by customers to discover new issues which may have been previously unknown. Contacts analytics service may be used to perform theme detection by analyzing multiple transcribed conversations at once and presenting a set of themes. In at least some cases, themes are presented in a visual format and surface findings in an easy-to-understand format for supervisors. In at least some embodiments, contacts analytics service employs machine learning in an unsupervised manner and/or post-processing techniques to extract similar key phrases across conversations, perform intelligent grouping, and display result themes in a ranked order along with a count or severity value that indicates the magnitude of the issue. Contacts analytics service may provide trend detection capabilities that allow customers (e.g., organizations and supervisors/agents thereof) to detect anomalous patterns in their customer conversations. Trend detection may be utilized, in various embodiments, to allow businesses to discover new issues which are seeing increased magnitude in a customer-specified period (e.g., a 24-hour period) and investigate them earlier. For example, if an organization released a coupon code to be used with certain products and services but discovered an issue where they saw an increase in calls with the phrase "broken coupon code," then contacts analytics service may flag "broken coupon code" as a trending theme, which may allow a supervisor to investigate the issue, as there may not be an easy way for customers to detect whether such an issue is on their end or on the business's end. Theme and/or trend detection may have various use cases. In at least some embodiments, organizations (e.g., business leaders of such) may use theme and/or trend detection to understand top reasons for customer outreach over a period of time and/or for specific products or business workflows. For example, theme and/or trend detection can be used to detect commonalities such as product returns, and use a data-driven approach to determine when to investigate a root cause for the product returns. In at least some embodiments, theme detection can be used by an organization to make changes in their products or processes to improve call deflection rates (e.g., increase in volume and/or proportion of calls handled by self-service tools or automated customer assistance tools).
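Theme detection as described above relies on unsupervised machine learning and post-processing; as a simplified stand-in, the following sketch groups normalized key phrases across calls and ranks them by count to approximate a ranked list of themes with magnitude values.

# Minimal sketch: ranking candidate themes by how often similar key phrases
# occur across conversations. Real theme detection described above uses
# unsupervised machine learning; simple normalization and counting stand in
# for that here purely for illustration.

from collections import Counter

def detect_themes(key_phrases_per_call, top_n=3):
    counts = Counter()
    for phrases in key_phrases_per_call:
        for phrase in phrases:
            counts[phrase.strip().lower()] += 1
    # Rank themes by magnitude (number of occurrences observed).
    return counts.most_common(top_n)

calls = [
    ["broken coupon code", "checkout error"],
    ["Broken coupon code"],
    ["delivery delay"],
]
print(detect_themes(calls))  # [('broken coupon code', 2), ...]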
In at least some embodiments, contacts analytics service generates a rich call waveform that provides a visual representation of a given call's details, such as progression of customer and agent sentiment during the call, segments with silence, key phrases spoken, interruptions, talk speed, and call volume. Supervisors can use call audio and rich metadata to quickly identify areas of improvement for agents and to identify patterns and areas of improvement so that agents can better resolve customer issues and provide a better customer experience when they contact the organization via the customer contact service. Contacts analytics service may, in at least some embodiments, be used to categorize calls and chats into categories based on custom rules, logic, and criteria, which can include the use of keywords and phrases, acoustic characteristics such as silence duration, cross-talk, and talk speed. Supervisors can use contacts analytics service to quickly identify calls and chats with criteria of interest that they want to track, in accordance with at least one embodiment. Accordingly, an organization can use contacts analytics service to more effectively train supervisors and/or agents. In at least some embodiments, contacts analytics service can be used to solve the problem of attrition in contact centers and/or used to help supervisors provide more specific feedback. For example, contacts analytics service can be used to provide a data-driven approach to improving customer contact experiences, which may, using traditional techniques, be more haphazard and ad hoc. For example, instead of supervisors listening to a randomly selected sample of calls and relying upon skewed customer satisfaction surveys, contacts analytics service can be used to analyze and categorize all calls. Supervisors can use contacts analytics service to review comments and/or feedback for specific portions of historic calls, and categorize historic calls to determine compliance with different organizational rules or categories. Agents can receive objective feedback provided by supervisors in at least some embodiments. Supervisors can, in at least some embodiments, mark specific calls with a thumbs up or thumbs down and/or comments and an agent can listen to the portion of the call where the supervisor provided feedback for taking more concrete corrective measures. In some embodiments, contacts analytics service provides an interface which supervisors can use to assign labels/tags for recurring searches (e.g., mapping to topics like customer churn and agent script adherence). Tagged calls, in some embodiments, are directly searched upon in a customer contact center or can be exported from the customer contact center and, for example, analyzed by a separate business intelligence tool. Contacts analytics service, in at least some embodiments, provides real-time analytics capabilities which can be used to analyze call and chat data in real-time and provide assistance to supervisors and/or agents. In at least some embodiments, contacts analytics service exposes a graphical dashboard to supervisors that shows real-time analytics of all live calls of a customer contact center. Real-time analytics dashboards may present sentiment scores for calls as interactions evolve, allowing supervisors to look across live calls and see where they may be needed to engage and/or de-escalate and/or help an agent.
In at least some use cases, contacts analytics service provides a dashboard that allows supervisors to track live calls being handled by agents and displays call scores, customer sentiment scores, categorizations, and other information that can be used by supervisors to prioritize calls that need their attention. In at least some embodiments, supervisors receive alerts for calls involving challenging situations such as repeated customer requests for escalation, yelling, use of profanity or forbidden language, frustrated tone, references to competitors, or inability of an agent to solve the customer's problem. A supervisor may use a contacts analytics service dashboard to detect challenging situations as they develop, allowing a supervisor to quickly intervene and de-escalate the situation. Supervisors may be able to set up actions (e.g., providing agent prompts and assigning call scores) based on call characteristics such as keywords. For example, contacts analytics service may transcribe call audio in real-time and detect instances where an agent says "I don't know" or "I don't handle that," which may indicate that the agent's responses are causing customer frustration. In at least some embodiments, customer/agent tone (e.g., customer yelling at agent) or failure by agent to adhere to script and compliance procedures may be flagged to a supervisor dashboard to provide supervisors more transparency into how customer issues are being resolved. In at least some embodiments, contacts analytics service provides real-time agent assistance. Contacts analytics service may use artificial intelligence and machine learning to provide in-call assistance to agents based on real-time call audio which is being transcribed and analyzed to generate suggestions to agents to help them better solve customers' issues. In at least some embodiments, real-time transcripts of calls and/or chats are provided to Kendra which can then provide specific answers or give a list of relevant documents from the company's knowledge base (e.g., using a document ranking feature) to help an agent more quickly locate an answer to the customer's specific question. In at least some embodiments, contacts analytics service presents real-time feedback to agents as a widget or plug-in of a customer contact center interface which agents use. Contacts analytics service may provide visual cues to agents to provide agents awareness of customer sentiment during calls, and as to their own speaking style to make it easier for agents to make adjustments. For example, a contacts analytics service agent dashboard may surface a visual indicator to agents when they are speaking too quickly, when they are not speaking loudly enough, when the agent's sentiment score decreases, and more. Agents can use contacts analytics service to identify in real-time adjustments to their own speaking style to show more empathy, speak slower, etc., to improve customer interactions. In at least some embodiments, organizations use feedback to make agents more aware of various call characteristics such as silence duration, talk speed, interruption frequency, amplitude/speaker energy, and customer sentiment. Contacts analytics service may provide an agent dashboard that provides real-time contextual in-call guidance. In at least some embodiments, an agent dashboard includes a "next best action" suggestion for agents to help them answer customers' questions, promote relevant sales offers, read out regulatory disclosures, and more.
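A minimal sketch of keyword-based call scoring of the kind a supervisor dashboard might use follows; the trigger phrases, weights, and alert threshold are illustrative assumptions.

# Minimal sketch: assigning a call score from keyword-based triggers in a
# live transcript, of the kind a supervisor dashboard might surface.
# Trigger phrases, weights, and thresholds are illustrative assumptions.

TRIGGERS = {
    "i don't know": 2,
    "i don't handle that": 2,
    "speak to a manager": 3,
}

def score_call(turn_texts, alert_threshold=4):
    score = 0
    for text in turn_texts:
        lowered = text.lower()
        for phrase, weight in TRIGGERS.items():
            if phrase in lowered:
                score += weight
    return score, score >= alert_threshold

turns = ["I don't know how to fix that.", "Can I speak to a manager please?"]
print(score_call(turns))  # (5, True) -> flag the call for supervisor attention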
In at least some embodiments, call audio is transcribed in real-time and submitted to an AI-based suggestions platform to provide a "next best action" suggestion as to the next action that the agent should take. For example, various types of next best actions may include: a greeting script to read to a customer (e.g., at the start of a call); a specific answer to a question asked by a customer; a link to one or more knowledge base articles that contacts analytics service believes are most relevant to helping an agent answer a customer's question; a prompt alerting the agent to read a mandatory disclosure (e.g., as per organizations' rules, legal and/or regulatory requirements, etc.). In at least some embodiments, real-time agent assistance tools provided by contacts analytics service are used to help agents improve their soft skills by providing immediate automated feedback during the call. In at least some embodiments, a client (e.g., organization or employees thereof such as supervisors and/or agents) of a customer contact service uses contacts analytics service's real-time capabilities to quickly identify themes and trends from a given set of customer conversations (e.g., conversations in text and/or voice), as well as API support for third-party application integration (e.g., as a widget of a customer solution). In at least one embodiment, customers are able to access data (e.g., call transcripts, categorizations) generated by customer contact service and contacts analytics service in a data storage service (e.g., a bucket accessible from the data storage service) which clients are able to combine with other data sources for analysis in business intelligence tools and to apply data analytics to the data. In at least some embodiments, contacts analytics service supports one or more of the following capabilities: API support, redaction capabilities (e.g., PHI, PII, PCI redaction), and ability to provide a unified view across voice and chat interactions. In at least one embodiment, contacts analytics service is implemented as an independent software-as-a-service (SaaS) application that integrates with different contact center software solutions. In at least one embodiment, contacts analytics service provides an integrated experience within a contact center service by launching features that enable AI-powered capabilities that non-technical users are able to use without additional training. Contacts analytics service provides an agent feedback widget that can be easily embedded within existing tools (such as Salesforce) used by agents, in accordance with at least one embodiment. Contacts analytics service may support APIs to give organizations additional flexibility to provide feedback to agents in their custom UI. Agents can review their performance feedback in a "Supervisor Feedback" GUI in a contact center service. In various embodiments, supervisors have access to new and enhanced existing pages within a contact center service that allow them to configure suggested actions for agents, set up call scoring, search historical contacts, and provide post-call agent feedback. In at least some embodiments, contacts analytics service automatically redacts sensitive data from chat logs, call transcripts, and other text-based records.
Non-limiting examples of sensitive data may include one or more of the following: credit card numbers; social security numbers; patient health records; date of birth; passwords or pass phrases; cryptographic keys or other secret material; personal identification number (PIN); and more. In at least some embodiments, sensitive data includes personal health information (PHI) and/or personally identifiable information (PII). In at least some embodiments, contacts analytics service is payment card industry (PCI) compliant and can automatically redact PCI data from both call audio and chat transcripts to ensure that sensitive customer information is not exposed to unauthorized employees within the organization. In at least some embodiments, sensitive data is redacted from the contacts analytics service GUI and stored in an encrypted format. In at least some embodiments, an organization may have access to a cryptographic key that can be used to decrypt sensitive data of chat logs if such data is needed, such as in cases where such information is required for compliance with statutory and/or regulatory requirements. Contacts analytics service stores metadata (including call transcript) along with the call recordings in a bucket of a data storage service, in accordance with at least one embodiment. A client of a customer contact service may access the data storage service to obtain call recordings, metadata, and other information which can be integrated with the client's own business intelligence tools, other systems (e.g., CRM tools), or other services offered by a computing resource service provider. Contacts analytics service may support post-call analytics features such as full-text search, theme detection, and agent coaching. Post-call analytics features may be available for audio and/or text interactions. In at least some embodiments, real-time analytics for agents and supervisors are currently only available for audio calls. In the preceding and following description, various techniques are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of possible ways of implementing the techniques. However, it will also be apparent that the techniques described below may be practiced in different configurations without the specific details. Furthermore, well-known features may be omitted or simplified to avoid obscuring the techniques being described. As one skilled in the art will appreciate in light of this disclosure, certain embodiments may be capable of achieving certain advantages, including some or all of the following: improving customer experience and call center operations without requiring technical expertise by supervisors and agents; improving security of computer systems through diagnostics and discovery capabilities by making it easier for analysts and supervisors to detect security issues (e.g., in accordance with FIGS. 7-11). FIG. 1 shows an illustrative example of a computing environment 100, in which a contacts analytics service can be implemented, according to at least one embodiment. In at least one embodiment, FIG. 1 illustrates a client computing environment 102 that includes a client 104 and a client data store 106. FIG. 1 illustrates, in accordance with at least one embodiment, an implementation of a contacts analytics service that can be used by a client to process and analyze contacts between agents and customers.
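As a minimal sketch of the redaction capability discussed above, the following example masks two kinds of sensitive data with regular expressions; the production capability covers many more data types (PII, PHI, PCI) and is not necessarily regex-based, so the patterns here are simplified assumptions.

# Minimal sketch: redacting a few kinds of sensitive data from a transcript
# with regular expressions. The patterns below are simplified assumptions
# for illustration only and do not reflect the service's actual detectors.

import re

PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("My card is 4111 1111 1111 1111 and my SSN is 123-45-6789."))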
In at least one embodiment, a client computing environment 102 refers to a physical and/or logical organization of resources of a client. A client may refer to an organization that runs a contact center which customers of an organization can contact to ask questions, request help, and more. In at least one embodiment, an organization's client computing environment includes computer systems that are used to receive contacts from customers. Contacts data may refer to different types of touch points that customers can use to contact an organization, and may include the following non-limiting examples: phone calls; chat messages; e-mails; social media messaging systems; online messaging; and more. An organization may have a team of dedicated agents and/or supervisors that are tasked with handling contacts with clients. For example, a customer may use a telephone to call a contact center (e.g., via a toll-free number) which is routed through a customer contact service to an available agent. The agent may receive the call and begin talking with a customer to address the customer's reason(s) for calling the organization. Contacts with an organization (e.g., via a customer contact center) may be recorded in a client data store 106 and such contact data may be analyzed by a contacts analytics service to generate insights, identify themes and trends, perform diagnostics, combinations thereof, and more. A client computing environment 102 may refer to one or more physical computer servers, software running thereon, human resources (e.g., agents and supervisors employed by an organization), etc. In some cases, a client computing environment 102 is or includes a data center with computing resources connected to a computing resource service provider via a network. Client 104 may refer to a client computer system connected to a server (e.g., computing resource service provider) over a network. In some cases, client 104 refers to a user or operator of a client computer system, and may be an employee of an organization that utilizes a computing resource service provider to host a customer contact service and/or contacts analytics service. In some cases, an employee of an organization runs client software on a computer system in client computing environment 102 that includes a graphical user interface (GUI) such as a graphical dashboard which includes user interface (UI) elements which can be used to start a job. A job may refer to a request to perform a task, such as to run analytics on customer contact data. Client 104 may start a job by using various UI elements to generate a request that is routed across a network to a frontend service 108. Client data store 106 may refer to an electronic data store that an organization uses to store contact data. Contact data may refer to audio recordings of calls between agents and customers, chat logs of online conversations between agents and customers, video interactions between agents and customers, and more. Contact data may be stored in various formats, such as compressed audio files (e.g., MP3), compressed text files (e.g., as ZIP files) and more. Client data store 106 may be implemented using any suitable type of data storage medium, including hard disk drives, data storage services, databases, network area storage (NAS) devices, and more. In some cases, a combination of different types of data storage devices and/or services are used to store customer contact data.
In at least one embodiment, client data store 106 refers to a data storage service of a computing resource service provider (e.g., hosted by a computing resource service provider on behalf of client organization) which an organization is able to access over a network. In some cases, client data store 106 may refer to data storage devices and services that are operated and managed by an organization and/or physically located within a data center or office of the client. In some embodiments, a client uses a computing resource service provider to host a data store, as well as provide access to a customer contact service. A service may comprise a frontend service 108 and a backend service. Frontend service 108 may be implemented in accordance with service frontends described elsewhere in this disclosure, such as those discussed in connection with FIG. 3. In at least one embodiment, client 104 uses client software that is configured to establish a client-server relationship with a service of a computing resource service provider. A client may connect to a service via frontend service 108 which receives requests from clients and routes them to backend services. Frontend service 108 may be a frontend service of a customer contact service which may be one among several services offered by a computing resource service provider to its customers. In at least one embodiment, client 104 interacts with a GUI to set up a job to be run, and client-side software translates the GUI setup to a web service API request which is transmitted from the client computer system to frontend service 108 via a network. In an embodiment, the network includes any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof, and components used for such a system depend at least in part upon the type of network and/or system selected. Many protocols and components for communicating via such a network are well known and will not be discussed herein in detail. In an embodiment, communication over the network is enabled by wired and/or wireless connections and combinations thereof. In some cases, a network may include or refer specifically to a telephone network such as a public switched telephone network or plain old telephone service (POTS). Frontend service 108 may route a request to run a job to a metadata service 110. Metadata service may be a backend service of a web server that stores jobs to execute and tracks the status of jobs as they are being executed. In at least one embodiment, metadata service 110 receives a request to run a job for a client and generates a job. In at least one embodiment, a job is a record in a database that includes information indicating how to run the job, such as a network location of a customer bucket, a set of contacts to run the job on, and more. In at least one embodiment, a job includes a field in which job status is stored, which may indicate how much progress has been made towards executing the job—for example, the job status information may indicate the job is not yet started, a particular stage in a workflow that is in progress or completed, that the job has been completed, etc. A timestamp may be included with the job status update, which can be used to track how long a particular stage in a workflow has been running.
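The job record described above can be sketched as a small dictionary persisted by the metadata service; the field names, status values, and bucket path below are assumptions for illustration only, not the service's actual schema.

# Minimal sketch: the kind of job record a metadata service might persist
# when a client starts an analytics job. Field names and values are
# assumptions for illustration, not the actual schema.

import json
import time
import uuid

def create_job(customer_id, input_bucket, contact_window_hours=24):
    return {
        "job_id": str(uuid.uuid4()),
        "customer_id": customer_id,
        "input_location": input_bucket,          # hypothetical customer bucket
        "contact_window_hours": contact_window_hours,
        "status": "NOT_STARTED",                 # updated as workflow stages complete
        "status_timestamp": time.time(),         # used to track how long a stage runs
    }

job = create_job("org-123", "s3://example-client-bucket/contacts/")
print(json.dumps(job, indent=2))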
In at least one embodiment, customers are able to define custom workflows to run for their jobs, and each job is mapped to a particular workflow based at least in part on a customer identifier. In various embodiments, job sweeper 112 is software and/or hardware that is used to execute a workflow for a job. For example, job sweeper 112 may be an event-driven function implemented in accordance with an event-driven compute service such as those described in connection with FIG. 26. In at least one embodiment, job sweeper 112 is an event-driven function that is triggered when a metadata service 110 adds a new job. In some cases, job sweeper 112 runs on a periodic basis and runs jobs in batches. Upon a new job being added (e.g., to a queue or stack), an event-driven compute service may instantiate computing resources to run job sweeper 112. In at least one embodiment, job sweeper 112 finds a new job, determines a workflow for the job, and coordinates execution of the workflow such as step functions workflow 114 illustrated in FIG. 1. In at least some embodiments, the job specifies a specific workflow, which the job sweeper uses a scaling service or workflow manager service to coordinate. A workflow may be executed using a scaling service, such as those described in connection with FIG. 25. Step functions workflow 114 may refer to a series of operations that are specified to run a job. The workflow may be specified in the job, either directly or indirectly. For example, the job may specify a set of capabilities of comprehend service 132 to utilize as part of the workflow. A step function workflow may include a series of steps, some of which may be executed in parallel and others which are to be executed sequentially. A workflow may specify a set of dependencies that describes how the workflow is to be executed. A workflow may be represented as a directed acyclic graph where nodes represent different steps and directed edges represent dependencies. If step A is a dependency of step B, then the workflow may require step A to be completed prior to step B. For example, a sentiment analysis step may have a dependency on a transcribing step, since text generated by the transcribing step is used as an input to perform sentiment analysis. In at least one embodiment, steps 116-128 are each executed as a separate event-driven function, such that completion of one or more event-driven functions causes another event-driven function to run the next step in the workflow. In at least one embodiment, some or all of steps 116-128 are batched together as a single event-driven function. In various embodiments, a scaling service is used so that computing resources for each step of a workflow can be scaled up or down as needed based on demand. Step functions workflow 114 may be executed using a scaling service or components thereof in accordance with FIG. 25. One or more portions of step functions workflow 114 may be executed asynchronously. In at least one embodiment, a customer role is assumed to perform at least some parts of step functions workflow 114. When a principal assumes a destination role, it may receive a resource identifier and a hashed message such as a keyed-hash message authentication code (HMAC). The resource identifier may be a resource identifier associated with the destination role and may be in a human readable format. An HMAC may also be associated with the destination role but is not human readable in at least some embodiments.
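The dependency structure of such a workflow can be sketched as a directed acyclic graph whose steps run only after their dependencies complete; the step names below are illustrative, and an actual deployment would execute the steps through a step functions or scaling service rather than in-process as shown.

# Minimal sketch: representing a workflow as a directed acyclic graph and
# running steps only after their dependencies complete. Step names are
# illustrative assumptions.

from graphlib import TopologicalSorter

# step -> set of steps it depends on
workflow = {
    "copy_inputs": set(),
    "transcribe_calls": {"copy_inputs"},
    "sentiment_analysis": {"transcribe_calls"},
    "entity_detection": {"transcribe_calls"},
    "key_phrase_detection": {"transcribe_calls"},
    "process_results": {"sentiment_analysis", "entity_detection", "key_phrase_detection"},
    "categorize_documents": {"process_results"},
    "emit_events": {"categorize_documents"},
}

for step in TopologicalSorter(workflow).static_order():
    print("running", step)   # dependencies always run before dependents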
An HMAC may include security information that may be used to grant permissions to resources that the destination resource may access, and may further include an expiration time that indicates when the HMAC expires. For example, an HMAC for a role may be set to expire every 15 minutes. Upon expiration, an HMAC may be invalidated and no longer able to be used for submitting requests on behalf of the assumed role. Attempting to submit a request with an invalid HMAC may result in an authorization service denying the request. An application programming interface (API) call may be used to assume a role. When a principal (e.g., user or role) assumes a role, it may have permissions associated with the role. For example, a role may have access to a certain database, computing resources (such as a virtual machine), or cryptographic keys. A principal such as a user may assume the role and then request access to the resource by providing the resource name and HMAC associated with the role. A computer system may receive the request and use an authorization module to determine whether the requestor (in this case, the role) should be granted access to the resource. The authorization module may check whether an access control list associated with the resource includes the role as being sufficiently privileged to access the resource. An access control list may be implemented using various types of data structures such as an array, a vector, a map, a hash, etc. and/or structured storage such as a database table or any combination thereof. Additionally, the authentication module may verify the HMAC. The HMAC may be verified by generating a new HMAC using the key and checking if it matches the HMAC provided in the request. Additionally, once the HMAC has been verified to be authentic, the expiration time for the HMAC may be compared against the system clock. If the expiration time of the HMAC code is earlier than the service's current system time, it may indicate that the HMAC code has expired and that the requestor does not have access to the requested resource. There are several aspects to the use of HMAC codes in accordance with various implementations. First, in some examples, the HMAC code includes an expiration time—when the HMAC expires, the principal assuming the destination role no longer has the rights associated with the destination role until the principal obtains a new, unexpired HMAC code. When an HMAC code expires, a backend system may automatically detect that the HMAC code has expired, and generate a new HMAC code that is set to expire 15 minutes after it was generated. Upon expiration of an HMAC code, a principal may submit a request for a new HMAC code. As part of step functions workflow 114, an event-driven compute service may execute an event-driven function to copy input data from data store 116. In at least some embodiments, a role associated with client 104 is assumed and, upon assumption of the client role, a request is made to client data store 106 for contacts data. Contacts data may include audio recordings of calls between agents and customers, chat logs of online conversations between agents and customers, video interactions between agents and customers, and more. Audio recordings may be stored as audio files, such as MP3 files. Chat logs may be recorded in any suitable text-based format and one or more chat logs may be compressed in a ZIP file.
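A minimal sketch of issuing and verifying an expiring HMAC for an assumed role follows; the key handling, message layout, and 15-minute lifetime mirror the description above but are simplified assumptions rather than the provider's actual credential format.

# Minimal sketch: issuing and verifying an expiring HMAC for an assumed role.
# The shared secret, message layout, and lifetime are illustrative assumptions.

import hashlib
import hmac
import time

SECRET_KEY = b"example-shared-secret"   # illustrative only
LIFETIME_SECONDS = 15 * 60

def issue_token(role_name):
    expires_at = int(time.time()) + LIFETIME_SECONDS
    message = f"{role_name}|{expires_at}".encode()
    digest = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return {"role": role_name, "expires_at": expires_at, "hmac": digest}

def verify_token(token):
    message = f"{token['role']}|{token['expires_at']}".encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["hmac"]):
        return False                              # tampered or wrong key
    return token["expires_at"] > time.time()      # expired tokens are rejected

token = issue_token("role/analytics-client")
print(verify_token(token))  # True until the token expires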
Contacts data may be copied from a client bucket of a data storage service to a bucket controlled by a contacts analytics service. In at least one embodiment, the job that kicked off the step functions workflow 114 includes a time period that indicates a subset of contacts data that should be analyzed. For example, a job may indicate that contacts data from the previous 24-hour period should be copied and analyzed. Upon copying contacts data as inputs from client data store 106, the next step in the step functions workflow may be executed, and the job's status in metadata service 110 may be updated to indicate that contacts data was successfully copied from the data store. Once contacts data has been copied, a step of the step functions workflow is to transcribe calls 118 included in the input data. Audio recordings of customer calls may be transcribed using a speech-to-text service 130. Speech-to-text service 130 illustrated in FIG. 1 may be in accordance with those described elsewhere in this disclosure, such as those discussed in connection with FIG. 2. In at least some embodiments, speech-to-text service 130 uses artificial intelligence and/or machine learning techniques to map audio waveforms to text. Speech-to-text service 130 may utilize neural networks such as recurrent neural networks (RNNs), deep neural networks (DNNs), variational autoencoders (VAEs), long short-term memory (LSTM) neural networks, convolutional neural networks (CNNs), and more. A speech-to-text service may receive audio waveforms as inputs and produce text as outputs. In some cases, contacts data includes chat logs or other text-based contacts data. In some cases, this step is optional, such as in cases where client data store 106 solely includes text-based contacts data. Transcription of text-based contacts data may be skipped, as the data is already in a text-based format. However, in at least one embodiment, audio-based contacts data (e.g., video and audio recordings) may be transcribed using a speech-to-text service 130. In at least one embodiment, speech-to-text service 130 receives audio data (e.g., in the form of an audio or video file) and generates a text-based transcript of the audio data. In at least some embodiments, speech-to-text service 130 organizes the transcript by turns, breaking down the audio into different turns based on the speaker. Transcripts may be partitioned by speaker, by sentence, by time (e.g., a fixed duration wherein each turn lasts 15 seconds or a fixed number wherein an entire call is partitioned into N segments of equal length). For example, if an agent speaks for the first 10 seconds of a call and a customer speaks for the next 15 seconds, text for the first turn may include the agent's speech from the first 10 seconds and text for the second turn may include the customer's speech from the next 15 seconds. Speech-to-text service 130 may be a service of a computing resource service provider. Speech-to-text service 130 may be accessed via a web service API request that accepts audio data (e.g., the data itself or a reference to such data) as an input and produces a text-based transcript (e.g., in a text file or other text-based file format). Speech-to-text service 130 may generate metadata for audio which can include periods of silence, cross-talk (e.g., where multiple speakers talk over each other), and more. Metadata may be included as part of a transcript output.
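Turn-based partitioning of a transcript, as described above, can be sketched by grouping time-stamped segments whenever the speaker changes; the segment format is an assumption for illustration.

# Minimal sketch: grouping time-stamped transcript segments into turns that
# alternate when the speaker changes. The segment format is an assumption.

def build_turns(segments):
    turns = []
    for seg in segments:
        if turns and turns[-1]["speaker"] == seg["speaker"]:
            turns[-1]["text"] += " " + seg["text"]
            turns[-1]["end"] = seg["end"]
        else:
            turns.append(dict(seg))
    return turns

segments = [
    {"speaker": "AGENT", "start": 0, "end": 6, "text": "Thanks for calling,"},
    {"speaker": "AGENT", "start": 6, "end": 10, "text": "how can I help?"},
    {"speaker": "CUSTOMER", "start": 10, "end": 25, "text": "My order never arrived."},
]

for turn in build_turns(segments):
    print(turn["speaker"], f"({turn['start']}-{turn['end']}s):", turn["text"])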
Upon receiving requested call transcripts from speech-to-text service 130, the next step in the step functions workflow may be executed, and the job's status in metadata service 110 may be updated to indicate that calls have been successfully transcribed. In at least one embodiment, text-based contacts data (e.g., transcripts generated by speech-to-text service 130 or text-based contacts data obtained from client data store 106) are analyzed using a natural language processing (NLP) service. In at least one embodiment, NLP service 132 is a service of a computing resource service provider. In at least one embodiment, NLP service 132 is in accordance with those described elsewhere in this disclosure, such as those discussed in connection with FIG. 2. In at least one embodiment, NLP service 132 uses artificial intelligence and/or machine learning techniques to perform sentiment analysis 120A, entity detection 120B, key phrase detection 120C, and various combinations thereof. In at least one embodiment, text-based contacts are organized by turns—for example, turns may alternate based on which party was speaking or typing on a contact. Each sentence spoken may correspond to a turn (e.g., successive turns may be from the same speaker). In at least some embodiments, each turn is analyzed separately for sentiment analysis 120A, entity detection 120B, key phrase detection 120C, and various combinations thereof. In some embodiments, for the text of a turn, sentiment analysis 120A, entity detection 120B, key phrase detection 120C, and various combinations thereof are processed in parallel by NLP service 132. In an embodiment, other natural language processing capabilities offered by NLP service 132 are utilized to analyze text-based contacts data. In at least one embodiment, sentiment analysis 120A, entity detection 120B, key phrase detection 120C, and various combinations thereof are executed as individual event-driven functions on a per-turn basis. Sentiment analysis 120A may refer to analyzing text (e.g., a turn, being a portion of a text-based transcript of an audio recording) and determining one or more characteristics of the call. For example, sentiment analysis 120A of a statement may generate a sentiment score that indicates a sentiment of the statement in question was positive, neutral, negative, or mixed. Sentiments may be separated by speaker. A sentiment score may be generated based on successive sentiments of a speaker—for example, if a customer's sentiment of a first turn is positive, it may be assigned an initial sentiment score value of +1; if the customer's sentiment on his/her next turn is still positive, the sentiment score may increase from +1 to +2, and so on. In some cases, sentiment scores are in a bounded range of values, such as between −5 and +5, such that additional positive turns after reaching a maximum sentiment score simply leave the sentiment score at the maximum value. In some cases, the sentiment score is reset when a neutral or negative turn follows a positive run, and vice versa. Sentiment analysis 120A may be performed turn by turn in a synchronous manner for a chat log. Sentiment scores for individual turns can be used to generate an overall sentiment score for an entire call or chat. Entity detection 120B may refer to detecting entities in a document or text-based portion thereof. An entity may refer to a textual reference to the unique name of a real-world object such as people, places, and commercial items, and to precise references to measures such as dates and quantities.
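The running per-speaker sentiment score described above can be sketched directly; the clamping to a range of -5 to +5 and the reset on a polarity change follow the description, while the exact treatment of neutral and mixed turns is an assumption.

# Minimal sketch of the running sentiment score described above: successive
# positive turns increase the score, successive negative turns decrease it,
# the score is clamped to [-5, +5], and it resets when the polarity flips.
# Treatment of NEUTRAL/MIXED turns as a reset to 0 is an assumption.

def update_score(score, sentiment):
    if sentiment == "POSITIVE":
        score = 1 if score <= 0 else score + 1
    elif sentiment == "NEGATIVE":
        score = -1 if score >= 0 else score - 1
    else:
        score = 0
    return max(-5, min(5, score))

turn_sentiments = ["POSITIVE", "POSITIVE", "NEGATIVE", "NEGATIVE", "POSITIVE"]
score = 0
for sentiment in turn_sentiments:
    score = update_score(score, sentiment)
    print(sentiment, "->", score)   # 1, 2, -1, -2, 1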
For example, in the text "Jane moved to 1313 Mockingbird Lane in 2012," "Jane" might be recognized as a PERSON, "1313 Mockingbird Lane" might be recognized as a LOCATION, and "2012" might be recognized as a DATE. For example, entity detection 120B may be used on a call transcript to identify products, dates, events, locations, organizations (e.g., competitors), persons, quantities, titles, and more. In at least some embodiments, NLP service 132 supports a set of default entities and furthermore supports adding custom entities. In at least some embodiments, a client can supply a set of training data which is used by NLP service 132 to train a neural network to recognize a custom entity. Key phrase detection 120C may refer to finding key phrases in a document or text-based portion thereof. A key phrase may refer to a string that includes a noun phrase that describes a particular thing. A key phrase may comprise a noun and one or more modifiers that distinguish it. For example, "day" is a noun and "a beautiful day" is a noun phrase that includes an article ("a") and an adjective ("beautiful") describing the noun. In various embodiments, key phrases have scores that indicate the level of confidence that NLP service 132 has that the string is a noun phrase. In various embodiments, a turn (e.g., transcribed from audio recording) is parsed to identify key phrases which can be indexed and searched upon to perform diagnostics, trend and theme detection, and more. NLP service 132 may be a service of a computing resource service provider that provides a set of web service API commands that can be used to identify key phrases from documents or other text-based data sources. In at least some embodiments, NLP service 132 offers a set of natural language processing capabilities such as 120A-120C illustrated in FIG. 1, which are merely illustrative of example capabilities offered by NLP service 132, and other natural language processing capabilities may be supported by NLP service 132. In some embodiments, an audio recording or audio call (e.g., real-time call) is transcribed using speech-to-text service 130 to generate a text-based transcript. As part of transcribing the audio source, the transcript may be organized by turns, which alternate when speakers change. An event-driven function may submit turns of a text-based transcript to NLP service 132 which provides sentiment scores for the turns. Analytics results generated by NLP service 132 may be aggregated and stored as a set of output files. Upon performing analytics using the NLP service, the step functions workflow may further include a step to process analytics results 122. The analytics results processed may be outputs from the NLP service 132 described above. In at least one embodiment, the processing of the data includes translating the data into a human readable format. In at least one embodiment, sentiment scores are calculated based on sentiment analysis. Post processing steps such as categorization and translation of output data to a human-readable format (e.g., converting to JSON format) may be performed. The analytics may be processed to generate an output which is provided to the categorization step of the workflow. A human-readable medium or human-readable format may refer to a representation of data or information that can be naturally read by humans—in contrast, a machine-readable format may refer to a format that can be easily processed by a computer but difficult for a human to interpret (e.g., a bar code).
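As a rough sketch of the post-processing step that aggregates per-turn analytics into a human-readable JSON output, the following example combines turn-level results into a single document; the output layout is an illustrative assumption.

# Minimal sketch: aggregating per-turn NLP results into a single
# human-readable JSON document. The field names are assumptions.

import json

turn_results = [
    {"turn": 0, "speaker": "CUSTOMER", "sentiment": "NEGATIVE",
     "entities": ["coupon code"], "key_phrases": ["broken coupon code"]},
    {"turn": 1, "speaker": "AGENT", "sentiment": "NEUTRAL",
     "entities": [], "key_phrases": ["new code"]},
]

def aggregate(results):
    return {
        "turns": results,
        "customer_sentiment_by_turn": [
            r["sentiment"] for r in results if r["speaker"] == "CUSTOMER"
        ],
        "all_key_phrases": sorted({p for r in results for p in r["key_phrases"]}),
    }

print(json.dumps(aggregate(turn_results), indent=2))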
Categorization service 134 may be used to categorize documents 124. The documents may be the output generated by processing analytics results. Categorization service 134 may have access to a category store 136 that stores a set of categorization rules. Categories may be defined by client 104. Categorization service 134 may provide a set of default categories, such as determining when there are instances of prohibited words (e.g., use of profanity by agents). Categorization service 134 may generate a set of results including information on which categories were matched, as well as points of interest associated with those categories. These results may be encoded in an output file and the outputs may be written to data store 126. A client role may be assumed to store the output in client data store 106. Finally, the workflow may include a final step to emit events and/or metering 128, which may be used for billing and various other applications. Metrics emitter 138 may refer to a service, daemon, or any suitable monitoring component that may track the status of jobs. Metrics emitter 138 may track how long certain jobs have been pending and whether they have been pending at specific stages for longer than a specified time, indicating that there may be an issue with the job. Such jobs may be re-started, terminated, or notifications may be sent to client 104 alerting the client to investigate. Different stages may be expected to take different amounts of processing time—for example, transcribing audio to text may be particularly demanding compared to other steps in a workflow and may be expected to take longer than other steps. If a job fails, a contacts analytics output or transcript file is not generated, according to at least one embodiment. In at least one embodiment, if a customer starts a new job with the same input parameters as an existing job, a new job with a new job id will be started and all intermediate outputs will be re-generated (e.g., step functions workflow is re-run in its entirety). In at least one embodiment, if NLP jobs succeed but individual documents in the NLP job fail, the job fails. FIG. 2 shows an illustrative example of a computing environment 200 in which various services are implemented within the context of a computing resource service provider 202, according to at least one embodiment. A computing resource service provider described herein may be implemented using techniques described in FIG. 27. In at least one embodiment, a computing resource service provider 202 offers computing capabilities to clients. For example, a computing resource service provider may implement various services such as a customer contact service 204, a contacts analytics service 206, a speech-to-text service 208, a natural language processing (NLP) service 210, an enterprise search service 212, and combinations thereof. Various additional services may be offered by computing resource service provider 202 which are not illustrated in FIG. 2 for the sake of clarity—for example, the computing resource service provider may further implement a data storage service, a compute service, a serverless compute service, an event-driven compute service, an authorization service, an authentication service, a data streaming service, and more. FIG. 2 illustrates a server architecture which may be used to implement various embodiments within the scope of this document. In at least one embodiment, a computing resource service provider 202 provides various capabilities which can be accessed by clients such as client 214 via a network.
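The stalled-job tracking of the metrics emitter described above can be sketched as a simple threshold check per workflow stage; the stage names, timeout values, and job fields are illustrative assumptions.

# Minimal sketch: a metrics emitter that flags jobs whose current stage has
# been running longer than a stage-specific threshold. Thresholds and job
# fields are illustrative assumptions.

import time

STAGE_TIMEOUTS = {              # seconds a stage may run before being flagged
    "transcribe_calls": 2 * 3600,
    "sentiment_analysis": 30 * 60,
}

def find_stalled_jobs(jobs, now=None):
    now = now or time.time()
    stalled = []
    for job in jobs:
        limit = STAGE_TIMEOUTS.get(job["stage"], 3600)
        if now - job["stage_started_at"] > limit:
            stalled.append(job["job_id"])   # candidates to restart or alert on
    return stalled

jobs = [
    {"job_id": "j-1", "stage": "transcribe_calls", "stage_started_at": time.time() - 3 * 3600},
    {"job_id": "j-2", "stage": "sentiment_analysis", "stage_started_at": time.time() - 60},
]
print(find_stalled_jobs(jobs))  # ['j-1']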
The network may be any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof, and components used for such a system depend at least in part upon the type of network and/or system selected. A computing resource service provider may implement customer contact service204as a service which client214is able to interact with. Client214may interface with a service frontend which routes requests to customer contact service204which may be an example of a service backend. Customer contact service204may be a service of computing resource service provider202. A customer contact service described herein may be implemented using one or more servers, such as those described in connection withFIG.27. Customer contact service204may be used by an organization to run a customer contact center. The customer contact service204may implement various capabilities related to facilitating customer contacts. For example, when a customer calls a phone number or initiates a chat, those calls and chats may be routed to customer contact service204and the customer contact service204may route the customer contact to an available agent. Customer contact service204may provide call and chat capabilities to agents via a graphical user interface which may also provide the agents with access to an organization's resources that can be used to help facilitate resolution of customer issues. In at least some embodiments, the agent is able to view an agent dashboard which provides suggestions of organization resources, knowledge bases, or suggested answers to customer questions. Agents may be employees of an organization. Customer contact service204may provide a supervisor dashboard or graphical user interface which supervisors can use to monitor the status of customer contacts by agents, including trend and theme detection and diagnostics capabilities. Customer contact service204may implement features described in connection withFIG.1. Contacts analytics service206may be a service of computing resource service provider202. A contacts analytics service described herein may be implemented using one or more servers, such as those described in connection withFIG.27. Customer contact service204may use contacts analytics service206to process contacts data such as audio calls (e.g., recordings or real-time audio stream) between agents and customers to identify issues. In some embodiments, contacts analytics service206is a software component or module implemented within customer contact service204. Contacts analytics service206may obtain contacts data (e.g., audio or text-based contacts data) and process the data to identify diagnostics, insights, and trends. Contacts data may, for example, be real-time data (e.g., streaming audio or on-going chat conversation) or recordings (e.g., audio recordings or chat logs). In at least one embodiment, contacts analytics service206utilizes speech-to-text service208. A speech-to-text service described herein may be implemented using one or more servers, such as those described in connection withFIG.27. Contacts analytics service206may obtain contacts data that includes audio data and provide such audio to speech-to-text service208to generate a transcript. The transcript may be organized by speaking turns and may read similarly to a chat log.
Speech-to-text service208may obtain an audio waveform and parse the audio waveform by speaker (e.g., by parsing the waveform into an agent channel and a customer channel). Speech-to-text service208may use a neural network such as recurrent neural networks (RNNs), deep neural networks (DNNs), variational autoencoders (VAEs), long short-term memory (LSTM) neural networks, convolutional neural networks (CNNs), and more to convert audio waveforms to text. Speech-to-text service208may be in accordance with those described elsewhere in this disclosure, such as those discussed in connection withFIG.1. Transcripts, chat logs, and other text-based contacts data may be provided by contacts analytics service206to NLP service210. A NLP service described herein may be implemented using one or more servers, such as those described in connection withFIG.27. NLP service210may parse text-based inputs to perform various natural language processing techniques such as those described in connection withFIG.1. For example, chat logs may be organized into turns and each turn may be provided to NLP service210to determine a sentiment of the turn. Sentiments can be used to determine the overall mood and progression of a conversation—for example, if the sentiment of a customer starts out as negative and trends positive after successive turns, then that contact may be considered a good contact. However, if a customer's sentiment trends negative and ends negative at the end of a customer contact, that may indicate that there was a difficulty with the contact and may require additional investigation by a supervisor. NLP service210may perform entity and key phrase detection to identify important aspects of customer contacts. NLP insights may be encoded in an output file or response that is provided to contacts analytics service206. In some cases, NLP service210parses contacts data and generates suggestions to questions or issues presented by customers as part of a real-time agent assistance feature. For example, NLP service210may parse a customer's turn to detect key phrases and entities that indicate that the customer is having trouble with a product and is requesting a return. NLP service210may generate suggested responses, such as troubleshooting steps, which may be surfaced to an agent via customer contact service204. Contacts analytics service206may interface with enterprise search service212. An enterprise search service described herein may be implemented using one or more servers, such as those described in connection withFIG.27. Enterprise search service212may have access to an organization's internal documents such as FAQs and knowledge bases. For example, an organization may have internal documents on where produce is sourced, sustainability practices, and other information, which may be stored in various FAQs—a customer can ask those questions in multiple ways. Enterprise search service212may be used to parse customer questions and map those questions to the most appropriate FAQs. Enterprise search service212may use machine learning techniques to make context-aware search recommendations. For example, a customer may ask whether an organization's retail stores are open on a particular day. Enterprise search service212may determine the customer's geolocation, and use the geolocation to determine store hours in the customer's vicinity, including whether the particular day is on a holiday which may affect typical store hours.
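The progression of customer sentiment described above could be summarized with a small amount of post-processing, as in the following sketch. It assumes each customer turn already carries a numeric sentiment score (higher meaning more positive); the window size and the trend labels are illustrative choices, not values defined by this disclosure.

# Minimal sketch: classifying how a customer's sentiment progressed over a contact.
def classify_sentiment_trend(customer_turn_scores, window=3):
    # customer_turn_scores: chronological list of numeric sentiment scores for customer turns.
    if len(customer_turn_scores) < 2:
        return "INSUFFICIENT_DATA"
    head = customer_turn_scores[:window]
    tail = customer_turn_scores[-window:]
    start_avg = sum(head) / len(head)
    end_avg = sum(tail) / len(tail)
    if end_avg > start_avg and end_avg > 0:
        return "IMPROVED"      # e.g., started negative, ended positive: likely a good contact
    if end_avg < start_avg and end_avg < 0:
        return "DETERIORATED"  # trended negative and ended negative: may warrant supervisor review
    return "STABLE"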
Enterprise search service212may make a determination of the specific context of the question, so that it returns a specific answer—for example, that stores in the USA may be closed on Thanksgiving Day, but are open the following day (Black Friday) at midnight. In at least some embodiments, enterprise search service212searches for the most relevant documents based on what was asked. Enterprise search service212may be implemented using elastic search and machine learning and/or artificial intelligence techniques. Client214may refer to a client computing device or a user of a client computing device. Client214may, for example, refer to an agent or supervisor of an organization that is a customer of a computing resource service provider. Client214may submit a request for access to various computing resources (e.g., services or computing resources thereof) of computing resource service provider202. The request, which in some examples is a web service application programming interface request (also referred to simply as a web service request), may be received by a service frontend. The service frontend may be a system comprising a set of web servers (e.g., a single web server or a set of web servers which may be managed by a load balancer). Web servers of the frontend may be configured to receive such requests and to process them according to one or more policies associated with the service. Web servers or other components of the frontend may be configured to operate in accordance with one or more SSL and/or TLS protocols, such as referenced herein. The request for access to the service may be a digitally signed request and, as a result, may be provided with a digital signature. The service frontend may then send the request and the digital signature for verification to an authentication service. Customer contact service204may be used to implement various GUI-based dashboards presented to client214, such as those described in connection withFIGS.4-17. FIG.3is an illustrative example of an environment300in which various embodiments of the present disclosure can be practiced. In an embodiment, a principal302may use a computing device to communicate over a network304with a computing resource service provider306. Principal302may be a client such as those described elsewhere in this disclosure. For example, principal302may be an employee of an organization (e.g., agent, supervisor, engineer, system administrator, data scientist) that accesses a customer contact service308for various reasons, such as to make customer contacts, manage customer contacts, analyze themes and trends in customer contacts, develop insights into customer contacts, and more. Communications between the computing resource service provider306and the principal302may, for instance, be for the purpose of accessing a customer contact service308operated by the service provider306, which may be one of many services operated by the service provider306. The customer contact service308may comprise a service frontend310and a service backend314. The principal302may issue a request for access to a customer contact service308(and/or a request for access to resources associated with the customer contact service308) provided by a computing resource service provider306. The request may be, for instance, a web service application programming interface request.
The principal may be a user, or a group of users, or a role associated with a group of users, or a process representing one or more of these entities that may be running on one or more remote (relative to the computing resource service provider306) computer systems, or may be some other such computer system entity, user, or process. Generally, a principal is an entity corresponding to an identity managed by the computing resource service provider, where the computing resource service provider manages permissions for the identity. Note, however, that embodiments of the present disclosure extend to identities not managed by the computing resource service provider, such as when identities are anonymous or otherwise unspecified. For example, a policy may apply to anonymous principals. The principal302may correspond to an identity managed by the computing resource service provider306, such as by the policy management service or another service. The identity may be one of multiple identities managed for an account of a customer of the computing resource service provider, and the computing resource service provider may manage accounts for multiple customers. Note that, while the principal302may correspond to a human, such a human may communicate with the computing resource service provider306through a suitably configured computing device which may perform operations (e.g., generation and transmission of requests) on behalf of the principal302. The principal302may communicate with the computing resource service provider306via one or more connections (e.g., transmission control protocol (TCP) connections). The principal302may use a computer system client device to connect to the computing resource service provider306. The client device may include any device that is capable of connecting with a computer system via a network, such as example devices discussed below. The network304may include, for example, the Internet or another network or combination of networks discussed below. The computing resource service provider306, through the customer contact service308, may provide access to one or more computing resources such as virtual machine (VM) instances, automatic scaling groups, file-based database storage systems, block storage services, redundant data storage services, data archive services, data warehousing services, user access management services, identity management services, content management services, and/or other such computer system services. Other example resources include, but are not limited to user resources, policy resources, network resources and/or storage resources. In some examples, the resources associated with the computer services may be physical devices, virtual devices, combinations of physical and/or virtual devices, or other such device embodiments. Note that such services and resources are provided for the purpose of illustration and embodiments of the present disclosure may utilize other services and/or resources. The request for access to the customer contact service308which, in some examples, is a web service application programming interface request (also referred to simply as a web service request), may be received by a service frontend310. The service frontend310may be a system comprising a set of web servers (e.g., a single web server or a set of web servers which may be managed by a load balancer). 
Web servers of the frontend310may be configured to receive such requests and to process them according to one or more policies associated with the customer contact service308. Web servers or other components of the frontend310may be configured to operate in accordance with one or more SSL and/or TLS protocols, such as referenced herein. The request for access to the customer contact service308may be a digitally signed request and, as a result, may be provided with a digital signature. The service frontend310may then send the request and the digital signature for verification to an authentication service316. The authentication service316may be a stand-alone service or may be part of a service provider or other entity. The authentication service316, in an embodiment, is a computer system configured to perform operations involved in authentication of principals. In some examples, requests submitted to the service frontend310are digitally signed by the principal (i.e., by a computing device used by or operating on behalf of the principal) using a symmetric cryptographic key that is shared between the principal302and the authentication service316. The authentication service, therefore, may use a copy of the symmetric cryptographic key to verify digital signatures of requests purported to have been generated by the principal302. However, in other embodiments, the authentication service316may be configured to utilize asymmetric cryptography for digital signature verification such as, for example, when the principal digitally signs requests using a private cryptographic key. In such embodiments, the authentication service may be configured to trust a certificate authority that digitally signed a certificate of the principal302corresponding to the private cryptographic key. Consequently, in some embodiments, the authentication service may use a public cryptographic key specified by the certificate. Generally, the authentication service may utilize a cryptographic key that is registered with the authentication service316in association with the principal302. Upon successful authentication of a request, the authentication service316may then obtain policies applicable to the request. A policy may be a set of information that defines a set of permissions with respect to a set of resources. An access control policy may be a type of policy that is associated with access to resources and specifies a set of cipher suites suitable for accessing the resources. The policy may be applicable to the request by way of being associated with the principal302, a resource to be accessed as part of fulfillment of the request, a group in which the principal302is a member, a role the principal302has assumed, and/or otherwise. To obtain policies applicable to the request, the authentication service316may transmit a query to a policy repository318managed by a policy management service320, which may be the policy management service discussed above in connection withFIG.1. The query may be a request comprising information sufficient to determine a set of policies applicable to the request. The query may, for instance, contain a copy of the request and/or contain parameters based at least in part on information in the request, such as information identifying the principal, the resource, and/or an action (operation to be performed as part of fulfillment of the request). 
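As context for the symmetric-key verification described above, the following is a minimal Python sketch of how an authentication service might check a request signature computed with a shared key using HMAC. The canonical-string construction and the hex encoding of the signature are assumptions made for illustration; an actual authentication service would define its own canonicalization, key lookup, and signing scheme.

# Minimal sketch: verifying a digitally signed request with a shared symmetric key (HMAC-SHA256).
import hashlib
import hmac

def verify_request_signature(shared_key: bytes, canonical_request: str, provided_signature_hex: str) -> bool:
    expected = hmac.new(shared_key, canonical_request.encode("utf-8"), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking timing information about the expected signature.
    return hmac.compare_digest(expected, provided_signature_hex)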
The policy repository, which may be a database or other system operable to process queries, may process the query by providing any policies applicable to the request. Note that, if authentication of the request is unsuccessful (e.g., because a digital signature could not be verified), policies applicable to the request may not be obtained. Having obtained any policies applicable to the request, the authentication service316may provide an authentication response and, if applicable (e.g., when there is a positive authentication response), the obtained policies back to the service frontend310. The authentication response may indicate whether the request was successfully authenticated. The service frontend310may then check whether the fulfillment of the request for access to the customer contact service308would comply with the obtained policies using an authorization module312. An authorization module312may be a process executing on the service frontend that is operable to compare the request to the one or more permissions in the policy to determine whether the service is authorized to satisfy the request (i.e., whether fulfillment of the request is authorized). For example, the authorization module may compare an API call associated with the request against permitted API calls specified by the policy to determine if the request is allowed. If the authorization module312is not able to match the request to a permission specified by the policy, the authorization module312may execute one or more default actions such as, for example, providing a message to the service frontend that causes the service frontend to deny the request, and causing the denied request to be logged in the policy management service320. If the authorization module312matches the request to one or more permissions specified by the policy, the authorization module312may resolve among the matched permissions by selecting the least restrictive response (as defined by the policy) and by informing the service frontend whether the fulfillment of the request is authorized (i.e., complies with applicable policy) based on that selected response. The authorization module312may also select the most restrictive response or may select some other such response and inform the service frontend whether the fulfillment of the request is authorized based on that selected response. Note that, whileFIG.3shows the authorization module312as a component of the service frontend310, in some embodiments, the authorization module312is a separate service provided by the computing resource service provider306and the frontend service may communicate with the authorization module312over a network. Service frontend310may be configured to communicate with a service backend314that may be used to access one or more computing resources. For example, service backend314may have access to a contacts analytics service322which may be implemented in accordance with techniques described elsewhere such as those discussed in connection withFIGS.1,2,18, and19. In some embodiments, client requests are received at service frontend310and fulfilled at least in part by service backend314routing the request (or generating a second request based on the client request) to another service of computing resource service provider306. Service backend314may have access to computing resources such as a data storage service which service backend314uses to store contacts data to a client bucket or storage location.
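The comparison of a requested API call against the permissions in an obtained policy, as described above, could be sketched as follows. The policy structure, field names, and wildcard matching shown here are illustrative assumptions, and the sketch applies the least-restrictive resolution strategy described above; it is not a definitive implementation of the authorization module.

# Minimal sketch: authorization check comparing a requested API call against policy permissions.
def is_authorized(requested_action, resource, policy):
    # policy: {"permissions": [{"action": "StartAnalyticsJob", "resource": "contacts/*", "effect": "ALLOW"}, ...]}
    matched_effects = []
    for perm in policy.get("permissions", []):
        if perm["action"] == requested_action and _resource_matches(perm["resource"], resource):
            matched_effects.append(perm["effect"])
    if not matched_effects:
        return False  # default action: deny (and the denial could be logged)
    # Least restrictive resolution; a deployment could instead select the most restrictive response.
    return "ALLOW" in matched_effects

def _resource_matches(pattern, resource):
    return pattern == resource or (pattern.endswith("*") and resource.startswith(pattern[:-1]))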
In some examples, access control information stored in a policy or resource metadata repository is associated with resources and specifies a set of cipher suites suitable for the resources. For a particular resource, the access control information may specify or otherwise indicate a set of cipher suites such that, to fulfill an API request received over a cryptographically protected communications session and involving the resource, the cryptographically protected communications session must utilize a cipher suite in the set. The set may be specified explicitly (e.g., with an identifier for each cipher suite in the set and/or an identifier for the set), implicitly (e.g., with a security level for the resource), and/or otherwise. As with other access control information, the access control information may specify conditions involving when requirements regarding cipher suites apply, such as which API requests the requirements apply to (i.e., which type(s) of requests), which may be all API requests whose fulfillment involves access to the resource, which principals the requirements apply to (which may be all principals), and other requirements. In some examples, access control information specifies conditions involving contextual information which, for an API request, may include a source network address (e.g., source Internet Protocol (IP) address), a time of day when the request is submitted, a network from which the request is submitted (e.g., an identifier of a private network or a subnet of a private network), and other information. In one example, a source network address of an API request may be mapped to a geographic location (which may be defined in various ways, such as in accordance with geopolitical boundaries and/or legal jurisdictions) and applicability of one or more conditions may apply to the geographic location. For instance, certain geographic locations may require that certain cipher suites be in use for fulfillment of certain requests (e.g., requests whose fulfillment involves access to certain resources). Note that, whileFIG.3shows a particular configuration of a distributed system of a computing resource service provider, other configurations are also considered as being within the scope of the present disclosure. For example, authentication and authorization determinations may be made by different components of a distributed system (e.g., the service frontend310). As another example, applicable request-mapping rules and authorization rules may be stored in the policy repository and part of obtaining applicable policy may include application of the request-mapping rules to determine the proper authentication rules. As described throughout this document, such as in connection withFIG.1, an output file may be generated by a contacts analytics service. For example, a contacts analytics service may cause a step functions workflow to be triggered which generates an output file that is compiled from the use of other services such as transcription services (e.g., a speech-to-text service) and analytics services (e.g., a natural language processing service). A contacts analytics output file—which may be referred to simply as an output file or transcript based on context—may refer to an output file or object that is vended when a contacts analytics job (e.g., job illustrated inFIG.1) completes.
In at least one embodiment, a contacts analytics output comprises information about a job such as input metadata, the call or chat transcript, sentiment, key phrases, entities, categories, and additional derived metrics such as non-talk time, talk speed, etc. In at least one embodiment, a contacts analytics service writes an output file to a customer contact service's data bucket. In at least some embodiments, the output file is used to facilitate customer contacts searches and detailed contact trace record (CTR) pages, such as those described in connection withFIGS.12-13. In some embodiments, contacts analytics service writes an output file to a customer contact service's data storage bucket and the output file is then copied to a customer's data storage bucket (e.g., organization's data storage bucket) and the customer may perform subsequent business intelligence (BI), machine learning, or aggregate the customer analytics output data with other data of the organization. In various embodiments, a contacts analytics output file includes some or all end-customer-specified inputs to a request to start an analytics job. Examples of customer-specified inputs may include a language code which can be used by a downstream NLP service to determine the language to use. In various embodiments, internal input parameters which are used by a customer contact service and downstream services and which are not exposed to the end customer may be omitted from the output file. Examples of internal input parameters may include a data access role resource name (RN) and the input data configuration which points to a network location of a data storage bucket owned by a customer contact service (note that this network location is different from the customer's data storage bucket). A contacts analytics output file may, in various embodiments, be zipped before being saved (e.g., copied) to a customer's data bucket. In some cases, multiple contacts analytics output files are aggregated to one zipped file. In some cases, a zipped file includes a single contacts analytics output file and multiple zipped files may be saved to a customer's data bucket.
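The zipping step described above could be as simple as the following Python sketch, which bundles one or more output files into a single archive before it is copied to a customer's data bucket. The file paths shown in the usage comment are hypothetical.

# Minimal sketch: zipping contacts analytics output files prior to copying them to a customer's data bucket.
import zipfile
from pathlib import Path

def zip_output_files(output_paths, archive_path):
    # Bundle the given JSON output files into one compressed archive and return its path.
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_DEFLATED) as archive:
        for path in output_paths:
            archive.write(path, arcname=Path(path).name)
    return archive_path

# Example usage (hypothetical paths):
# zip_output_files(["/tmp/job-123/output.json"], "/tmp/job-123/output.zip")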
In at least one embodiment, a contacts analytics output file generated from an audio source (e.g., audio recording) may be represented as, or based on, the following:

{
  "Participants": {
    "52b61899-1f78-4253-8072-fbda7f7b9072": "AGENT",
    "4A20212C": "CUSTOMER"
  },
  "Channel": "VOICE",
  "AccountId": 467045747823,
  "JobId": "2fe2d2a1-6770-4188-a43a-8535485e1554",
  "JobStatus": "COMPLETED",
  "LanguageCode": "en-US",
  "CustomModels": [
    {"Type": "TRANSCRIPTION_VOCABULARY", "Name": "MostCommonKeywordsTranscriptionV1"},
    {"Type": "TEXT_ANALYSIS_ENTITIES", "Name": "Top100EntitiesV2"}
  ],
  "CustomerMetadata": {
    "InputSvcUri": "svc://connect-747d90ef9/connect/poc-1/CallRecordings/2019/07/",
    "ContactId": "75c88693-4782-4b27-a5f9-fc45a8ee7616",
    "InstanceId": "fe9e4e32-17fb-40a8-b027-5a1a65d1acb0"
  },
  "Transcript": [
    {
      "ParticipantId": "52b61899-1f78-4253-8072-fbda7f7b9072",
      "MessageId": "sldkgldk-3odk-dksl-hglx-3dkslgld",
      "Content": "Hello, my name is Jane, how may I assist you?",
      "BeginOffsetMillis": 0,
      "EndOffsetMillis": 300,
      "Sentiment": "NEUTRAL",
      "Entities": [
        {"Text": "Jane", "Type": "PERSON", "BeginOffsetCharacters": 15, "EndOffsetCharacters": 20}
      ]
    },
    {
      "ParticipantId": "4A20212C",
      "MessageId": "l40d9sld-dlsk-z;xl-dlwl-38222ldl",
      "Content": "I'm having trouble accessing my Foobar Application account today.",
      "BeginOffsetMillis": 500,
      "EndOffsetMillis": 945,
      "Sentiment": "NEGATIVE",
      "Entities": [
        {"Text": "Foobar Application", "Type": "TITLE", "BeginOffsetCharacters": 32, "EndOffsetCharacters": 47},
        {"Text": "today", "Type": "DATE", "BeginOffsetCharacters": 56, "EndOffsetCharacters": 61}
      ],
      "KeyPhrases": [
        {"Text": "trouble", "BeginOffsetCharacters": 11, "EndOffsetCharacters": 18},
        {"Text": "my Foobar Application account", "BeginOffsetCharacters": 29, "EndOffsetCharacters": 55},
        {"Text": "today", "BeginOffsetCharacters": 56, "EndOffsetCharacters": 61}
      ]
    }
  ],
  "Categories": {
    "MatchedCategories": ["Swearing", "Interruptions"],
    "MatchedDetails": {
      "Swearing": {
        "PointsOfInterest": [
          {"BeginOffsetMillis": 0, "EndOffsetMillis": 300},
          {"BeginOffsetMillis": 360, "EndOffsetMillis": 500}
        ]
      },
      "Interruptions": {
        "PointsOfInterest": [
          {"BeginOffsetMillis": 0, "EndOffsetMillis": 500},
          {"BeginOffsetMillis": 360, "EndOffsetMillis": 500}
        ]
      }
    }
  },
  "ConversationCharacteristics": {
    "TotalConversationDurationMillis": 7060,
    "NonTalkTime": {
      "TotalTimeMillis": 172,
      "Instances": [
        {"BeginOffsetMillis": 3, "EndOffsetMillis": 60, "DurationMillis": 57},
        {"BeginOffsetMillis": 45, "EndOffsetMillis": 160, "DurationMillis": 115}
      ]
    },
    "TalkTime": {
      "TotalTimeMillis": 90000,
      "DetailsByParticipant": {
        "52b61899-1f78-4253-8072-fbda7f7b9072": {"TotalTimeMillis": 45000},
        "4A20212C": {"TotalTimeMillis": 45000}
      }
    },
    "TalkSpeed": {
      "DetailsByParticipant": {
        "52b61899-1f78-4253-8072-fbda7f7b9072": {"AverageWordsPerMinute": 34},
        "4A20212C": {"AverageWordsPerMinute": 40}
      }
    },
    "Interruptions": {
      "TotalCount": 2,
      "TotalTimeMillis": 34,
      "InterruptionsByInterrupter": {
        "52b61899-1f78-4253-8072-fbda7f7b9072": [
          {"BeginOffsetMillis": 3, "EndOffsetMillis": 34, "DurationMillis": 31},
          {"BeginOffsetMillis": 67, "EndOffsetMillis": 70, "DurationMillis": 3}
        ]
      }
    },
    "Sentiment": {
      "OverallSentiment": {
        "52b61899-1f78-4253-8072-fbda7f7b9072": 3,
        "4A20212C": 4.2
      },
      "SentimentByPeriod": {
        "QUARTER": {
          "52b61899-1f78-4253-8072-fbda7f7b9072": [
            {"BeginOffsetMillis": 0, "EndOffsetMillis": 100, "Score": 3.0},
            {"BeginOffsetMillis": 100, "EndOffsetMillis": 200, "Score": 3.1},
            {"BeginOffsetMillis": 200, "EndOffsetMillis": 300, "Score": 3.6},
            {"BeginOffsetMillis": 300, "EndOffsetMillis": 400, "Score": 3.1}
          ],
          "4A20212C": [
            {"BeginOffsetMillis": 0, "EndOffsetMillis": 100, "Score": 3.1},
            {"BeginOffsetMillis": 100, "EndOffsetMillis": 200, "Score": 3.2},
            {"BeginOffsetMillis": 200, "EndOffsetMillis": 300, "Score": 3.6},
            {"BeginOffsetMillis": 300, "EndOffsetMillis": 400, "Score": 3.6}
          ]
        }
      }
    }
  }
}

In various embodiments, a client submits a contacts analytics job and a workflow such as those described in connection withFIGS.1and25is used to coordinate execution of a step functions workflow that generates a contacts analytics output or transcript, as shown above for an audio contacts data source. In at least some embodiments, a contacts analytics output file or transcript file is encoded in a human-readable format (e.g., JSON) and may have one or more of the following fields. It should be noted that the fields described herein are merely illustrative, and other nomenclature may be used to represent the fields described herein. A channel may refer to the modality of a customer contact. For example, a channel field may be chat, voice call, video call, and more. For example, an accountId field may represent the end customer's account identifier and may be distinguishable from an account identifier associated with the customer contact service account which submits a job. For example, a jobId field may be a job identifier which serves as a unique identifier that resolves to a particular contacts analytics job and can be used to disambiguate one job from another. In at least some embodiments, a contacts analytics output file or transcript file includes a transcript field that is partitioned by sentences for calls, and by messages (which may be multi-sentence) for chats. A transcript field may include both the transcript text as well as any segment-level metrics that are generated—for example, by an NLP service. In various embodiments, chat messages do not have a duration field whereas audio has a duration field that indicates how long a particular sentence or turn took. Chats may have a single field, absoluteTimestamp, and calls may have two fields—relativeOffsetMillis and durationMillis. For example, BeginOffsetMillis/EndOffsetMillis fields may refer to offsets in milliseconds, measured from the beginning of the audio, marking where a segment begins and ends. For example, an absoluteTime field may refer to an ISO8601 formatted absolute timestamp to the millisecond a message was sent. In at least one embodiment, only one of absoluteTime or relativeOffsetMillis is needed. For example, beginOffsetCharacters/endOffsetCharacters fields in the entities/key phrases output may refer to the character offsets in a particular portion of a transcript where the entity or key phrase begins and ends. For example, a categories field may refer to a list of categories which the conversation triggered.
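An output file in the format shown above can be consumed programmatically. The following minimal Python sketch, under the assumption that the output has already been written to local storage as JSON, pulls out a few of the fields described above (job ID, channel, matched categories, per-participant talk time, and the number of negative turns).

# Minimal sketch: summarizing a contacts analytics output file such as the example above.
import json

def summarize_output(path):
    with open(path, "r", encoding="utf-8") as f:
        output = json.load(f)
    characteristics = output.get("ConversationCharacteristics", {})
    talk_time = characteristics.get("TalkTime", {}).get("DetailsByParticipant", {})
    return {
        "JobId": output.get("JobId"),
        "Channel": output.get("Channel"),
        "MatchedCategories": output.get("Categories", {}).get("MatchedCategories", []),
        "NegativeTurns": sum(1 for turn in output.get("Transcript", [])
                             if turn.get("Sentiment") == "NEGATIVE"),
        "TalkTimeMillisByParticipant": {pid: d.get("TotalTimeMillis") for pid, d in talk_time.items()},
    }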
In at least one embodiment, a contacts analytics output file generated from a text-based source (e.g., chat log) may be represented as, or based on, the following:

{
  "Channel": "chat",
  "Transcript": [
    {
      "ParticipantId": "52b61899-1f78-4253-8072-fbda7f7b9072",
      "Message_id": "sldkgldk-3odk-dksl-hglx-3dkslgld",
      "Content": "Hello, my name is Jane, how may I assist you?",
      "AbsoluteTime": "2019-07-23T13:01:53.23",
      "BeginOffsetMillis": 500,
      "EndOffsetMillis": 945
    },
    {
      "ParticipantId": "4A20212C",
      "Message_id": "l40d9sld-dlsk-z;xl-dlwl-38222ldl",
      "Content": "I'm having trouble accessing my Foobar Application account today.",
      "AbsoluteTime": "2019-07-23T13:01:53.23",
      "BeginOffsetMillis": 1014,
      "EndOffsetMillis": 1512
    }
  ],
  "Categories": {
    "Swearing": {
      "IsRealTime": true,
      "StartAndEndAbsoluteTime": [
        {"StartAbsoluteTime": "2019-07-23T13:01:53.23", "EndAbsoluteTime": "2019-07-23T13:01:53.23"},
        {"StartAbsoluteTime": "2019-07-23T13:01:53.23", "EndAbsoluteTime": "2019-07-23T13:01:53.23"}
      ]
    },
    "Interruptions": {
      "IsRealTime": true,
      "StartAndEndAbsoluteTime": [
        {"StartAbsoluteTime": "2019-07-23T13:01:53.23"},
        {"StartAbsoluteTime": "2019-07-23T13:01:53.23"}
      ]
    }
  }
}

Customer contact service may have a concept of call recordings—once a customer call recording finishes, customer contact service may take that audio file and perform all of this analysis with various backend services such as transcribing the audio to text and running natural language processing algorithms on the text. In some cases, contacts analytics service also performs its own post-processing and generates an output file, such as described above, which is saved to a data bucket of the customer contact service. Customer contact service may then copy that output into a customer's data bucket and the customer may take the contacts analytics output and ingest it in their application for various use cases. As a first example, customers (e.g., organizations) can ingest contacts analytics output files in their elastic search cluster (e.g., for keyword search to see how often agents comply with certain categories). As a second example, the contacts analytics data can be exported by customers so that they can combine this data with other data sets and aggregate the data—an example of this may be that contacts analytics data is used to determine how often an agent complies with an organization's greetings category, which is combined with other organization data such as how often the agent was tardy to work to create an agent score card using additional metadata that an organization may store internally. FIGS.4and5may collectively illustrate a graphical user interface that can be used to manage categories. For example,FIGS.4and5may be presented to a client of a customer contact service that utilizes a contacts analytics service as a backend service. In at least some embodiments, a first part400and second part500of a categories UI is illustrated inFIGS.4and5. In at least one embodiment, a client such as a supervisor, QA specialist, or other member of an organization may use the GUI described inFIGS.4and5to generate categories. Customer contacts may be processed to determine which categorization rules are met in particular customer contacts. Categories may apply to various types of customer contacts of various modalities, including but not limited to audio and chat interactions with customers of an organization. FIGS.4and5as illustrated may include various UI elements, which may be illustrated in the thickened lines and text.
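The agent score card example above could start from an aggregation like the following sketch, which computes how often each agent's contacts matched a given category across a directory of output files. The directory layout and the AgentId field under CustomerMetadata are assumptions for illustration; the example output formats shown earlier do not define an agent identifier field.

# Minimal sketch: per-agent category compliance rate computed from contacts analytics output files.
import json
from collections import defaultdict
from pathlib import Path

def category_compliance_by_agent(output_dir, category_name="Greetings"):
    totals = defaultdict(int)
    matches = defaultdict(int)
    for path in Path(output_dir).glob("*.json"):
        output = json.loads(path.read_text(encoding="utf-8"))
        agent = output.get("CustomerMetadata", {}).get("AgentId", "unknown")  # assumed field
        totals[agent] += 1
        if category_name in output.get("Categories", {}).get("MatchedCategories", []):
            matches[agent] += 1
    return {agent: matches[agent] / totals[agent] for agent in totals}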
While a text-based GUI is illustrated for clarity, graphics and other types of graphical icons may be utilized. Categories may be persisted and managed by a categorization service and/or category data store, which may be in accordance with techniques described in connection withFIG.1. In at least one embodiment, a contacts analytics service interfaces with a categorization service as backend services to support features presented to a user via a customer contact service. In a first part400of a categorization UI, the UI may allow a user to create new categories and manage existing categories, such as by editing, copying, or deleting them. For example,FIG.4illustrates the creation of a new category. A user may type in a name of a category—for example, inFIG.4, the category being created is named “Improper Greetings” and may be a rule that is used to detect when agents of an organization are not properly greeting subscribing customers in accordance with the organization's internal processes. Italicized text as illustrated inFIG.4may refer to input fields in which a user can type in a custom text string. A category may support rules-based categorization where a user can specify a set of criteria for when a category is met. For example, a category may be keyed on certain attributes being true or false. For example, as illustrated inFIG.4, an attribute may have a type, a specific attribute (options of which may vary based on the selected type), a matching criterion, and a value. As shown inFIG.4, a category may be applied when an external type with attribute member status is equal to subscriber, meaning that member status indicates that a contact is with a subscriber. Attributes may specify various properties of customer contacts so that only customer contacts that meet the attributes of a category are tagged as such. A drop-down menu or text input box may be used by the user to specify various properties of an attribute. Attributes may be combined using various Boolean operators. For example,FIG.4illustrates a second attribute specifying that if a system queue is a subscribers queue, then the second attribute is matched. A Boolean operator “Or” combines the two attributes, such that a condition is satisfied if either the first attribute (encoding a first condition that a customer is a subscriber) or second attribute (the contact is from a subscribers queue) is met. The various conditions may be evaluated in any suitable order, as determined by a categorization service, such that it is not necessarily the case that the standard order of operations is always honored. FIG.4further illustrates key words and phrases. Keywords and phrases may refer to specific keywords and phrases that are searched upon in customer contacts in connection with criteria defined under the keywords and phrases. In various embodiments, an analytics service may perform natural language processing to extract keywords and phrases from a contacts data source such as a call recording or chat log. As illustrated inFIG.4, a user can specify that a category is met when a key word or phrase is included or is not included during a specified time range.
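The conditions of such a rules-based category could be evaluated as in the following sketch, which checks the attribute conditions and a keyword-or-phrase-within-a-time-range condition of the kind described above. The attribute, operator, and contact record structures are assumptions for illustration, the Boolean operators are evaluated left to right (whereas, as noted above, a categorization service may choose any suitable evaluation order), and exact substring matching stands in for fuzzier phrase matching a categorization service might apply.

# Minimal sketch: evaluating the conditions of a rules-based category.
def attribute_matches(attribute, contact):
    # attribute: {"attribute": "Member status", "matching": "Equals", "value": "Subscriber"}
    actual = contact.get(attribute["attribute"])
    if attribute["matching"] == "Equals":
        return actual == attribute["value"]
    if attribute["matching"] == "Contains":
        return attribute["value"] in (actual or "")
    return False

def attributes_match(attributes, operators, contact):
    # operators: Boolean operators ("And"/"Or") combining consecutive attributes, evaluated left to right.
    result = attribute_matches(attributes[0], contact)
    for op, attr in zip(operators, attributes[1:]):
        current = attribute_matches(attr, contact)
        result = (result and current) if op == "And" else (result or current)
    return result

def phrase_spoken_in_window(transcript, phrases, speaker_id, window_millis):
    # True if the given speaker used any of the phrases within the first window_millis of the contact.
    for turn in transcript:
        if turn["ParticipantId"] != speaker_id or turn["BeginOffsetMillis"] >= window_millis:
            continue
        content = turn["Content"].lower()
        if any(p.lower() in content for p in phrases):
            return True
    return False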
The specified time range may, for example, be the first portion of a call, the last portion of a call, anywhere in the call, or numerous other variations.FIG.4illustrates a category to detect improper greetings when the phrases “Thanks for being a subscriber” or “We value you as a subscriber” are not spoken by the agent within the first 30 seconds of a call. In some embodiments, substantially similar variations of the specified keywords or phrases may be sufficient (e.g., they are treated equivalently). Whether a particular phrase is sufficiently similar to a specified keyword or phrase may be determined using natural language processing to determine whether the two have similar syntactic meaning. For example, if an agent speaks “Thank you for being a subscriber” it may be determined to be substantially similar to the specified phrase “Thanks for being a subscriber” and therefore meet the phrase specified inFIG.4. Different keywords and phrases can be specified for different speakers. Users may add and remove keywords and phrases. In some cases, there is a maximum number of keywords and phrases the user can add. FIG.5may illustrate a second part500of a categories UI.FIG.5illustrates various actions that can be taken if category rules such as those described in connection withFIG.4are met. For example, if a Boolean rule is evaluated to be TRUE, then the alert illustrated inFIG.4may be triggered. For example, an alert may send an email, send a text message, or display an alert in a supervisor or agent dashboard. In at least one embodiment, an agent is prompted with a reminder to thank a customer for being a subscriber. An alert may be displayed in a supervisor dashboard for various categories. For example, a dashboard such as those described in connection withFIGS.12and13may surface information as to which customer contacts met certain categories. Supervisors may filter by category to determine, for example, which agents failed to properly greet subscribing customers. The supervisor's dashboard may include additional information such as sentiment scores which can be used for performing business intelligence or analytics. For example, an organization can collect aggregated data across multiple agents over a period of time to determine whether greeting customers in a particular manner (e.g., thanking subscribers at the start of a call) results in higher customer sentiment. In at least one embodiment, an alert can be sent to a supervisor using a notification service, which may push a notification to a queue which can be obtained by subscribing supervisors. In various embodiments, categories can be defined based on content of communications as well as acoustic characteristics in the case of audio contacts. For example, calls may be categorized to identify instances of long silence, talking too fast, interruptions, and more. FIG.6illustrates a contact search page600, in accordance with at least one embodiment. A supervisor may have access to contact search page600through a graphical user interface (e.g., webpage) exposed by a customer contact service such as those described elsewhere in this disclosure, such as those discussed in connection withFIGS.1and2. Data generated by a contacts analytics service may be indexed and used to identify contacts that meet a particular search query. Contact search page600may be used to perform ad hoc and post hoc analysis. Contact search may also be used to discover themes and trends which may not have been previously identified.
Contact search page600may be a graphical interface that is accessible to clients of a customer contact service (e.g., scoped to supervisors or a defined set of permissioned users). Contact search page, in at least one embodiment, allows a client of a customer contact service to search through all contacts (e.g., any interactions between customer and agent, regardless of modality). In various embodiments, contacts search page600supports a rich set of search parameters beyond agent name and contact identifier (e.g., a unique identifier assigned to a contact) such as searching by keyword. For example, if a supervisor has an impression that there have been a lot of account login issues, then he or she can search for customer contacts that include the words “account is locked” or “can't access my account” as keywords and look into those issues. Similarly, contact search page can also be scoped to a particular agent, customer, or even a particular contact identifier. In at least some embodiments, conversation characteristics from call outputs, such as silence, non-talk time, cross-talk, etc., can also be searched upon. In some embodiments, contact search page600includes additional search parameters not illustrated inFIG.6, such as capabilities to search by categories. Search results may be displayed in accordance with some or all ofFIGS.7-10. Pressing the “Search contacts” button may initiate a search of some or all contacts data of a client such as an organization. The search may be initiated by a client computing device, routed to a service frontend over a network, the request authenticated and authorized, and then routed to a backend service that executes a search. The search may be indexed using contacts analytics output files that include metadata about audio calls, including not only the textual content of the call (e.g., a transcript of the audio) but also conversation characteristics such as silence, non-talk time, cross-talk, and more. Categories may be searched upon to determine which customer contacts did or did not meet certain categories. For example, a supervisor can search for contacts in which an agent performed an improper greeting. FIG.7illustrates a contact search result page700, in accordance with at least one embodiment. In at least one embodiment, contact search result page700is provided to a client of a service provider in response to a client search request with a specified set of search parameters. In at least one embodiment, search parameters may include various parameters such as a time period to search over and one or more keywords or phrases. Searches may be performed using NLP techniques so that literal as well as semantic matches are returned. In at least one embodiment, contact search result page700allows a user to edit an executed search to modify the search parameters. Contact search result page700may display a set of common themes that are detected for the search parameters. Common themes may refer to keywords and phrases that are positively correlated with search parameters.
For example,FIG.7illustrates a search result over a specified period of time for instances of “account is locked” and “can't access my account” which has a high correlation with “account access,” which can be deduced from the number of instances of the “account access” keyword within the search result—in fact, all 98 search results that matched the search parameters were also associated with “account access,” which can be seen by the “98” in the circle under the Common Themes section as well as the bottom of the search results. Other common themes may be listed as well, in order of magnitude or frequency. For example, “account is locked” and “password is not accepted” also occur relatively frequently in the search results. Supervisors can use the common themes to discover potential issues which are more specific than the search parameters or identify potential root causes to a problem that customers report experiencing. In some embodiments, contact search result page700allows a client to download contacts data for the search results—for example, a client may be able to download contacts analytics output files for all 98 search results shown inFIG.7as a single zipped file. Clients may take the downloaded data and use it to perform business intelligence, combining it with additional data internal to the client. In at least one embodiment, a contact ID is a Globally Unique Identifier (GUID) or Universally Unique Identifier (UUID). Contact search result page700may display a page of search results with various fields.FIG.7merely displays one among several possible sets of fields to report in a search result page.FIG.7illustrates the following fields:
Contact ID: Unique identifier associated with a particular customer contact
Channel: Method of communication (e.g., chat or voice)
Initiation Timestamp: Start of customer contact
Phone Number: Phone number
Queue: Queue from which the contact was made
Agent: Agent name or identifier that handled the customer contact
Recording/Transcript: Call recording or chat transcript of customer contact; can be viewed, played, downloaded, or deleted
Customer Number: Customer contact information
Disconnect Timestamp: End of customer contact
FIG.8illustrates playback capabilities of a contact search result page800, according to at least one embodiment. In at least one embodiment, a contacts analytics service transcribes audio and processes the transcribed text to identify entities, keywords, and phrases. In some cases, customer sentiment is also encoded and can be viewed from the contact search result page. In at least one embodiment, each audio contacts data source is transcribed by turn, based on who was speaking at a given point in time.FIG.8illustrates an example in which a user clicks on the audio of the second search result with contact ID “1po9ie0-7-fdc-2588-9erc-cd2iuy510987q” which plays the audio of the customer contact and displays additional information. The prompt may include the total call duration, and a speech-to-text transcript of the turn that is being replayed. In some cases, keywords, phrases, entities, categories, or a combination thereof are highlighted, bolded, or otherwise surfaced to the user. For example, under the search, the “I can't access my account” phrase is highlighted, which may represent keywords, phrases, entities, categories, etc. of interest to the user. In some embodiments, audio contacts data is ingested by a contacts analytics service which uses a speech-to-text service to transcribe the audio contacts data source into a transcript of the audio contact.
The transcript may be organized into turns. The transcript may be provided to an NLP service which performs sentiment analysis, entity detection, keyword detection, phrase detection, and more processing. Contacts analytics service may perform additional post-processing, such as assigning a sentiment score to portions of the audio contact and/or a sentiment score for the overall contact. Transcripts may be provided to a categorization service that checks whether a set of rules-based categories (or a portion thereof) are met by the contact. In some embodiments, clicking on a chat transcript brings up a prompt that shows the conversation between an agent and customer. FIG.9illustrates contact theme filtering capabilities of a contact search result page900, according to at least one embodiment.FIG.9may be implemented in the context of contact search described inFIGS.9-11.FIG.9may illustrate a scenario in which a user submitted a search for a set of parameters—for example, for keywords or phrases “account is locked” or “can't access my account” over Dec. 22, 2018 to Dec. 23, 2018. Search results may be populated as shown inFIG.9, but additional themes that the user may not be aware of may also be surfaced, which allows the user to explore additional themes. Themes may be listed by frequency, with the most common themes being shown first. For example,FIG.9shows the following themes:
Account Access—98 instances
Account is locked—76 instances
Password is not accepted—70 instances
Online banking—63 instances
Can't access online banking—55 instances
Page says access denied—44 instances
Can't reset password—41 instances
Online checking account—33 instances
Online banking page is broken—32 instances
The user may review the common themes presented, and select any theme to drill deeper to learn new insights or identify root causes or previously undiscovered issues. For example, inFIG.9, the user may click on the “Account is locked” theme which was surfaced using metadata generated by a contacts analytics service. This flow may be continued onFIG.10. FIG.10illustrates contact theme filtering capabilities of a contact search result page1000, according to at least one embodiment.FIG.10may be implemented in the context of contact search described inFIGS.9-11. In at least one embodiment, a user clicked on the “Account is locked” theme inFIG.9which results in the flow onFIG.10, which displays the 76 contacts that fall under the “Account is locked” theme. Furthermore, there may be a hierarchy displayed on the search page “Search parameter>Account is locked” which can be used to navigate to different levels of the search. For example, clicking on “Search parameters” may bring the user back to the original search results inFIG.9. Contact search result page1000may show the most relevant sub-themes under a theme. Note that the themes presented in contact search result page1000are the most commonly occurring themes of the subset of contacts that match “Account is locked”—accordingly, these themes may differ from those inFIG.9. For example,FIG.10illustrates the following themes:
Password is not accepted—70 instances
Online banking—63 instances
Identity theft—30 instances
Online banking page is broken—20 instances
Online checking account—15 instances
Can't reset password—11 instances
Shared Accounts—6 instances
Security Settings—5 instances
It should be noted that exploring sub-theme results may, as inFIG.10, surface themes that were not visible among the original search themes, since the search domain is different.
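The common-theme counts and the drill-down behavior described above can be approximated with simple aggregation over the matched contacts, as in the following sketch. The contact structure follows the transcript/key-phrase format shown earlier in this document; counting each theme at most once per contact is an assumption made so that the counts reflect numbers of contacts rather than numbers of mentions.

# Minimal sketch: surfacing common themes for a set of matched contacts and drilling into a sub-theme.
from collections import Counter

def common_themes(contacts, top_n=9):
    counts = Counter()
    for contact in contacts:
        phrases = {kp["Text"].lower()
                   for turn in contact.get("Transcript", [])
                   for kp in turn.get("KeyPhrases", [])}
        counts.update(phrases)  # each theme counted at most once per contact
    return counts.most_common(top_n)

def drill_down(contacts, theme):
    # Return only the contacts that mention the selected theme, for the next level of the hierarchy.
    theme = theme.lower()
    return [c for c in contacts
            if any(kp["Text"].lower() == theme
                   for turn in c.get("Transcript", [])
                   for kp in turn.get("KeyPhrases", []))]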
For example,FIG.10shows that of the 76 instances of “Account is locked” there were 30 instances of identity theft also associated with the accounts being locked. This insight may lead the user to conclude that there is a rise in identity theft and to take action accordingly, such as to increase security measures and to inform agents to use more robust methods to authenticate users, such as requiring the use of multi-factor authentication during periods of higher than usual risk. In at least some embodiments, a user can click on the “Identity theft” theme to drill deeper into sub-themes associated with reports of identity theft. This flow may be continued onFIG.11. FIG.11illustrates contact theme filtering capabilities of a contact search result page1100, according to at least one embodiment.FIG.11may be implemented in the context of contact search described inFIGS.9-11. In at least one embodiment, a user clicked on the “Identity theft” theme inFIG.10which results in the flow onFIG.11, which displays the 30 contacts that fall under the “Identity theft” theme. Furthermore, there may be a hierarchy displayed on the search page “Search parameter>Account is locked>Identity theft” which can be used to navigate to different levels of the search. For example, clicking on “Search parameters” may bring the user back to the original search results inFIG.9and clicking on “Account is locked” may bring the user back to the search results inFIG.10. In at least one embodiment, additional themes can be discovered to identify more particular information relating to identity theft. For example, as shown inFIG.11, it may be the case that 26 of the 30 instances of identity theft were reported in Oregon state. A supervisor may use this information to impose stricter authentication requirements in Oregon but not other states. FIG.12illustrates a first portion of a contact trace record page1200, according to at least one embodiment.FIGS.12and13may collectively illustrate a contact trace record page. In at least one embodiment, a user is able to review a contact trace record for a customer contact. For example, a user may navigate to a contact trace record page through a contact search result by clicking on the result for a particular contact. In at least some embodiments, a contacts analytics output file is obtained and the data from the output file is used to populate a contact summary card, contact analysis, call transcript, categorizations, entities, keywords, phrases, and more. Contact trace record page1200may be a visualization of some or all data of an output file generated by a contacts analytics service. Contact trace record page1200may include a contact summary section that includes some or all of the following information: contact id, start and end times (e.g., based on an initiation timestamp and a disconnect timestamp), contact duration, customer number, agent, queue, and actions triggered (e.g., categories). Actions triggered may refer to categories that matched. The contact trace record page1200may include a graph of the customer sentiment trend, which may be based on a rolling sentiment score. The contact trace record page1200may include aggregate data, such as aggregate statistics on the customer sentiment. For example, a graph may show what percentage of the call a customer's sentiment was positive, neutral (e.g., neutral or mixed), or negative. In some embodiments, the percentages are based on what proportion of turns were positive, neutral, or mixed.
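A turn-based version of the sentiment percentages described above could be computed as in the following sketch; the duration-weighted variant mentioned next would weight each turn by its length instead of counting turns equally. The transcript structure follows the output format shown earlier, and the sketch is an illustration rather than a definitive implementation.

# Minimal sketch: percentage of customer turns carrying each sentiment label.
from collections import Counter

def sentiment_percentages(transcript, customer_id):
    labels = [turn["Sentiment"] for turn in transcript
              if turn.get("ParticipantId") == customer_id and "Sentiment" in turn]
    counts = Counter(labels)
    total = sum(counts.values()) or 1
    return {label: 100.0 * count / total for label, count in counts.items()}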
In some embodiments, the percentages are based on what portion of the call length the customer's sentiment was positive, neutral, or mixed (e.g., a longer positive sentiment is weighted more heavily than a shorter negative sentiment). In at least some embodiments, total talk time is broken down, by percentage, by speaker. In some cases, periods of silence are denoted non-talk time. The contact trace record page1200may present additional contact details and/or contact analysis information. Contact trace record page1200may, in at least one embodiment, display audio and transcript information. In at least some embodiments, users can search for specific words or phrases in the audio, which may be matched against a transcript of the audio generated by a speech-to-text service. Visualizations of the audio may be presented in the contact trace record page1200. The audio may be color coded by speaker—bar heights may represent loudness and the bars may be of different colors for when the agent is speaking, when the customer is speaking, when both are speaking (e.g., cross-talk), and for periods of silence. In at least some embodiments, sentiments and/or sentiment scores are displayed. In at least some embodiments, audio playback may be made available in the page, which may include filters for party sentiment, adjustment of playback speed to be faster or slower than typical, and more. FIG.13illustrates a second portion of a contact trace record page1300, according to at least one embodiment.FIGS.12and13may collectively illustrate a contact trace record page. In at least one embodiment, a user is able to review a contact trace record for a customer contact. For example, a user may navigate to a contact trace record page through a contact search result by clicking on the result for a particular contact. In at least some embodiments, a contacts analytics output file is obtained and the data from the output file is used to populate a contact summary card, contact analysis, call transcript, categorizations, entities, keywords, phrases, and more. In at least one embodiment, contact trace record page1300displays a transcript of an audio recording of a customer contact (e.g., video or audio call). In at least one embodiment, the transcript is a text-based transcript of audio that is generated by a speech-to-text service. The transcript may be organized by turns, and an emoticon may be displayed next to each turn, indicating the sentiment of the turn. For example, in the first turn shown inFIG.13, the first speaker is the agent (e.g., as shown by the agent speaking at 00:01) and the sentiment of the speaker's first turn was neutral. The transcript may highlight or otherwise surface information relating to categories, entities, keywords, phrases, etc., of the transcript which were identified using natural language processing techniques. In at least some embodiments, a contacts analytics service automatically redacts sensitive data from chat logs, call transcripts, and other text-based records. Non-limiting examples of sensitive data may include one or more of the following: credit card numbers; social security numbers; patient health records; date of birth; passwords or pass phrases; cryptographic keys or other secret material; personal identification number (PIN); and more. In at least some embodiments, sensitive data includes personal health information (PHI) and/or personally identifiable information (PII).
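As a simplified illustration of the redaction described above, the following sketch removes two kinds of sensitive data from transcript text using regular expressions. An actual redaction component could rely on machine-learning-based detection and would cover many more data types (PHI, PII, payment card data, and so on); the patterns and placeholder labels here are assumptions made for illustration only.

# Minimal sketch: regex-based redaction of a few sensitive data types from transcript text.
import re

REDACTION_PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub("[{} REDACTED]".format(label), text)
    return text

# Example: redact("My card number is 4111 1111 1111 1111") returns "My card number is [CREDIT_CARD REDACTED]"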
In at least some embodiments, contacts analytics service is payment card industry (PCI) compliant and can automatically redact PCI data from both call audio and chat transcript to ensure that sensitive customer information is not exposed to unauthorized employees within the organization. In at least some embodiments, sensitive data is redacted from contacts analytics service GUI and stored in an encrypted format. In at least some embodiments, an organization may have access to a cryptographic key that can be used to decrypt sensitive data of chat logs if such data is needed, such as in cases where such information is required for compliance with statutory and/or regulatory requirements. FIG.13further illustrates actions that were triggered by the call. Actions may refer to categories or other points of interest which may have been identified while processing the call audio. For example, the call triggers illustrated inFIG.13show eight actions triggered: yelling; loud volume (occurring three times over the course of the call); a link to current promotions; a link to change address form; a swearing word flag (e.g., profanity); and swear word notification to supervisor. In at least some embodiments, contact trace record page1300includes a text box in which a supervisor can add comments which are associated and/or stored with metadata associated with the customer contact, and the comment can be shared (e.g., using the sharing button in the upper right corner of the "general comments" prompt). FIG.14illustrates searching of keywords and entities in a contact trace record page1400, according to at least one embodiment.FIG.14is implemented in the context of contact trace record pages described elsewhere in this disclosure, such asFIGS.12and13, according to at least one embodiment. In at least one embodiment, a user is able to review a contact trace record for a customer contact. For example, a user may navigate to a contact trace record page through a contact search result by clicking on the result for a particular contact. In at least some embodiments, a contacts analytics output file is obtained and the data from the output file is used to populate a contact summary card, contact analysis, call transcript, categorizations, entities, keywords, phrases, and more. A contacts analytics output file may encode metadata such as sentiments, keywords, and entities that are extracted from text inferred (e.g., using one or more speech neural networks) from audio. In at least one embodiment, a user can start typing in keywords and entities and automatically be prompted with keywords and entities that match the search string. For example, if a user starts to type "Ac"—as in "Account ID"—a menu may appear with different keywords and entities relevant to the contact. FIG.15illustrates a diagram1500of detailed audio data and metadata, in accordance with at least one embodiment. In at least one embodiment,FIG.15illustrates details of an audio call which can be surfaced in a contact trace record such as those described elsewhere in this disclosure. In at least one embodiment,FIG.15illustrates various points of interest which may be presented to a user visually as different colors, with mouse-over descriptions, and more. For example, the vertical bars shown inFIG.15may correspond to loudness with longer vertical bars representing louder sounds (e.g., speech). A consistently high loudness may be categorized as shouting.
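As a purely illustrative, non-limiting sketch of the automatic redaction of sensitive data described above, a pattern-based pass over transcript text is shown below in Python. The specific patterns, labels, and the redact_transcript helper are assumptions chosen for illustration; an actual contacts analytics service may rely on more robust detection (e.g., trained entity-recognition models and checksum validation) rather than simple regular expressions.

    import re

    # Illustrative patterns for a few sensitive-data types; a production system would use
    # more robust detection (e.g., entity-recognition models and Luhn checks) than regexes.
    SENSITIVE_PATTERNS = {
        "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PIN": re.compile(r"\b(?:PIN|pin)\s*:?\s*\d{4,6}\b"),
    }

    def redact_text(text):
        # Replace each match with a label so downstream viewers can see data was removed.
        for label, pattern in SENSITIVE_PATTERNS.items():
            text = pattern.sub("[REDACTED {}]".format(label), text)
        return text

    def redact_transcript(turns):
        # turns: list of {"speaker": ..., "text": ...} dicts from a turn-partitioned transcript.
        return [dict(turn, text=redact_text(turn["text"])) for turn in turns]

    if __name__ == "__main__":
        sample = [{"speaker": "CUSTOMER", "text": "My card number is 4111 1111 1111 1111."}]
        print(redact_transcript(sample))

In such a sketch, the redacted transcript could be the version stored and displayed, while the unredacted text is kept only in encrypted form as described above.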
The vertical bars may be color coded such that different speakers may be illustrated as different colors—for example, the customer's bars may be light blue, the agent's bars may be dark blue, cross-talk (where both agent and customer are speaking over each other) may be orange, and periods of silence may be yellow. In at least some embodiments, sentiment scores are shown in the contact audio visualization. For example, for each turn, a color or numeric score may be shown. Additionally, categories may be shown. For example, when someone swears, there may be a visual indicator that a category for profanity was triggered. In at least one embodiment,FIG.15illustrates a long silence from 2:30-3:00 that triggers a "long silence" category which may be useful for detecting when agents are failing to provide customers consistent feedback. In various embodiments, organizations can use contacts analytics output files for various use cases. For example, an organization can take sentiment scores for customer contacts and ingest them in their own models (e.g., machine learning models) and train the models to help identify pain points and identify instances where supervisors should be alerted, additional information presented to agents in real-time, and more. As a second example, organizations (e.g., employees thereof) can adjust various settings to set thresholds associated with sentiments and define actions or categories based on certain thresholds being exceeded. For example, a run of N negative sentiments coupled with an overall negative sentiment score may be categorized as a bad interaction and a supervisor may be notified (e.g., after the fact or in real-time). FIG.16illustrates commenting functionality of a contact trace record page1600, in accordance with at least one embodiment. In at least one embodiment,FIG.16illustrates a contact trace record page or a portion thereof. For example,FIG.16may be implemented as a part of a contact trace record page described in connection withFIGS.12-13. In at least one embodiment, a supervisor can review customer contacts (e.g., by drilling into a specific customer contact from a contact search result) and offer comments to help agents improve. In at least one embodiment, a supervisor can click on a specific turn or portion of text and add comments. In at least one embodiment, clicking on part of the transcript brings up a comment window in which the supervisor can select whether the commented text was "Good," "Poor," or "Needs Improvement"—other classifications are also possible to help organize comments. In at least one embodiment, a supervisor can comment that an agent wishing a customer a happy birthday as part of a conversation is "Good" and add additional comments on how congratulating the customer on his birthday is an example of how the organization can introduce delight into their conversations with customers. FIG.17illustrates a contact analysis dashboard1700, in accordance with at least one embodiment. Contact analysis dashboard1700may be a graphical interface that supervisors use to monitor and manage customer contacts in the context of a customer contact service, such as those described in connection withFIGS.1-2. In various embodiments, a customer contact service utilizes a backend contacts analytics service to process large amounts of contacts data which is aggregated and reported to the contact analysis dashboard. A contact analysis dashboard1700surfaces various information at a glance to a supervisor.
In various embodiments, contact analysis dashboard1700is a web-based UI. Contact analysis dashboard1700surfaces aggregate statistics at the top of the UI, in at least one embodiment, and displays one or more aggregate statistics. One example of an aggregate statistic is calls in queue, which may surface the number of calls in a queue. In some cases, a trend line may also show how the number of calls in the queue is changing over time. One example of an aggregate statistic is oldest call in queue, which may be a selection of the call in the queue that is oldest. Another example of an aggregate statistic is agents online, which may be a count of the total number of agents online. Agents available may show the number of agents that are available to take new calls. Average handle time (AHT) may refer to the average length of a customer interaction, which includes hold time, talk time, and after-call work (ACW). ACW may refer to the time agents take to wrap up a call. ACW activities may include data-entry, activity codes, dispositions, form completion and post-call communication by the agent after a customer call. A contact analysis dashboard1700may have one or more panes that describe various call center-related activities. For example, an active contacts pane may display the number of active contacts via different modalities (e.g., calls, chats, and emails) and trend lines and percentages that show relative load. In some embodiments, a contact volume pane displays a more detailed view into the volume of contacts over different modalities. In some cases, a contact volume pane provides comparisons, such as comparing the current day's loads against those of the previous day or historic averages. A queue pane may illustrate different queues, which may be different ways in which customers contact a customer call service—for example, customers that call regarding online banking may be placed in one queue and customers calling regarding home loans are placed in a different queue. Queue occupancy may be color coded such that higher occupancy percentages are better. In at least some embodiments, a pane for agents online shows a breakdown of the activity of the agents online. In at least one embodiment, themes across all contacts may be displayed, and may be used for issue discovery similar to a contact search result. FIG.18illustrates a computing environment1800in which various embodiments may be practiced. In accordance with at least one embodiment, computing environment1800illustrates a service architecture for real-time supervisor assistance.FIG.18may be implemented in the context of a computing resource service provider. In at least one embodiment,FIG.18illustrates a computing resource service provider that supports a customer contact service such as those described in connection withFIG.1. Customer contact service may be utilized by an organization to provide support to customers such as customers1802illustrated inFIG.18. Customer contact service may be a scalable service that can scale up or down computing resources as needed based on demand, and may be implemented in accordance with techniques described elsewhere in this disclosure, such as those discussed in connection withFIG.25. A supervisor may be responsible for overseeing customer contacts and for managing a group of agents. FIG.18illustrates a customer contact service that supports real-time calls between customers1802and agents1804.
Customers1802may be customers of an organization and the organization may employ agents1804to answer questions from customers, resolve issues, and so on. In at least one embodiment,FIG.18illustrates an architecture in which a supervisor is alerted to potentially problematic customer contacts in real-time. Agents may be employees of an organization that are tasked with communicating with customers, troubleshooting, technical support, customer support, etc. Agents1804may have access to a computer system connected to a customer contact service that provides the agents with access to knowledge bases, internal customer resources, and backend systems to process returns, update subscriptions, and more. Customers1802and agents1804may be connected via a network. The network may include a combination of POTS and voice-over-IP (VOIP) network connections. Customers1802may initiate a phone call which is routed to agents1804via a VOIP system. Once connected, customers and agents can speak to each other. In various embodiments, customers may have different sentiments based on their reasons for calling as well as the responses of agents. For example, if an agent is being unhelpful or rude, a customer may become frustrated or angry. As illustrated inFIG.18, customers may exhibit positive sentiments (e.g., as illustrated inFIG.18by the customers with smiles) as well as negative sentiments (e.g., as illustrated inFIG.18by the angry customer). Real-time supervisor assist can be used to notify supervisor1814of a customer's negative sentiment and allow the supervisor to intervene or provide guidance to an agent, in accordance with at least one embodiment. In at least one embodiment, active calls are connected in real-time to contacts analytics service1806. Contacts analytics service1806may be implemented in any suitable manner, such as in accordance with techniques described in connection withFIG.1. In at least one embodiment, the real-time connection provides an on-going stream of audio contacts data from one or more agents to contacts analytics service1806. A real-time connection may be established with agents using a WebSocket protocol, according to at least one embodiment. A WebSocket connection may be used to establish a real-time bidirectional communications channel between a client (e.g., agent or agent's computer system) and a server (e.g., contacts analytics service1806). A WebSocket connection may be implemented using a TCP connection. A WebSocket connection may be established between agents and a contacts analytics service1806or a component thereof. In some cases, a scalable service is utilized to ensure that periods of high activity do not cause performance bottlenecks at contacts analytics service1806. A WebSocket connection or other suitable real-time connection may be utilized to provide audio from customer-agent calls to contacts analytics service1806. It should be noted that "real-time" in this context may involve some delays for buffering, batches, and some tolerance for delay may be acceptable. For example, audio may be batched in 15 or 30 second increments. In some embodiments, audio is batched and released when a channel goes silent—for example, if a customer speaks for 25 seconds and then stops to allow an agent to respond, call audio for the customer's 25 seconds of speaking may be batched and then released when the customer stops talking or when the agent begins to talk, thereby signaling the end of the customer's turn, in accordance with at least one embodiment.
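The turn-oriented batching just described—buffering a speaker's audio and releasing the batch when the channel goes silent or the speaker changes—might be sketched as follows. The TurnBatcher class, its silence threshold, and the chunk format are illustrative assumptions rather than the behavior of any particular service.

    class TurnBatcher:
        """Buffers audio chunks for the active speaker and releases a batch when the speaker
        changes or the channel is silent longer than a threshold (illustrative only)."""

        def __init__(self, silence_threshold_s=2.0):
            self.silence_threshold_s = silence_threshold_s
            self.active_speaker = None
            self.buffer = []
            self.last_audio_time = None

        def add_chunk(self, speaker, chunk, timestamp_s):
            released = None
            silent_too_long = (
                self.last_audio_time is not None
                and timestamp_s - self.last_audio_time >= self.silence_threshold_s
            )
            # Release the buffered turn when the speaker changes or after a long silence.
            if self.buffer and (speaker != self.active_speaker or silent_too_long):
                released = (self.active_speaker, b"".join(self.buffer))
                self.buffer = []
            self.active_speaker = speaker
            self.buffer.append(chunk)
            self.last_audio_time = timestamp_s
            return released  # None while the current turn is still in progress

        def flush(self):
            # Release whatever remains, e.g., when the contact ends.
            if not self.buffer:
                return None
            released = (self.active_speaker, b"".join(self.buffer))
            self.buffer = []
            return released

    # Example: the customer speaks, then the agent responds; the customer's audio is
    # released as one batch when the agent starts talking.
    batcher = TurnBatcher()
    print(batcher.add_chunk("CUSTOMER", b"\x01\x02", 0.0))   # None (turn in progress)
    print(batcher.add_chunk("CUSTOMER", b"\x03\x04", 1.0))   # None
    print(batcher.add_chunk("AGENT", b"\x05", 26.0))         # ('CUSTOMER', b'\x01\x02\x03\x04')

In such a sketch, each released batch would be what is sent over the real-time connection to the contacts analytics service.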
In at least one embodiment, real-time customer contacts data (e.g., audio or text) is streamed to contacts analytics service1806. Contacts analytics service1806may provide audio source data to speech-to-text service1808and speech-to-text service1808may provide a transcript of the provided audio. Speech-to-text service1808may be in accordance with those described elsewhere in this disclosure, such as those discussed in connection withFIGS.1and2. Transcribed audio may be organized in turns, as described elsewhere in this disclosure. In some cases, real-time audio provided to speech-to-text service1808may include a fragment of a turn—that is, that someone was still speaking. In some cases, contacts analytics service1806holds the fragment and when the rest of the fragment is obtained (e.g., when the remainder of the turn is transcribed), the contacts analytics service1806stitches together the fragments—reconstructing complete turns may be performed so that accurate sentiment scores can be determined based on the entire turn, rather than just a portion thereof. In some cases, speech-to-text service1808will retain a fragment, reconstruct the full turn, and provide the full turn to NLP service1810. For a call, transcribed text may be stored in a file and subsequent audio that is transcribed for the call may be appended to the same file so that, at the end of the call, the entire transcript is in the file. In various embodiments, fragments are not provided to NLP service1810, as assessing sentiment based on a portion of a turn rather than the entire turn may generate inaccurate sentiment predictions. NLP service1810may be in accordance with those discussed in connection withFIGS.1and2and may be used to perform sentiment analysis, entity detection, keywords and phrases detection, and more. NLP service1810may generate metadata or annotations to transcripts generated by speech-to-text service1808and provide those results to contacts analytics service1806. In at least one embodiment, a categorization service is used to determine whether certain transcripts match a particular rules-based category specified by a client. In at least one embodiment, a category is matched against content and/or conversation characteristics of a customer contact. For example, an audio call may be analyzed to determine whether it includes profanity (e.g., content-based rule) or long periods of silence (e.g., a characteristics-based rule) to determine whether a category applies to a specific customer contact or portion thereof. In some embodiments, audio data is routed through a data streaming service (e.g., Amazon Kinesis or Apache Kafka) offered by a computing resource service provider. In some embodiments, the real-time connection is routed to the contacts analytics service indirectly, such as through customer contact service1812. NLP service may generate metadata for each completed turn of a real-time audio communication between a customer and an agent. After calling speech-to-text service1808and NLP service1810to transcribe audio to text, perform sentiment analysis, and extract keywords, phrases, and entities, contacts analytics service1806may call a categorization service to perform additional post-processing and assign categories to the real-time call.
For example, a category to identify a potentially problematic call may rely on successive negative sentiment scores, loud volume, profanity uttered by the customer, utterances of the customer referencing competitor products/threats to cancel a subscription, and various combinations thereof. A categorization may be applied to the angry customer illustrated inFIG.18based on a negative-trending customer sentiment, which may be based on a successive run of negative sentiments and/or a downward trend of customer sentiment from positive to negative. Such a category may be presented to supervisor1814via customer contact service1812, which may surface a notification or a dashboard may have a dedicated widget or UI element for surfacing potentially problematic calls. Supervisor1814may then listen in on the agent's call, provide suggestions to the agent to help defuse the situation, or any of several other actions may be taken by the supervisor1814to improve the customer's sentiment. FIG.19illustrates a computing environment1900in which a customer contact service supports real-time calls between a customer1902and an agent1904. Embodiments in accordance withFIG.19may be implemented in the context of other embodiments described in this disclosure, such as those discussed in connection withFIGS.1and2. Customer1902may be a customer of an organization that employs agent1904either directly or indirectly. Agent1904may be tasked to answer questions from customers, resolve issues, and so on. In at least one embodiment,FIG.19illustrates an architecture in which contacts analytics service1906may be utilized to provide real-time agent assistance. Real-time agent assistance may refer to features in which customer contacts information is piped to a computing resource in real-time. "Real-time" assistance in various contexts described herein may refer to systems in which data (e.g., audio data) from contacts are piped to a service provider as they are received. Real-time features described herein may allow for buffering and tolerances of several seconds may be acceptable as long as responsiveness between a customer and agent allows for such delays. Tolerances of several seconds, tens of seconds, or even minutes may be acceptable based on the context for a customer contact. As an example, a real-time architecture as described herein may buffer source audio for a speaker's ongoing turn and send it to contacts analytics service1906once the speaker has finished, thereby ending that speaker's turn. An ongoing turn may refer to a turn which has not yet completed—for example, if a turn changes each time a speaker changes, an ongoing turn may end when the speaker finishes speaking. Agent1904may have access to a computer system connected to a customer contact service that provides the agent with access to knowledge bases, internal customer resources, and backend systems to process returns, update subscriptions, and more. However, there may be such an overwhelming amount of information, articles, etc. that it may be difficult for agent1904to determine, in real-time or within several seconds to minutes, where to find certain information that a customer requests. In at least one embodiment, a WebSocket connection is established between agent1904and contacts analytics service1906or a component thereof. In some cases, a scalable service is utilized to ensure that periods of high activity do not cause performance bottlenecks at contacts analytics service1906.
A WebSocket connection or other suitable real-time connection may be utilized to provide audio from customer-agent calls to contacts analytics service1906. It should be noted that "real-time" in this context may involve some delays for buffering, batches, and some tolerance for delay may be acceptable. For example, audio may be batched in 15 or 30 second increments or for the duration that one party speaks. In some embodiments, audio is batched and released when a channel goes silent—for example, if a customer speaks for 25 seconds and then stops to allow an agent to respond, call audio for the customer's 25 seconds of speaking may be batched and then released when the customer stops talking or when the agent begins to talk, thereby signaling the end of the customer's turn, in accordance with at least one embodiment. In some embodiments, audio data is routed through a data streaming service offered by a computing resource service provider. In some embodiments, the real-time connection is routed to the contacts analytics service indirectly, such as through a customer contact service such as those described in connection withFIGS.1and2. NLP service may generate metadata for each completed turn of a real-time audio communication between a customer and an agent. In at least some embodiments, a data connection between agent1904and a service provider (e.g., contacts analytics service) is established and is used to provide an audio stream of contacts between agent1904and customers such as customer1902illustrated inFIG.19. In some embodiments, agent1904establishes a connection to a real-time communications channel and uses it for multiple calls; in some embodiments, a real-time communications channel is set up when agent1904is connected with a customer and terminated when the contact ends. In at least some embodiments, agent1904sends a stream of audio data to contacts analytics service1906and audio from the stream—or a portion thereof, such as in cases where unfinished turns are buffered and then submitted when additional audio for the remainder of the turn (e.g., when an active speaker finishes speaking) is received—is submitted to speech-to-text service1908and speech-to-text service1908generates a transcript of the portion of the customer contact that was provided. The portion of the transcript may be provided to contacts analytics service1906which may aggregate the received portion with previously received portions to maintain a running record of an active customer contact. The entire running record may be provided to NLP service1910which may generate sentiment scores, detect entities, keywords, and phrases, etc. using any of numerous natural language processing techniques. In some cases, only the most recent portion of the transcript generated is provided to NLP service1910. Additional post-processing may be performed by a categorization service. For example, contacts analytics service may provide a running transcript or a portion thereof to a categorization service to perform additional post-processing and assign categories to the real-time call. For example, a category to identify a potentially problematic call may rely on successive negative sentiment scores, loud volume, profanity uttered by the customer, utterances of the customer referencing competitor products/threats to cancel a subscription, and various combinations thereof.
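A minimal sketch of the rules-based categorization just described, combining a content-based rule (profanity), a characteristic-based rule (long periods of silence), and a conversation-trend rule (a successive run of negative customer sentiments), is shown below. The turn schema, field names, and thresholds are assumptions for illustration only, not a format defined by this disclosure.

    # Each turn is assumed to carry a speaker label, text, start/end offsets in seconds, and a
    # sentiment label produced by an NLP step (an illustrative schema, not a defined format).
    def contains_any(text, words):
        lowered = text.lower()
        return any(word in lowered for word in words)

    def match_categories(turns, profanity_words, long_silence_s=30.0, negative_run_length=3):
        categories = set()
        previous_end_s = 0.0
        negative_run = 0
        for turn in turns:
            # Content-based rule: profanity in either party's speech.
            if contains_any(turn["text"], profanity_words):
                categories.add("profanity")
            # Characteristic-based rule: a long gap of silence before this turn began.
            if turn["start_s"] - previous_end_s >= long_silence_s:
                categories.add("long_silence")
            previous_end_s = turn["end_s"]
            # Conversation-trend rule: a successive run of negative customer sentiments.
            if turn["speaker"] == "CUSTOMER":
                if turn["sentiment"] == "NEGATIVE":
                    negative_run += 1
                    if negative_run >= negative_run_length:
                        categories.add("negative_sentiment_run")
                else:
                    negative_run = 0
        return sorted(categories)

In such a sketch, the returned category names could then be attached to the running call record or surfaced to the agent or supervisor as described above.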
NLP service1910may be used to generate insights, which may include entity detection, sentiment analysis, and more, which are provided to contacts analytics service in any suitable format, such as in a JavaScript Object Notation (JSON) file. In at least some embodiments, various post-processing and analytics performed on the audio contact stream can provide insights which can be relayed back to agent1904. For example, if a customer's sentiment score is trending negative or remains negative, an indication may be surfaced to the agent1904via a notification, a popup, or in a widget that is loaded in a graphical interface that agent1904operates while handling customer calls. As a second example, categories can be matched to an agent, which may remind the agent to, for example, thank the customer for being a subscriber. As yet another example, a category may be based on audio characteristics, such as if the agent's speaking volume is too loud, if the agent presents long periods of silence, if the agent is overly apologetic, and other such characteristics. Categories may be matched more broadly to conversation characteristics, which may include characteristics of various types of communications such as text-based communications and audio-based communications: for example, while speaking volume may not make sense in the context of text-based conversation characteristics, long periods of silence may be flagged as a characteristic of a text-based chat conversation. By flagging these characteristics in real-time, agents are able to correct such behavior and provide customers with a better call experience. In some cases, customer1902and agent1904are connected to an audio call and contacts analytics service1906is used to provide suggestions to questions asked by customer1902. An audio stream is transcribed and processed to generate suggestions. In some cases, contacts analytics service or a service used by contacts analytics service is unable to determine a suggestion or unable to determine a suggestion with sufficient confidence. Contacts analytics service may provide real-time transcripts and/or metadata to enterprise search service1912and enterprise search service1912may return the most relevant internal documents, knowledge bases, websites, etc. of an organization that match the customer's question. In various embodiments, enterprise search service1912provides references to various internal and/or external documents to contacts analytics service and contacts analytics service provides those to agent1904. Agent1904may look up the most relevant internal documents, knowledge bases, websites, etc. to determine a suggestion or answer to customer1902, or may provide links to the publicly available resources to customer1902that may help the customer. In an embodiment, network1914includes any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof, and components used for such a system depend at least in part upon the type of network and/or system selected. Many protocols and components for communicating via such a network are well known and will not be discussed herein in detail. In an embodiment, communication over the network is enabled by wired and/or wireless connections and combinations thereof. In some cases, a network may include or refer specifically to a telephone network such as a public switched telephone network or plain old telephone service (POTS).
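Returning to the suggestion flow ofFIG.19, the fallback to enterprise search service1912when a suggestion cannot be determined with sufficient confidence could be sketched roughly as below. The suggest_answer and search_enterprise_documents callables and the confidence threshold are hypothetical stand-ins for integrations that are not specified here.

    CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off, not a value defined by this disclosure

    def respond_to_customer_question(question, suggest_answer, search_enterprise_documents):
        """suggest_answer(question) -> (answer_or_None, confidence);
        search_enterprise_documents(question) -> list of document references.
        Both callables are hypothetical integrations supplied by the caller."""
        answer, confidence = suggest_answer(question)
        if answer is not None and confidence >= CONFIDENCE_THRESHOLD:
            # Confident enough: surface the suggested answer directly to the agent.
            return {"type": "answer", "text": answer}
        # Otherwise fall back to the most relevant internal documents, knowledge bases, etc.
        documents = search_enterprise_documents(question)
        return {"type": "documents", "references": documents[:3]}

    if __name__ == "__main__":
        stub_suggest = lambda question: (None, 0.0)
        stub_search = lambda question: ["kb/returns-policy", "kb/shipping-faq"]
        print(respond_to_customer_question("How do I return an item?", stub_suggest, stub_search))

In such a sketch, the returned document references would be what is displayed to agent1904, who may then answer customer1902 or forward links to publicly available resources.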
FIG.20shows an illustrative example of a process2000to generate contacts analytics output data, in accordance with at least one embodiment. In at least one embodiment, some or all of the process2000(or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process2000are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process2000can be implemented in the context of embodiments described elsewhere in this disclosure, such as those discussed in connection withFIGS.1-19. In at least one embodiment, a computer system to perform the process executes a program to receive2002a request to process contacts data. In at least one embodiment, the request is a web service API request that is received by a service frontend, authenticated, and routed to a backend service to be processed. In at least one embodiment, a computer system to perform the process executes a program to submit2004a job for the request. The job may be submitted to a database of a metadata service which a job sweeper monitors or periodically queries for new jobs. A new job may be submitted with a job status indicating it has not yet been started. In at least one embodiment, a computer system to perform the process executes a program to use2006a job sweeper to detect the job and initiate a workflow. The job sweeper may be in accordance with those described inFIG.1, and may initialize a step functions workflow. A scalable service may be utilized in execution of the workflow. In at least one embodiment, a computer system to perform the process executes a program to transcribe2008audio from contacts data. Contacts data may be in various forms, such as audio recordings, real-time audio streams, non-audio forms such as chat logs. For audio-based contacts data, a speech-to-text service may be utilized to transcribe the audio into a text-based transcript. In at least one embodiment, a computer system to perform the process executes a program to generate2010metadata for the contacts data using one or more NLP techniques such as those discussed in connection withFIGS.1and2. For example, NLP techniques may include sentiment analysis, entity detection, keyword detection, and more. In at least one embodiment, a computer system to perform the process executes a program to process2012analytics results. Processing analytics results may include generating a human-readable output file in a JSON format. In at least one embodiment, a computer system to perform the process executes a program to apply categories2014. 
Categories may be triggered based on rules that a customer can define. Categories can be used to identify certain communications and/or points of interest in communications, such as an agent's compliance with an organization's scripts. In at least one embodiment, a computer system to perform the process executes a program to write2016an output file to a customer data store. A customer role may be assumed and, upon assumption, the system performing the process copies an output file to a data bucket of the customer. FIG.21shows an illustrative example of a process2100to implement real-time agent assistance, in accordance with at least one embodiment. In at least one embodiment, some or all of the process2100(or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process2100are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process2100can be implemented in the context of embodiments described elsewhere in this disclosure, such as those discussed in connection withFIGS.1-19. One or more aspects of process2100may be implemented in accordance with embodiments described throughout this disclosure, such as those discussed in connection withFIG.19. A system that implements process2100may include hardware and/or software to detect2102a connection between an agent and a customer for audio communications. The system may establish2104a second connection between the agent and a first service of a computing resource service provider. The system may receive2106, at the first service, audio data over the second connection. The system may, based at least in part on receiving the audio data, execute a workflow2108. The workflow performed by the system may include a step to use2110a second service to transcribe the audio data to generate at least a portion of a transcript. The workflow performed by the system may include a step to use2112a third service to execute one or more natural language processing techniques to generate metadata associated with the transcript. The workflow performed by the system may include a step to use2114a fourth service to determine, based at least in part on the metadata, whether one or more categories match the transcript. The workflow may have other steps that are omitted from process2100for clarity. For example, there may be additional steps of a step functions workflow to emit events and metering, which may be implemented according to techniques described in connection withFIG.21.
The system may generate2116a suggestion based at least in part on the transcript, the metadata, and the one or more categories and provide the suggestion to the agent. FIG.22shows an illustrative example of a process2200to implement real-time supervisor assistance, in accordance with at least one embodiment. In at least one embodiment, some or all of the process2200(or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process2200are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process2200can be implemented in the context of embodiments described elsewhere in this disclosure, such as those discussed in connection withFIGS.1-19. A system that implements process2200may, in at least some embodiments, include software and/or hardware to establish2202a plurality of connections to obtain a plurality of audio data from calls between agents and customers. When an agent is connected to a customer (e.g., phone call), a direct connection may be established between the agent and a service of a computing resource service provider that pipes the audio stream of the agent and customer to the service. The architecture may be in accordance withFIG.18. The system may obtain2204the plurality of audio data at a first service of a computing resource service provider. The plurality of audio data may refer to a plurality of WebSocket connections connected to a contacts analytics service. The system may use2206a speech-to-text service to generate transcripts for the plurality of audio data. The system may analyze2208the transcripts using a natural language processing (NLP) service to generate metadata about the calls, such as keyword and phrase matches and entity matches. The system may tag2210the transcripts with categories based at least in part on the set of NLP outputs. A categorization service such as those discussed in connection withFIG.1may be used to determine whether a particular transcript triggers one or more categories. The system may generate2212information for at least a portion of the plurality of connections based on the transcripts and NLP outputs and may provide the information to a supervisor of the agents. The information generated may be information relating to categories or NLP metadata, such as detecting when a customer's sentiment is trending negatively, whether profanity was uttered during the call, loud shouting by either agent or customer, and more.
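As an illustrative sketch of the final step of process2200—surfacing to a supervisor the calls whose customer sentiment is trending negatively or that triggered categories such as profanity—consider the following. The call-record fields and the trend heuristic are assumptions made for the example, not a format prescribed above.

    def sentiment_trend(scores):
        # Crude trend estimate: average of the later half minus average of the earlier half.
        if len(scores) < 4:
            return 0.0
        half = len(scores) // 2
        return sum(scores[half:]) / (len(scores) - half) - sum(scores[:half]) / half

    def flag_calls_for_supervisor(active_calls, trend_threshold=-0.5):
        """active_calls: list of dicts with 'call_id', 'customer_sentiment_scores' (a rolling
        list of numeric scores), and 'categories' (matched by a categorization step)."""
        flagged = []
        for call in active_calls:
            reasons = []
            if sentiment_trend(call["customer_sentiment_scores"]) <= trend_threshold:
                reasons.append("customer sentiment trending negative")
            if "profanity" in call["categories"]:
                reasons.append("profanity detected")
            if reasons:
                flagged.append({"call_id": call["call_id"], "reasons": reasons})
        return flagged

    # Example: one healthy call and one call whose customer sentiment is deteriorating.
    calls = [
        {"call_id": "c-1", "customer_sentiment_scores": [0.4, 0.5, 0.3, 0.4], "categories": []},
        {"call_id": "c-2", "customer_sentiment_scores": [0.6, 0.4, -0.5, -0.8], "categories": ["profanity"]},
    ]
    print(flag_calls_for_supervisor(calls))  # only "c-2" is flagged, with both reasons

In such a sketch, the flagged entries would be the information provided to the supervisor, for example via a dashboard widget for potentially problematic calls.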
FIG.23shows an illustrative example of a process2300to generate contacts analytics output data, in accordance with at least one embodiment. In at least one embodiment, some or all of the process2300(or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process2300are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process2300can be implemented in the context of embodiments described elsewhere in this disclosure, such as those discussed in connection withFIGS.1-19. A system that performs process2300may obtain2302, at a first service of a computing resource service provider, audio source data from a client of the computing resource service provider. The audio source data may be audio recordings, audio data, audio contacts data, and other variants described herein. Audio source data may refer to a collection of call recordings of a customer contact center where agents of an organization take calls from customers of the organization who may have questions, technical issues, etc. The system may generate2304an output from the audio data, wherein the output encodes: a transcript of the audio data generated by a second service, wherein the transcript is partitioned by speaker; metadata generated by a third service based at least in part on the transcript; and one or more categories triggered by the transcript, wherein a fourth service is used to determine whether the one or more categories match the transcript. The system may be a contacts analytics service as described, for example, in connection withFIGS.1-2. The system may provide2306the output to the client. In various embodiments, the output may be provided to the client in various ways. For example, the output may be copied to a customer data bucket. The data may be indexed on entities, keywords, and phrases and other types of metadata such as audio characteristics so that clients can perform a rich set of searching and filtering on the output data. FIG.24shows an illustrative example of a process2400to implement contacts search and diagnostics capabilities, in accordance with at least one embodiment. 
In at least one embodiment, some or all of the process2400(or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process2400are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process2400can be implemented in the context of embodiments described elsewhere in this disclosure, such as those discussed in connection withFIGS.1-19. In at least one embodiment, a system is to index2402a plurality of outputs associated with a plurality of customer contacts, wherein the plurality of outputs are generated based at least in part by: a first service that generates transcripts based on audio data of the plurality of customer contacts; a second service that generates metadata based on the transcripts using one or more natural language processing (NLP) techniques; a third service that matches categories to the transcripts. The first service may be a speech-to-text service as described throughout this disclosure. The second service may be an NLP service as described throughout this disclosure. The third service may be a categorization service as described throughout this disclosure. Database indices may be generated on metadata such as entities and keywords that were extracted from the contacts data by an NLP service. A system performing the process may provide2404, to a client of a computing resource service provider, a graphical interface to submit a search with a specified set of parameters. The graphical interface may be a contact search page such as those described in connection withFIG.6that generates results in accordance withFIGS.7-11. In at least one embodiment, a system receives2406a request to perform the search with the specified set of parameters, which may be in accordance withFIG.6. In at least one embodiment, the system is to perform the search to obtain2406a search result determined based at least in part on the transcripts, metadata, and categories and provide2408the search result to the client. FIG.25illustrates system architecture of a scaling service2502that may interact with other services in an environment2500in which an embodiment may be practiced. Techniques described in connection withFIG.25may be utilized with embodiments described in connection withFIGS.1-24. As illustrated inFIG.25, the environment2500may include a scaling service2502comprising a scaling service frontend2514, a scaling service backend2528, and a scaling service workflow manager2524.
A customer2526may set scaling policies via the scaling service frontend2514and may also set alarm actions with a telemetry service2506that trigger the scaling policies. Calls made to the scaling service frontend2514may be authenticated by an authentication service2516. Scaling policies may be stored with the database service2520by the scaling service backend2528, and scaling actions may be initiated through a scaling service workflow manager2524by the scaling service backend2528. The customer2526may specify, via a policy/role management service (not shown), a role to be assigned to the scaling service2502, and the scaling service2502may obtain a token from a token service2518as proof that the scaling service2502has been granted that role. Upon triggering a scaling policy, the scaling service2502may obtain a resource's current capacity and set the resource's capacity for its respective resource service of the resource services2504under the specified role. The scaling service frontend2514may be the frontend for the scaling service2502. That is, the scaling service frontend2514provides the customer2526with a single endpoint. The customer2526may use an interface console or call an API to instruct the scaling service2502to create scaling policies for their resources. That is, the customer2526may submit scaling service API requests to the scaling service frontend2514. The scaling service frontend2514may pass the requests through to the scaling service backend2528. For example, the customer2526may use a service interface (i.e., via the scaling service frontend2514) to register a scalable target. The scalable target may refer to a dimension of the resource that the customer2526may scale. In some examples, the scalable target may include a service ID or namespace, a resource ID, and/or a dimension name or identifier such that the scalable target uniquely identifies which dimension of the particular resource of the particular service to scale. Once the scalable target is registered, the customer2526may create a scaling policy to be associated with the scalable target. The scaling service backend2528may be the backend data and/or control plane for the scaling service2502. The scaling service backend2528may receive and process scaling requests (e.g., via a control plane) and create, read, update, and delete in response to corresponding API requests (e.g., via a data plane). For scaling requests, the scaling service backend2528may calculate a new desired capacity and launch a scaling workflow via the workflow service2522, which in itself may interact with the target resource and use a control plane service to track and record the interaction. The policies, scaling activities, and identities of scalable targets may be stored with a database service2520, and then a workflow service2522may be used to orchestrate the scaling workflow. The computing resource service provider may provide general APIs for managing the scaling of various resource service types so that the customer2526need learn only one API to scale all their resources. In order for the scaling service2502to determine which resource to scale, in some examples a resource is individually identifiable and has one or more scalability measures (e.g., scalable dimensions) that may be independently increased or decreased. That is, the customer2526identifies the resource they want to auto-scale. For example, in some implementations a resource can be identified by a URI. 
Additionally or alternatively, in some implementations a resource can be identified by a service name specified by the customer2526. A resource may be unambiguously identified based on the partition, service, region, account ID, and/or resource identifier, and the combination of service namespace, resource ID, and scalable dimension may uniquely identify a scalable target. Among these pieces of information, the scaling service may only require the service and resource identifier (ID) from the customer2526. Using a combination of service namespace and resource ID may have advantages over using URIs. For example, the customer2526may describe the customer's resources registered in the scaling service2502with reference to service namespace and resource ID or by service namespace only and, in this way, the customer2526need not construct or keep track of URIs. Such an implementation would then accommodate resource services that do not use URIs. In some embodiments, the customer2526can specify a URI in the resource ID, and the system will assume that the service namespace is the one in the URI. In some implementations, alternative to or in addition to individual resource scaling, the scaling service2502provides application scaling. In some examples, "application scaling" may refer to scaling a group of related resources that form an application stack of the customer2526. For the purpose of scaling, the group of related resources, itself, would be a resource and would be uniquely identifiable. Therefore, the concepts of service namespace and resource ID also apply to application scaling. However, if the customer2526only intends to scale one resource, the scaling service need not know that it belongs to a group. On the other hand, if the intention is to scale the group as a whole, the customer2526should consider scaling the group versus scaling the resources in it. It should be the job of the scaling service2502to determine how to scale the resources. Regarding scalable dimensions, identifying the resource alone may not be sufficient to determine what dimension of the resource to scale. For example, as noted above, the customer2526may separately scale the read and write provisioned throughputs of a database service table. In general, a resource may have more than one scalable dimension that may be changed independently. Therefore, in addition to service namespace and resource ID, the scaling service2502may require the customer2526to specify which "dimension" of a resource the customer2526wants to scale. As an example, a database service table, or global secondary index (GSI), has read and write provisioned throughputs that can be changed independently and that can be regarded as scalable dimensions. For database service tables and GSIs, there may be at least two scalable dimensions for read and write provisioned throughputs, respectively. The customer2526may define maximum and minimum boundaries and scaling policies per table/GSI and per scalable dimension. Determination of whether to trigger a scaling policy of the scaling service2502may be made by a source external to the scaling service2502, such as the telemetry service2506. That is, a scaling policy may be attached to a telemetry service alarm of the telemetry service2506by the customer2526, and the scaling policy may be triggered by the telemetry service alarm.
For example, the customer2526could create a telemetry service alarm with the telemetry service2506on any measurement being aggregated by the telemetry service (e.g., processor utilization). At the telemetry service2506, one or more thresholds may be specified for the telemetry service alarm; for example, the customer2526may specify that the telemetry service alarm should fire when processor utilization reaches 30 percent. Once the telemetry service alarm is set up, the customer2526may attach any scaling policy to it, such that when the alarm fires (i.e., the measurement value exceeds the threshold), it may trigger the scaling policy. The telemetry service2506may call the scaling service2502to invoke a scaling policy when an associated alarm enters a state that triggers the scaling policy. In some cases, the telemetry service2506may periodically (e.g., every minute) invoke the scaling policy for as long as the alarm remains in that state. In some embodiments, the telemetry service2506invokes a scaling policy only once per alarm state, and then a workflow may be performed after performing a scaling action to check the alarm state to determine if further scaling is needed. As a result of the alarm firing, a notification of the alarm is sent to the scaling service frontend2514. The scaling service frontend2514passes this information to the scaling service backend2528, which then fetches the corresponding scaling policy from the database service2520. The scaling service backend2528examines the parameters in the retrieved scaling policy, obtains the current capacity of the resource to be scaled from the appropriate resource service, and performs the calculations specified by the scaling policy in view of the current capacity to determine the new desired capacity to which the resource needs to be scaled. Note that for some policy types, like a step policy, the scaling service2502will get information about the metric in order to determine which steps in the scaling policy to apply to the resource. For example, the customer2526may create a scaling policy for scaling up and down a resource based on a metric that is an indication of application load or traffic volume by setting up an alarm to trigger at certain thresholds of application load or traffic volume and attaching a policy to it. In this example, triggering the alarm will invoke the policy so that when traffic volume goes up and down, the resource will be scaled as dictated by the scaling policy. In some embodiments, the telemetry service2506sends alarms in response to the occurrence of certain specified events (i.e., telemetry events). Examples of such events include sending a message via a message queuing service or executing certain functions in a software container. Additionally or alternatively, in some embodiments scaling policies can be triggered according to a predefined schedule. For example, the customer2526may set a scaling schedule that triggers a scaling policy at 6:00 PM every day. Interruption of the telemetry service2506may result in delayed scaling due to the delay in a telemetry service alarm being sent to the scaling service2502to trigger execution of a scaling policy. Although metric-based alarms may be impacted due to unavailability of the telemetry service2506, on-demand (e.g., the customer2526via the scaling service frontend2514) and scheduled scaling (e.g., command sent to the scaling service frontend2514according to a schedule) would not be affected.
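The alarm-to-policy linkage described above can be modeled, very roughly, as a threshold check that invokes attached scaling policies when a measurement breaches the configured value. The Alarm class below is an illustrative simplification and not the telemetry service's actual interface; among other things, it invokes policies only on the transition into the ALARM state, whereas some embodiments re-invoke the policy periodically while the alarm remains in that state.

    class Alarm:
        """Illustrative model of a telemetry-service alarm that fires attached scaling-policy
        callbacks when a metric breaches its threshold."""

        def __init__(self, metric_name, threshold):
            self.metric_name = metric_name
            self.threshold = threshold
            self.state = "OK"
            self.policies = []  # callables that invoke a scaling policy

        def attach_policy(self, invoke_policy):
            self.policies.append(invoke_policy)

        def evaluate(self, value):
            # Simplification: fire only on the OK -> ALARM transition.
            new_state = "ALARM" if value >= self.threshold else "OK"
            if new_state == "ALARM" and self.state != "ALARM":
                for invoke_policy in self.policies:
                    invoke_policy()  # e.g., call the scaling service to execute the policy
            self.state = new_state

    # Example: fire a scale-out policy when processor utilization reaches 30 percent.
    alarm = Alarm("processor_utilization", threshold=30.0)
    alarm.attach_policy(lambda: print("invoke scaling policy: scale out"))
    alarm.evaluate(25.0)  # remains OK
    alarm.evaluate(35.0)  # enters ALARM and invokes the attached policy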
Upon receiving a call from the telemetry service2506to invoke a scaling policy, the scaling service backend2528may synchronously calculate the new desired capacity for the scalable target, and the scaling service workflow manager2524may asynchronously set the desired capacity for the scalable target. The scaling service workflow manager2524may contain workflow and activity definitions that are used when effecting and monitoring changes to the target service. Workflows may be launched by the scaling service workflow manager2524, which may utilize a control plane service to record, in the database service2520, interactions with the target service. Besides setting desired capacity, the scaling service workflow manager2524may also record scaling activities. In some embodiments, the scaling service workflow manager2524can also send notifications and/or publish events. The scaling service backend2528may be responsible for starting workflow executions (e.g., via the workflow service2522). In some embodiments, a message queuing service is located between the scaling service backend2528and the workflow service2522for queuing workflow commands. The database service2520may be used to track the state of scaling activities, to store identities of scalable targets registered by the customer2526, and to store scaling policies defined by the customer2526. The scaling policies may be stored with the database service2520in any applicable format, such as in a JavaScript Object Notation format in a table with the database service2520. However, the scaling policy may be automatically generated by the scaling service2502so that the customer2526need not directly provide the scaling policy. If the database service2520has an outage, various methods may be performed to minimize adverse impact to the scaling service2502. For example, scalable targets and scaling policies may be cached; in this manner, new entities may not be created but the scaling service2502will continue to automatically scale existing scalable targets. As another example, recording of the scaling history is made as a best effort; in other words, accuracy of the scaling history is traded for availability, and “dangling” scaling activities may be closed. As still another example, the process of writing scaling tasks to the database service2520could be bypassed; for example, the scaling service backend2528may put, in a queue of a message queuing service, a message for a scaling task that includes all of the data that the workflow service2522needs in the message. Note that althoughFIG.25shows the database service2520as residing external to the scaling service2502, it is contemplated that, in some embodiments, the functionality provided by the database service2520may be found wholly or partially within the scaling service2502. The resource services2504may be services provided by a computing resource service provider hosting resources with scalable dimensions. If a resource service has a problem, scaling may be impacted as the scaling service2502may be unable to get the current capacity of or update the resources of the resource service. In some embodiments, the resource service is able to continue accepting and queuing scaling requests even if the resource service is offline, although processing such requests may be impacted. The customer2526may execute a scaling policy in a variety of ways. 
For example, in some embodiments the customer 2526 can execute the policy using a command line interface, a software development kit, or a console interface (e.g., accessible via a browser). As another example, in some embodiments the customer 2526 can have the policy invoked in response to receiving an alarm from the telemetry service 2506. As still another example, the customer 2526 can have the policy invoked by the occurrence of an event detected by the telemetry service 2506. In yet another example, the customer 2526 can have the policy invoked according to a schedule specified to the telemetry service 2506 by the customer 2526. Each scaling action (i.e., each change made to a resource's scalable dimension) may have associated metadata, such as a unique activity identifier (ID), resource URI, description, cause, start time, end time, and/or status. This associated metadata may be recorded/logged with the database service 2520 in conjunction with each scaling action performed by the scaling service 2502. The customer 2526 may subsequently query the scaling activities of a particular resource service by its URI. Scaling actions may cause a telemetry service event to be published. After each change to the scalable dimension (e.g., the desired task count of the service construct), the system may check the current alarm state to see if additional scaling is required. The behavior may be as follows (a sketch of this check appears after this paragraph):
If the scaling policy is an action for the OK state (i.e., maintain current state), no action is taken.
If the scaling policy is an action for the ALARM or INSUFFICIENT DATA state:
   Get the alarm's current state.
   If the alarm's current state matches the configured policy:
      If the timeout has expired, reset the alarm state to OK (this ensures that if the state goes into ALARM or INSUFFICIENT DATA again, the telemetry service 2506 may call the scaling service 2502 to execute the policy again).
      If the timeout has not expired:
         If the current time is after the cooldown expiration time, call InvokeAlarmAction( ) to execute the policy again.
         Otherwise, wait an amount of time (e.g., one minute) and repeat the process, starting from getting the alarm state (e.g., an alarm is evaluated every minute).

If the scaling policy is triggered manually by the customer 2526, by the occurrence of an event, or according to a schedule, rather than by an alarm of the telemetry service 2506, the desired task count of the service construct may be changed based on the current running count and the scaling adjustment specified in the policy, within the minimum and maximum capacity. The scaling service 2502 may apply the scaling adjustment specified in the policy to the current running count of the service construct. The running count may be the actual processing capacity, as opposed to the desired task count, which is what the processing capacity is supposed to be. Calculating the new desired task count from the running count may prevent excessive scaling. For example, if the scaling service 2502 has increased the desired task count by 1, the alarm that triggered the scaling policy may still be active during the time that the task is being launched. However, once the new task is fully launched, the alarm may be deactivated, ensuring that the scaling service 2502 does not scale out further. In some embodiments, scale-out is prioritized over scale-in; i.e., a scale-out will override an in-progress scale-in but not vice versa. In other embodiments, the reverse is true. An in-progress scale-in may be indicated by the running count being greater than the desired task count.
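The check-alarm-state behavior listed above might be rendered as the following simplified Python sketch. This is an assumed rendering for illustration only; the callable names (get_alarm_state, invoke_alarm_action, reset_alarm) and the single-threshold state machine are placeholders, not the claimed workflow.

# Hypothetical sketch of the post-scaling alarm check; names are placeholders.
import time

def check_alarm_after_scaling(get_alarm_state, invoke_alarm_action, reset_alarm,
                              policy_state, timeout_at, cooldown_expires_at,
                              poll_seconds=60):
    while True:
        state = get_alarm_state()
        if state != policy_state:          # alarm is OK or no longer matches the configured policy
            return
        now = time.time()
        if now >= timeout_at:
            reset_alarm("OK")              # allow a later breach to re-trigger the policy
            return
        if now >= cooldown_expires_at:
            invoke_alarm_action()          # execute the scaling policy again
            return
        time.sleep(poll_seconds)           # e.g., re-evaluate the alarm roughly once a minute

# Example: the alarm has already returned to OK, so the check exits immediately.
check_alarm_after_scaling(lambda: "OK", lambda: None, lambda s: None,
                          policy_state="ALARM",
                          timeout_at=time.time() + 600,
                          cooldown_expires_at=time.time() + 120)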
When a scale-in is in progress in this manner, the scaling service 2502 may allow a scale-out to increase the desired task count in a manner that optimally maintains application availability. Conversely, an in-progress scale-out may be indicated by the running count being less than the desired task count, in which case the scaling service 2502 may not allow a scale-in to decrease the desired task count, in order to optimally protect application availability. The combination of Resource URI and Context may uniquely identify a scalable resource. Supported policy types for scaling may include "SimpleScaling," "StepScaling," and "TargetUtilizationScaling." Each policy type has its own configuration parameters. For "SimpleScaling," the policy configuration may have the following parameters:
AdjustmentType: "PercentChangeInCapacity," "ChangeInCapacity," or "ExactCapacity."
ScalingAdjustment: a number whose meaning depends on the adjustment type; e.g., if the scaling adjustment is 10 and the adjustment type is percentage change in capacity, then the adjustment is plus 10 percent of actual capacity.
MinAdjustmentMagnitude: may only be applicable when AdjustmentType is "PercentChangeInCapacity," to protect against an event where the specified percentage of the current capacity results in a very small number.
Cooldown: allows the customer 2526 to specify an amount of time to pass (e.g., a number of seconds) before allowing additional scaling actions; it starts once a scaling action has been completed, and no further scaling actions are allowed until after it has expired.

As noted, in some implementations, a scaling policy may be stored as parameters in persistent storage, such as a data store. In other implementations, a scaling policy may be a document in a data format such as eXtensible Markup Language (XML) or JavaScript Object Notation (JSON). An illustrative example of a policy document is shown below:

{
  "policyName": "MyServiceScalingPolicy1",
  "serviceNamespace": "MyService",
  "resourceId": "VMResourceGroup1",
  "scalableDimension": "NumVMs",
  "policyType": "StepScaling",
  "stepScalingPolicyConfiguration": {
    "adjustmentType": "PercentChangeInCapacity",
    "stepAdjustments": [
      {
        "metricIntervalLowerBound": "10",
        "metricIntervalUpperBound": "100",
        "scalingAdjustment": "5"
      }
    ],
    "minAdjustmentMagnitude": "1",
    "cooldown": "120"
  }
}

The scaling service 2502 may also utilize a timeout. The timeout may serve at least two purposes. First, the scaling service 2502 may utilize a timeout in a check alarm state workflow in the event that a scaling action becomes stuck for an excessive (i.e., greater than a defined threshold) period of time; for example, a service construct cluster that does not have enough capacity for new tasks may not respond to a demand to increase the number of tasks. In such an event, the alarm could remain in breach for a long time, and the timeout prevents the scaling service 2502 from continually checking its state. Second, the scaling service 2502 may prioritize scale-out/scale-up over scale-in/scale-down, but the scaling service 2502 should not let a stuck scale-out/scale-up (e.g., due to an InsufficientCapacityException) prevent a scale-in/scale-down from occurring. Thus, a timeout may allow the scaling service 2502 to unblock the scale-in. Note that in some implementations the timeout is user-configurable, whereas in other implementations the timeout is a user-non-configurable value which the scaling service 2502 uses to determine whether to give up on a stuck scale-out.
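A minimal sketch of how the "SimpleScaling" parameters listed above could be applied to a current capacity value is shown below. The helper name apply_simple_scaling and the rounding behavior are assumptions made for illustration, not the disclosed calculation.

# Hypothetical sketch of applying a SimpleScaling-style configuration.
import math

def apply_simple_scaling(current_capacity: int, adjustment_type: str,
                         scaling_adjustment: float, min_adjustment_magnitude: int = 1) -> int:
    if adjustment_type == "ExactCapacity":
        return int(scaling_adjustment)
    if adjustment_type == "ChangeInCapacity":
        return current_capacity + int(scaling_adjustment)
    if adjustment_type == "PercentChangeInCapacity":
        delta = current_capacity * scaling_adjustment / 100.0
        # Protect against a percentage of a small capacity rounding to nothing.
        if 0 < abs(delta) < min_adjustment_magnitude:
            delta = math.copysign(min_adjustment_magnitude, delta)
        return current_capacity + int(delta)
    raise ValueError(f"unknown adjustment type: {adjustment_type}")

print(apply_simple_scaling(10, "PercentChangeInCapacity", 10))   # -> 11
print(apply_simple_scaling(3, "PercentChangeInCapacity", 10))    # -> 4 (minimum magnitude applied)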
The scaling service2502may be designed as a layer on top of the resource services2504that calls into those services on behalf of the customer2526. This ensures that the scaling service2502provides the customer2526with a consistent automatic scaling experience for all resource services. The customer2526may first create an alarm, or the customer may choose an existing alarm, in a console of the telemetry service2506and then apply a scaling policy to the alarm. One scaling policy type is a “step” policy, which allows the customer2526to define multiple steps of scaling adjustments with respect to the measurement that triggers execution of the scaling policy. For example, the customer2526may specify to scale-up a scalable dimension of the resource if processor utilization reaches certain threshold steps. For example, the customer2526may specify to scale-up the scalable dimension of the resource by 10 percent if processor utilization is between 30 and 60 percent. The customer may further specify to scale-up the scalable dimension by 30 percent if processor utilization is between 60 and 70 percent, scale-up the scalable dimension by 30 percent if processor utilization is above 70 percent, and so on. In this manner the customer2526can define multiple steps and/or multiple responses with different magnitudes with respect to the specified metrics. The API of the scaling service2502may be designed to operate as a separate service from the resource services2504such that it is not integrated into any particular service of the resource services2504. In this manner, the scaling service2502is not dependent upon any particular service of the resource services2504. In order to set up a particular resource service to be scaled by the scaling service2502, the scaling service2502simply needs information about the APIs of the particular resource service to call in order to direct the particular resource service to scale-up or down. The scaling service2502is able to maintain this independence by specifying which dimension of which resource of the particular resource service to scale and whether to scale-up or down; the logistics of how the particular resource should be scaled (e.g., which tasks to terminate, which container instances that do tasks should be launched, etc.) in response to direction from the scaling service2502is determined by the particular resource service itself. In some embodiments, additional components not pictured inFIG.25are present within the scaling service2502. For example, in certain embodiments a control plane service is present between the scaling service workflow manager2524and external services such as the authentication service2516and the database service2520. For example, the control plane service may provide API operations for updating scaling history. Furthermore, having certain functions performed by the control plane instead of the scaling service backend2528may mitigate performance impact if the scaling service backend2528receives requests for many data retrieval operations from the customer2526. With a separate control plane, the effect on the scaling service2502of the increased volume of retrieval operations is minimized. The control plane service may exist in addition to the backend service and may track and record all persistent service (e.g., database service2520, authentication service2516, etc.) interactions. In other embodiments, however, control plane functionality is integrated into the scaling service backend2528. 
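Returning to the step policy example given above, selecting which step adjustment applies for an observed metric value might look like the following sketch. The tuple representation of step adjustments is an assumption chosen for brevity and mirrors the stepAdjustments entries of the illustrative policy document shown earlier.

# Hypothetical sketch of selecting a step adjustment for a StepScaling-style policy.
def select_step_adjustment(metric_value: float, steps):
    """steps: list of (lower_bound, upper_bound, adjustment_percent); a bound of None is open."""
    for lower, upper, adjustment in steps:
        if (lower is None or metric_value >= lower) and (upper is None or metric_value < upper):
            return adjustment
    return 0  # no step matched; leave the scalable dimension unchanged

# Steps corresponding to the processor utilization example above.
steps = [(30, 60, 10), (60, 70, 30), (70, None, 30)]
print(select_step_adjustment(45, steps))   # -> 10 (scale up by 10 percent)
print(select_step_adjustment(75, steps))   # -> 30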
Also in some embodiments, service adapters are present within the scaling service2502between the resource services2504and certain scaling service components, such as the scaling service backend2528and the scaling service workflow manager2524. The service adapters may be responsible for routing the scaling request through appropriate APIs for the target service. In alternative embodiments, the service adapter functionality is present within the scaling service workflow manager2524and/or the scaling service backend2528. However, because the scaling service2502is decoupled from the resource services2504, the scaling service2502relies on a response from the particular resource service in order to determine whether a scaling request has been fulfilled. The workflow service2522may be a collection of computing devices and other resources collectively configured to perform task coordination and management services that enable executing computing tasks across a plurality of computing environments and platforms. The workflow service2522may provide a workflow engine used to effect asynchronous changes in the scaling service2502. The workflow service2522may be used to update target resources and may also be used as a lock to control concurrent scaling requests. The workflow service2522may track the progress of workflow execution and perform the dispatching and holding of tasks. Further, the workflow service2522may control the assignment of hosts or physical or virtual computing machines used for executing the tasks. For example, a user can define a workflow for execution such that the workflow includes one or more tasks using an API function call to the workflow service2522. Further, the user may specify task order for the workflow, conditional flows, and timeout periods for restarting or terminating the execution of the workflow. In addition, execution loops for the workflow may be defined. Workflow execution may be asynchronous and may be preceded by synchronous execution of database writes. Note that althoughFIG.25shows the workflow service2522as residing external to the scaling service2502, it is contemplated that, in some embodiments, the functionality provided by the workflow service2522may be found wholly or partially within the scaling service2502. Interruption of the workflow service2522may cause delayed scaling because the asynchronous processing of scaling requests may be adversely impacted. One way to mitigate delayed scaling may be to do only what is absolutely required to scale synchronously via the scaling service frontend2514. At a minimum, the scaling service may attempt to set desired capacity and record scaling history. From a performance standpoint, this may be acceptable because it just requires an API call to the resource service owning the resource to be scaled and a minimum of extra writes to the database service2520. Although this may result in losing features of workflow service2522(e.g., retry mechanism, history tracking, etc.), at least the system will perform the operations that are required to scale. The scalable targets (i.e., scalable resources) may reside with the resource services2504. A scalable target may be uniquely identified from the triple combination of service (e.g., service namespace), resource (e.g., resource ID), and scalable dimension. The resource services2504represent the services that actually manage the resources that the customer2526wants to be automatically scaled. 
In this manner, the scaling service2502exists as a separate service from the resource services2504whose resources are caused to be scaled by the scaling service2502. The resource services2504, as noted, may include services such as a software container service, a database service, a streaming service, and so on. The scaling service2502may take the scaling policies created by the customer2526and, when the scaling policies are invoked (e.g., by an alarm from the telemetry service2506), the scaling service2502may perform the calculations to determine, given the particular policy and the current capacity of the resource, whether to increase or decrease the capacity to a new value. In order to get the current capacity of the resource, the scaling service backend2528may make a service call to the resource service2504of the resource to be scaled. In response, the resource service2504may provide the scaling service2502with the current capacity (e.g., “five tasks”). The scaling service workflow manager2524may then make a service call to the resource service2504that actually owns the resource to be scaled to cause the scaling action to be performed. In other words, because the scaling service2502is a separate service from the resource service2504that hosts the resources, the scaling service2502will make service calls to the resource service that owns the resource in order to get the state of the resource and also to change the state of the resource. The authentication service2516may be a service used for authenticating users and other entities (e.g., other services). For example, when a customer of a computing resource service provider interacts with an API of the computing resource service provider, the computing resource service provider queries the authentication service2516to determine whether the customer is authorized to have the API request fulfilled. In the process of creating a scaling policy, the customer2526may assign the scaling service2502to a role that authorizes fulfillment of certain requests, and the scaling service2502may then assume that role in order to make appropriate requests to cause a resource service associated with the policy to scale resources. In this manner, the role (supported by a role management service) gives the scaling service2502the necessary permission to access the resource that lives in the resource services2504. The customer2526may create a role supported by a role management service through an interface console. The interface console may allow the customer2526to click an appropriate button or consent checkbox in the interface console, and the underlying system may create the role with the necessary permissions. The token service2518may provide the scaling service2502with session credentials based on a role or roles specified by the customer2526. These session credentials may be used by the scaling service2502to interact with the resource services2504on behalf of the customer2526. The token service2518may provide a token to the scaling service2502that the scaling service may include with requests that provide evidence that the scaling service2502has been granted the appropriate role to cause scalable dimensions of a resource in the resource services2504to be manipulated. The role may be utilized by the automatic scaling service to call a resource service's APIs on behalf of the customer2526. 
Interruption of the token service2518may result in the scaling service2502being unable to assume a role supported by a role management service, with the scaling service2502thereby being unable to scale a resource of the customer2526. In some embodiments, the scaling service2502caches temporary credentials (e.g., they may be valid for 15 minutes, etc.) that the scaling service2502can use when assuming a role. As described in the present disclosure, the scaling service2502, itself, does not determine whether conditions that trigger a scaling policy are met. Rather, an external entity, such as the telemetry service2506, determines whether conditions have been met (e.g., by an alarm specified by the customer2526) and, if met, sends a notification to the scaling service2502that triggers execution of the appropriate scaling policy. Thus, a scaling policy may be triggered by an alarm sent by this telemetry service2506, by the occurrence of an event that triggers notification from an external entity, on demand by the customer2526, according to a notification that is sent to the scaling service2502according to a schedule, or by some other external notification. As noted, in some embodiments the scaling service supports application scaling. In some examples, the term “application stack” may refer to a grouped set of resources, for example, for executing an application (e.g., comprising an application of the customer, such as a virtual machine from a virtual computer system service and a database from a database service). Through the scaling service interface, the customer2526may group different resources together under a common name for scaling. For example, if the customer2526has resources that use a database service, virtual computing system service, load balancing service, and a streaming service, the customer2526may use a group scaling policy to scale-up or scale-down scalable dimensions of the resource of the group based on a particular trigger (e.g., alarm of the telemetry service2506). Based at least in part on the policy, the scaling service2502knows which scaling commands to send to which service. In this manner, the customer can group together some or all of the customer's services/resources and perform scaling for that group of services as opposed to scaling resources individually. For example, a scaling policy triggered by a telemetry service alarm may specify to increase the group by three more database service instances, 10 more virtual machines, and four load balancers. Additionally or alternatively, in some embodiments the scaling service2502supports “target tracking metrics.” In some examples, “target tracking metrics” may refer to measurements that the customer2526wants to keep within a specific range. This simplifies the user experience because the customer2526simply specifies the metric of a resource and the particular range, and the scaling service2502determines how to scale the resource to keep the measurements within the particular range. For example, if the scalable dimension is processor utilization and the customer specifies to keep the scalable dimension between 40 and 60 percent, the scaling service2502determines how to keep the measurements within this range. Consequently, the customer is spared having to define, for example, within a first range to scale-up by a first amount, within a second range to scale-up by a second amount, and so on. FIG.26illustrates an environment in which various embodiments can be implemented. 
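Before turning to FIG. 26, the target-tracking behavior just described might be approximated by a sketch such as the following. The one-unit adjustment per evaluation and the function name target_tracking_step are assumptions made for brevity; they are not the disclosed control logic.

# Hypothetical sketch of keeping a measurement within a customer-specified range.
def target_tracking_step(measurement: float, low: float, high: float,
                         current_capacity: int, min_cap: int, max_cap: int) -> int:
    if measurement > high:                       # e.g., utilization above 60 percent: add capacity
        return min(current_capacity + 1, max_cap)
    if measurement < low:                        # e.g., utilization below 40 percent: remove capacity
        return max(current_capacity - 1, min_cap)
    return current_capacity                      # within range: no change

print(target_tracking_step(72.0, 40.0, 60.0, current_capacity=4, min_cap=1, max_cap=10))  # -> 5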
The computing environment2600illustrates an example where an event-driven compute service2604may be utilized to invoke various event-driven functions. An event-driven compute service2604may receive and/or monitor events2602in the manner described above. In some embodiments, the events that the event-driven compute service2604monitors include the multimedia manipulation service receiving an insertion segment. An event-driven compute service2604may receive a notification that indicates the multimedia manipulation service received an insertion segment and/or the multimedia selection service provided an insertion segment to the multimedia manipulation service and inspected the notification to determine whether to invoke various types of business logic. The event-driven compute service2604, which may be implemented in accordance with those described above in connection withFIGS.1-24, may be further configured to receive events from multiple requests for multimedia streams (e.g., different requests for different broadcasts or different requests for the same broadcast by different users or devices). The event-driven compute service2604may receive the events2602and determine, either internally (e.g., using a component of the event-driven compute service) or externally (e.g., by delegating to another service) how to splice the events which may operate on different logics and/or different tables. As an example, the event-driven compute service2604may include a mapping of event-driven functions to content providers or multimedia input streams. Event-driven functions2606A,2606B, and2606C may include executable code, source code, applications, scripts, routines, function pointers, input parameters to a routine, callback functions, API requests, or any combination thereof. As an example, the event-driven compute service2604may include a mapping of compliance routines to events that indicate which routines should be invoked. Invoking a routine may include executing code or providing executable code as part of a request.FIG.26shows multiple events2602that are received by the event-driven compute service2604and spliced such that a particular event-driven function is run based on the type of error that caused the segment to have degraded quality. The event-driven function2606A that is run in response to a first event2602A may be different from the event-driven function2606B that is run in response to a second event2602B but need not be the case—the event-driven function may, in some cases, be the same either literally (e.g., both events utilize a function pointer that runs the same executable code from memory) or logically (e.g., the same functional outcome). In some cases, the event-driven function may use information included in the events2602A,2602B, and2602C to perform a workflow. An event may be generated in response to the application of a security policy or one or more downstream actions resulting from applying a security policy. For example, the event may be triggered by a web API call to apply a security policy, storing the policy in the policy repository, logging the application of the security policy and/or the storing of the policy to a policy repository, or some combination thereof. An event-driven compute service2604may determine when an event occurs and perform custom logic in response to the event being triggered. 
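A minimal sketch of the event-to-function mapping described above is shown below; the registry, decorator, and event fields are hypothetical and stand in for whatever mapping an event-driven compute service might maintain.

# Hypothetical sketch of dispatching events to registered event-driven functions.
from typing import Callable, Dict

HANDLERS: Dict[str, Callable[[dict], None]] = {}

def register(event_type: str):
    def decorator(fn):
        HANDLERS[event_type] = fn        # map an event type to an event-driven function
        return fn
    return decorator

@register("segment.received")
def on_segment_received(event: dict) -> None:
    print("running business logic for", event["source"])

def dispatch(event: dict) -> None:
    handler = HANDLERS.get(event["type"])
    if handler is not None:
        handler(event)                   # different events may share a handler or have their own

dispatch({"type": "segment.received", "source": "stream-42"})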
An event trigger may, for example, be detected when a request to receive a job is added to a metadata service or may be determined at a later point in time, such as in cases where an asynchronous process (e.g., run daily) processes logging events and detects that there are jobs to run. The event-driven compute service 2604 may be implemented using software, hardware, or some combination thereof. In some embodiments, distributed computing resources may provision and load custom logic/code in response to the event, run the code, and then unload the code and de-provision the computing resource. In some embodiments, a virtual machine is instantiated, custom logic/code is loaded to the virtual machine, the custom logic/code is executed, and then the virtual machine is terminated upon successful execution of the custom logic/code. The event-driven compute service 2604 may be a component of a computing resource service provider or may be a separate component. An event-driven compute service 2604 may be implemented using an event-driven architecture. When a specific event occurs, such as a web API request to start a job, the event-driven compute service 2604 may be notified (e.g., by the authentication service) of that event, and the event-driven compute service 2604 may further receive additional information regarding the request, which may be obtained separately (e.g., from the policy management service that the request is directed towards). The event-driven compute service 2604 may determine how to handle the event, which may be handled in part by custom code or logic that is selected based on information obtained about the request; for example, the custom logic may differ for different jobs based on metadata included in the job (e.g., specifying a specific workflow). In some cases, different workflows are run for different customers. In some embodiments, the event-driven compute service 2604 may subscribe to notification messages from the authentication service for events, and the authentication service may invoke a callback function (such as a lambda expression) in response to an event that the event-driven platform subscribes to receive notifications for. The event-driven compute service 2604 may receive the events 2602 and determine, either internally (e.g., using a component of the event-driven compute service 2604) or externally (e.g., by delegating to another service), how to handle the events. As an example, the event-driven compute service 2604 may include rules regarding which, among a list of custom logics, should be invoked based on the specific type of job that is being started or other metadata associated with the job. A mapping of job types or workflows to custom logics may exist. For example, a first custom logic may be invoked based on a first job applying to a first customer and a second custom logic may be invoked based on a second job applying to a second customer. FIG. 27 illustrates aspects of an example system 2700 for implementing aspects in accordance with an embodiment. As will be appreciated, although a web-based system is used for purposes of explanation, different systems may be used, as appropriate, to implement various embodiments. In an embodiment, the system includes an electronic client device 2702, which includes any appropriate device operable to send and/or receive requests, messages, or information over an appropriate network 2704 and convey information back to a user of the device. 
Examples of such client devices include personal computers, cellular or other mobile phones, handheld messaging devices, laptop computers, tablet computers, set-top boxes, personal data assistants, embedded computer systems, electronic book readers, and the like. In an embodiment, the network includes any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof, and components used for such a system depend at least in part upon the type of network and/or system selected. Many protocols and components for communicating via such a network are well known and will not be discussed herein in detail. In an embodiment, communication over the network is enabled by wired and/or wireless connections and combinations thereof. In an embodiment, the network includes the Internet and/or other publicly addressable communications network, as the system includes a web server2706for receiving requests and serving content in response thereto, although for other networks an alternative device serving a similar purpose could be used as would be apparent to one of ordinary skill in the art. In an embodiment, the illustrative system includes at least one application server2708and a data store2710, and it should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. Servers, in an embodiment, are implemented as hardware devices, virtual computer systems, programming modules being executed on a computer system, and/or other devices configured with hardware and/or software to receive and respond to communications (e.g., web service application programming interface (API) requests) over a network. As used herein, unless otherwise stated or clear from context, the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed, virtual or clustered system. Data stores, in an embodiment, communicate with block-level and/or object-level interfaces. The application server can include any appropriate hardware, software and firmware for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling some or all of the data access and business logic for an application. In an embodiment, the application server provides access control services in cooperation with the data store and generates content including but not limited to text, graphics, audio, video and/or other content that is provided to a user associated with the client device by the web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”), JavaScript, Cascading Style Sheets (“CSS”), JavaScript Object Notation (JSON), and/or another appropriate client-side or other structured language. Content transferred to a client device, in an embodiment, is processed by the client device to provide the content in one or more forms including but not limited to forms that are perceptible to the user audibly, visually and/or through other senses. 
The handling of all requests and responses, as well as the delivery of content between the client device2702and the application server2708, in an embodiment, is handled by the web server using PHP: Hypertext Preprocessor (“PHP”), Python, Ruby, Perl, Java, HTML, XML, JSON, and/or another appropriate server-side structured language in this example. In an embodiment, operations described herein as being performed by a single device are performed collectively by multiple devices that form a distributed and/or virtual system. The data store2710, in an embodiment, includes several separate data tables, databases, data documents, dynamic data storage schemes and/or other data storage mechanisms and media for storing data relating to a particular aspect of the present disclosure. In an embodiment, the data store illustrated includes mechanisms for storing production data2712and user information2716, which are used to serve content for the production side. The data store also is shown to include a mechanism for storing log data2714, which is used, in an embodiment, for reporting, computing resource management, analysis or other such purposes. In an embodiment, other aspects such as page image information and access rights information (e.g., access control policies or other encodings of permissions) are stored in the data store in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store2710. The data store2710, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server2708and obtain, update or otherwise process data in response thereto, and the application server2708provides static, dynamic, or a combination of static and dynamic data in response to the received instructions. In an embodiment, dynamic data, such as data used in web logs (blogs), shopping applications, news services, and other such applications, are generated by server-side structured languages as described herein or are provided by a content management system (“CMS”) operating on or under the control of the application server. In an embodiment, a user, through a device operated by the user, submits a search request for a certain type of item. In this example, the data store accesses the user information to verify the identity of the user, accesses the catalog detail information to obtain information about items of that type, and returns the information to the user, such as in a results listing on a web page that the user views via a browser on the user device2702. Continuing with this example, information for a particular item of interest is viewed in a dedicated page or window of the browser. It should be noted, however, that embodiments of the present disclosure are not necessarily limited to the context of web pages, but are more generally applicable to processing requests in general, where the requests are not necessarily requests for content. Example requests include requests to manage and/or interact with computing resources hosted by the system2700and/or another system, such as for launching, terminating, deleting, modifying, reading, and/or otherwise accessing such computing resources. In an embodiment, each server typically includes an operating system that provides executable program instructions for the general administration and operation of that server and includes a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) 
storing instructions that, if executed by a processor of the server, cause or otherwise allow the server to perform its intended functions (e.g., the functions are performed as a result of one or more processors of the server executing instructions stored on a computer-readable storage medium). The system2700, in an embodiment, is a distributed and/or virtual computing system utilizing several computer systems and components that are interconnected via communication links (e.g., transmission control protocol (TCP) connections and/or transport layer security (TLS) or other cryptographically protected communication sessions), using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate in a system having fewer or a greater number of components than are illustrated inFIG.27. Thus, the depiction of the system2700inFIG.27should be taken as being illustrative in nature and not limiting to the scope of the disclosure. The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices that can be used to operate any of a number of applications. In an embodiment, user or client devices include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular (mobile), wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols, and such a system also includes a number of workstations running any of a variety of commercially available operating systems and other known applications for purposes such as development and database management. In an embodiment, these devices also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network, and virtual devices such as virtual machines, hypervisors, software containers utilizing operating-system level virtualization and other virtual devices or non-virtual devices supporting virtualization capable of communicating via a network. In an embodiment, a system utilizes at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), User Datagram Protocol (“UDP”), protocols operating in various layers of the Open System Interconnection (“OSI”) model, File Transfer Protocol (“FTP”), Universal Plug and Play (“UpnP”), Network File System (“NFS”), Common Internet File System (“CIFS”) and other protocols. The network, in an embodiment, is a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, a satellite network, and any combination thereof. In an embodiment, a connection-oriented protocol is used to communicate between network endpoints such that the connection-oriented protocol (sometimes called a connection-based protocol) is capable of transmitting data in an ordered stream. In an embodiment, a connection-oriented protocol can be reliable or unreliable. For example, the TCP protocol is a reliable connection-oriented protocol. Asynchronous Transfer Mode (“ATM”) and Frame Relay are unreliable connection-oriented protocols. 
Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering. In an embodiment, the system utilizes a web server that runs one or more of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers, Apache servers, and business application servers. In an embodiment, the one or more servers are also capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that are implemented as one or more scripts or programs written in any programming language, such as Java®, C, C#or C++, or any scripting language, such as Ruby, PHP, Perl, Python or TCL, as well as combinations thereof. In an embodiment, the one or more servers also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. In an embodiment, a database server includes table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers, or combinations of these and/or other database servers. In an embodiment, the system includes a variety of data stores and other memory and storage media as discussed above that can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In an embodiment, the information resides in a storage-area network (“SAN”) familiar to those skilled in the art and, similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices are stored locally and/or remotely, as appropriate. In an embodiment where a system includes computerized devices, each such device can include hardware elements that are electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), at least one output device (e.g., a display device, printer, or speaker), at least one storage device such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc., and various combinations thereof. In an embodiment, such a device also includes a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above where the computer-readable storage media reader is connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. 
In an embodiment, the system and various devices also typically include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. In an embodiment, customized hardware is used and/or particular elements are implemented in hardware, software (including portable software, such as applets), or both. In an embodiment, connections to other computing devices such as network input/output devices are employed. In an embodiment, storage media and computer readable media for containing code, or portions of code, include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the subject matter set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood however, that there is no intention to limit the subject matter recited by the claims to the specific form or forms disclosed but, on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of this disclosure, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Similarly, use of the term “or” is to be construed to mean “and/or” unless contradicted explicitly or by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. 
Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. The use of the term “set” (e.g., “a set of items”) or “subset” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, the term “subset” of a corresponding set does not necessarily denote a proper subset of the corresponding set, but the subset and the corresponding set may be equal. The use of the phrase “based on,” unless otherwise explicitly stated or clear from context, means “based at least in part on” and is not limited to “based solely on.” Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” (i.e., the same phrase with or without the Oxford comma) unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood within the context as used in general to present that an item, term, etc., may be either A or B or C, any nonempty subset of the set of A and B and C, or any set not contradicted by context or otherwise excluded that contains at least one A, at least one B, or at least one C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, and, if not contradicted explicitly or by context, any set having {A}, {B}, and/or {C} as a subset (e.g., sets with multiple “A”). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. Similarly, phrases such as “at least one of A, B, or C” and “at least one of A, B or C” refer to the same as “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, unless differing meaning is explicitly stated or clear from context. In addition, unless otherwise noted or contradicted by context, the term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). The number of items in a plurality is at least two but can be more when so indicated either explicitly or by context. Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In an embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In an embodiment, the code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. 
In an embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In an embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein. The set of non-transitory computer-readable storage media, in an embodiment, comprises multiple non-transitory computer-readable storage media, and one or more of individual non-transitory storage media of the multiple non-transitory computer-readable storage media lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code. In an embodiment, the executable instructions are executed such that different instructions are executed by different processors—for example, in an embodiment, a non-transitory computer-readable storage medium stores instructions and a main CPU executes some of the instructions while a graphics processor unit executes other instructions. In another embodiment, different components of a computer system have separate processors and different processors execute different subsets of the instructions. Accordingly, in an embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein, and such computer systems are configured with applicable hardware and/or software that enable the performance of the operations. Further, a computer system, in an embodiment of the present disclosure, is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device does not perform all operations. The use of any and all examples or exemplary language (e.g., “such as”) provided herein is intended merely to better illuminate various embodiments and does not pose a limitation on the scope of the claims unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of inventive subject material disclosed herein. Embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out inventive concepts described herein. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for embodiments of the present disclosure to be practiced otherwise than as specifically described herein. Accordingly, the scope of the present disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the scope of the present disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. 
All references including publications, patent applications, and patents cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
DETAILED DESCRIPTION Automatic speech recognition (ASR) is a field of computer science, artificial intelligence, and linguistics concerned with transforming audio data associated with speech into text representative of that speech. Similarly, natural language understanding (NLU) is a field of computer science, artificial intelligence, and linguistics concerned with enabling computers to derive meaning from text input containing natural language. ASR and NLU are often used together as part of a speech processing system. Text-to-speech (TTS) is a field of concerning transforming textual data into audio data that is synthesized to resemble human speech. Certain systems may be configured to perform actions responsive to user inputs. For example, for the user input of “Alexa, play Adele music,” a system may output music sung by an artist named Adele. For further example, for the user input of “Alexa, what is the weather,” a system may output synthesized speech representing weather information for a geographic location of the user. In a further example, for the user input of “Alexa, book me a ride to the airport,” a system may schedule a car ride to the airport with a ride sharing service. A system may receive a user input as speech. For example, a user may speak an input to a device. The device may send audio data, representing the spoken input, to a server(s). The server(s) may perform ASR processing on the audio data to generate text data representing the user input. The server(s) may perform NLU processing on the text data to determine an intent of the user input as well as portions of the text data that may be used by one or more skills to perform an action responsive to the user input. As used herein, a skill, skill component, or the like may be software running on a server(s)120that is akin to a software application running on a traditional computing device. The functionality described herein as a skill may be referred to using many different terms, such as an action, bot, app, or the like. In at least some examples, a “skill,” “skill component,” and the like may be software running on a computing device, similar to a traditional software application running on a computing device. Such skill may include a voice user interface in addition to or instead of, in at least some instances, a graphical user interface, smart home device interface, and/or other type of interface. In at least some examples, a “skill,” “skill component,” and the like may be software that is run by a third party, to the herein disclosed system, without the third party provisioning or managing one or more servers for executing the skill. In such an implementation, the system may be triggered to run a skill in response to the third party calling the system via the Internet or a mobile application. Such implementation may include, but is not limited to, Amazon's AWS Lambda. In at least some examples, a “skill,” “skill component,” and the like may be securely run by a third party, to the herein disclosed system, without the third party's device(s) being connected to the Internet. Internet of Things (IoT) devices of a third party may use, for example Amazon's AWS lambda functions, to interact with system resources and transmit data to the system (which may, in at least some implementations, be configured by the backend or other type of remote system). Such implementation may include, but is not limited to, Amazon's AWS Greengrass. 
For example, AWS Greengrass may extend the herein disclosed system to IoT devices so that such devices can act locally on data they generate, while still using the herein disclosed system for data management, analytics, and storage. During processing of a user input, situations may occur that cause a skill to perform an action that is not properly responsive to a user input. For example, a user input may request music of a certain artist be output, but the skill may cause music of a different artist to be output. For further example, a user input may request the output of weather information for a particular city in a particular state, but the skill may cause weather information for a city, having the same name but in a different state, to be output. In an example, a skill may perform an incorrect action in response to ASR processing outputting incorrect text data (e.g., text data that is not an accurate transcription of a spoken user input). In another example, a skill may perform an incorrect action in response to NLU processing outputting incorrect NLU results (e.g., outputting an incorrect intent and/or identifying portions of text data that are not usable by the skill to perform an action responsive to the user input). The present disclosure reduces friction between users and systems by configuring such systems to rewrite user-specific inputs for NLU processing. For example, when a system is unable to understand a user's input, the system may respond with "sorry, I do not know that" or may do something unrelated to the user input. The present disclosure leverages user interaction patterns, user feedback, and other data inputs to continuously and automatically improve the systems' understanding of user inputs. The present disclosure improves such systems to decrease (or eliminate) the possibility of a skill performing an action that is not responsive to a corresponding user input. The present disclosure provides a mechanism that may use user feedback to (i) detect when a skill has performed an action not responsive to a user input and (ii) correct such action so that the same incorrect action occurs less often with respect to future user inputs. User feedback may be explicit or implicit. Explicit user feedback refers, at least in part, to subsequent user inputs that explicitly indicate a performed action was not responsive to a corresponding user input. In an example, a user may say "play music by Adele" and the system may output music by an artist other than Adele. In response to the system outputting such music, the user may provide a subsequent input corresponding to "stop," "cancel," or the like. The foregoing subsequent user input may be considered explicit user feedback. Implicit user feedback refers, at least in part, to subsequent user inputs that implicitly indicate a performed action was not responsive to a corresponding user input. In an example, a user may say "play music by Adele" and the system may output music by an artist other than Adele. In response to the system outputting such music, the user may provide a subsequent input that rephrases the previous user input. Using the above example, a rephrased user input may correspond to "play music by the artist Adele," "play Adele music," or the like. Such rephrases may be considered implicit user feedback. A system may train one or more machine learning models with respect to user inputs, which resulted in incorrect actions being performed by skills, and corresponding user inputs, which resulted in the correct action being performed. 
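One way such training pairs might be harvested from implicit feedback is sketched below. The similarity threshold, the time window, and the helper name feedback_pair are assumptions made for illustration; they are not the disclosed training technique.

# Hypothetical sketch of collecting (problematic input, rephrase) training pairs.
from difflib import SequenceMatcher

EXPLICIT_NEGATIVE = {"stop", "cancel"}

def feedback_pair(prev_utterance: str, prev_time: float,
                  next_utterance: str, next_time: float,
                  max_gap_seconds: float = 60.0):
    """Return (bad_input, corrected_input) if the follow-up looks like a rephrase."""
    if next_time - prev_time > max_gap_seconds:
        return None
    if next_utterance.lower() in EXPLICIT_NEGATIVE:
        return None                     # explicit feedback: no corrected form to learn from yet
    similarity = SequenceMatcher(None, prev_utterance.lower(), next_utterance.lower()).ratio()
    if similarity > 0.6:                # similar enough to be a rephrase rather than a new request
        return (prev_utterance, next_utterance)
    return None

print(feedback_pair("play music by adele", 0.0, "play music by the artist adele", 12.0))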
The system may use the trained machine learning model(s) to rewrite user inputs that, if not rewritten, may result in incorrect actions being performed. The system may implement the trained machine learning model(s) with respect to ASR output text data to determine if the ASR output text data corresponds (or substantially corresponds) to previous ASR output text data that resulted in an incorrect action being performed. If the trained machine learning model(s) indicates the present ASR output text data corresponds (or substantially corresponds) to such previous ASR output text data, the system may rewrite the present ASR output text data to correspond to text data representing a rephrase of the user input that will (or is more likely to) result in a correct action being performed. Teachings of the present disclosure have several benefits. For example, teachings of the present disclosure decrease a likelihood of a system performing an action that is not responsive to a corresponding user input. Teachings of the present disclosure achieve this benefit by, for example, fixing ASR transcription errors, disambiguating entities identified during named entity recognition processing (which is a part of NLU processing described herein), fixing slot classification errors resulting during NLU processing, intent classification errors resulting from intent classification processing, skill processing errors, and user errors (e.g., user inadvertently speaks a user input in an incorrect manner, commonly referred to as slip of the tongue). Teachings of the present disclosure may be ordinarily configured to be opt-in features of a system. For example, while a system may be configured to perform the teachings herein, the system may not perform such teachings with respect to a user unless the user has explicitly provided permission for the system to perform the teachings herein with respect to the user. In addition, a system may be configured to enable a user to opt-out of the teachings herein, resulting in the system no longer performing the teachings herein with respect to that user. In addition, such opting out by a user may result in the system no longer using that user's data to perform the teachings herein with respect to one or more other users of the system. As such, it will be appreciated that a user may have significant control over when a system uses that user's data. FIG.1Aillustrates a system configured to use user feedback to train at least one machine learning model to rewrite user inputs.FIG.1Billustrates a system configured to rewrite user inputs. Although the figures and discussion herein illustrate certain operational steps of the system in a particular order, the steps described may be performed in a different order (as well as certain steps removed or added) without departing from the intent of the disclosure. As illustrated inFIGS.1A and1B, the system may include one or more devices (110a/110b) local to a user5and one or more servers120connected across one or more networks199. Referring toFIG.1A, the device110amay receive audio11representing a spoken user input of the user5. The device110amay generate audio data representing the audio11and send the audio data to the server(s)120, which the server(s)120receives (132). Alternatively, the device110bmay receive a text input representing a text-based user input of the user5. The device110bmay generate text data representing the text input and may send the text data to the server(s)120, which the server(s)120receives (132). 
Depending on configuration, the device (110a/110b) may send audio data or text data to the server(s)120via a companion application installed on the device (110a/110b). The companion application may enable the device110to communicate with the server(s)120via the network(s)199. An example of a companion application is the Amazon Alexa application that operates on a phone/tablet. The server(s)120may perform (134) an action potentially responsive to the user input. If the user input is received as audio data, the server(s)120may perform ASR processing on the audio data to generate text data. The server(s)120may perform NLU processing on text data (either as received at step132or as output from ASR processing) to determine the action to be performed. The action may correspond to the outputting of content (e.g., music, weather information, etc.) or may correspond to the performance of some other action (e.g., booking of a reservation, creation of an electronic calendar event, setting of a timer, etc.). At least partially contemporaneous to the action being performed, or after the action is performed, the device (110a/110b) may receive a subsequent user input. The device (110a/110b) may send data representing the subsequent user input to the server(s)120, which the server(s)120receives (136). The server(s)120may determine (138) the subsequent user input represents that the action is or was not a correct response to the initial user input. For example, the server(s)120may determine the subsequent user input corresponds to explicit user feedback (e.g., may determine the subsequent user input explicitly indicates the action is or was not a correct response to the initial user input). For further example, the server(s)120may determine the subsequent user input corresponds to implicit user feedback (e.g., may determine the subsequent user input corresponds to a rephrasing of the initial user input). The server(s)120may at least partially train (140) at least one machine learning model, using the original user input and the subsequent user input, to detect when future user inputs should be rewritten. Referring toFIG.1B, sometime after the server(s)120trains the at least one machine learning model, the server(s)120may receive (142) data representing a user input. The server(s)120may use (144) the trained one or more machine learning models to determine the user input corresponds to a user input with respect to which a nonresponsive action was performed. The server(s)120may generate (146) data representing a rewritten form of the user input received at step142. For example, the user input received at step142may correspond to "what is the weather in Petersburg." The server(s)120may have previously output weather information for a city named "Petersburg" located closest to the device (110a/110b) when the user intended the system to output weather information for a different Petersburg (e.g., Petersburg, Alaska). With respect to the previous system output, the server(s)120may have received a rephrase of the user input corresponding to "what is the weather in Petersburg, Alaska." Based on these previous user/system interactions, the server(s)120may rewrite the user input of "what is the weather in Petersburg" (received at step142) to correspond to "what is the weather in Petersburg, Alaska." The server(s)120may perform (148) an action potentially responsive to the rewritten user input. The server(s)120may determine (150) the action, performed in response to the rewritten user input, was correct.
Such determination may be based, at least in part, on explicit user feedback and/or implicit user feedback. The server(s)120may retrain (152) the at least one trained machine learning model using the determination that the action, performed in response to the rewritten user input, was correct. Steps132through152may be performed with respect to various user inputs received by the server(s)120. Thus, one skilled in the art will appreciate that the number of user inputs that may be correctly rewritten by the server(s)120may grow as user inputs and associated user feedback are received and processed by the server(s)120. The system may operate using various components as illustrated inFIG.2. The various components may be located on same or different physical devices. Communication between various components may occur directly or across a network(s)199. An audio capture component(s), such as a microphone or array of microphones of a device110, captures audio11. The device110processes audio data, representing the audio11, to determine whether speech is detected. The device110may use various techniques to determine whether audio data includes speech. In some examples, the device110may apply voice activity detection (VAD) techniques. Such techniques may determine whether speech is present in audio data based on various quantitative aspects of the audio data, such as the spectral slope between one or more frames of the audio data; the energy levels of the audio data in one or more spectral bands; the signal-to-noise ratios of the audio data in one or more spectral bands; or other quantitative aspects. In other examples, the device110may implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees. In still other examples, the device110may apply Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM) techniques to compare the audio data to one or more acoustic models in storage, which acoustic models may include models corresponding to speech, noise (e.g., environmental noise or background noise), or silence. Still other techniques may be used to determine whether speech is present in audio data. Once speech is detected in audio data representing the audio11, the device110may use a wakeword detection component220to perform wakeword detection to determine when a user intends to speak an input to the device110. An example wakeword is "Alexa." Wakeword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, the audio data representing the audio11is analyzed to determine if specific characteristics of the audio data match preconfigured acoustic waveforms, audio signatures, or other data to determine if the audio data "matches" stored audio data corresponding to a wakeword. Thus, the wakeword detection component220may compare audio data to stored models or data to detect a wakeword. One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode audio signals, with wakeword searching being conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources. Another approach for wakeword detection builds HMMs for each wakeword and non-wakeword speech signals, respectively. The non-wakeword speech includes other spoken words, background noise, etc.
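For illustration only, the energy-based VAD techniques mentioned above might be sketched as follows; the frame length, threshold factor, and noise-floor estimate are arbitrary assumptions rather than parameters of the disclosed device110.

```python
import numpy as np

def detect_speech_frames(samples: np.ndarray, frame_len: int = 400,
                         threshold_factor: float = 3.0) -> np.ndarray:
    """Return a boolean flag per frame indicating likely speech, based on
    short-term energy relative to an estimated noise floor.
    frame_len of 400 samples corresponds to 25 ms at a 16 kHz sample rate."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)
    # Assume the quietest frames are background noise and use them as a floor.
    noise_floor = np.percentile(energy, 10) + 1e-10
    return energy > threshold_factor * noise_floor
```

A deployed device would typically combine such a detector with the classifier- and model-based techniques described above. The HMM-based wakeword approach introduced above is described further below.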
There can be one or more HMMs built to model the non-wakeword speech characteristics, which are named filler models. Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on wakeword presence. This approach can be extended to include discriminative information by incorporating a hybrid DNN-HMM decoding framework. In another example, the wakeword detection component220may be built on deep neural network (DNN)/recurrent neural network (RNN) structures directly, without HMM being involved. Such an architecture may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for DNN, or using RNN. Follow-on posterior threshold tuning or smoothing is applied for decision making. Other techniques for wakeword detection, such as those known in the art, may also be used. Once the wakeword is detected, the device110may "wake" and begin transmitting audio data211, representing the audio11, to the server(s)120. The audio data211may include data corresponding to the wakeword, or the portion of the audio data211corresponding to the wakeword may be removed by the device110prior to sending the audio data211to the server(s)120. Upon receipt by the server(s)120, the audio data211may be sent to an orchestrator component230. The orchestrator component230may include memory and logic that enables the orchestrator component230to transmit various pieces and forms of data to various components of the system, as well as perform other operations as described herein. The orchestrator component230sends the audio data211to an ASR component250. The ASR component250transcribes the audio data211into text data potentially representing speech represented in the audio data211. The ASR component250interprets the speech in the audio data211based on a similarity between the audio data211and pre-established language models. For example, the ASR component250may compare the audio data211with models for sounds (e.g., subword units, such as phonemes, etc.) and sequences of sounds to identify words that match the sequence of sounds of the speech represented in the audio data211. The ASR component250sends the text data generated thereby to an NLU component260, for example via the orchestrator component230. The text data output by the ASR component250may include a top scoring ASR hypothesis or may include multiple ASR hypotheses. Each ASR hypothesis may be associated with a respective score representing a confidence of ASR processing performed to generate the ASR hypothesis with which the score is associated. The device110may send text data213to the server(s)120. Upon receipt by the server(s)120, the text data213may be sent to the orchestrator component230, which may send the text data213to the NLU component260. The NLU component260attempts to make a semantic interpretation of the phrase(s) or statement(s) represented in the text data input therein. That is, the NLU component260determines one or more meanings associated with the user input represented in the text data based on one or more words represented in the text data. The NLU component260determines an intent representing an action that a user desires be performed as well as pieces of the text data that allow a device (e.g., the device110, the server(s)120, a skill290, a skill server(s)225, etc.) to execute the intent.
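Before turning to NLU examples, the N-best ASR output described above might be represented, purely hypothetically, as follows; the class and field names are illustrative and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AsrHypothesis:
    text: str          # candidate transcription of the spoken input
    confidence: float  # score reflecting confidence in this transcription

def top_hypothesis(n_best: list[AsrHypothesis]) -> AsrHypothesis:
    """Select the top scoring ASR hypothesis, e.g., for downstream NLU processing."""
    return max(n_best, key=lambda h: h.confidence)

n_best = [AsrHypothesis("play adele music", 0.92),
          AsrHypothesis("play a deli music", 0.05)]
print(top_hypothesis(n_best).text)  # -> play adele music
```

Examples of the intents and entities the NLU component260may determine from such text data follow.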
For example, if the text data corresponds to "play Adele music," the NLU component260may determine an intent that the system output music and may identify "Adele" as an artist. For further example, if the text data corresponds to "what is the weather," the NLU component260may determine an intent that the system output weather information associated with a geographic location of the device110(or a geographic location represented in a user profile). In another example, if the text data corresponds to "turn off the lights," the NLU component260may determine an intent that the system turn off lights associated with the device110(or another device represented in a user profile). The NLU component260may send NLU results data (which may include tagged text data, indicators of intent, etc.) to the orchestrator component230. The orchestrator component230may send the NLU results data to a skill(s)290configured to perform an action believed at least partially responsive to the user input. The NLU results data may include a single NLU hypothesis, or may include multiple NLU hypotheses. An NLU hypothesis may correspond to an intent indicator and corresponding tagged text data. A "skill" may be software running on the server(s)120that is akin to a software application running on a traditional computing device. That is, a skill290may enable the server(s)120to execute specific functionality in order to provide data or produce some other requested output. The server(s)120may be configured with more than one skill290. For example, a weather service skill may enable the server(s)120to provide weather information, a car service skill may enable the server(s)120to book a trip with respect to a taxi or ride sharing service, a restaurant skill may enable the server(s)120to order a pizza with respect to the restaurant's online ordering system, etc. A skill290may operate in conjunction between the server(s)120and other devices, such as the device110, in order to complete certain functions. Inputs to a skill290may come from speech processing interactions or through other interactions or input sources. A skill290may include hardware, software, firmware, or the like that may be dedicated to a particular skill290or shared among different skills290. In addition or alternatively to being implemented by the server(s)120, a skill290may be implemented at least partially by a skill server(s)225. Such may enable a skill server(s)225to execute specific functionality in order to provide data or perform some other action requested by a user. Types of skills include home automation skills (e.g., skills that enable a user to control home devices such as lights, door locks, cameras, thermostats, etc.), entertainment device skills (e.g., skills that enable a user to control entertainment devices such as smart televisions), video skills, flash briefing skills, as well as custom skills that are not associated with any pre-configured type of skill. The server(s)120may be configured with a single skill290dedicated to interacting with more than one skill server225. The server(s)120may be configured with a skill290that communicates with more than one type of device (e.g., different types of home automation devices). Unless expressly stated otherwise, reference to a skill, skill device, or skill component may include a skill290operated by the server(s)120and/or the skill server(s)225. Moreover, the functionality described herein as a skill may be referred to using many different terms, such as an action, bot, app, or the like.
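As a simplified, hypothetical sketch of the NLU results data and skill routing described above (the data structure, intent names, and skill names below are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class NluHypothesis:
    intent: str                                 # e.g., "<PlayMusic>"
    slots: dict = field(default_factory=dict)   # e.g., {"ArtistName": "Adele"}
    score: float = 0.0

# Hypothetical mapping from intent indicators to skills able to handle them.
INTENT_TO_SKILL = {
    "<PlayMusic>": "music_skill",
    "<GetWeather>": "weather_skill",
    "<TurnOff>": "home_automation_skill",
}

def route_to_skill(hypothesis: NluHypothesis) -> str:
    """Pick a skill believed able to perform an action responsive to the intent."""
    return INTENT_TO_SKILL.get(hypothesis.intent, "fallback_skill")

hyp = NluHypothesis(intent="<PlayMusic>", slots={"ArtistName": "Adele"}, score=0.95)
print(route_to_skill(hyp))  # -> music_skill
```

An actual system would select among skills290using the ranked NLU results and the other signals described later in this description.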
The server(s)120may include a TTS component280that generates audio data (e.g., synthesized speech) from text data using one or more different methods. Text data input to the TTS component280may come from a skill290, the orchestrator component230, or another component of the system. In one method of synthesis called unit selection, the TTS component280matches text data against a database of recorded speech. The TTS component280selects matching units of recorded speech and concatenates the units together to form audio data. In another method of synthesis called parametric synthesis, the TTS component280varies parameters such as frequency, volume, and noise to create audio data including an artificial speech waveform. Parametric synthesis uses a computerized voice generator, sometimes called a vocoder. The server(s)120may include profile storage270. The profile storage270may include a variety of information related to individual users, groups of users, devices, etc. that interact with the system. A “profile” refers to a set of data associated with a user, device, etc. A profile may include preferences specific to a user, device, etc.; input and output capabilities of one or more devices; internet connectivity information; user bibliographic information; subscription information; as well as other information. The profile storage270may include one or more user profiles, with each user profile being associated with a different user identifier. Each user profile may include various user identifying information. Each user profile may also include preferences of the user and/or one or more device identifiers, representing one or more devices of the user. The profile storage270may include one or more group profiles. Each group profile may be associated with a different group profile identifier. A group profile may be specific to a group of users. That is, a group profile may be associated with two or more individual user profiles. For example, a group profile may be a household profile that is associated with user profiles associated with users corresponding to a single household. A group profile may include preferences shared by all the user profiles associated therewith. Each user profile associated with a group profile may additionally include preferences specific to the user associated therewith. That is, each user profile may include preferences unique from one or more other user profiles associated with the same group profile. A user profile may be a stand-alone profile or may be associated with a group profile. A group profile may include one or more device identifiers representing one or more devices associated with the group profile. The profile storage270may include one or more device profiles. Each device profile may be associated with a different device identifier. Each device profile may include various device identifying information. Each device profile may also include one or more user identifiers, representing one or more users associated with the device. For example, a household device's profile may include the user identifiers of users of the household. The system may be configured to incorporate user permissions and may only perform activities disclosed herein if approved by a user. As such, the systems, devices, components, and techniques described herein would be typically configured to restrict processing where appropriate and only process user information in a manner that ensures compliance with all appropriate laws, regulations, standards, and the like. 
The herein disclosed system and techniques can be implemented on a geographic basis to ensure compliance with laws in various jurisdictions and entities in which the components of the system and/or user(s) are located. The server(s)120may include a user recognition component295that recognizes one or more users associated with data input to the system. The user recognition component295may take as input the audio data211. The user recognition component295may perform user recognition by comparing audio characteristics in the audio data211to stored audio characteristics of users. The user recognition component295may also or alternatively perform user recognition by comparing biometric data (e.g., fingerprint data, iris data, etc.), received by the system in correlation with the present user input, to stored biometric data of users. The user recognition component295may also or alternatively perform user recognition by comparing image data (e.g., including a representation of at least a feature of a user), received by the system in correlation with the present user input, with stored image data including representations of features of different users. The user recognition component295may perform additional user recognition processes, including those known in the art. For a particular user input, the user recognition component295may perform processing with respect to stored data of users associated with the device110that captured the user input. The user recognition component295determines whether user input originated from a particular user. For example, the user recognition component295may generate a first value representing a likelihood that the user input originated from a first user, a second value representing a likelihood that the user input originated from a second user, etc. The user recognition component295may also determine an overall confidence regarding the accuracy of user recognition operations. The user recognition component295may output a single user identifier corresponding to the most likely user that originated the user input, or may output multiple user identifiers with respective values representing likelihoods of respective users originating the user input. The output of the user recognition component295may be used to inform NLU processing, processing performed by a skill290, as well as processing performed by other components of the system. FIG.3illustrates how NLU processing is performed on text data. Generally, the NLU component260attempts to make a semantic interpretation of text data input thereto. That is, the NLU component260determines the meaning behind text data based on the individual words and/or phrases represented therein. The NLU component260interprets text data to derive an intent of the user as well as pieces of the text data that allow a device (e.g., the device110, the server(s)120, skill server(s)225, etc.) to complete that action. The NLU component260may process text data including several ASR hypotheses. The NLU component260may process all (or a portion of) the ASR hypotheses input therein. Even though the ASR component250may output multiple ASR hypotheses, the NLU component260may be configured to only process with respect to the top scoring ASR hypothesis. The NLU component260may include one or more recognizers363. Each recognizer363may be associated with a different skill290. Each recognizer363may process with respect to text data input to the NLU component260. 
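The per-user likelihood values described above might be sketched as follows, under the assumption (made here only for illustration) that stored audio characteristics are available as fixed-length feature vectors; the function name and scoring method are hypothetical.

```python
import numpy as np

def recognize_user(input_features: np.ndarray,
                   stored_profiles: dict[str, np.ndarray]) -> dict[str, float]:
    """Return a likelihood-style score per user identifier by comparing audio
    characteristics of the input to stored characteristics of known users."""
    scores = {}
    for user_id, stored in stored_profiles.items():
        denom = np.linalg.norm(input_features) * np.linalg.norm(stored) + 1e-10
        scores[user_id] = float(np.dot(input_features, stored) / denom)
    return scores

profiles = {"user_1": np.array([0.9, 0.1, 0.3]), "user_2": np.array([0.2, 0.8, 0.5])}
scores = recognize_user(np.array([0.85, 0.15, 0.25]), profiles)
print(max(scores, key=scores.get))  # identifier of the most likely user
```

The recognizers363of the NLU component260, introduced above, are described in further detail below.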
Each recognizer363may operate at least partially in parallel with other recognizers363of the NLU component260. Each recognizer363may include a named entity recognition (NER) component362. The NER component362attempts to identify grammars and lexical information that may be used to construe meaning with respect to text data input therein. The NER component362identifies portions of text data that correspond to a named entity that may be applicable to processing performed by a skill290. The NER component362(or other component of the NLU component260) may also determine whether a word refers to an entity whose identity is not explicitly mentioned in the text data, for example “him,” “her,” “it” or other anaphora, exophora or the like. Each recognizer363, and more specifically each NER component362, may be associated with a particular grammar model and/or database373, a particular set of intents/actions374, and a particular personalized lexicon386. Each gazetteer384may include skill-indexed lexical information associated with a particular user and/or device110. For example, a Gazetteer A (384a) includes skill-indexed lexical information386aato386an. A user's music skill lexical information might include album titles, artist names, and song names, for example, whereas a user's contact list skill lexical information might include the names of contacts. Since every user's music collection and contact list is presumably different, this personalized information improves entity resolution. An NER component362applies grammar models376and lexical information386to determine a mention of one or more entities in text data. In this manner, the NER component362identifies “slots” (corresponding to one or more particular words in text data) that may be used for later processing. The NER component362may also label each slot with a type (e.g., noun, place, city, artist name, song name, etc.). Each grammar model376includes the names of entities (i.e., nouns) commonly found in speech about the particular skill290to which the grammar model376relates, whereas the lexical information386is personalized to the user and/or the device110from which the user input originated. For example, a grammar model376associated with a shopping skill may include a database of words commonly used when people discuss shopping. Each recognizer363may also include an intent classification (IC) component364. An IC component364parses text data to determine an intent(s). An intent represents an action a user desires be performed. An IC component364may communicate with a database374of words linked to intents. For example, a music intent database may link words and phrases such as “quiet,” “volume off,” and “mute” to a <Mute> intent. An IC component364identifies potential intents by comparing words and phrases in text data to the words and phrases in an intents database374. The intents identifiable by a specific IC component364are linked to skill-specific grammar frameworks376with “slots” to be filled. Each slot of a grammar framework376corresponds to a portion of text data that the system believes corresponds to an entity. For example, a grammar framework376corresponding to a <PlayMusic> intent may correspond to sentence structures such as “Play {Artist Name},” “Play {Album Name},” “Play {Song name},” “Play {Song name} by {Artist Name},” etc. However, to make resolution more flexible, grammar frameworks376may not be structured as sentences, but rather based on associating slots with grammatical tags. 
For example, an NER component362may parse text data to identify words as subject, object, verb, preposition, etc. based on grammar rules and/or models prior to recognizing named entities in the text data. An IC component364(e.g., implemented by the same recognizer363as the NER component362) may use the identified verb to identify an intent. The NER component362may then determine a grammar model376associated with the identified intent. For example, a grammar model376for an intent corresponding to <PlayMusic> may specify a list of slots applicable to play the identified “object” and any object modifier (e.g., a prepositional phrase), such as {Artist Name}, {Album Name}, {Song name}, etc. The NER component362may then search corresponding fields in a lexicon386, attempting to match words and phrases in text data the NER component362previously tagged as a grammatical object or object modifier with those identified in the lexicon386. An NER component362may perform semantic tagging, which is the labeling of a word or combination of words according to their type/semantic meaning. An NER component362may parse text data using heuristic grammar rules, or a model may be constructed using techniques such as hidden Markov models, maximum entropy models, log linear models, conditional random fields (CRF), and the like. For example, an NER component362implemented by a music recognizer may parse and tag text data corresponding to “play mother's little helper by the rolling stones” as {Verb}: “Play,” {Object}: “mother's little helper,” {Object Preposition}: “by,” and {Object Modifier}: “the rolling stones.” The NER component362identifies “Play” as a verb, which an IC component364may determine corresponds to a <PlayMusic> intent. At this stage, no determination has been made as to the meaning of “mother's little helper” and “the rolling stones,” but based on grammar rules and models, the NER component362has determined the text of these phrases relates to the grammatical object (i.e., entity) of the user input represented in the text data. The frameworks linked to the intent are then used to determine what database fields should be searched to determine the meaning of these phrases, such as searching a user's gazetteer384for similarity with the framework slots. For example, a framework for a <PlayMusic> intent might indicate to attempt to resolve the identified object based on {Artist Name}, {Album Name}, and {Song name}, and another framework for the same intent might indicate to attempt to resolve the object modifier based on {Artist Name}, and resolve the object based on {Album Name} and {Song Name} linked to the identified {Artist Name}. If the search of the gazetteer384does not resolve a slot/field using gazetteer information, the NER component362may search a database of generic words (e.g., in the knowledge base372). For example, if the text data includes “play songs by the rolling stones,” after failing to determine an album name or song name called “songs” by “the rolling stones,” the NER component362may search the database for the word “songs.” In the alternative, generic words may be checked before the gazetteer information, or both may be tried, potentially producing two different results. An NER component362may tag text data to attribute meaning thereto. For example, an NER component362may tag “play mother's little helper by the rolling stones” as: {skill} Music, {intent} <PlayMusic>, {artist name} rolling stones, {media type} SONG, and {song title} mother's little helper. 
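The framework-and-slot matching described above can be loosely illustrated with the following sketch; the patterns shown are simplified stand-ins for the grammar frameworks376, and a real NER component362would rely on trained models (e.g., CRFs) rather than regular expressions.

```python
import re

# Simplified stand-ins for <PlayMusic> grammar frameworks with slots.
FRAMEWORKS = [
    ("<PlayMusic>", r"^play (?P<SongName>.+) by (?P<ArtistName>.+)$"),
    ("<PlayMusic>", r"^play (?P<SongName>.+)$"),
]

def tag_utterance(text: str):
    """Return (intent, slots) for the first framework that matches the text."""
    for intent, pattern in FRAMEWORKS:
        match = re.match(pattern, text.lower())
        if match:
            return intent, match.groupdict()
    return None, {}

intent, slots = tag_utterance("play mother's little helper by the rolling stones")
print(intent, slots)
# -> <PlayMusic> {'SongName': "mother's little helper", 'ArtistName': 'the rolling stones'}
```

Further tagging examples follow.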
For further example, the NER component362may tag "play songs by the rolling stones" as: {skill} Music, {intent} <PlayMusic>, {artist name} rolling stones, and {media type} SONG. The NLU component260may generate cross-skill N-best list data440, which may include a list of NLU hypotheses output by each recognizer363(as illustrated inFIG.4). A recognizer363may output tagged text data generated by an NER component362and an IC component364operated by the recognizer363, as described above. Each NLU hypothesis including an intent indicator and text/slots may be grouped as an NLU hypothesis represented in the cross-skill N-best list data440. Each NLU hypothesis may also be associated with one or more respective score(s) for the NLU hypothesis. For example, the cross-skill N-best list data440may be represented as, with each line representing a separate NLU hypothesis:

[0.95] Intent: <PlayMusic> ArtistName: Lady Gaga SongName: Poker Face
[0.95] Intent: <PlayVideo> ArtistName: Lady Gaga VideoName: Poker Face
[0.01] Intent: <PlayMusic> ArtistName: Lady Gaga AlbumName: Poker Face
[0.01] Intent: <PlayMusic> SongName: Pokerface

The NLU component260may send the cross-skill N-best list data440to a pruning component450. The pruning component450may sort the NLU hypotheses represented in the cross-skill N-best list data440according to their respective scores. The pruning component450may then perform score thresholding with respect to the cross-skill N-best list data440. For example, the pruning component450may select NLU hypotheses represented in the cross-skill N-best list data440associated with confidence scores satisfying (e.g., meeting and/or exceeding) a threshold confidence score. The pruning component450may also or alternatively perform number of NLU hypothesis thresholding. For example, the pruning component450may select a maximum threshold number of top scoring NLU hypotheses. The pruning component450may generate cross-skill N-best list data460including the selected NLU hypotheses. The purpose of the pruning component450is to create a reduced list of NLU hypotheses so that downstream, more resource intensive, processes may only operate on the NLU hypotheses that most likely represent the user's intent. The NLU component260may also include a light slot filler component452. The light slot filler component452can take text data from slots represented in the NLU hypotheses output by the pruning component450and alter it to make the text data more easily processed by downstream components. The light slot filler component452may perform low latency operations that do not involve heavy operations, such as those requiring reference to a knowledge base. The purpose of the light slot filler component452is to replace words with other words or values that may be more easily understood by downstream system components. For example, if an NLU hypothesis includes the word "tomorrow," the light slot filler component452may replace the word "tomorrow" with an actual date for purposes of downstream processing. Similarly, the light slot filler component452may replace the word "CD" with "album" or the words "compact disc." The replaced words are then included in the cross-skill N-best list data460. The NLU component260sends the cross-skill N-best list data460to an entity resolution component470. The entity resolution component470can apply rules or other instructions to standardize labels or tokens from previous stages into an intent/slot representation. The precise transformation may depend on the skill290.
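The score thresholding and hypothesis-count thresholding performed by the pruning component450might be sketched, with arbitrary example thresholds, as follows:

```python
def prune(n_best: list[dict], score_threshold: float = 0.1,
          max_hypotheses: int = 3) -> list[dict]:
    """Keep only top scoring NLU hypotheses that satisfy a confidence threshold,
    up to a maximum number of hypotheses."""
    kept = [h for h in n_best if h["score"] >= score_threshold]
    kept.sort(key=lambda h: h["score"], reverse=True)
    return kept[:max_hypotheses]

n_best = [
    {"score": 0.95, "intent": "<PlayMusic>", "slots": {"ArtistName": "Lady Gaga", "SongName": "Poker Face"}},
    {"score": 0.95, "intent": "<PlayVideo>", "slots": {"ArtistName": "Lady Gaga", "VideoName": "Poker Face"}},
    {"score": 0.01, "intent": "<PlayMusic>", "slots": {"AlbumName": "Poker Face"}},
]
print(len(prune(n_best)))  # -> 2 (the two 0.95 hypotheses survive pruning)
```

Examples of the skill-dependent transformations applied by the entity resolution component470follow.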
For example, for a travel skill, the entity resolution component470may transform text data corresponding to "Boston airport" to the standard BOS three-letter code referring to the airport. The entity resolution component470can refer to a knowledge base that is used to specifically identify the precise entity referred to in each slot of each NLU hypothesis represented in the cross-skill N-best list data460. Specific intent/slot combinations may also be tied to a particular source, which may then be used to resolve the text data. In the example "play songs by the stones," the entity resolution component470may reference a personal music catalog, Amazon Music account, user profile data, or the like. The entity resolution component470may output text data including an altered N-best list that is based on the cross-skill N-best list data460, and that includes more detailed information (e.g., entity IDs) about the specific entities mentioned in the slots and/or more detailed slot data that can eventually be used by downstream components to perform an action responsive to the user input. The NLU component260may include multiple entity resolution components470and each entity resolution component470may be specific to one or more skills290. The entity resolution component470may not be successful in resolving every entity and filling every slot represented in the cross-skill N-best list data460. This may result in the entity resolution component470outputting incomplete results. The NLU component260may include a ranker component490. The ranker component490may assign a particular confidence score to each NLU hypothesis input therein. The confidence score of an NLU hypothesis may represent a confidence of the system in the NLU processing performed with respect to the NLU hypothesis. The confidence score of a particular NLU hypothesis may be affected by whether the NLU hypothesis has unfilled slots. For example, if an NLU hypothesis associated with a first skill includes slots that are all filled/resolved, that NLU hypothesis may be assigned a higher confidence score than another NLU hypothesis including at least some slots that are unfilled/unresolved by the entity resolution component470. The ranker component490may apply re-scoring, biasing, and/or other techniques to determine the top scoring NLU hypotheses. To do so, the ranker component490may consider not only the data output by the entity resolution component470, but may also consider other data491. The other data491may include a variety of information. The other data491may include skill290rating or popularity data. For example, if one skill290has a particularly high rating, the ranker component490may increase the score of an NLU hypothesis associated with that skill290, and vice versa. The other data491may include information about skills290that have been enabled for the user identifier and/or device identifier associated with the current user input. For example, the ranker component490may assign higher scores to NLU hypotheses associated with enabled skills290than NLU hypotheses associated with non-enabled skills290. The other data491may include data indicating user usage history, such as if the user identifier associated with the current user input is regularly associated with user inputs that invoke a particular skill290or does so at particular times of day. The other data491may include data indicating date, time, location, weather, type of device110, user identifier, device identifier, context, as well as other information.
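Purely as a hypothetical illustration of how the ranker component490might bias confidence scores using the other data491(the weights below are arbitrary and not part of the disclosure):

```python
def rescore(hypothesis_score: float, skill_enabled: bool, skill_rating: float,
            unfilled_slots: int) -> float:
    """Adjust an NLU hypothesis confidence score using contextual signals."""
    score = hypothesis_score
    score += 0.05 if skill_enabled else -0.05   # favor enabled skills
    score += 0.02 * (skill_rating - 3.0)        # favor highly rated skills (1-5 scale)
    score -= 0.10 * unfilled_slots              # penalize unresolved slots
    return max(0.0, min(1.0, score))

print(rescore(0.80, skill_enabled=True, skill_rating=4.5, unfilled_slots=0))  # -> 0.88
```

Further examples of the other data491follow.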
For example, the ranker component490may consider when any particular skill290is currently active (e.g., music being played, a game being played, etc.) with respect to the user or device110associated with the current user input. The other data491may include device type information. For example, if the device110does not include a display, the ranker component490may decrease the score associated with an NLU hypothesis that would result in displayable content being presented to a user, and vice versa. Following ranking by the ranker component490, the NLU component260may output NLU results data485to the orchestrator component230. The NLU results data485may include the top scoring NLU hypotheses as determined by the ranker component490. Alternatively, the NLU results data485may include the top scoring NLU hypothesis as determined by the ranker component490. The orchestrator component230may select a skill290, based on the NLU results data485, for performing an action responsive to the user input. In an example, the orchestrator component230may send all (or a portion of) the NLU results data485to a skill290that is represented in the NLU results data485and to be invoked to perform an action responsive to the user input. The server(s)120may include a user input rewrite service285. The user input rewrite service285may include a model building component510(as illustrated inFIG.5). The model building component510may train one or more machine learning models to determine when user inputs should be rewritten. One skilled in the art will appreciate that the model building component510is merely illustrative, and that the user input rewrite service285may also or additionally include one or more other components for rewriting user inputs. For example, the user input rewrite service285may include one or more components that build one or more graphs and/or one or more rules for determining when user inputs should be rewritten. The model building component510may train the one or more machine learning models during offline operations. The model building component510may train the one or more machine learning models using various data. The trained one or more machine learning models may be configured to output, for a given user input, a value representing a confidence that the user input should be rewritten. The value may be a scalar value from, for example, 1 to 5 (e.g., comprising the integers 1, 2, 3, 4, and 5). In an example, a value of “1” may represent a lowest confidence that a user input should be rewritten. In another example, a value of “5” may represent a highest confidence that a user input should be rewritten. In other examples, the value may be a binned value (e.g., corresponding to high, medium, or low). Data input to the model building component510may include output of the ASR component250(e.g., ASR hypotheses), output of the NLU component260(e.g., NLU hypotheses), audio data211, a time at which a user input was received by the system, barge-in data (e.g., data representing detection of a wakeword while the system is outputting content believed responsive to a previous user input), data representing an action performed in response to a previous user input, data representing a time since a most recent user input was received by the system from a particular user and/or device, data representing explicit user feedback, data representing implicit user feedback, data representing a number of user profiles associated with a given user input, etc. 
Data input to the model building component510may include data representing a length of NLU processing, data representing a number of barge-ins received with respect to a particular action performed in response to a particular user input, data representing a diversity of intents generated for a particular user input, data indicating a number of turns in a particular dialog, data representing user input rephrasing (e.g., data representing that a user input corresponds to a rephrasing of a previous user input), etc. As used herein, a "dialog" may correspond to various user inputs and system outputs. When the server(s)120receives a user input, the server(s)120may associate the data (e.g., audio data or text data) representing the user input with a session identifier. The session identifier may be associated with various speech processing data (e.g., an intent indicator(s), a category of skill to be invoked in response to the user input, etc.). When the system invokes a skill, the system may send the session identifier to the skill in addition to NLU results data. If the skill outputs data for presentment to the user, the skill may associate the data with the session identifier. The foregoing is illustrative and, thus, one skilled in the art will appreciate that a session identifier may be used to track data transmitted between various components of the system. A user input and corresponding action performed by a system may be referred to as a dialog "turn." The model building component510may at least partially train one or more machine learning models using previous user input data505. The previous user input data505may be represented as audio data, an ASR hypothesis, and/or an NLU hypothesis. The model building component510may expand an ASR hypothesis and/or NLU hypothesis to more accurately reflect a corresponding user input. For example, if an ASR hypothesis corresponds to "play Adele," the model building component510may expand the ASR hypothesis to correspond to "play music by Adele." For further example, if an NLU hypothesis includes an intent indicator corresponding to <Play> and a resolved slot corresponding to {artistname: Adele}, the model building component510may rewrite the NLU hypothesis to include an intent indicator corresponding to <PlayMusic> and a resolved slot corresponding to {artistname: Adele}. The model building component510may use instances of original and rewritten ASR hypotheses and/or NLU hypotheses to at least partially train at least one machine learning model to determine when user inputs should be rewritten. The model building component510may at least partially train a machine learning model(s) using feedback data515. The feedback data515may represent explicit user feedback, such as user ratings, spoken or textual user inputs, etc. The feedback data515may also represent sentiment data. Sentiment data may comprise positive, negative, and neutral feedback captured in spoken and textual user inputs. Sentiment data may include expressed frustration or satisfaction using polarized language (e.g., positive or negative expression). For example, if a user says "you are awesome!", sentiment data may reflect user satisfaction. Sentiment data may be captured during runtime operations. In various examples, sentiment data may be identified by comparing input data to known sentiment data (e.g., stored in a table or other data structure). The model building component510may at least partially train a machine learning model(s) using behavioral data525.
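The table-lookup approach to sentiment data mentioned above might look roughly like the following; the word lists are hypothetical placeholders rather than actual system data.

```python
import re

# Hypothetical polarity tables of the kind alluded to above.
POSITIVE = {"awesome", "great", "thanks", "perfect"}
NEGATIVE = {"wrong", "terrible", "stop", "useless"}

def sentiment(user_text: str) -> str:
    """Classify a user input as positive, negative, or neutral feedback."""
    tokens = set(re.findall(r"[a-z']+", user_text.lower()))
    pos, neg = len(tokens & POSITIVE), len(tokens & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(sentiment("you are awesome!"))  # -> positive
```

Behavioral data525, introduced above, is described next.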
Behavioral data525may represent one or more characteristics of one or more user inputs. In at least some examples, behavioral data525and/or feedback data515may represent user sentiment regarding a user's interaction with the system. Behavioral data525may include user input rephrasing data (e.g., implicit user feedback). User input rephrasing data may represent similarities between consecutive user inputs received from a user during a dialog. Accordingly, user input rephrasing represents examples where users rephrase a particular input when the system does not understand the user input correctly the first time. The behavioral data525may include intent and slot repetition data. Similar to user input rephrasing data, intent and slot repetition data may represent the repetition of intents (with associated slots) such as when the system does not interpret the user input correctly the first time. The behavioral data525may include barge-in data. Barge-in data may represent instances when the system detects a wakeword while the system is performing an action believed responsive to a user input (e.g., the user interrupts or "barges in" with a subsequent user input while the system is performing an action). The behavioral data525may include termination data. Termination data may represent instances when a user instructs the system to stop what the system is currently doing. For example, the system may be performing an action (such as outputting music) and the user may state "stop!" or the like. The behavioral data525may include user question data. User question data may represent scenarios in which a user inquires why the system has performed a particular action. For example, a user may provide an input corresponding to "why did you say that" or the like. The behavioral data525may include confirmation and/or negation data. Confirmation data may represent scenarios when users confirm suggestions from the system. For example, the system may suggest a particular song and the user may say "yes" or "of course" or some other confirmation utterance. Negation data represents scenarios where the user negates or responds negatively to a suggestion. The behavioral data525may include duration data that may represent a time difference between consecutive user inputs. Behavioral data525may include length of speech data that may indicate the length of time that a user input lasts. The behavioral data525may include filler word data. Filler word data may indicate the presence of filler words (e.g., "umm", "ahh", "well", etc.) in user speech. The model building component510may at least partially train a machine learning model(s) using response characteristic data535. Response characteristic data535may include coherence data representing a degree of coherence between a response of the system and the user input for the same turn. In an example, if a response of the system and the user input are related to the same question, an indication of coherence for the turn may be sent to the model building component510. The response characteristic data535may include response length data. Response length data may represent a length of the system's response to a user input. The response characteristic data535may include apology data. Apology data represents instances in which the system apologizes. For example, if the user requests an answer to a question and the system responds "I am sorry; I don't know the answer to that question," or the like, apology data may be generated and sent to the model building component510.
The response characteristic data535may include affirmation and/or negation data. Affirmation data may represent system responses such as “Yes”, “Absolutely”, “Sure”, etc. Negation data may represent system responses such as “No”, “I don't know”, “I don't understand”, etc. The response characteristic data535may include filler word data. Filler word data may represent the presence of filler words (e.g., “umm”, “ahh”, “well”, etc.) in system responses. The response characteristic data535may include confirmation request data. Confirmation request data may represent scenarios in which the system seeks to confirm a user selection and/or user intent. For example, the user may request the playing of a Song A. The system may be unable to locate Song A and may ask “Did you mean Song B?”. An indication of such a confirmation request may be represented by response characteristic data535. The model building component510may at least partially train a machine learning model(s) using aggregate characteristic data545. Aggregate characteristic data545may include user input frequency data, intent frequency data, and/or slot frequency data. User input frequency data may represent the frequency of a particular user input for a particular user (or multiple users). Intent frequency data may represent the frequency of a particular intent determined for a single user (or multiple users). Slot frequency data may represent the frequency of slots corresponding to a particular user's (or multiple users') inputs. In at least some examples, the aggregate characteristic data545may include data comprising a ratio of user input frequency to the number of unique users. The aggregate characteristic data545may include data representing a popularity (e.g., a score) of a user input, intent, and/or slot over one or more users and/or over a particular time period. The model building component510may at least partially train a machine learning model(s) using session characteristic data555. The session characteristic data555may include dialog length data, which may comprise the current number of turns in a dialog session between a user and the system. In at least some examples, for a skill implemented by a skill server(s)225, a dialog session may commence upon a user invoking the skill and may end when the session with the skill is terminated (e.g., through user termination or through a session timeout). In at least some examples, for a skill290implemented by the server(s)120, a dialog session may commence upon a user initiating a dialog with the system (e.g., by uttering a wakeword followed by user input). In the context of a skill290implemented by the server(s)120, the dialog session may end after a pre-defined amount of time (e.g., after 45 seconds, or some other amount of time, having elapsed since commencement of the dialog session). The session characteristic data555may include data representing a total number of times a barge-in occurs during a dialog session. The session characteristic data555may include intent diversity data for a dialog session. Intent diversity data may represent the percentage of distinct intents invoked in a dialog session relative to the total number of intents invoked during the dialog session. For example, if during a particular dialog session, a user invokes three separate instances of the same intent, the intent diversity data may reflect that ⅓ of the intents were distinct. 
In at least some examples, intent diversity data may indicate whether or not a user was satisfied with a particular interaction. Determining whether a user is satisfied with their interactions with a system may be more difficult relative to determining that the user is frustrated. When a user receives a satisfactory response, the user may take one of a diverse set of actions, such as leave the conversation, continue the dialog, leave explicit positive feedback, etc. Intent diversity data is the percentage of distinct intents in a dialog session. Accordingly, in at least some examples, higher intent diversity during a dialog session may indicate that the user is satisfied. For example, a user continuing dialog in a given dialog session and covering a plurality of different intents within the dialog session may positively correlate with high user satisfaction. The model building component510may at least partially train a machine learning model(s) using user preference data565. User preference data565may represent average dialog session length for a given user, intent and slot data (e.g., popularity) for a given user, etc. The user preference data565may represent the amount of time a user has been actively using the system (e.g., using a particular skill). The user preference data565may represent the average number of turns per dialog session for a particular user. The user preference data565may represent the average number of turns for a particular skill for a user. In general, user preference data565may correlate dialog session length and/or number of turns per dialog session to particular users. As a result, in at least some examples, users that tend to have shorter dialog sessions and/or fewer turns per dialog session are not necessarily assumed to be unsatisfied with their interactions with a system based on the relative brevity of their interactions therewith. Similarly, a user associated with user preference data565that indicates that the user tends to have longer dialog sessions with the system may not necessarily be deemed to be satisfied with their interactions with the system's responses based on the relative lengthiness of their interactions therewith. The user preference data565may be represented as a personal graph. The model building component510may at least partially train a machine learning model(s) using user input processing error data575. User input processing error data575may include ASR processing confidence values, NLU processing confidence values, response-error data, turn-by-turn error data, NLU error probability (e.g., the probability of an error by the NLU component260), ASR error probability (e.g., the probability of an error in output text from the ASR component250), etc. Response-error data may represent the system was unable to process a particular user input. Turn-by-turn error data may represent if there is a system error in user input processing components. Data input to the model building component510may be associated with data representing when the data was generated. The model building component510may use such data to at least partially train the at least one machine learning model, as older data may be weighted less than newer data. 
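For illustration only, the intent diversity and recency-weighting notions described above might be computed along the following lines; the half-life value is an arbitrary assumption.

```python
def intent_diversity(intents_in_session: list[str]) -> float:
    """Percentage of distinct intents relative to total intents in a dialog session."""
    if not intents_in_session:
        return 0.0
    return len(set(intents_in_session)) / len(intents_in_session)

def recency_weight(age_in_days: float, half_life_days: float = 30.0) -> float:
    """Weight older training data less than newer data via exponential decay."""
    return 0.5 ** (age_in_days / half_life_days)

# Three invocations of the same intent yield a diversity of 1/3, as in the example above.
print(round(intent_diversity(["<PlayMusic>", "<PlayMusic>", "<PlayMusic>"]), 3))  # -> 0.333
print(round(recency_weight(60.0), 2))  # -> 0.25 (data 60 days old is down-weighted)
```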
The following data may favor user input rewriting: presence of apology and negation in system response, high probability of intent and/or slot recognition error by the NLU component260, barge-ins, empty (null) response to user inputs by the system, user termination of a session, similarity between consecutive user inputs, number of barge-ins in a current session, negative sentiment in user inputs, the system asking a question, and intent and slot repetition in user inputs. The aforementioned data is not exhaustive. The following data may favor not rewriting a user input: low probability of speech recognition error, longer dialog length, high intent diversity, coherence between user input and system response, longer user utterances, user continuing after saying "stop", user asking a question, user input rephrasing, and the system providing affirmative responses. The aforementioned data is not exhaustive. The model building component510may also train one or more machine learning models to rewrite user inputs. These one or more machine learning models may be the same as the model(s) trained to determine when a user input should be rewritten, or they may be different. The model building component510may train the one or more machine learning models to rewrite user inputs using text data representing original user inputs (that resulted in incorrect actions being performed by the system) and text data representing corresponding rephrases of the original user inputs. Such text data may correspond to ASR hypotheses of the original and rephrased user inputs. The model building component510may limit such training to include only rephrases that are associated with "correct" actions being performed by the system (e.g., are associated with positive user feedback, etc.). The model building component510may use phonetic similarity to train the one or more machine learning model(s) to rewrite user inputs. For example, the model building component510may train the model(s) based on linguistic structures and common language patterns. Such training may enable the model(s), at runtime, to rewrite user inputs that include user errors (e.g., due to slips of the tongue). The model building component510may have access to NLU hypotheses associated with original user inputs (that resulted in incorrect actions being performed by the system) and rephrased user inputs. Thus, when the model(s) is used at runtime, the system may rewrite a user input and associate the rewritten user input with an NLU hypothesis. This may prevent at least some NLU processing from needing to be performed on the rewritten user input. Data used to train the one or more machine learning models may be labeled with respect to a user identifier (representing a user associated with the data). As such, one skilled in the art will appreciate that the trained machine learning model(s) may be wholly generic to various users of the system; wholly specific to a particular user; or may include a portion trained with respect to various users of the system, and one or more portions that are individualized to specific users of the system. The model building component510may generate one or more trained models (e.g., resulting from the retraining of a trained model(s)) on a periodic basis (e.g., once every few hours, once a day, etc.). A machine learning model may be trained and operated according to various machine learning techniques.
Such techniques may include, for example, neural networks (such as deep neural networks and/or recurrent neural networks), inference engines, trained classifiers, HMMs, Markov chains, probabilistic graphical models (PGMs), etc. Examples of trained classifiers include Support Vector Machines (SVMs), neural networks, decision trees, AdaBoost (short for “Adaptive Boosting”) combined with decision trees, and random forests. Focusing on SVM as an example, SVM is a supervised learning model with associated learning algorithms that analyze data and recognize patterns in the data, and which are commonly used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. More complex SVM models may be built with the training set identifying more than two categories, with the SVM determining which category is most similar to input data. An SVM model may be mapped so that the examples of the separate categories are divided by clear gaps. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gaps they fall on. Classifiers may issue a “score” indicating which category the data most closely matches. The score may provide an indication of how closely the data matches the category. In order to apply machine learning techniques, machine learning processes themselves need to be trained. Training a machine learning component requires establishing a “ground truth” for the training examples. In machine learning, the term “ground truth” refers to the accuracy of a training set's classification for supervised learning techniques. Various techniques may be used to train models, including backpropagation, statistical learning, supervised learning, semi-supervised learning, stochastic learning, or other known techniques. The one or more trained machine learning models, generated by the model building component510, may be implemented at runtime to determine when and how to rewrite a user input (as illustrated inFIG.6). If a user input is received as audio11(e.g., is a spoken user input), the orchestrator component230may send audio data211, representing the audio11, to the ASR component250. The ASR component250may transcribe the audio data211into one or more ASR hypotheses605, which the ASR component250may send to the orchestrator component230. The orchestrator component230may send one or more ASR hypotheses605to a rewrite initiation component610of the user input rewrite service285. The rewrite initiation component610may process the ASR hypothesis(es)605to determine whether one or more of the ASR hypothesis(es)605should be rewritten. The rewrite initiation component610may implement the trained one or more machine learning models (generated by the model building component510) to determine whether the present user input should be rewritten. For example, the rewrite initiation component610may process an ASR hypothesis to determine whether the ASR hypothesis is similar to previous user inputs that were rephrased, associated with negative user feedback, etc. The rewrite initiation component610may process with respect to a user identifier associated with the ASR hypothesis(es)605. For example, the rewrite initiation component610may receive a user identifier (corresponding to a user profile associated with the present user input). 
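Before returning to the runtime flow, the classifier discussion above can be made concrete with a short sketch that trains a binary SVM to predict whether a user input should be rewritten, using a handful of the signals listed earlier (barge-ins, apology in the system response, intent diversity, NLU error probability). This is an illustration under stated assumptions only: scikit-learn is used as an example library, and the feature set, labels, and toy training data are hypothetical rather than taken from the system described here.

# Illustrative sketch: binary SVM deciding "rewrite" vs. "do not rewrite",
# trained on hypothetical dialog-context features (scikit-learn assumed).

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [num_barge_ins, apology_in_response, intent_diversity_pct,
#            nlu_error_probability]
X_train = np.array([
    [2, 1, 20.0, 0.8],   # frustrated-looking session
    [0, 0, 75.0, 0.1],   # satisfied-looking session
    [3, 1, 10.0, 0.9],
    [0, 0, 60.0, 0.2],
])
# Ground-truth labels: 1 = the original input was later rephrased ("rewrite"),
# 0 = the original input led to a satisfactory response ("do not rewrite").
y_train = np.array([1, 0, 1, 0])

model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X_train, y_train)

# At runtime, score the dialog-context features of a new user input.
x_new = np.array([[1, 1, 30.0, 0.7]])
print(model.predict(x_new))          # e.g. [1] -> favor rewriting
print(model.predict_proba(x_new))    # confidence-like scores for each class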
The rewrite initiation component610may implement a portion of the trained machine learning model(s), trained using data associated with the user identifier, to determine if the ASR hypothesis(es)605should be rewritten. If the rewrite initiation component610determines the ASR hypothesis(es)605should not be rewritten, the rewrite initiation component610may cause the ASR hypothesis(es)605to be sent to the NLU component260(not illustrated). If the rewrite initiation component610determines at least one of the ASR hypothesis(es)605should be rewritten, the rewrite initiation component610sends the at least one of the ASR hypothesis(es)605to a rewriter component620of the user input rewrite service285. At least some systems may be configured to determine every user input should be rewritten. In at least some systems, this configuration may be too computationally costly. Thus, the rewrite initiation component610may be configured to determine a percentage of user inputs should be rewritten. The rewriter component620may implement one or more trained machine learning models (generated by the model building component510as described above) to generate one or more alternate ASR hypotheses615from an ASR hypothesis605input thereto. The rewriter component620may be configured to generate as many alternate ASR hypotheses615for a single ASR hypothesis605as possible, with the caveat that the rewriter component620should have at least a minimum confidence that the alternate ASR hypotheses615wouldn't be triggered for rewriting if they were processed by the rewrite initiation component610. At least some systems may be configured to generate no more than a maximum number of alternate ASR hypotheses615for a given ASR hypothesis605(e.g., since the number of alternate ASR hypotheses615generated corresponds to computing costs attributed to NLU processing of the user input). The rewriter component620may consider personalized context information for a user (associated with the user input) when determining how to rewrite a user input. For example, an electronic calendar associated with the user's profile may include an entry representing the user is going on vacation to Alaska. If the user asks the system “what is the weather in Petersburg,” the system may determine “Petersburg” is ambiguous. Using the electronic calendar information, the system could rewrite the user input to correspond to “what is the weather in Petersburg, Alaska.” The rewriter component620may generate two or more functionally equivalent alternate ASR hypotheses. When this occurs, the rewriter component620may be configured to send only one of the functionally equivalent alternate ASR hypotheses to the orchestrator component230. This prevents NLU processing from being performed with respect to functionally equivalent data, which may decrease latency. The rewriter component620may perform various types of rewrites (as illustrated inFIG.7). The rewriter component620may narrow down an ASR hypothesis605such that the alternate ASR hypothesis(es)615generated therefrom is/are narrower than the ASR hypothesis605. The rewriter component620may generalize an ASR hypothesis605such that the alternate ASR hypothesis(es)615generated therefrom is/are broader than the ASR hypothesis605. The rewriter component620may fix slip of the tongue issues in an ASR hypothesis605such that the alternate ASR hypothesis(es)615generated therefrom fix one or more errors in the ASR hypothesis605. 
The rewriter component620may reformulate an ASR hypothesis605such that the alternate ASR hypothesis(es)615generated therefrom is/are “clearer” than the ASR hypothesis605. Other types of rewrites are possible. Referring back toFIG.6, the rewriter component620may send the alternate ASR hypothesis(es)615to the orchestrator component230. The orchestrator component230may send the ASR hypothesis(es) and alternate ASR hypothesis(es) (collectively illustrated as625) to the NLU component260. The rewriter component620may generate a respective confidence value for each alternate ASR hypothesis. Such a confidence value may represent the rewriter component620's confidence that the alternate ASR hypothesis represents a more beneficial ASR hypothesis than the ASR hypothesis from which the alternate ASR hypothesis was generated. Such a confidence value may be a numeric value (e.g., on a scale of 0-10 or some other scale) or a binned value (e.g., high, medium, low, etc.). Numerical values may correspond to binned values (e.g., a low value may correspond to numeric values of 0-3, a medium value may correspond to numeric values of 4-6, and a high value may correspond to numeric values of 7-10). In at least some examples, the rewriter component620may be configured to only output alternate ASR hypotheses corresponding to confidence values that satisfy a threshold confidence value. In at least some examples, if none of the alternate ASR hypotheses satisfy the threshold confidence value, the rewriter component620may output the single alternate ASR hypothesis having the highest confidence value. One skilled in the art will appreciate that some or all of the types of data considered by the model building component510to generate the one or more trained machine learning models may be considered by the rewriter component620at runtime. For example, the rewriter component620may consider data representing a sentiment of the user input, information representing one or more previous turns of a dialog, which skills have been enabled with respect to a user profile, which smart home devices have been enabled with respect to a user profile, etc. When a user input is received by a device110, the device110may generate a user input identifier corresponding to the user input. The system may maintain a record of processing performed with respect to the user input using the user input identifier. For example, the audio data211may be associated with the user input identifier when the orchestrator component230sends the audio data211to the ASR component250; the ASR hypothesis(es)605may be associated with the user input identifier when the ASR component250sends the ASR hypothesis(es)605to the orchestrator component230; the ASR hypothesis(es)605may be associated with the user input identifier when the orchestrator component230sends the ASR hypothesis(es)605to the rewrite initiation component610; the ASR hypothesis(es)605may be associated with the user input identifier when the rewrite initiation component610sends the ASR hypothesis(es)605to the rewriter component620; the alternate ASR hypothesis(es)615may be associated with the user input identifier when the rewriter component620sends the alternate ASR hypothesis(es)615to the orchestrator component230; the hypotheses625may be associated with the user input identifier when the orchestrator component230sends the hypotheses625to the NLU component260; etc. 
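Returning to the confidence values introduced above, the handling described there (a 0-10 numeric scale mapped to low/medium/high bins, a threshold on which alternates are output, and a fall-back to the single highest-confidence alternate) can be sketched as follows. The function names, threshold value, and cap on the number of alternates are assumptions for illustration.

# Illustrative sketch: binning alternate-hypothesis confidences and filtering
# alternates by a threshold, with a fall-back to the single best alternate.

from typing import List, Tuple

Alternate = Tuple[str, float]  # (alternate ASR hypothesis text, confidence 0-10)


def to_bin(confidence: float) -> str:
    if confidence <= 3:
        return "low"
    if confidence <= 6:
        return "medium"
    return "high"


def filter_alternates(alternates: List[Alternate],
                      threshold: float = 7.0,
                      max_alternates: int = 3) -> List[Alternate]:
    passing = [a for a in alternates if a[1] >= threshold]
    if not passing and alternates:
        # Fall back to the single alternate with the highest confidence.
        passing = [max(alternates, key=lambda a: a[1])]
    # Cap the number of alternates to bound downstream NLU cost.
    return sorted(passing, key=lambda a: a[1], reverse=True)[:max_alternates]


if __name__ == "__main__":
    alts = [("what is the weather in petersburg alaska", 8.2),
            ("what is the weather in st petersburg", 5.5)]
    for text, conf in filter_alternates(alts):
        print(to_bin(conf), text)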
The orchestrator component230may cause the ASR hypothesis(es)605and associated user input identifier to be stored after the orchestrator component230receives same from the ASR component250. When the orchestrator component230receives the alternate ASR hypothesis(es)615associated with the user input identifier, the orchestrator component230may recall the ASR hypothesis(es)605, associated with the same user input identifier, from storage and send the hypotheses625to the NLU component260. Alternatively, the rewriter component620may send the alternate ASR hypothesis(es)615and the ASR hypothesis(es)605to the orchestrator component230, and the orchestrator component230may simply send the received hypotheses625to the NLU component260. This may prevent the orchestrator component230from needing to maintain a record of ASR hypotheses and corresponding user input identifiers. The NLU component260may perform NLU processing with respect to the received hypotheses625. The NLU component260may process two or more of the hypotheses625at least partially in parallel. The NLU component260may output multiple NLU hypotheses635. Each NLU hypothesis may be associated with a value representing a confidence that the NLU hypothesis represents the user input. An NLU hypothesis corresponding to an alternate ASR hypothesis615may be associated with a flag representing the NLU hypothesis was generated from an alternate ASR hypothesis615. Such flagging may be beneficial when, for example, NLU hypotheses generated from ASR and alternate ASR hypotheses are substantially similar or identical. The NLU hypotheses635may be sent to a ranker component630. As illustrated, the ranker component630is implemented by the orchestrator component230. However, one skilled in the art will appreciate that the ranker component630may be implemented in other areas of the system, such as within the ASR component250, the NLU component260, or the user input rewrite service285, for example. Moreover, one ranker component630may be implemented, or multiple ranker components630may be implemented. The ranker component630ranks the NLU hypotheses635(generated from ASR hypotheses605and alternate ASR hypotheses615) using various data, and selects the top ranking NLU hypothesis as being the best representation of the user input. While the goal of the rewriter component620is to generate accurate representations of the user input, the goal of the ranker component630is to select the best representation. The rewrite initiation component610may generate a value representing the rewrite initiation component610's confidence that one or more ASR hypotheses605should be rewritten. The ranker component630may consider the rewrite initiation component610generated confidence value. For example, the higher the rewrite initiation component610generated confidence value, the more weight the ranker component630may assign to the NLU hypotheses associated with flags representing the NLU hypotheses were generated from alternate ASR hypotheses. In other words, the more confident the rewrite initiation component610is that the user input should be rewritten, the more weight the ranker component630may assign to NLU hypotheses associated with alternate ASR hypotheses generated by the rewriter component620. In at least some implementations, the ranker component630may generate a rewrite confidence value, rather than considering the value generated by the rewrite initiation component610. The NLU component260may assign a respective NLU confidence value to each NLU hypothesis. 
The ranker component630may be configured to weight NLU hypotheses (generated from ASR hypotheses605) more than NLU hypotheses (generated from alternate ASR hypotheses615) when the NLU confidence values are close (e.g., within a threshold deviation). In at least some examples, the ranker component630may not be able to disambiguate the NLU hypotheses to a level at which the ranker component630is confident in selecting a single NLU hypothesis for downstream processing. That is, a deviation between a score of a top-scoring NLU hypothesis and a score of a next-top-scoring NLU hypothesis may not be large enough. When this occurs, the ranker component630may cause a dialog management component640to be invoked. The dialog management component640may be configured to engage a user, through a user interface, for the purpose of the user selecting which interpretation of the user input is most accurate (and should be used for downstream processing). This user interface may be implemented as a VUI and/or a GUI. For example, the dialog management component640may cause a device110to display text representing different ASR hypotheses (e.g., both output by the ASR component250and the user input rewrite service285) and/or may cause a device110to output audio requesting the user indicate (audibly or via a tactile input) which ASR hypothesis most correctly represents the user input. As described above, the user input rewrite service285may receive one or more ASR hypotheses605when the user input is a spoken user input. One skilled in the art will appreciate that the user input rewrite service285may receive text data (representing a text based user input) and may process the text based user input as described above without departing from the present disclosure. As described above, the user input rewrite service285is implemented pre-NLU in a user input processing pipeline. Alternatively, the user input rewrite service285may be implemented at least partially in parallel or post-NLU (as illustrated inFIG.8). A decision on where in the user input processing pipeline to implement the user input rewrite service285may be based, at least in part, on latency considerations since, as described below with respect toFIG.8, implementing the user input rewrite service285in parallel with or after NLU may result in the NLU component260being called more than once with respect to the same user input. One or more ASR hypotheses605(or text data as received from a user device, representing a text based user input) may be sent to the NLU component260. The NLU component260may generate NLU hypotheses805representing the ASR hypothesis(es)605(or other received text data), and may send same to the rewrite initiation component610. The rewrite initiation component610may process with respect to the ASR hypothesis(es)605(or other text data representing a text based user input) and the NLU hypotheses805to determine whether the user input should be rewritten. If the rewrite initiation component610determines the user input should be rewritten (e.g., determines the ASR hypothesis(es)605or text data representing a text based user input should be rewritten), the rewrite initiation component610may send the ASR hypothesis(es) (or other text data) to be rewritten along with its corresponding NLU hypothesis (collectively illustrated as815) to the rewriter component620. The rewriter component620may generate at least one alternate ASR hypothesis for a received ASR hypothesis (or other text data). 
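Returning briefly to the ranker component's selection logic described earlier in this section, the following is a minimal sketch of that behavior: rewrite-derived NLU hypotheses are weighted by the rewrite-initiation confidence, an original-derived hypothesis is preferred when its NLU confidence is within a small margin of a rewrite-derived leader, and a disambiguation dialog is signaled when the top two candidates cannot be separated. The weights, margins, and data structures here are hypothetical.

# Illustrative sketch: ranking NLU hypotheses that carry a "from rewrite" flag.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class NluHypothesis:
    text: str
    nlu_confidence: float   # 0.0 - 1.0
    from_rewrite: bool      # flag: generated from an alternate ASR hypothesis


def rank(hypotheses: List[NluHypothesis],
         rewrite_confidence: float,    # rewrite initiation confidence, 0.0 - 1.0
         close_margin: float = 0.05,
         ambiguity_margin: float = 0.02
         ) -> Tuple[Optional[NluHypothesis], List[NluHypothesis]]:
    def score(h: NluHypothesis) -> float:
        s = h.nlu_confidence
        if h.from_rewrite:
            # The more confident the system is that a rewrite was needed,
            # the more weight rewrite-derived hypotheses receive.
            s *= 0.8 + 0.4 * rewrite_confidence
        return s

    ranked = sorted(hypotheses, key=score, reverse=True)

    # Prefer an original-derived hypothesis when its raw NLU confidence is
    # within a small margin of a rewrite-derived leader.
    if ranked and ranked[0].from_rewrite:
        for h in ranked[1:]:
            if (not h.from_rewrite
                    and ranked[0].nlu_confidence - h.nlu_confidence <= close_margin):
                ranked.remove(h)
                ranked.insert(0, h)
                break

    if len(ranked) > 1 and score(ranked[0]) - score(ranked[1]) < ambiguity_margin:
        return None, ranked  # ambiguous: invoke the dialog management component
    return (ranked[0] if ranked else None), ranked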
The rewriter component620may, in at least some examples, generate a corresponding alternate NLU hypothesis based on a received NLU hypothesis. For example, the rewriter component620may populate one or more slots of an NLU hypothesis with different values, may delete one or more slots from an NLU hypothesis, may add one or more slots (and optionally corresponding values) to the NLU hypothesis, etc. The rewriter component620may send the alternate ASR and/or NLU hypotheses (collectively illustrated as825) to the NLU component260. The NLU component260may perform NLU processing with respect to the received alternate hypothesis(es)825to generate further NLU hypotheses. The NLU component260may output all (or a portion of) the NLU hypotheses635, generated for the present user input, to the ranker component630, which may process as described above with respect toFIG.6. In the example ofFIG.8, the rewrite initiation component610and/or the rewriter component620may implement one or more machine learning models that are trained with respect to specific types of skills (e.g., music skills, video skills, smart home skills, etc.). In an example, a trained machine learning model implemented by the rewrite initiation component610and/or the rewriter component620may include a portion trained with respect to all types of skills, and various other portions that are each trained with respect to a specific type of skill. The rewrite initiation component610and/or the rewriter component620, as implemented inFIG.8, may be configured to consider, at runtime, the various kinds of data described above with respect toFIG.6. FIG.9is a block diagram conceptually illustrating a device110that may be used with the system.FIG.10is a block diagram conceptually illustrating example components of a remote device, such as the server(s)120, which may assist with ASR processing, NLU processing, etc., and the skill server(s)225. The term “server” as used herein may refer to a traditional server as understood in a server/client computing structure but may also refer to a number of different computing components that may assist with the operations discussed herein. For example, a server may include one or more physical computing components (such as a rack server) that are connected to other devices/components either physically and/or over a network and are capable of performing computing operations. A server may also include one or more virtual machines that emulate a computer system and run on one device or across multiple devices. A server may also include other combinations of hardware, software, firmware, or the like to perform operations discussed herein. The server(s) may be configured to operate using one or more of a client-server model, a computer bureau model, grid computing techniques, fog computing techniques, mainframe techniques, utility computing techniques, a peer-to-peer model, sandbox techniques, or other computing techniques. Multiple servers (120/225) may be included in the system, such as one or more servers120for performing ASR processing, one or more servers120for performing NLU processing, one or more skill server(s)225for performing actions responsive to user inputs, etc. In operation, each of these servers (or groups of servers) may include computer-readable and computer-executable instructions that reside on the respective device (120/225), as will be discussed further below. 
Each of these devices (110/120/225) may include one or more controllers/processors (904/1004), which may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory (906/1006) for storing data and instructions of the respective device. The memories (906/1006) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive memory (MRAM), and/or other types of memory. Each device (110/120/225) may also include a data storage component (908/1008) for storing data and controller/processor-executable instructions. Each data storage component (908/1008) may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each device (110/120/225) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces (902/1002). Computer instructions for operating each device (110/120/225) and its various components may be executed by the respective device's controller(s)/processor(s) (904/1004), using the memory (906/1006) as temporary “working” storage at runtime. A device's computer instructions may be stored in a non-transitory manner in non-volatile memory (906/1006), storage (908/1008), or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software. Each device (110/120/225) includes input/output device interfaces (902/1002). A variety of components may be connected through the input/output device interfaces (902/1002), as will be discussed further below. Additionally, each device (110/120/225) may include an address/data bus (924/1024) for conveying data among components of the respective device. Each component within a device (110/120/225) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus (924/1024). Referring toFIG.9, the device110may include input/output device interfaces902that connect to a variety of components such as an audio output component such as a speaker912, a wired headset or a wireless headset (not illustrated), or other component capable of outputting audio. The device110may also include an audio capture component. The audio capture component may be, for example, a microphone920or array of microphones, a wired headset or a wireless headset (not illustrated), etc. If an array of microphones is included, approximate distance to a sound's point of origin may be determined by acoustic localization based on time and amplitude differences between sounds captured by different microphones of the array. The device110may additionally include a display916for displaying content. The device110may further include a camera918. Via antenna(s)914, the input/output device interfaces902may connect to one or more networks199via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, 4G network, 5G network, etc. A wired connection such as Ethernet may also be supported. Through the network(s)199, the system may be distributed across a networked environment. 
The I/O device interface (902/1002) may also include communication components that allow data to be exchanged between devices such as different physical servers in a collection of servers or other components. The components of the device(s)110, the server(s)120, or the skill server(s)225may include their own dedicated processors, memory, and/or storage. Alternatively, one or more of the components of the device(s)110, the server(s)120, or the skill server(s)225may utilize the I/O interfaces (902/1002), processor(s) (904/1004), memory (906/1006), and/or storage (908/1008) of the device(s)110server(s)120, or the skill server(s)225, respectively. Thus, the ASR component250may have its own I/O interface(s), processor(s), memory, and/or storage; the NLU component260may have its own I/O interface(s), processor(s), memory, and/or storage; and so forth for the various components discussed herein. As noted above, multiple devices may be employed in a single system. In such a multi-device system, each of the devices may include different components for performing different aspects of the system's processing. The multiple devices may include overlapping components. The components of the device110, the server(s)120, and the skill server(s)225, as described herein, are illustrative, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system. As illustrated inFIG.11, multiple devices (110a-110g,120,225) may contain components of the system and the devices may be connected over a network(s)199. The network(s)199may include a local or private network or may include a wide network such as the Internet. Devices may be connected to the network(s)199through either wired or wireless connections. For example, a speech-detection device110a, a smart phone110b, a smart watch110c, a tablet computer110d, a vehicle110e, a display device110f, and/or a smart television110gmay be connected to the network(s)199through a wireless service provider, over a WiFi or cellular network connection, or the like. Other devices are included as network-connected support devices, such as the server(s)120, the skill server(s)225, and/or others. The support devices may connect to the network(s)199through a wired connection or wireless connection. Networked devices may capture audio using one-or-more built-in or connected microphones or other audio capture devices, with processing performed by ASR components, NLU components, or other components of the same device or another device connected via the network(s)199, such as the ASR component250, the NLU component260, etc. of one or more servers120. The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, and distributed computing environments. The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers and speech processing should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. 
Moreover, it should be apparent to one skilled in the art that the disclosure may be practiced without some or all of the specific details and steps disclosed herein. Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, components of the system may be implemented in firmware or hardware, such as an acoustic front end (AFE), which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)). Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
97,259
11862150
DETAILED DESCRIPTION In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are merely some but not all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without inventive effort shall fall within the scope of the present invention. FIG.1is a flowchart of a skill dispatching method for a speech dialogue platform according to an embodiment of the present invention. The method is applied to a server and includes the following steps. In S11, a central control dispatching service receives a semantic result of recognizing a user's speech sent by a data distribution service. In S12, the central control dispatching service schedules a plurality of skill services related to the semantic result in parallel, and obtains parsing results fed back by the plurality of skill services. In S13, the plurality of parsing results are sorted based on priorities of the skill services, and the skill parsing result with the highest priority is exported to a skill realization discrimination service for judging whether the skill parsing result with the highest priority is capable of realizing the function of the semantic result. In S14, when the skill realization discrimination service feeds back a failed realization, a skill parsing result with the highest priority is selected among the rest of skill parsing results and exported to the skill realization discrimination service, and when the skill realization discrimination service feeds back a successful realization, the skill parsing result with the highest priority is sent to the data distribution service for feedback to the user. In this embodiment, when a user uses a smart device, he/she will have a dialogue with the smart device, such as “play XX song” or “ask XX question”. The smart device sends the collected speech to the data distribution service. The speech is sent to the recognition service for semantic recognition through the data distribution service. The data distribution service obtains the semantic result of the user's speech and sends the semantic result to the central control dispatching service. In step S11, the central control dispatching service receives the semantic recognition result of the user's speech sent by the data distribution service, for example, “play XX song”. In step S12, upon receiving the semantic recognition result, the central control dispatching service does not send it directly to one skill service, but schedules a plurality of skill services related to the semantic result in parallel, sending the semantic result to the plurality of related skill services simultaneously. In this case, the central control dispatching service will receive parsing results fed back by the plurality of skill services. The central control dispatching service sends “play XX song” to a plurality of related skill services in parallel, such as QQ Music, NetEase Cloud Music, Kugou Music, Kuwo Music, Xiami Music, etc., and receives parsing results fed back by these skill services. In a conventional method, a determined semantic recognition result is directly sent to the skill service with the highest priority. 
If the skill service cannot realize the function of the semantic result, the central control dispatching service needs to re-send the semantic recognition result to other skill services. Such a dispatching method requires repeated attempts by the central control dispatching service, and its efficiency is low. In step S13, since different skills are given different priorities in the speech product design stage, the skill parsing result with the highest priority is first exported to the skill realization discrimination service to judge whether the skill with the highest priority is capable of realizing the function of the semantic result. For example, the semantic result is “play ‘This is love’”, and various skills return corresponding parsing results. In this case, the skill with the first priority is QQ Music. The parsing result of the QQ Music skill is exported to the skill realization discrimination service. In step S14, when the skill realization discrimination service feeds back a failed realization, for example, because there is no original edition of the song ‘This is love’ in QQ Music and the user's needs cannot be met, then a skill parsing result with the highest priority is selected among the rest of skill parsing results and exported to the skill realization discrimination service. For example, if the skill with the highest priority among the remaining skills is NetEase Cloud Music, the parsing result of the NetEase Cloud Music skill will be exported to the skill realization discrimination service. When the skill realization discrimination service feeds back a successful realization, it means that the song ‘This is love’ is available in NetEase Cloud Music. The parsing result of the NetEase Cloud Music skill is then sent to the data distribution service for feedback to the user. It can be seen from this embodiment that sending the semantic recognition result to a plurality of skill services in parallel, and then sending the parsing results of the various skill services to the skill realization discrimination service for sequential discrimination, requires only one round of dispatching between the central control dispatching service and the skill services to determine the skill parsing result that can meet the user's needs. This reduces the number of dispatching round trips between the central control dispatching service and the skill services. Even when a large number of users send requests, the efficiency of skill dispatching can be ensured, delay can be reduced, and user experience can be improved. As an implementation, in this embodiment, the skill realization discrimination service includes: receiving the skill parsing result with the highest priority sent by the central control dispatching service; and performing dialogue state tracking on the skill parsing result, and judging whether the skill parsing result is capable of realizing the function of the semantic result based on the determined dialog state. In this embodiment, the skill realization discrimination service determines the corresponding dialog state by performing dialog state tracking on the skill parsing result, thereby judging whether the skill parsing result is capable of realizing the function of the semantic result. For example, this can be applied to search skills other than music skills. Due to the differences among various search engines, when the same keyword is entered, different search results may be acquired. 
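Before continuing with the search-skill example, the dispatching flow of steps S11 to S14 described above can be sketched as follows: the semantic result is sent to all related skill services in parallel, the returned parsing results are sorted by skill priority, and the skill realization discrimination service is consulted in priority order until one parsing result can realize the function of the semantic result. The service interfaces shown here (parse(), can_realize()) are hypothetical stand-ins for illustration only.

# Illustrative sketch: parallel skill dispatch with sequential realization
# discrimination in priority order.

from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional, Tuple


def dispatch(semantic_result: str,
             skill_services: List[Tuple[int, "SkillService"]],
             discriminator: "RealizationDiscriminator") -> Optional[dict]:
    """skill_services: (priority, service) pairs; larger values rank higher."""
    # S12: schedule all related skill services in parallel.
    with ThreadPoolExecutor(max_workers=max(1, len(skill_services))) as pool:
        futures = [(priority, pool.submit(service.parse, semantic_result))
                   for priority, service in skill_services]
        results = [(priority, f.result()) for priority, f in futures]

    # S13: sort parsing results by skill priority (highest first).
    results.sort(key=lambda item: item[0], reverse=True)

    # S13/S14: consult the realization discrimination service in order and
    # return the first parsing result that can realize the semantic result.
    for _, parsing_result in results:
        if discriminator.can_realize(parsing_result):
            return parsing_result  # fed back to the user via the data distribution service
    return None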
Some search skills are good at searching for gossip-type information, and some search skills are good at searching for academic-type information. The dialogue states obtained by the dialogue state tracking are also different, and therefore whether the parsing results of different skills can realize the function of the semantic result can be judged. It can be seen from this embodiment that the implementation of providing skill realization discrimination can ensure that the content fed back is the content expected by the user, thereby further improving user experience. As an implementation, in this embodiment, the priority includes at least skill priority and context priority. Context priority may be interpreted in the following manner. For example, an A-engine search skill is good at searching for gossip information, and a B-engine search skill is good at searching for academic information. Considering that users may not ask academic questions very often, the A-engine search skill is given priority over the B-engine search skill. When a user inputs an academic question request in the first round of dialogue, the academic question request may be sent to the A-engine search skill and the B-engine search skill in parallel simultaneously according to the above method. Priority is given to judging whether the parsing result of the A-engine's search skills can meet the user's needs. If it is determined that the A-engine search skill cannot meet the user's needs but the B-engine search skill can, the parsing result of the B-engine search skill is fed back to the user. The user asks another academic-type question in the second round of dialogue. In this case, it is determined according to the context that the B-engine search skill in the first round of dialogue can meet the user's needs, so the B-engine search skill will be given priority in the second round of dialogue. It can be seen that in this embodiment, a variety of priority discrimination methods are provided, the dispatching logic is further optimized, and the skill dispatching efficiency is improved. FIG.2is a schematic structural diagram of a skill dispatching apparatus for a speech dialogue platform according to an embodiment of the present invention. The apparatus may perform the skill dispatching method for a speech dialogue platform in any of the above embodiments, and is configured in a terminal. The skill dispatching apparatus for a speech dialogue platform according to this embodiment includes a semantic receiving program module11, a skill parsing program module12, a skill realization identifying program module13and a dispatching program module14. The semantic receiving program module11is configured to receive, by a central control dispatching service, a semantic result of recognizing a user's speech sent by a data distribution service. The skill parsing program module12is configured to schedule, by the central control dispatching service, a plurality of skill services related to the semantic result in parallel, and obtain skill parsing results fed back by the plurality of skill services. The skill realization identifying program module13is configured to sort the skill parsing results by priorities of the skill services, and export a skill parsing result with the highest priority to a skill realization discrimination service for judging whether the skill parsing result with the highest priority is capable of realizing the function of the semantic result. 
The dispatching program module14is configured to, when the skill realization discrimination service feeds back a failed realization, select another skill parsing result with the highest priority among the rest of skill parsing results and export the same to the skill realization discrimination service, and when the skill realization discrimination service feeds back a successful realization, send the skill parsing result with the highest priority to the data distribution service for feedback to the user. The skill realization discriminator module is configured to: receive the skill parsing result with the highest priority sent by the central control dispatching service; and perform dialogue state tracking on the skill parsing result, and judge whether the skill parsing result is capable of realizing the function of the semantic result based on the determined dialog state. Further, the priority includes at least a skill priority and a context priority. Further, the skill service includes a question-and-answer skill service and a task-based skill service. An embodiment of the present invention further provides a non-volatile computer storage medium storing computer-executable instructions which are capable of performing the skill dispatching method for a speech dialogue platform in any of the above method embodiments. As an implementation, the computer-executable instructions stored in the non-volatile computer storage medium according to the present invention can be set so that: a central control dispatching service receives a semantic result of recognizing a user's speech sent by a data distribution service; the central control dispatching service schedules in parallel a plurality of skill services related to the semantic result and obtains skill parsing results fed back by the plurality of skill services; the skill parsing results are sorted based on priorities of the skill services, and the skill parsing result with the highest priority is exported to a skill realization discrimination service for judging whether the skill parsing result with the highest priority is capable of realizing the function of the semantic result; when the skill realization discrimination service feeds back a failed realization, another skill parsing result with the highest priority among the rest of the skill parsing results is selected and exported to the skill realization discrimination service, and when the skill realization discrimination service feeds back a successful realization, the skill parsing result with the highest priority is sent to the data distribution service for feedback to the user. As a non-volatile computer-readable storage medium, it may store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the methods in the embodiments of the present invention. One or more program instructions are stored in the non-volatile computer-readable storage medium, and when being executed by a processor, perform the skill dispatching method for a speech dialogue platform in any of the above method embodiments. The non-volatile computer-readable storage medium may include a program storage area and a data storage area. The program storage area may store an operating system and an application program required for at least one function. The data storage area may store data created according to the use of the device and the like. 
In addition, the non-volatile computer-readable storage medium may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device. In some embodiments, the non-volatile computer-readable storage medium may optionally include memories located remotely from the processor, which may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof. An embodiment of the present invention also provides an electronic device, including at least one processor and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor to enable the at least one processor to perform: receiving, by a central control dispatching service, a semantic result of recognizing a user's speech sent by a data distribution service; dispatching, by the central control dispatching service, a plurality of skill services related to the semantic result in parallel, and obtaining skill parsing results fed back by the plurality of skill services; sorting the skill parsing results based on priorities of the skill services, and exporting the skill parsing result with the highest priority to a skill realization discrimination service for judging whether the skill parsing result with the highest priority is capable of realizing the function of the semantic result; when the skill realization discrimination service feeds back a failed realization, selecting the skill parsing result with the highest priority among the rest of skill parsing results and exporting the same to the skill realization discrimination service, and when the skill realization discrimination service feeds back a successful realization, sending the skill parsing result with the highest priority to the data distribution service for feedback to the user. In some embodiments, the skill realization discrimination service includes: receiving the skill parsing result with the highest priority sent by the central control dispatching service; and performing dialogue state tracking on the skill parsing result, and judging whether the skill parsing result is capable of realizing the function of the semantic result based on the determined dialog state. In some embodiments, the priority includes at least skill priority and context priority. In some embodiments, the skill service includes a question-and-answer skill service and a task-based skill service. FIG.3is a schematic diagram of a hardware structure of an electronic device for performing a skill dispatching method for a speech dialogue platform according to another embodiment of the present invention. As shown inFIG.3, the device includes: one or more processors310and a memory320, in which one processor310is taken as an example inFIG.3. The apparatus for performing the skill dispatching method for a speech dialogue platform may further include an input means330and an output means340. Processors310, memory320, input means330and output means340may be connected through a bus or in other ways. InFIG.3, a bus is used as an example. 
Memory320is a non-volatile computer-readable storage medium, which may store non-volatile software programs, non-volatile computer-executable programs and modules, such as program instructions/modules corresponding to the skill dispatching method for a speech dialogue platform in the embodiment of the present invention. Processor310executes various functional applications and data processing of a server by running the non-volatile software programs, instructions and modules stored in the memory320to implement the skill dispatching method for a speech dialogue platform in the above method embodiments. Memory320may include a program storage area and a data storage area. The program storage area may store an operating system and an application program required for at least one function. The data storage area may store data created according to the use of the device and the like. In addition, memory320may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory320may optionally include memories located remotely from processor310, which may be connected to the skill dispatching apparatus for a speech dialogue platform through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof. Input means330may receive input numerical or character information, and generate signals related to user settings and function control of the skill dispatching apparatus for a speech dialogue platform. Output means340may include a display device such as a display screen. One or more modules are stored in memory320, and when being executed by one or more processors310, perform the skill dispatching method for a speech dialogue platform in any of the above method embodiments. The electronic device in the embodiments of the present application exists in various forms, including but not limited to: (1) Mobile communication devices, which are characterized by mobile communication functions and whose main goal is to provide voice and data communication, such as smart phones (such as iPhone), multimedia phones, feature phones, and low-end phones; (2) Ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing functions, and generally have mobile Internet access capability, such as PDA, MID and UMPC devices, e.g., iPad; (3) Portable entertainment devices which can display and play multimedia content, such as audio and video players (such as iPod), handheld game consoles, e-books, smart toys and portable car navigation devices; and (4) Other electronic devices with data interaction functions. It should be noted that in this specification, terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply that there is any such actual relationship or order among these entities or operations. Moreover, terms such as “including” and “comprising” shall mean that not only the described elements are included, but also other elements not explicitly listed, as well as elements inherent to the described processes, methods, objects, or devices. In the absence of specific restrictions, elements defined by the phrase “comprising . . . 
” do not exclude the presence of other identical elements in the process, method, article or device that includes the mentioned elements. The device embodiments described above are only exemplary. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the object of the solution of this embodiment. Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a common hardware platform, and of course, it can also be implemented by hardware. Based on this understanding, the part of the above technical solutions that contributes over the related art can essentially be embodied in the form of a software product, and the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, a CD-ROM, etc., and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in each embodiment or in some parts of an embodiment. Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can be modified, or some of the technical features can be equivalently replaced, without deviating from the spirit and scope of the technical solutions of the embodiments of the present application.
22,620
11862151
DETAILED DESCRIPTION In the following description of examples, reference is made to the accompanying drawings in which are shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the various examples. As discussed above, digital assistants implemented on mobile computing platforms can suffer from longer processing times and thus greater latency in responding to user requests. In particular, certain processes performed by digital assistants, such as natural language processing, task flow processing, and/or speech synthesis, can be computationally intensive and contribute significantly to the response latency. In some digital assistant systems, the aforementioned processes are initiated only after a speech end-point condition is detected. Detecting a speech end-point condition establishes that the user has finished providing his/her spoken request. However, a speech end-point condition is frequently detected based on an absence of user speech for greater than a predetermined duration (e.g., 600 ms, 700 ms, or 800 ms). This means that the total latency experienced by the user after the user finishes providing his/her spoken request can include the predetermined duration needed to detect a speech end-point condition and the computational time required for the digital assistant system to process the spoken request (e.g., by performing natural language processing, task flow processing, and/or speech synthesis). Given the limited computational resources of mobile computing platforms, this total latency can be considerable enough to significantly impact user engagement. It can thus be desirable to reduce the total latency experienced by the user from the time the user finishes providing his/her spoken request to the time the digital assistant system presents a response to the user request. Techniques for reducing response latency for digital assistant systems are described herein. In particular, in some exemplary processes, natural language processing, task flow processing, and/or speech synthesis can be initiated during the predetermined duration needed to detect a speech end-point condition. For instance, in a specific example, natural language processing, task flow processing, and/or speech synthesis can be initiated upon detecting a short pause (e.g., 50 ms, 75 ms, or 100 ms) in user speech. If the short pause develops into a long pause (e.g., 600 ms, 700 ms, or 800 ms) that corresponds to a speech end-point condition, natural language processing, task flow processing, and/or speech synthesis would be at least partially completed at the time the speech end-point condition is detected. This can result in a reduction in the total latency experienced by the user. In one example process for reducing response latency in a digital assistant system, a stream of audio is received. In particular, a first portion of the stream of audio containing a user utterance is received from a first time to a second time and a second portion of the stream of audio is received from the second time to a third time. The process determines whether the first portion of the stream of audio satisfies a predetermined condition. In response to determining that the first portion of the stream of audio satisfies a predetermined condition, operations are at least partially performed between the second time and the third time. 
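Before turning to the specific operations performed during that window, the timing idea described above can be sketched as follows: speculative processing of the utterance received so far begins once a short pause is detected, so that if the pause grows into the long pause that marks a speech end-point, the heavy processing is already partially or fully complete. The pause thresholds, helper names, and thread-pool usage below are illustrative assumptions, not the implementation described here.

# Illustrative sketch: start speculative processing on a short pause and
# present its result once the end-point condition (a long pause) is reached.

import time
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Optional

SHORT_PAUSE_S = 0.075   # e.g., ~75 ms of silence: begin speculative processing
ENDPOINT_S = 0.7        # e.g., ~700 ms of silence: speech end-point condition


def process_utterance(text_so_far: str) -> str:
    """Stand-in for natural language processing, task flow execution,
    and speech synthesis on the utterance received so far."""
    time.sleep(0.3)  # placeholder for compute-heavy work
    return f"synthesized response to {text_so_far!r}"


def await_endpoint(text_so_far: str, pool: ThreadPoolExecutor) -> str:
    """Called when the audio stream goes silent; returns the response to present
    once the end-point condition is detected. In a real system, resumed speech
    would cancel the speculative work and this function would be re-entered later."""
    silence_started = time.monotonic()
    speculative: Optional[Future] = None
    while True:
        silent_for = time.monotonic() - silence_started
        if speculative is None and silent_for >= SHORT_PAUSE_S:
            # Short pause: start processing now, before the end-point is known.
            speculative = pool.submit(process_utterance, text_so_far)
        if silent_for >= ENDPOINT_S:
            # End-point reached: the speculative result is already (mostly)
            # done, so little latency is added after the user stops speaking.
            return speculative.result()
        time.sleep(0.01)


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=1) as pool:
        print(await_endpoint("what is the weather in petersburg", pool))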
The operations include determining, based on one or more candidate text representations of the user utterance, a plurality of candidate user intents for the user utterance. Each candidate user intent of the plurality of candidate user intents corresponds to a respective candidate task flow of a plurality of candidate task flows. The operations also include selecting a first candidate task flow of the plurality of candidate task flows. In addition, the operations include executing the first candidate task flow without providing an output to a user of the device. In some examples, executing the first candidate task flow includes generating spoken dialogue that is responsive to the user utterance without outputting the generated spoken dialogue. The process determines whether a speech end-point condition is detected between the second time and the third time. In response to determining that a speech end-point condition is detected between the second time and the third time, results from executing the selected first candidate task flow are presented to the user. In some examples, presenting the results includes outputting the generated spoken dialogue. Techniques for robust operation of a digital assistant are also described herein. In particular, due to the inherent ambiguity in human speech, there is inherent uncertainty during speech recognition and natural language processing of human speech. As a result, speech recognition and natural language processing errors can frequently occur when digital assistant systems process spoken user requests. Such errors, when propagated through task flow processing, can at times result in fatal errors (e.g., no response) or in the performance of tasks that do not correspond to the user's desired goal. An illustrative example of a task flow processing error caused by a speech recognition error is provided. In this example, the user provides the spoken request “What are Mike Dunleavy's stats?” During speech recognition processing, the digital assistant system can erroneously transcribe the spoken request as “What are Mike Dunleavey's stats?” Subsequently, during natural language processing, the digital assistant system can recognize (e.g., based on the word “stats”) that the user is requesting for sports information and that “Mike Dunleavey” is a sports-related entity. Based on this interpretation, the digital assistant system can perform task flow processing and select a task flow that includes procedures for searching sports-related data sources for “Mike Dunleavey.” However, during execution of the selected task flow, the digital assistant system may be unable to locate any information related to “Mike Dunleavey” in the sports-related sources due to the speech recognition error. As a result, the digital assistant system can fail to provide any substantive response to the user's request. In another illustrative example of a task flow processing error, the user can provide the spoken request “Directions to Fidelity Investments.” In this example, the digital assistant system can successfully transcribe the spoken request as “Directions to Fidelity Investments.” During subsequent natural language processing, the digital assistant system can recognize (e.g., based on the word “directions”) that the user is requesting for directions. 
However, rather than interpreting “Fidelity Investments” as a business, the digital assistant system can erroneously interpret “Fidelity Investments” as a person in the user's contact list (e.g., based on the existence of an entry corresponding to “Fidelity Investments” in the user's contact list). Based on this erroneous interpretation, the digital assistant system can perform task flow processing and select a task flow that includes procedures for searching the user's contact list for an address corresponding to “Fidelity Investments” and obtaining directions to that address. However, during execution of the selected task flow, the digital assistant system may be unable to find any address corresponding to “Fidelity Investments” in the user's contact list. Specifically, although the user has an entry corresponding to “Fidelity Investments” in his/her contact list, the entry may only include phone number information, but not address information. As a result, the digital assistant system can fail to provide any substantive response to the user's request. Based on the illustrative examples described above, a digital assistant system that implements more robust task flow processing can be desirable. In accordance with some techniques described herein, multiple candidate task flows associated with multiple candidate user intents can be evaluated for reliability prior to selecting and executing a particular task flow. The evaluation process can be based on task flow scores determined for every candidate task flow. The task flow score for a respective candidate task flow can be based on, for example, a speech recognition confidence score of a respective speech recognition result, an intent confidence score of a respective natural language processing result, a flow parameter score of the respective candidate task flow, or any combination thereof. In some examples, the flow parameter score can be based on whether one or more missing flow parameters for the respective candidate task flow can be resolved. For example, referring to the above illustrative examples, the flow parameter score can be based on whether missing flow parameters (e.g., “sports entity” and “address” flow parameters) associated with “Mike Dunleavey” and “Fidelity Investments” can be resolved. In these examples, the flow parameter scores can be low because the missing flow parameters cannot be resolved. The digital assistant system can select a suitable candidate task flow based on the task flow scores of the candidate task flows. For example, a candidate task flow having a task flow score that is maximized based on the combined speech recognition confidence score, intent confidence score, and flow parameter score can be selected. By selecting a suitable candidate task flow based on determined task flow scores for every candidate task flow, the selected candidate task flow can be more likely to coincide with the user's intended goal. Moreover, fatal error may be less likely to occur during execution of a selected candidate task flow. In an example process for robust operation of a digital assistant, a user utterance is received. Based on a plurality of candidate text representations of the user utterance, a plurality of candidate user intents for the user utterance are determined. Each candidate user intent of the plurality of candidate user intents corresponds to a respective candidate task flow of a plurality of candidate task flows. A plurality of task flow scores for the plurality of candidate task flows are determined. 
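A rough sketch of this scoring scheme follows. The weights, data structure, and helper names below (CandidateTaskFlow, task_flow_score, and so on) are assumptions made for illustration rather than values taken from this description; the sketch only shows a combined score built from a speech recognition confidence, an intent confidence, and a flow parameter score that collapses when missing flow parameters cannot be resolved, with the highest-scoring candidate selected.

from dataclasses import dataclass

@dataclass
class CandidateTaskFlow:
    name: str
    speech_confidence: float        # confidence of the candidate text representation
    intent_confidence: float        # confidence of the candidate user intent
    missing_params_resolved: bool   # can missing flow parameters be resolved?

def flow_parameter_score(candidate):
    # Low score when required flow parameters (e.g., a sports entity or an
    # address) cannot be resolved, as in the examples above.
    return 1.0 if candidate.missing_params_resolved else 0.1

def task_flow_score(candidate, weights=(0.4, 0.4, 0.2)):
    w_speech, w_intent, w_flow = weights
    return (w_speech * candidate.speech_confidence
            + w_intent * candidate.intent_confidence
            + w_flow * flow_parameter_score(candidate))

def select_task_flow(candidates):
    # Choose the candidate task flow whose combined score is highest.
    return max(candidates, key=task_flow_score)

candidates = [
    CandidateTaskFlow("search_sports_stats('Mike Dunleavey')", 0.55, 0.80, False),
    CandidateTaskFlow("search_sports_stats('Mike Dunleavy')", 0.50, 0.80, True),
]
best = select_task_flow(candidates)

In this toy input, the "Mike Dunleavey" transcription has slightly higher speech confidence, but its unresolvable flow parameter drags its combined score down, so the resolvable "Mike Dunleavy" candidate is selected instead.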
Each task flow score of the plurality of task flow scores corresponds to a respective candidate task flow of the plurality of candidate task flows. Based on the plurality of task flow scores, a first candidate task flow of the plurality of candidate task flows is selected. The first candidate task flow is executed, including presenting, to the user, results from executing the first candidate task flow. Although the following description uses the terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first input could be termed a second input, and, similarly, a second input could be termed a first input, without departing from the scope of the various described examples. The first input and the second input are both inputs and, in some cases, are separate and different inputs. The terminology used in the description of the various described examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. 1. System and Environment FIG.1illustrates a block diagram of system100according to various examples. In some examples, system100implements a digital assistant. The terms “digital assistant,” “virtual assistant,” “intelligent automated assistant,” or “automatic digital assistant” refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent. For example, to act on an inferred user intent, the system performs one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent, inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, or the like; and generating output responses to the user in an audible (e.g., speech) and/or visual form. Specifically, a digital assistant is capable of accepting a user request at least partially in the form of a natural language command, request, statement, narrative, and/or inquiry. 
Typically, the user request seeks either an informational answer or performance of a task by the digital assistant. A satisfactory response to the user request includes a provision of the requested informational answer, a performance of the requested task, or a combination of the two. For example, a user asks the digital assistant a question, such as “Where am I right now?” Based on the user's current location, the digital assistant answers, “You are in Central Park near the west gate.” The user also requests the performance of a task, for example, “Please invite my friends to my girlfriend's birthday party next week.” In response, the digital assistant can acknowledge the request by saying “Yes, right away,” and then send a suitable calendar invite on behalf of the user to each of the user's friends listed in the user's electronic address book. During performance of a requested task, the digital assistant sometimes interacts with the user in a continuous dialogue involving multiple exchanges of information over an extended period of time. There are numerous other ways of interacting with a digital assistant to request information or performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant also provides responses in other visual or audio forms, e.g., as text, alerts, music, videos, animations, etc. As shown inFIG.1, in some examples, a digital assistant is implemented according to a client-server model. The digital assistant includes client-side portion102(hereafter “DA client102”) executed on user device104and server-side portion106(hereafter “DA server106”) executed on server system108. DA client102communicates with DA server106through one or more networks110. DA client102provides client-side functionalities such as user-facing input and output processing and communication with DA server106. DA server106provides server-side functionalities for any number of DA clients102each residing on a respective user device104. In some examples, DA server106includes client-facing I/O interface112, one or more processing modules114, data and models116, and I/O interface to external services118. The client-facing I/O interface112facilitates the client-facing input and output processing for DA server106. One or more processing modules114utilize data and models116to process speech input and determine the user's intent based on natural language input. Further, one or more processing modules114perform task execution based on inferred user intent. In some examples, DA server106communicates with external services120through network(s)110for task completion or information acquisition. I/O interface to external services118facilitates such communications. User device104can be any suitable electronic device. In some examples, user device is a portable multifunctional device (e.g., device200, described below with reference toFIG.2A), a multifunctional device (e.g., device400, described below with reference toFIG.4), or a personal electronic device (e.g., device600, described below with reference toFIG.6A-B.) A portable multifunctional device is, for example, a mobile telephone that also contains other functions, such as PDA and/or music player functions. Specific examples of portable multifunction devices include the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other examples of portable multifunction devices include, without limitation, laptop or tablet computers. 
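The division of labor between DA client 102 and DA server 106 can be caricatured in a few lines of Python. The classes below are a sketch under the assumption that transcription and intent inference happen server-side, as described above; the names DAClient and DAServer and the toy intent rule are illustrative, not part of this disclosure, and the in-process call stands in for communication over network(s) 110.

class DAServer:
    """Stand-in for the server-side portion: speech processing, intent
    inference, and task execution happen here."""
    def handle(self, request):
        text = request["speech"]              # pretend speech has been transcribed
        intent = "get_weather" if "weather" in text else "unknown"
        return {"intent": intent, "response": f"Handled intent: {intent}"}

class DAClient:
    """Stand-in for the client-side portion: user-facing input and output
    processing plus communication with the server."""
    def __init__(self, server):
        self.server = server                  # in a real system, one or more networks

    def ask(self, utterance):
        request = {"speech": utterance}       # client-side input processing
        reply = self.server.handle(request)
        return reply["response"]              # client-side output processing

client = DAClient(DAServer())
print(client.ask("what's the weather like"))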
Further, in some examples, user device104is a non-portable multifunctional device. In particular, user device104is a desktop computer, a game console, a television, or a television set-top box. In some examples, user device104includes a touch-sensitive surface (e.g., touch screen displays and/or touchpads). Further, user device104optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick. Various examples of electronic devices, such as multifunctional devices, are described below in greater detail. Examples of communication network(s)110include local area networks (LAN) and wide area networks (WAN), e.g., the Internet. Communication network(s)110is implemented using any known network protocol, including various wired or wireless protocols, such as, for example, Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol. Server system108is implemented on one or more standalone data processing apparatus or a distributed network of computers. In some examples, server system108also employs various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of server system108. In some examples, user device104communicates with DA server106via second user device122. Second user device122is similar or identical to user device104. For example, second user device122is similar to devices200,400, or600described below with reference toFIGS.2A,4, and6A-B. User device104is configured to communicatively couple to second user device122via a direct communication connection, such as Bluetooth, NFC, BTLE, or the like, or via a wired or wireless network, such as a local Wi-Fi network. In some examples, second user device122is configured to act as a proxy between user device104and DA server106. For example, DA client102of user device104is configured to transmit information (e.g., a user request received at user device104) to DA server106via second user device122. DA server106processes the information and returns relevant data (e.g., data content responsive to the user request) to user device104via second user device122. In some examples, user device104is configured to communicate abbreviated requests for data to second user device122to reduce the amount of information transmitted from user device104. Second user device122is configured to determine supplemental information to add to the abbreviated request to generate a complete request to transmit to DA server106. This system architecture can advantageously allow user device104having limited communication capabilities and/or limited battery power (e.g., a watch or a similar compact electronic device) to access services provided by DA server106by using second user device122, having greater communication capabilities and/or battery power (e.g., a mobile phone, laptop computer, tablet computer, or the like), as a proxy to DA server106. While only two user devices104and122are shown inFIG.1, it should be appreciated that system100, in some examples, includes any number and type of user devices configured in this proxy configuration to communicate with DA server system106. 
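A minimal sketch of the abbreviated-request pattern is shown below. The function names and the particular supplemental fields (locale, timezone) are assumptions chosen for illustration; the only behavior carried over from the description is that the constrained device sends a small request and the proxy device fills in the rest before forwarding it.

def build_abbreviated_request(utterance):
    # A low-power device (e.g., a watch) sends only what it must.
    return {"speech": utterance}

def expand_request(abbreviated, supplemental):
    # The proxy device (e.g., a phone) adds supplemental information before
    # forwarding the complete request to the server.
    complete = dict(abbreviated)
    complete.update(supplemental)
    return complete

abbreviated = build_abbreviated_request("remind me to call mom")
complete = expand_request(
    abbreviated,
    supplemental={"locale": "en_US", "timezone": "America/Los_Angeles"},
)
# `complete` is what the proxy would transmit on the constrained device's behalf.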
Although the digital assistant shown inFIG.1includes both a client-side portion (e.g., DA client102) and a server-side portion (e.g., DA server106), in some examples, the functions of a digital assistant are implemented as a standalone application installed on a user device. In addition, the divisions of functionalities between the client and server portions of the digital assistant can vary in different implementations. For instance, in some examples, the DA client is a thin-client that provides only user-facing input and output processing functions, and delegates all other functionalities of the digital assistant to a backend server. 2. Electronic Devices Attention is now directed toward embodiments of electronic devices for implementing the client-side portion of a digital assistant.FIG.2Ais a block diagram illustrating portable multifunction device200with touch-sensitive display system212in accordance with some embodiments. Touch-sensitive display212is sometimes called a “touch screen” for convenience and is sometimes known as or called a “touch-sensitive display system.” Device200includes memory202(which optionally includes one or more computer-readable storage mediums), memory controller222, one or more processing units (CPUs)220, peripherals interface218, RF circuitry208, audio circuitry210, speaker211, microphone213, input/output (I/O) subsystem206, other input control devices216, and external port224. Device200optionally includes one or more optical sensors264. Device200optionally includes one or more contact intensity sensors265for detecting intensity of contacts on device200(e.g., a touch-sensitive surface such as touch-sensitive display system212of device200). Device200optionally includes one or more tactile output generators267for generating tactile outputs on device200(e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system212of device200or touchpad455of device400). These components optionally communicate over one or more communication buses or signal lines203. As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. 
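As a small illustration of the weighted-average approach mentioned above, the sketch below combines several per-sensor force readings into one estimated contact force and checks it against a press threshold. The specific weights, readings, and threshold value are invented for the example and are not drawn from this description.

def estimated_contact_force(sensor_readings, weights):
    """Combine readings from several force sensors under the touch-sensitive
    surface into a single estimated force via a weighted average."""
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(sensor_readings, weights)) / total_weight

def exceeds_threshold(sensor_readings, weights, threshold):
    return estimated_contact_force(sensor_readings, weights) >= threshold

# Sensors nearer the contact point are weighted more heavily (illustrative values).
readings = [0.20, 0.65, 0.40]   # per-sensor force measurements
weights = [0.2, 0.5, 0.3]
hard_press = exceeds_threshold(readings, weights, threshold=0.45)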
Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button). As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as an “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. 
Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. It should be appreciated that device200is only one example of a portable multifunction device, and that device200optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown inFIG.2Aare implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits. Memory202includes one or more computer-readable storage mediums. The computer-readable storage mediums are, for example, tangible and non-transitory. Memory202includes high-speed random access memory and also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller222controls access to memory202by other components of device200. In some examples, a non-transitory computer-readable storage medium of memory202is used to store instructions (e.g., for performing aspects of processes described below) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing aspects of the processes described below) are stored on a non-transitory computer-readable storage medium (not shown) of the server system108or are divided between the non-transitory computer-readable storage medium of memory202and the non-transitory computer-readable storage medium of server system108. Peripherals interface218is used to couple input and output peripherals of the device to CPU220and memory202. The one or more processors220run or execute various software programs and/or sets of instructions stored in memory202to perform various functions for device200and to process data. In some embodiments, peripherals interface218, CPU220, and memory controller222are implemented on a single chip, such as chip204. In some other embodiments, they are implemented on separate chips. RF (radio frequency) circuitry208receives and sends RF signals, also called electromagnetic signals. RF circuitry208converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry208optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry208optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. 
The RF circuitry208optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. Audio circuitry210, speaker211, and microphone213provide an audio interface between a user and device200. Audio circuitry210receives audio data from peripherals interface218, converts the audio data to an electrical signal, and transmits the electrical signal to speaker211. Speaker211converts the electrical signal to human-audible sound waves. Audio circuitry210also receives electrical signals converted by microphone213from sound waves. Audio circuitry210converts the electrical signal to audio data and transmits the audio data to peripherals interface218for processing. Audio data are retrieved from and/or transmitted to memory202and/or RF circuitry208by peripherals interface218. In some embodiments, audio circuitry210also includes a headset jack (e.g.,312,FIG.3). The headset jack provides an interface between audio circuitry210and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone). I/O subsystem206couples input/output peripherals on device200, such as touch screen212and other input control devices216, to peripherals interface218. I/O subsystem206optionally includes display controller256, optical sensor controller258, intensity sensor controller259, haptic feedback controller261, and one or more input controllers260for other input or control devices. The one or more input controllers260receive/send electrical signals from/to other input control devices216. The other input control devices216optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s)260are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g.,308,FIG.3) optionally include an up/down button for volume control of speaker211and/or microphone213. The one or more buttons optionally include a push button (e.g.,306,FIG.3). 
A quick press of the push button disengages a lock of touch screen212or begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g.,306) turns power to device200on or off. The user is able to customize a functionality of one or more of the buttons. Touch screen212is used to implement virtual or soft buttons and one or more soft keyboards. Touch-sensitive display212provides an input interface and an output interface between the device and a user. Display controller256receives and/or sends electrical signals from/to touch screen212. Touch screen212displays visual output to the user. The visual output includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output corresponds to user-interface objects. Touch screen212has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen212and display controller256(along with any associated modules and/or sets of instructions in memory202) detect contact (and any movement or breaking of the contact) on touch screen212and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen212. In an exemplary embodiment, a point of contact between touch screen212and the user corresponds to a finger of the user. Touch screen212uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen212and display controller256detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen212. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California. A touch-sensitive display in some embodiments of touch screen212is analogous to the multi-touch sensitive touchpads described in the following U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen212displays visual output from device200, whereas touch-sensitive touchpads do not provide visual output. A touch-sensitive display in some embodiments of touch screen212is as described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application Ser. No. 
11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A Computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety. Touch screen212has, for example, a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user makes contact with touch screen212using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user. In some embodiments, in addition to the touch screen, device200includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is a touch-sensitive surface that is separate from touch screen212or an extension of the touch-sensitive surface formed by the touch screen. Device200also includes power system262for powering the various components. Power system262includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices. Device200also includes one or more optical sensors264.FIG.2Ashows an optical sensor coupled to optical sensor controller258in I/O subsystem206. Optical sensor264includes charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor264receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module243(also called a camera module), optical sensor264captures still images or video. In some embodiments, an optical sensor is located on the back of device200, opposite touch screen display212on the front of the device so that the touch screen display is used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user's image is obtained for video conferencing while the user views the other video conference participants on the touch screen display. 
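One way to picture the translation of rough finger-based input into a precise pointer position is a signal-weighted centroid over the activated sensor cells, sketched below. This is only an assumed illustration: the description above does not specify the translation method, and the cell coordinates and signal strengths are made up.

def pointer_position(active_cells):
    """Reduce a rough finger contact (several activated sensor cells, each with
    a position and a signal strength) to a single pointer coordinate by taking
    the signal-weighted centroid."""
    total = sum(strength for _, _, strength in active_cells)
    x = sum(cx * s for cx, _, s in active_cells) / total
    y = sum(cy * s for _, cy, s in active_cells) / total
    return x, y

# Each tuple is (x, y, signal strength) for one activated cell (illustrative).
cells = [(10, 20, 0.3), (11, 20, 0.9), (11, 21, 0.6), (12, 21, 0.2)]
px, py = pointer_position(cells)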
In some embodiments, the position of optical sensor264can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor264is used along with the touch screen display for both video conferencing and still and/or video image acquisition. Device200optionally also includes one or more contact intensity sensors265.FIG.2Ashows a contact intensity sensor coupled to intensity sensor controller259in I/O subsystem206. Contact intensity sensor265optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor265receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system212). In some embodiments, at least one contact intensity sensor is located on the back of device200, opposite touch screen display212, which is located on the front of device200. Device200also includes one or more proximity sensors266.FIG.2Ashows proximity sensor266coupled to peripherals interface218. Alternately, proximity sensor266is coupled to input controller260in I/O subsystem206. Proximity sensor266performs as described in U.S. patent application Ser. No. 11/241,839, “Proximity Detector In Handheld Device”; Ser. No. 11/240,788, “Proximity Detector In Handheld Device”; Ser. No. 11/620,702, “Using Ambient Light Sensor To Augment Proximity Sensor Output”; Ser. No. 11/586,862, “Automated Response To And Sensing Of User Activity In Portable Devices”; and Ser. No. 11/638,251, “Methods And Systems For Automatic Configuration Of Peripherals,” which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen212when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call). Device200optionally also includes one or more tactile output generators267.FIG.2Ashows a tactile output generator coupled to haptic feedback controller261in I/O subsystem206. Tactile output generator267optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator267receives tactile feedback generation instructions from haptic feedback module233and generates tactile outputs on device200that are capable of being sensed by a user of device200. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system212) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device200) or laterally (e.g., back and forth in the same plane as a surface of device200). 
In some embodiments, at least one tactile output generator sensor is located on the back of device200, opposite touch screen display212, which is located on the front of device200. Device200also includes one or more accelerometers268.FIG.2Ashows accelerometer268coupled to peripherals interface218. Alternately, accelerometer268is coupled to an input controller260in I/O subsystem206. Accelerometer268performs, for example, as described in U.S. Patent Publication No. 20050190059, “Acceleration-based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692, “Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device200optionally includes, in addition to accelerometer(s)268, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device200. In some embodiments, the software components stored in memory202include operating system226, communication module (or set of instructions)228, contact/motion module (or set of instructions)230, graphics module (or set of instructions)232, text input module (or set of instructions)234, Global Positioning System (GPS) module (or set of instructions)235, Digital Assistant Client Module229, and applications (or sets of instructions)236. Further, memory202stores data and models, such as user data and models231. Furthermore, in some embodiments, memory202(FIG.2A) or470(FIG.4) stores device/global internal state257, as shown inFIGS.2A and4. Device/global internal state257includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display212; sensor state, including information obtained from the device's various sensors and input control devices216; and location information concerning the device's location and/or attitude. Operating system226(e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. Communication module228facilitates communication with other devices over one or more external ports224and also includes various software components for handling data received by RF circuitry208and/or external port224. External port224(e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices. Contact/motion module230optionally detects contact with touch screen212(in conjunction with display controller256) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). 
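Two of the sensor-driven behaviors described above, disabling the touch screen when the proximity sensor reports that the device is near the user's ear during a call, and choosing a portrait or landscape view from accelerometer data, can be sketched as simple predicates. The threshold value and the gravity-axis comparison below are assumptions for illustration only; touch contact itself is handled by the contact/motion module described next.

def should_disable_touch_screen(proximity_cm, in_call, near_threshold_cm=3.0):
    # Disable the touch screen when the device is held near the user's ear
    # during a phone call, as described for the proximity sensor above.
    return in_call and proximity_cm <= near_threshold_cm

def orientation_from_accelerometer(ax, ay):
    # Pick portrait or landscape from the gravity component reported by the
    # accelerometer along the device's x and y axes.
    return "landscape" if abs(ax) > abs(ay) else "portrait"

print(should_disable_touch_screen(proximity_cm=1.5, in_call=True))   # True
print(orientation_from_accelerometer(ax=0.1, ay=-0.98))              # portrait
print(orientation_from_accelerometer(ax=0.95, ay=0.05))              # landscape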
Contact/motion module230includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module230receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module230and display controller256detect contact on a touchpad. In some embodiments, contact/motion module230uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device200). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). Contact/motion module230optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event. Graphics module232includes various known software components for rendering and displaying graphics on touch screen212or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. 
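The tap-versus-swipe distinction drawn above reduces to comparing where the contact ends relative to where it began, as in the sketch below. The event tuple format, the 10-pixel tap radius, and the function name are assumptions made for the example; a production recognizer would also consider timing and the intermediate finger-dragging events.

def classify_gesture(events, tap_radius=10):
    """Classify a sequence of touch events as 'tap' or 'swipe'.

    `events` is a list of (kind, x, y) tuples, where kind is 'down', 'drag',
    or 'up'. A tap is a finger-down followed by a finger-up at substantially
    the same position; a swipe moves the contact away from where it started.
    """
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return None
    _, x0, y0 = events[0]
    _, x1, y1 = events[-1]
    moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return "tap" if moved <= tap_radius else "swipe"

print(classify_gesture([("down", 100, 200), ("up", 102, 201)]))        # tap
print(classify_gesture([("down", 100, 200), ("drag", 160, 200),
                        ("drag", 220, 205), ("up", 260, 207)]))        # swipe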
As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like. In some embodiments, graphics module232stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module232receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller256. Haptic feedback module233includes various software components for generating instructions used by tactile output generator(s)267to produce tactile outputs at one or more locations on device200in response to user interactions with device200. Text input module234, which is, in some examples, a component of graphics module232, provides soft keyboards for entering text in various applications (e.g., contacts237, email240, IM241, browser247, and any other application that needs text input). GPS module235determines the location of the device and provides this information for use in various applications (e.g., to telephone module238for use in location-based dialing; to camera module243as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets). Digital assistant client module229includes various client-side digital assistant instructions to provide the client-side functionalities of the digital assistant. For example, digital assistant client module229is capable of accepting voice input (e.g., speech input), text input, touch input, and/or gestural input through various user interfaces (e.g., microphone213, accelerometer(s)268, touch-sensitive display system212, optical sensor(s)264, other input control devices216, etc.) of portable multifunction device200. Digital assistant client module229is also capable of providing output in audio (e.g., speech output), visual, and/or tactile forms through various output interfaces (e.g., speaker211, touch-sensitive display system212, tactile output generator(s)267, etc.) of portable multifunction device200. For example, output is provided as voice, sound, alerts, text messages, menus, graphics, videos, animations, vibrations, and/or combinations of two or more of the above. During operation, digital assistant client module229communicates with DA server106using RF circuitry208. User data and models231include various data associated with the user (e.g., user-specific vocabulary data, user preference data, user-specified name pronunciations, data from the user's electronic address book, to-do lists, shopping lists, etc.) to provide the client-side functionalities of the digital assistant. Further, user data and models231include various models (e.g., speech recognition models, statistical language models, natural language processing models, ontology, task flow models, service models, etc.) for processing user input and determining user intent. In some examples, digital assistant client module229utilizes the various sensors, subsystems, and peripheral devices of portable multifunction device200to gather additional information from the surrounding environment of the portable multifunction device200to establish a context associated with a user, the current user interaction, and/or the current user input. 
In some examples, digital assistant client module229provides the contextual information or a subset thereof with the user input to DA server106to help infer the user's intent. In some examples, the digital assistant also uses the contextual information to determine how to prepare and deliver outputs to the user. Contextual information is referred to as context data. In some examples, the contextual information that accompanies the user input includes sensor information, e.g., lighting, ambient noise, ambient temperature, images or videos of the surrounding environment, etc. In some examples, the contextual information can also include the physical state of the device, e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion patterns, cellular signals strength, etc. In some examples, information related to the software state of DA server106, e.g., running processes, installed programs, past and present network activities, background services, error logs, resources usage, etc., and of portable multifunction device200is provided to DA server106as contextual information associated with a user input. In some examples, the digital assistant client module229selectively provides information (e.g., user data231) stored on the portable multifunction device200in response to requests from DA server106. In some examples, digital assistant client module229also elicits additional input from the user via a natural language dialogue or other user interfaces upon request by DA server106. Digital assistant client module229passes the additional input to DA server106to help DA server106in intent deduction and/or fulfillment of the user's intent expressed in the user request. A more detailed description of a digital assistant is described below with reference toFIGS.7A-7C. It should be recognized that digital assistant client module229can include any number of the sub-modules of digital assistant module726described below. Applications236include the following modules (or sets of instructions), or a subset or superset thereof:
Contacts module237(sometimes called an address book or contact list);
Telephone module238;
Video conference module239;
E-mail client module240;
Instant messaging (IM) module241;
Workout support module242;
Camera module243for still and/or video images;
Image management module244;
Video player module;
Music player module;
Browser module247;
Calendar module248;
Widget modules249, which includes, in some examples, one or more of: weather widget249-1, stocks widget249-2, calculator widget249-3, alarm clock widget249-4, dictionary widget249-5, and other widgets obtained by the user, as well as user-created widgets249-6;
Widget creator module250for making user-created widgets249-6;
Search module251;
Video and music player module252, which merges video player module and music player module;
Notes module253;
Map module254; and/or
Online video module255.
Examples of other applications236that are stored in memory202include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication. 
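A sketch of how a client might assemble the contextual information described earlier in this section, before attaching it to a user request, is given below. The field names, the sample sensor values, and the build_context helper are illustrative assumptions; the only behavior taken from the description is that sensor readings and device state accompany the user input and that the client shares only a chosen subset.

def build_context(sensors, device_state, include_software_state=False):
    """Assemble the contextual information that accompanies a user input."""
    context = {
        "lighting": sensors.get("ambient_light"),
        "ambient_noise": sensors.get("noise_level"),
        "ambient_temperature": sensors.get("temperature"),
        "orientation": device_state.get("orientation"),
        "location": device_state.get("location"),
        "power_level": device_state.get("battery"),
    }
    if include_software_state:
        context["running_processes"] = device_state.get("running_processes", [])
    # Only the subset the client chooses to share is sent with the request.
    return {k: v for k, v in context.items() if v is not None}

payload = {
    "speech": "find a quiet coffee shop nearby",
    "context": build_context(
        sensors={"ambient_light": 120, "noise_level": 62},
        device_state={"orientation": "portrait", "battery": 0.41},
    ),
}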
In conjunction with touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, contacts module237are used to manage an address book or contact list (e.g., stored in application internal state292of contacts module237in memory202or memory470), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone module238, video conference module239, e-mail240, or IM241; and so forth. In conjunction with RF circuitry208, audio circuitry210, speaker211, microphone213, touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, telephone module238are used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module237, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication uses any of a plurality of communications standards, protocols, and technologies. In conjunction with RF circuitry208, audio circuitry210, speaker211, microphone213, touch screen212, display controller256, optical sensor264, optical sensor controller258, contact/motion module230, graphics module232, text input module234, contacts module237, and telephone module238, video conference module239includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions. In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, e-mail client module240includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module244, e-mail client module240makes it very easy to create and send e-mails with still or video images taken with camera module243. In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, the instant messaging module241includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS). 
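The telephony-based versus Internet-based distinction above amounts to a lookup on the transport protocol, as in the following sketch (the function name and the treatment of protocols as case-insensitive strings are assumptions for illustration).

TELEPHONY_PROTOCOLS = {"SMS", "MMS"}
INTERNET_PROTOCOLS = {"XMPP", "SIMPLE", "IMPS"}

def message_category(protocol):
    """Return whether an instant message is telephony-based or Internet-based,
    following the distinction drawn above."""
    protocol = protocol.upper()
    if protocol in TELEPHONY_PROTOCOLS:
        return "telephony-based"
    if protocol in INTERNET_PROTOCOLS:
        return "Internet-based"
    return "unknown"

print(message_category("sms"))    # telephony-based
print(message_category("XMPP"))   # Internet-based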
In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, text input module234, GPS module235, map module254, and music player module, workout support module242includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data. In conjunction with touch screen212, display controller256, optical sensor(s)264, optical sensor controller258, contact/motion module230, graphics module232, and image management module244, camera module243includes executable instructions to capture still images or video (including a video stream) and store them into memory202, modify characteristics of a still image or video, or delete a still image or video from memory202. In conjunction with touch screen212, display controller256, contact/motion module230, graphics module232, text input module234, and camera module243, image management module244includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images. In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, browser module247includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages. In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, text input module234, e-mail client module240, and browser module247, calendar module248includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions. In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, text input module234, and browser module247, widget modules249are mini-applications that can be downloaded and used by a user (e.g., weather widget249-1, stocks widget249-2, calculator widget249-3, alarm clock widget249-4, and dictionary widget249-5) or created by the user (e.g., user-created widget249-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets). In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, text input module234, and browser module247, the widget creator module250are used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget). In conjunction with touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, search module251includes executable instructions to search for text, music, sound, image, video, and/or other files in memory202that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions. 
In conjunction with touch screen212, display controller256, contact/motion module230, graphics module232, audio circuitry210, speaker211, RF circuitry208, and browser module247, video and music player module252includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen212or on an external, connected display via external port224). In some embodiments, device200optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.). In conjunction with touch screen212, display controller256, contact/motion module230, graphics module232, and text input module234, notes module253includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions. In conjunction with RF circuitry208, touch screen212, display controller256, contact/motion module230, graphics module232, text input module234, GPS module235, and browser module247, map module254is used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions. In conjunction with touch screen212, display controller256, contact/motion module230, graphics module232, audio circuitry210, speaker211, RF circuitry208, text input module234, e-mail client module240, and browser module247, online video module255includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port224), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module241, rather than e-mail client module240, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety. Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules can be combined or otherwise rearranged in various embodiments. For example, video player module can be combined with music player module into a single module (e.g., video and music player module252,FIG.2A). In some embodiments, memory202stores a subset of the modules and data structures identified above. Furthermore, memory202stores additional modules and data structures not described above. 
In some embodiments, device200is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device200, the number of physical input control devices (such as push buttons, dials, and the like) on device200is reduced. The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device200to a main, home, or root menu from any user interface that is displayed on device200. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad. FIG.2Bis a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory202(FIG.2A) or470(FIG.4) includes event sorter270(e.g., in operating system226) and a respective application236-1(e.g., any of the aforementioned applications237-251,255,480-490). Event sorter270receives event information and determines the application236-1and application view291of application236-1to which to deliver the event information. Event sorter270includes event monitor271and event dispatcher module274. In some embodiments, application236-1includes application internal state292, which indicates the current application view(s) displayed on touch-sensitive display212when the application is active or executing. In some embodiments, device/global internal state257is used by event sorter270to determine which application(s) is (are) currently active, and application internal state292is used by event sorter270to determine application views291to which to deliver event information. In some embodiments, application internal state292includes additional information, such as one or more of: resume information to be used when application236-1resumes execution, user interface state information that indicates information being displayed or that is ready for display by application236-1, a state queue for enabling the user to go back to a prior state or view of application236-1, and a redo/undo queue of previous actions taken by the user. Event monitor271receives event information from peripherals interface218. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display212, as part of a multi-touch gesture). Peripherals interface218transmits information it receives from I/O subsystem206or a sensor, such as proximity sensor266, accelerometer(s)268, and/or microphone213(through audio circuitry210). Information that peripherals interface218receives from I/O subsystem206includes information from touch-sensitive display212or a touch-sensitive surface. In some embodiments, event monitor271sends requests to the peripherals interface218at predetermined intervals. In response, peripherals interface218transmits event information. In other embodiments, peripherals interface218transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). In some embodiments, event sorter270also includes a hit view determination module272and/or an active event recognizer determination module273. 
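For illustration only, the following Python sketch shows one way an event sorter of the kind described above could queue sub-event information received by an event monitor and have an event dispatcher deliver it to the view(s) of the currently active application. The names EventSorter, SubEvent, and deliver are assumptions made for this sketch and are not part of the embodiments described herein.

    # Illustrative sketch (not the implementation described above) of an event
    # sorter that queues sub-event information and dispatches it to the active
    # application's view, in the spirit of event sorter 270.
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class SubEvent:
        kind: str      # e.g. "touch_begin", "touch_move", "touch_end"
        x: float
        y: float

    class EventSorter:
        def __init__(self):
            self.queue = deque()     # event queue read by event receivers
            self.active_view = None  # determined from device/global state

        def monitor(self, sub_event):
            """Event monitor: receive event information from the
            peripherals interface and enqueue it for dispatch."""
            self.queue.append(sub_event)

        def dispatch(self):
            """Event dispatcher: deliver queued event information to the
            application view of the currently active application."""
            while self.queue:
                sub_event = self.queue.popleft()
                if self.active_view is not None:
                    self.active_view.deliver(sub_event)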
Hit view determination module272provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display212displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is called the hit view, and the set of events that are recognized as proper inputs is determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Hit view determination module272receives information related to sub events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module272identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module272, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view. Active event recognizer determination module273determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module273determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module273determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views. Event dispatcher module274dispatches the event information to an event recognizer (e.g., event recognizer280). In embodiments including active event recognizer determination module273, event dispatcher module274delivers the event information to an event recognizer determined by active event recognizer determination module273. In some embodiments, event dispatcher module274stores in an event queue the event information, which is retrieved by a respective event receiver282. In some embodiments, operating system226includes event sorter270. Alternatively, application236-1includes event sorter270. In yet other embodiments, event sorter270is a stand-alone module, or a part of another module stored in memory202, such as contact/motion module230. In some embodiments, application236-1includes a plurality of event handlers290and one or more application views291, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view291of the application236-1includes one or more event recognizers280. 
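As a purely illustrative sketch of the hit-view determination just described, the following Python code finds the lowest view in a hierarchy whose bounds contain the location of an initiating sub-event. The View class, frame layout, and contains helper are assumptions for this sketch, not elements of the embodiments above.

    # Minimal sketch: find the deepest view containing the touch point.
    class View:
        def __init__(self, name, frame, children=()):
            self.name = name
            self.frame = frame              # (x, y, width, height)
            self.children = list(children)

        def contains(self, x, y):
            fx, fy, fw, fh = self.frame
            return fx <= x <= fx + fw and fy <= y <= fy + fh

    def hit_view(view, x, y):
        """Return the lowest view in the hierarchy containing (x, y),
        or None if the point lies outside the root view."""
        if not view.contains(x, y):
            return None
        for child in view.children:        # prefer deeper (lower) views
            hit = hit_view(child, x, y)
            if hit is not None:
                return hit
        return view                        # no child contains the point

    # Example: a button inside a root view receives the initiating sub-event.
    # root = View("root", (0, 0, 100, 100), [View("button", (10, 10, 30, 20))])
    # hit_view(root, 15, 15).name -> "button"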
Typically, a respective application view291includes a plurality of event recognizers280. In other embodiments, one or more of event recognizers280are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application236-1inherits methods and other properties. In some embodiments, a respective event handler290includes one or more of: data updater276, object updater277, GUI updater278, and/or event data279received from event sorter270. Event handler290utilizes or calls data updater276, object updater277, or GUI updater278to update the application internal state292. Alternatively, one or more of the application views291include one or more respective event handlers290. Also, in some embodiments, one or more of data updater276, object updater277, and GUI updater278are included in a respective application view291. A respective event recognizer280receives event information (e.g., event data279) from event sorter270and identifies an event from the event information. Event recognizer280includes event receiver282and event comparator284. In some embodiments, event recognizer280also includes at least a subset of: metadata283, and event delivery instructions288(which include sub-event delivery instructions). Event receiver282receives event information from event sorter270. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device. Event comparator284compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator284includes event definitions286. Event definitions286contain definitions of events (e.g., predefined sequences of sub-events), for example, event1(287-1), event2(287-2), and others. In some embodiments, sub-events in an event (287) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event1(287-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event2(287-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display212, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers290. In some embodiments, event definition287includes a definition of an event for a respective user-interface object. 
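To make the comparison of sub-event sequences against event definitions concrete, the following Python sketch classifies a completed sequence of sub-event kinds as a double tap or a drag, in the spirit of event1(287-1) and event2(287-2) above. It deliberately ignores the predetermined-phase timing mentioned in the text, and the function name classify is an assumption for this sketch only.

    # Hedged sketch of an event comparator matching sub-event sequences
    # against simplified event definitions.
    DOUBLE_TAP = ("touch_begin", "touch_end", "touch_begin", "touch_end")

    def classify(sub_event_kinds):
        """Return 'double_tap', 'drag', or None for a completed sequence."""
        kinds = tuple(sub_event_kinds)
        if kinds == DOUBLE_TAP:
            return "double_tap"
        if (len(kinds) >= 3 and kinds[0] == "touch_begin"
                and kinds[-1] == "touch_end"
                and all(k == "touch_move" for k in kinds[1:-1])):
            return "drag"
        return None   # no match: event failed / event impossible

    # classify(["touch_begin", "touch_move", "touch_move", "touch_end"])
    # -> "drag"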
In some embodiments, event comparator284performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display212, when a touch is detected on touch-sensitive display212, event comparator284performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler290, the event comparator uses the result of the hit test to determine which event handler290should be activated. For example, event comparator284selects an event handler associated with the sub-event and the object triggering the hit test. In some embodiments, the definition for a respective event (287) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type. When a respective event recognizer280determines that the series of sub-events do not match any of the events in event definitions286, the respective event recognizer280enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture. In some embodiments, a respective event recognizer280includes metadata283with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata283includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata283includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy. In some embodiments, a respective event recognizer280activates event handler290associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer280delivers event information associated with the event to event handler290. Activating an event handler290is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer280throws a flag associated with the recognized event, and event handler290associated with the flag catches the flag and performs a predefined process. In some embodiments, event delivery instructions288include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process. In some embodiments, data updater276creates and updates data used in application236-1. For example, data updater276updates the telephone number used in contacts module237, or stores a video file used in video player module. In some embodiments, object updater277creates and updates objects used in application236-1. 
For example, object updater277creates a new user-interface object or updates the position of a user-interface object. GUI updater278updates the GUI. For example, GUI updater278prepares display information and sends it to graphics module232for display on a touch-sensitive display. In some embodiments, event handler(s)290includes or has access to data updater276, object updater277, and GUI updater278. In some embodiments, data updater276, object updater277, and GUI updater278are included in a single module of a respective application236-1or application view291. In other embodiments, they are included in two or more software modules. It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices200with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. FIG.3illustrates a portable multifunction device200having a touch screen212in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI)300. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers302(not drawn to scale in the figure) or one or more styluses303(not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device200. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. Device200also includes one or more physical buttons, such as “home” or menu button304. As described previously, menu button304is used to navigate to any application236in a set of applications that is executed on device200. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen212. In one embodiment, device200includes touch screen212, menu button304, push button306for powering the device on/off and locking the device, volume adjustment button(s)308, subscriber identity module (SIM) card slot310, headset jack312, and docking/charging external port224. Push button306is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. 
In an alternative embodiment, device200also accepts verbal input for activation or deactivation of some functions through microphone213. Device200also, optionally, includes one or more contact intensity sensors265for detecting intensity of contacts on touch screen212and/or one or more tactile output generators267for generating tactile outputs for a user of device200. FIG.4is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device400need not be portable. In some embodiments, device400is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device400typically includes one or more processing units (CPUs)410, one or more network or other communications interfaces460, memory470, and one or more communication buses420for interconnecting these components. Communication buses420optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device400includes input/output (I/O) interface430comprising display440, which is typically a touch screen display. I/O interface430also optionally includes a keyboard and/or mouse (or other pointing device)450and touchpad455, tactile output generator457for generating tactile outputs on device400(e.g., similar to tactile output generator(s)267described above with reference toFIG.2A), sensors459(e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s)265described above with reference toFIG.2A). Memory470includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory470optionally includes one or more storage devices remotely located from CPU(s)410. In some embodiments, memory470stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory202of portable multifunction device200(FIG.2A), or a subset thereof. Furthermore, memory470optionally stores additional programs, modules, and data structures not present in memory202of portable multifunction device200. For example, memory470of device400optionally stores drawing module480, presentation module482, word processing module484, website creation module486, disk authoring module488, and/or spreadsheet module490, while memory202of portable multifunction device200(FIG.2A) optionally does not store these modules. Each of the above-identified elements inFIG.4is, in some examples, stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are combined or otherwise rearranged in various embodiments. In some embodiments, memory470stores a subset of the modules and data structures identified above. Furthermore, memory470stores additional modules and data structures not described above. 
Attention is now directed towards embodiments of user interfaces that can be implemented on, for example, portable multifunction device200. FIG.5Aillustrates an exemplary user interface for a menu of applications on portable multifunction device200in accordance with some embodiments. Similar user interfaces are implemented on device400. In some embodiments, user interface500includes the following elements, or a subset or superset thereof: Signal strength indicator(s)502for wireless communication(s), such as cellular and Wi-Fi signals;Time504;Bluetooth indicator505;Battery status indicator506;Tray508with icons for frequently used applications, such as:Icon516for telephone module238, labeled “Phone,” which optionally includes an indicator514of the number of missed calls or voicemail messages;Icon518for e-mail client module240, labeled “Mail,” which optionally includes an indicator510of the number of unread e-mails;Icon520for browser module247, labeled “Browser;” andIcon522for video and music player module252, also referred to as iPod (trademark of Apple Inc.) module252, labeled “iPod;” andIcons for other applications, such as:Icon524for IM module241, labeled “Messages;”Icon526for calendar module248, labeled “Calendar;”Icon528for image management module244, labeled “Photos;”Icon530for camera module243, labeled “Camera;”Icon532for online video module255, labeled “Online Video;”Icon534for stocks widget249-2, labeled “Stocks;”Icon536for map module254, labeled “Maps;”Icon538for weather widget249-1, labeled “Weather;”Icon540for alarm clock widget249-4, labeled “Clock;”Icon542for workout support module242, labeled “Workout Support;”Icon544for notes module253, labeled “Notes;” andIcon546for a settings application or module, labeled “Settings,” which provides access to settings for device200and its various applications236. It should be noted that the icon labels illustrated inFIG.5Aare merely exemplary. For example, icon522for video and music player module252is optionally labeled “Music” or “Music Player.” Other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon. FIG.5Billustrates an exemplary user interface on a device (e.g., device400,FIG.4) with a touch-sensitive surface551(e.g., a tablet or touchpad455,FIG.4) that is separate from the display550(e.g., touch screen display212). Device400also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors457) for detecting intensity of contacts on touch-sensitive surface551and/or one or more tactile output generators459for generating tactile outputs for a user of device400. Although some of the examples which follow will be given with reference to inputs on touch screen display212(where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown inFIG.5B. In some embodiments, the touch-sensitive surface (e.g.,551inFIG.5B) has a primary axis (e.g.,552inFIG.5B) that corresponds to a primary axis (e.g.,553inFIG.5B) on the display (e.g.,550). 
In accordance with these embodiments, the device detects contacts (e.g.,560and562inFIG.5B) with the touch-sensitive surface551at locations that correspond to respective locations on the display (e.g., inFIG.5B,560corresponds to568and562corresponds to570). In this way, user inputs (e.g., contacts560and562, and movements thereof) detected by the device on the touch-sensitive surface (e.g.,551inFIG.5B) are used by the device to manipulate the user interface on the display (e.g.,550inFIG.5B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein. Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously. FIG.6Aillustrates exemplary personal electronic device600. Device600includes body602. In some embodiments, device600includes some or all of the features described with respect to devices200and400(e.g.,FIGS.2A-4). In some embodiments, device600has touch-sensitive display screen604, hereafter touch screen604. Alternatively, or in addition to touch screen604, device600has a display and a touch-sensitive surface. As with devices200and400, in some embodiments, touch screen604(or the touch-sensitive surface) has one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied. The one or more intensity sensors of touch screen604(or the touch-sensitive surface) provide output data that represents the intensity of touches. The user interface of device600responds to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device600. Techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Serial No. PCT/US2013/040061, titled “Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, and International Patent Application Serial No. PCT/US2013/069483, titled “Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed Nov. 11, 2013, each of which is hereby incorporated by reference in their entirety. In some embodiments, device600has one or more input mechanisms606and608. Input mechanisms606and608, if included, are physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device600has one or more attachment mechanisms. 
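As described above, contacts detected on a separate touch-sensitive surface are used to manipulate the user interface at corresponding locations on the display. The Python sketch below shows one simple way such a correspondence could be computed, assuming rectangular coordinate spaces and a linear mapping along the corresponding primary axes; the function name and the linear-scaling assumption are illustrative only.

    # Sketch: map a contact on a touch-sensitive surface (e.g., 551) to the
    # corresponding location on the display (e.g., 550) by normalizing against
    # the surface dimensions and re-scaling to the display dimensions.
    def surface_to_display(contact, surface_size, display_size):
        """contact: (x, y) on the surface; sizes: (width, height)."""
        sx, sy = surface_size
        dx, dy = display_size
        x, y = contact
        return (x / sx * dx, y / sy * dy)

    # A contact at the center of a 600x400 touch surface maps to the center
    # of a 1200x800 display:
    # surface_to_display((300, 200), (600, 400), (1200, 800)) -> (600.0, 400.0)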
Such attachment mechanisms, if included, can permit attachment of device600with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device600to be worn by a user. FIG.6Bdepicts exemplary personal electronic device600. In some embodiments, device600includes some or all of the components described with respect toFIGS.2A,2B, and4. Device600has bus612that operatively couples I/O section614with one or more computer processors616and memory618. I/O section614is connected to display604, which can have touch-sensitive component622and, optionally, touch-intensity sensitive component624. In addition, I/O section614is connected with communication unit630for receiving application and operating system data, using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques. Device600includes input mechanisms606and/or608. Input mechanism606is a rotatable input device or a depressible and rotatable input device, for example. Input mechanism608is a button, in some examples. Input mechanism608is a microphone, in some examples. Personal electronic device600includes, for example, various sensors, such as GPS sensor632, accelerometer634, directional sensor640(e.g., compass), gyroscope636, motion sensor638, and/or a combination thereof, all of which are operatively connected to I/O section614. Memory618of personal electronic device600is a non-transitory computer-readable storage medium, for storing computer-executable instructions, which, when executed by one or more computer processors616, for example, cause the computer processors to perform the techniques and processes described below. The computer-executable instructions, for example, are also stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. Personal electronic device600is not limited to the components and configuration ofFIG.6B, but can include other or additional components in multiple configurations. As used here, the term “affordance” refers to a user-interactive graphical user interface object that is, for example, displayed on the display screen of devices104,200,400, and/or600(FIGS.1,2,4, and6). For example, an image (e.g., icon), a button, and text (e.g., hyperlink) each constitutes an affordance. As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad455inFIG.4or touch-sensitive surface551inFIG.5B) while the cursor is over a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. 
In some implementations that include a touch screen display (e.g., touch-sensitive display system212inFIG.2Aor touch screen212inFIG.5A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a “focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device). As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. 
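For illustration, the following Python sketch computes a characteristic intensity from a set of intensity samples using a few of the statistics named above (maximum, mean, and a top-10-percentile value). The function name and the particular percentile computation are assumptions made for this sketch.

    # Hedged sketch: derive a characteristic intensity from intensity samples.
    def characteristic_intensity(samples, method="mean"):
        samples = sorted(samples)
        if not samples:
            return 0.0
        if method == "max":
            return samples[-1]
        if method == "mean":
            return sum(samples) / len(samples)
        if method == "top_10_percentile":
            # approximate value below which about 90% of the samples fall
            index = min(len(samples) - 1, int(0.9 * len(samples)))
            return samples[index]
        raise ValueError("unknown method")

    # characteristic_intensity([0.1, 0.2, 0.8, 0.3]) -> 0.35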
For example, the set of one or more intensity thresholds includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation) rather than being used to determine whether to perform a first operation or a second operation. In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity. The intensity of a contact on the touch-sensitive surface is characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures. 
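The sketch below combines two of the ideas in this passage: an unweighted sliding-average smoothing of intensity samples, and a comparison of the resulting characteristic intensity against a first and a second intensity threshold to choose among three operations. The threshold values and names are placeholders for illustration only.

    # Sketch: smooth intensity samples, then select an operation by threshold.
    def sliding_average(samples, window=3):
        """Unweighted sliding average; edges use the samples available."""
        out = []
        for i in range(len(samples)):
            lo = max(0, i - window + 1)
            chunk = samples[lo:i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    def select_operation(characteristic, first_threshold=0.3, second_threshold=0.7):
        if characteristic <= first_threshold:
            return "first_operation"
        if characteristic <= second_threshold:
            return "second_operation"
        return "third_operation"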
An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero. In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a “down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an “up stroke” of the respective press input). In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an “up stroke” of the respective press input). 
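A minimal Python sketch of the hysteresis behavior described above follows: a down stroke is recognized when intensity rises above the press-input threshold, and the corresponding up stroke is recognized only when intensity later falls below a lower hysteresis threshold (here 75% of the press threshold), which suppresses jitter around the press threshold. The class and method names are assumptions for this sketch.

    # Hedged sketch of press detection with intensity hysteresis.
    class PressDetector:
        def __init__(self, press_threshold=1.0, hysteresis_fraction=0.75):
            self.press_threshold = press_threshold
            self.release_threshold = press_threshold * hysteresis_fraction
            self.pressed = False

        def update(self, intensity):
            """Feed one sample; return 'down_stroke', 'up_stroke', or None."""
            if not self.pressed and intensity > self.press_threshold:
                self.pressed = True
                return "down_stroke"
            if self.pressed and intensity < self.release_threshold:
                self.pressed = False
                return "up_stroke"
            return None

    # detector = PressDetector()
    # [detector.update(i) for i in (0.2, 1.1, 0.9, 0.7)]
    # -> [None, 'down_stroke', None, 'up_stroke']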
Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances). For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. 3. Digital Assistant System FIG.7Aillustrates a block diagram of digital assistant system700in accordance with various examples. In some examples, digital assistant system700is implemented on a standalone computer system. In some examples, digital assistant system700is distributed across multiple computers. In some examples, some of the modules and functions of the digital assistant are divided into a server portion and a client portion, where the client portion resides on one or more user devices (e.g., devices104,122,200,400, or600) and communicates with the server portion (e.g., server system108) through one or more networks, e.g., as shown inFIG.1. In some examples, digital assistant system700is an implementation of server system108(and/or DA server106) shown inFIG.1. It should be noted that digital assistant system700is only one example of a digital assistant system, and that digital assistant system700can have more or fewer components than shown, can combine two or more components, or can have a different configuration or arrangement of the components. The various components shown inFIG.7Aare implemented in hardware, software instructions for execution by one or more processors, firmware, including one or more signal processing and/or application specific integrated circuits, or a combination thereof. Digital assistant system700includes memory702, one or more processors704, input/output (I/O) interface706, and network communications interface708. These components can communicate with one another over one or more communication buses or signal lines710. 
In some examples, memory702includes a non-transitory computer-readable medium, such as high-speed random access memory and/or a non-volatile computer-readable storage medium (e.g., one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices). In some examples, I/O interface706couples input/output devices716of digital assistant system700, such as displays, keyboards, touch screens, and microphones, to user interface module722. I/O interface706, in conjunction with user interface module722, receives user inputs (e.g., voice input, keyboard inputs, touch inputs, etc.) and processes them accordingly. In some examples, e.g., when the digital assistant is implemented on a standalone user device, digital assistant system700includes any of the components and I/O communication interfaces described with respect to devices200,400, or600inFIGS.2A,4,6A-B, respectively. In some examples, digital assistant system700represents the server portion of a digital assistant implementation, and can interact with the user through a client-side portion residing on a user device (e.g., devices104,200,400, or600). In some examples, the network communications interface708includes wired communication port(s)712and/or wireless transmission and reception circuitry714. The wired communication port(s) receive and send communication signals via one or more wired interfaces, e.g., Ethernet, Universal Serial Bus (USB), FIREWIRE, etc. The wireless circuitry714receives and sends RF signals and/or optical signals from/to communications networks and other communications devices. The wireless communications use any of a plurality of communications standards, protocols, and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. Network communications interface708enables communication between digital assistant system700and networks, such as the Internet, an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and other devices. In some examples, memory702, or the computer-readable storage media of memory702, stores programs, modules, instructions, and data structures including all or a subset of: operating system718, communications module720, user interface module722, one or more applications724, and digital assistant module726. In particular, memory702, or the computer-readable storage media of memory702, stores instructions for performing the processes described below. One or more processors704execute these programs, modules, and instructions, and read/write from/to the data structures. Operating system718(e.g., Darwin, RTXC, LINUX, UNIX, iOS, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communications between various hardware, firmware, and software components. Communications module720facilitates communications between digital assistant system700and other devices over network communications interface708. For example, communications module720communicates with RF circuitry208of electronic devices such as devices200,400, and600shown inFIG.2A,4,6A-B, respectively. 
Communications module720also includes various components for handling data received by wireless circuitry714and/or wired communications port712. User interface module722receives commands and/or inputs from a user via I/O interface706(e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone), and generates user interface objects on a display. User interface module722also prepares and delivers outputs (e.g., speech, sound, animation, text, icons, vibrations, haptic feedback, light, etc.) to the user via the I/O interface706(e.g., through displays, audio channels, speakers, touch-pads, etc.). Applications724include programs and/or modules that are configured to be executed by one or more processors704. For example, if the digital assistant system is implemented on a standalone user device, applications724include user applications, such as games, a calendar application, a navigation application, or an email application. If digital assistant system700is implemented on a server, applications724include resource management applications, diagnostic applications, or scheduling applications, for example. Memory702also stores digital assistant module726(or the server portion of a digital assistant). In some examples, digital assistant module726includes the following sub-modules, or a subset or superset thereof: input/output processing module728, speech-to-text (STT) processing module730, natural language processing module732, dialogue flow processing module734, task flow processing module736, service processing module738, and speech synthesis processing module740. Each of these modules has access to one or more of the following systems or data and models of the digital assistant module726, or a subset or superset thereof: ontology760, vocabulary index744, user data748, task flow models754, service models756, and ASR systems758. In some examples, using the processing modules, data, and models implemented in digital assistant module726, the digital assistant can perform at least some of the following: converting speech input into text; identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully infer the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining the task flow for fulfilling the inferred intent; and executing the task flow to fulfill the inferred intent. In some examples, as shown inFIG.7B, I/O processing module728interacts with the user through I/O devices716inFIG.7Aor with a user device (e.g., devices104,200,400, or600) through network communications interface708inFIG.7Ato obtain user input (e.g., a speech input) and to provide responses (e.g., as speech outputs) to the user input. I/O processing module728optionally obtains contextual information associated with the user input from the user device, along with or shortly after the receipt of the user input. The contextual information includes user-specific data, vocabulary, and/or preferences relevant to the user input. In some examples, the contextual information also includes software and hardware states of the user device at the time the user request is received, and/or information related to the surrounding environment of the user at the time that the user request was received. In some examples, I/O processing module728also sends follow-up questions to, and receives answers from, the user regarding the user request. 
When a user request is received by I/O processing module728and the user request includes speech input, I/O processing module728forwards the speech input to STT processing module730(or speech recognizer) for speech-to-text conversions. STT processing module730includes one or more ASR systems758. The one or more ASR systems758can process the speech input that is received through I/O processing module728to produce a recognition result. Each ASR system758includes a front-end speech pre-processor. The front-end speech pre-processor extracts representative features from the speech input. For example, the front-end speech pre-processor performs a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multi-dimensional vectors. Further, each ASR system758includes one or more speech recognition models (e.g., acoustic models and/or language models) and implements one or more speech recognition engines. Examples of speech recognition models include Hidden Markov Models, Gaussian-Mixture Models, Deep Neural Network Models, n-gram language models, and other statistical models. Examples of speech recognition engines include dynamic time warping based engines and weighted finite-state transducer (WFST) based engines. The one or more speech recognition models and the one or more speech recognition engines are used to process the representative features extracted by the front-end speech pre-processor to produce intermediate recognition results (e.g., phonemes, phonemic strings, and sub-words), and ultimately, text recognition results (e.g., words, word strings, or sequence of tokens). In some examples, the speech input is processed at least partially by a third-party service or on the user's device (e.g., device104,200,400, or600) to produce the recognition result. Once STT processing module730produces recognition results containing a text string (e.g., words, or sequence of words, or sequence of tokens), the recognition result is passed to natural language processing module732for intent deduction. In some examples, STT processing module730produces multiple candidate text representations of the speech input. Each candidate text representation is a sequence of words or tokens corresponding to the speech input. In some examples, each candidate text representation is associated with a speech recognition confidence score. Based on the speech recognition confidence scores, STT processing module730ranks the candidate text representations and provides the n-best (e.g., n highest ranked) candidate text representation(s) to natural language processing module732for intent deduction, where n is a predetermined integer greater than zero. In one example, only the highest ranked (n=1) candidate text representation is passed to natural language processing module732for intent deduction. In another example, the five highest ranked (n=5) candidate text representations are passed to natural language processing module732for intent deduction. More details on the speech-to-text processing are described in U.S. Utility application Ser. No. 13/236,942 for "Consolidating Speech Recognition Results," filed on Sep. 20, 2011, the entire disclosure of which is incorporated herein by reference. In some examples, STT processing module730includes and/or accesses a vocabulary of recognizable words via phonetic alphabet conversion module731. 
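The n-best selection described above can be illustrated with a short Python sketch in which candidate text representations are modeled as (text, confidence score) pairs and the n highest-scoring candidates are passed on for intent deduction. The data model and function name are assumptions made for this sketch only.

    # Sketch: rank candidate text representations by confidence score and
    # return the n-best candidates.
    def n_best(candidates, n=5):
        """candidates: iterable of (text, score) pairs."""
        ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
        return [text for text, _score in ranked[:n]]

    # n_best([("call mom", 0.91), ("call tom", 0.62), ("cull mom", 0.18)], n=1)
    # -> ["call mom"]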
Each vocabulary word is associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In particular, the vocabulary of recognizable words includes a word that is associated with a plurality of candidate pronunciations. For example, the vocabulary includes the word “tomato” that is associated with the candidate pronunciations of // and //. Further, vocabulary words are associated with custom candidate pronunciations that are based on previous speech inputs from the user. Such custom candidate pronunciations are stored in STT processing module730and are associated with a particular user via the user's profile on the device. In some examples, the candidate pronunciations for words are determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, the candidate pronunciations are manually generated, e.g., based on known canonical pronunciations. In some examples, the candidate pronunciations are ranked based on the commonness of the candidate pronunciation. For example, the candidate pronunciation // is ranked higher than //, because the former is a more commonly used pronunciation (e.g., among all users, for users in a particular geographical region, or for any other appropriate subset of users). In some examples, candidate pronunciations are ranked based on whether the candidate pronunciation is a custom candidate pronunciation associated with the user. For example, custom candidate pronunciations are ranked higher than canonical candidate pronunciations. This can be useful for recognizing proper nouns having a unique pronunciation that deviates from canonical pronunciation. In some examples, candidate pronunciations are associated with one or more speech characteristics, such as geographic origin, nationality, or ethnicity. For example, the candidate pronunciation // is associated with the United States, whereas the candidate pronunciation // is associated with Great Britain. Further, the rank of the candidate pronunciation is based on one or more characteristics (e.g., geographic origin, nationality, ethnicity, etc.) of the user stored in the user's profile on the device. For example, it can be determined from the user's profile that the user is associated with the United States. Based on the user being associated with the United States, the candidate pronunciation // (associated with the United States) is ranked higher than the candidate pronunciation // (associated with Great Britain). In some examples, one of the ranked candidate pronunciations is selected as a predicted pronunciation (e.g., the most likely pronunciation). When a speech input is received, STT processing module730is used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model), and then attempt to determine words that match the phonemes (e.g., using a language model). For example, if STT processing module730first identifies the sequence of phonemes // corresponding to a portion of the speech input, it can then determine, based on vocabulary index744, that this sequence corresponds to the word “tomato.” In some examples, STT processing module730uses approximate matching techniques to determine words in an utterance. Thus, for example, the STT processing module730determines that the sequence of phonemes // corresponds to the word “tomato,” even if that particular sequence of phonemes is not one of the candidate sequence of phonemes for that word. 
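The ordering of candidate pronunciations can be sketched as follows; CandidatePronunciation and rank_pronunciations are illustrative names, and the ordering criteria (custom pronunciations first, then a match with the user's profile characteristics, then commonness) follow the description above rather than any specific implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidatePronunciation:
    phonemes: str                 # pronunciation in a speech recognition phonetic alphabet
    commonness: float             # how commonly used this pronunciation is
    is_custom: bool = False       # learned from the user's previous speech inputs
    region: Optional[str] = None  # e.g., "US" or "GB"

def rank_pronunciations(candidates, user_region=None):
    """Custom pronunciations outrank canonical ones; a regional match and commonness break ties."""
    return sorted(candidates,
                  key=lambda c: (c.is_custom, c.region == user_region, c.commonness),
                  reverse=True)
```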
Natural language processing module732(“natural language processor”) of the digital assistant takes the n-best candidate text representation(s) (“word sequence(s)” or “token sequence(s)”) generated by STT processing module730, and attempts to associate each of the candidate text representations with one or more “actionable intents” recognized by the digital assistant. An “actionable intent” (or “user intent”) represents a task that can be performed by the digital assistant, and can have an associated task flow implemented in task flow models754. The associated task flow is a series of programmed actions and steps that the digital assistant takes in order to perform the task. The scope of a digital assistant's capabilities is dependent on the number and variety of task flows that have been implemented and stored in task flow models754, or in other words, on the number and variety of “actionable intents” that the digital assistant recognizes. The effectiveness of the digital assistant, however, also depends on the assistant's ability to infer the correct “actionable intent(s)” from the user request expressed in natural language. In some examples, in addition to the sequence of words or tokens obtained from STT processing module730, natural language processing module732also receives contextual information associated with the user request, e.g., from I/O processing module728. The natural language processing module732optionally uses the contextual information to clarify, supplement, and/or further define the information contained in the candidate text representations received from STT processing module730. The contextual information includes, for example, user preferences, hardware and/or software states of the user device, sensor information collected before, during, or shortly after the user request, prior interactions (e.g., dialogue) between the digital assistant and the user, and the like. As described herein, contextual information is, in some examples, dynamic, and changes with time, location, content of the dialogue, and other factors. In some examples, the natural language processing is based on, e.g., ontology760. Ontology760is a hierarchical structure containing many nodes, each node representing either an “actionable intent” or a “property” relevant to one or more of the “actionable intents” or other “properties.” As noted above, an “actionable intent” represents a task that the digital assistant is capable of performing, i.e., it is “actionable” or can be acted on. A “property” represents a parameter associated with an actionable intent or a sub-aspect of another property. A linkage between an actionable intent node and a property node in ontology760defines how a parameter represented by the property node pertains to the task represented by the actionable intent node. In some examples, ontology760is made up of actionable intent nodes and property nodes. Within ontology760, each actionable intent node is linked to one or more property nodes either directly or through one or more intermediate property nodes. Similarly, each property node is linked to one or more actionable intent nodes either directly or through one or more intermediate property nodes. For example, as shown inFIG.7C, ontology760includes a “restaurant reservation” node (i.e., an actionable intent node). Property nodes “restaurant,” “date/time” (for the reservation), and “party size” are each directly linked to the actionable intent node (i.e., the “restaurant reservation” node). 
In addition, property nodes “cuisine,” “price range,” “phone number,” and “location” are sub-nodes of the property node “restaurant,” and are each linked to the “restaurant reservation” node (i.e., the actionable intent node) through the intermediate property node “restaurant.” For another example, as shown inFIG.7C, ontology760also includes a “set reminder” node (i.e., another actionable intent node). Property nodes “date/time” (for setting the reminder) and “subject” (for the reminder) are each linked to the “set reminder” node. Since the property “date/time” is relevant to both the task of making a restaurant reservation and the task of setting a reminder, the property node “date/time” is linked to both the “restaurant reservation” node and the “set reminder” node in ontology760. An actionable intent node, along with its linked concept nodes, is described as a “domain.” In the present discussion, each domain is associated with a respective actionable intent, and refers to the group of nodes (and the relationships therebetween) associated with the particular actionable intent. For example, ontology760shown inFIG.7Cincludes an example of restaurant reservation domain762and an example of reminder domain764within ontology760. The restaurant reservation domain includes the actionable intent node “restaurant reservation,” property nodes “restaurant,” “date/time,” and “party size,” and sub-property nodes “cuisine,” “price range,” “phone number,” and “location.” Reminder domain764includes the actionable intent node “set reminder,” and property nodes “subject” and “date/time.” In some examples, ontology760is made up of many domains. Each domain shares one or more property nodes with one or more other domains. For example, the “date/time” property node is associated with many different domains (e.g., a scheduling domain, a travel reservation domain, a movie ticket domain, etc.), in addition to restaurant reservation domain762and reminder domain764. WhileFIG.7Cillustrates two example domains within ontology760, other domains include, for example, “find a movie,” “initiate a phone call,” “find directions,” “schedule a meeting,” “send a message,” “provide an answer to a question,” “read a list,” “provide navigation instructions,” “provide instructions for a task,” and so on. A “send a message” domain is associated with a “send a message” actionable intent node, and further includes property nodes such as “recipient(s),” “message type,” and “message body.” The property node “recipient” is further defined, for example, by sub-property nodes such as “recipient name” and “message address.” In some examples, ontology760includes all the domains (and hence actionable intents) that the digital assistant is capable of understanding and acting upon. In some examples, ontology760is modified, such as by adding or removing entire domains or nodes, or by modifying relationships between the nodes within the ontology760. In some examples, nodes associated with multiple related actionable intents are clustered under a “super domain” in ontology760. For example, a “travel” super-domain includes a cluster of property nodes and actionable intent nodes related to travel. The actionable intent nodes related to travel include “airline reservation,” “hotel reservation,” “car rental,” “get directions,” “find points of interest,” and so on. The actionable intent nodes under the same super domain (e.g., the “travel” super domain) have many property nodes in common. 
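As a rough illustration of this node structure, the following Python sketch builds the two domains of FIG.7C from actionable intent and property nodes; the Node class and the module-level names are hypothetical, and details such as super domains are omitted. Note how a single property node ("date/time") is shared by more than one domain, which is the same mechanism by which related actionable intents under a super domain hold property nodes in common.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    is_actionable_intent: bool = False
    children: list = field(default_factory=list)  # linked property (or sub-property) nodes

# Property nodes; "date/time" is shared by both domains, as in FIG. 7C.
date_time = Node("date/time")
restaurant = Node("restaurant", children=[Node("cuisine"), Node("price range"),
                                          Node("phone number"), Node("location")])

restaurant_reservation = Node("restaurant reservation", is_actionable_intent=True,
                              children=[restaurant, date_time, Node("party size")])
set_reminder = Node("set reminder", is_actionable_intent=True,
                    children=[date_time, Node("subject")])

ontology = [restaurant_reservation, set_reminder]  # each actionable intent node anchors a domain
```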
For example, the actionable intent nodes for “airline reservation,” “hotel reservation,” “car rental,” “get directions,” and “find points of interest” share one or more of the property nodes “start location,” “destination,” “departure date/time,” “arrival date/time,” and “party size.” In some examples, each node in ontology760is associated with a set of words and/or phrases that are relevant to the property or actionable intent represented by the node. The respective set of words and/or phrases associated with each node is the so-called “vocabulary” associated with the node, and is stored in vocabulary index744in association with the property or actionable intent represented by the node. For example, returning toFIG.7B, the vocabulary associated with the node for the property of “restaurant” includes words such as “food,” “drinks,” “cuisine,” “hungry,” “eat,” “pizza,” “fast food,” “meal,” and so on. For another example, the vocabulary associated with the node for the actionable intent of “initiate a phone call” includes words and phrases such as “call,” “phone,” “dial,” “ring,” “call this number,” “make a call to,” and so on. The vocabulary index744optionally includes words and phrases in different languages. Natural language processing module732receives the candidate text representations (e.g., text string(s) or token sequence(s)) from STT processing module730, and for each candidate representation, determines what nodes are implicated by the words in the candidate text representation. In some examples, if a word or phrase in the candidate text representation is found to be associated with one or more nodes in ontology760(via vocabulary index744), the word or phrase “triggers” or “activates” those nodes. Based on the quantity and/or relative importance of the activated nodes, natural language processing module732selects one of the actionable intents as the task that the user intended the digital assistant to perform. In some examples, the domain that has the most “triggered” nodes is selected. In some examples, the domain having the highest confidence value (e.g., based on the relative importance of its various triggered nodes) is selected. In some examples, the domain is selected based on a combination of the number and the importance of the triggered nodes. In some examples, additional factors are considered in selecting the node as well, such as whether the digital assistant has previously correctly interpreted a similar request from a user. User data748includes user-specific information, such as user-specific vocabulary, user preferences, user address, user's default and secondary languages, user's contact list, and other short-term or long-term information for each user. In some examples, natural language processing module732uses the user-specific information to supplement the information contained in the user input to further define the user intent. For example, for a user request “invite my friends to my birthday party,” natural language processing module732is able to access user data748to determine who the “friends” are and when and where the “birthday party” would be held, rather than requiring the user to provide such information explicitly in his/her request. It should be recognized that in some examples, natural language processing module732is implemented using one or more machine learning mechanisms (e.g., neural networks). 
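A minimal sketch of the vocabulary-driven node triggering and domain selection described above; the VOCAB_INDEX and NODE_TO_DOMAIN tables and the additive scoring are illustrative assumptions, not the actual contents of vocabulary index744. In practice, this kind of rule-based scoring can be replaced or supplemented by the machine learning mechanisms just noted.

```python
# Hypothetical slice of a vocabulary index: word/phrase -> (triggered node, relative importance).
VOCAB_INDEX = {
    "pizza": [("restaurant", 1.0)],
    "hungry": [("restaurant", 1.0)],
    "call": [("initiate a phone call", 2.0)],
    "remind": [("set reminder", 2.0)],
}
NODE_TO_DOMAIN = {
    "restaurant": "restaurant reservation",
    "initiate a phone call": "initiate a phone call",
    "set reminder": "set reminder",
}

def select_domain(tokens):
    """Accumulate a score per domain from triggered nodes; pick the highest-scoring domain."""
    scores = {}
    for token in tokens:
        for node, importance in VOCAB_INDEX.get(token.lower(), []):
            domain = NODE_TO_DOMAIN[node]
            scores[domain] = scores.get(domain, 0.0) + importance
    return max(scores, key=scores.get) if scores else None

print(select_domain("I am hungry for pizza".split()))  # -> "restaurant reservation"
```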
In particular, the one or more machine learning mechanisms are configured to receive a candidate text representation and contextual information associated with the candidate text representation. Based on the candidate text representation and the associated contextual information, the one or more machine learning mechanisms are configured to determine intent confidence scores over a set of candidate actionable intents. Natural language processing module732can select one or more candidate actionable intents from the set of candidate actionable intents based on the determined intent confidence scores. In some examples, an ontology (e.g., ontology760) is also used to select the one or more candidate actionable intents from the set of candidate actionable intents. Other details of searching an ontology based on a token string are described in U.S. Utility application Ser. No. 12/341,743 for “Method and Apparatus for Searching Using An Active Ontology,” filed Dec. 22, 2008, the entire disclosure of which is incorporated herein by reference. In some examples, once natural language processing module732identifies an actionable intent (or domain) based on the user request, natural language processing module732generates a structured query to represent the identified actionable intent. In some examples, the structured query includes parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user says “Make me a dinner reservation at a sushi place at 7.” In this case, natural language processing module732is able to correctly identify the actionable intent to be “restaurant reservation” based on the user input. According to the ontology, a structured query for a “restaurant reservation” domain includes parameters such as {Cuisine}, {Time}, {Date}, {Party Size}, and the like. In some examples, based on the speech input and the text derived from the speech input using STT processing module730, natural language processing module732generates a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {Cuisine=“Sushi”} and {Time=“7 pm”}. However, in this example, the user's utterance contains insufficient information to complete the structured query associated with the domain. Therefore, other necessary parameters such as {Party Size} and {Date} are not specified in the structured query based on the information currently available. In some examples, natural language processing module732populates some parameters of the structured query with received contextual information. For example, if the user requested a sushi restaurant “near me,” natural language processing module732populates a {location} parameter in the structured query with GPS coordinates from the user device. In some examples, natural language processing module732identifies multiple candidate actionable intents for each candidate text representation received from STT processing module730. Further, in some examples, a respective structured query (partial or complete) is generated for each identified candidate actionable intent. Natural language processing module732determines an intent confidence score for each candidate actionable intent and ranks the candidate actionable intents based on the intent confidence scores. 
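The partial structured query for the sushi example can be sketched as follows; the keyword and regular-expression checks here are deliberately naive stand-ins for the natural language processing described above, and build_partial_query is a hypothetical name.

```python
import re

def build_partial_query(text, context=None):
    """Build a partial structured query for the "restaurant reservation" domain.

    Only parameters actually specified in the utterance (or supplied by context,
    e.g., GPS coordinates for "near me") are populated; {Party Size} and {Date}
    stay unset until the user provides them.
    """
    query = {"domain": "restaurant reservation"}
    lowered = text.lower()
    if "sushi" in lowered:
        query["Cuisine"] = "Sushi"
    match = re.search(r"\bat (\d{1,2})\b", lowered)
    if match:
        query["Time"] = f"{match.group(1)} pm"
    if context and "gps" in context and "near me" in lowered:
        query["location"] = context["gps"]
    return query

print(build_partial_query("Make me a dinner reservation at a sushi place at 7"))
# -> {'domain': 'restaurant reservation', 'Cuisine': 'Sushi', 'Time': '7 pm'}
```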
In some examples, natural language processing module732passes the generated structured query (or queries), including any completed parameters, to task flow processing module736(“task flow processor”). In some examples, the structured query (or queries) for the m-best (e.g., m highest ranked) candidate actionable intents are provided to task flow processing module736, where m is a predetermined integer greater than zero. In some examples, the structured query (or queries) for the m-best candidate actionable intents are provided to task flow processing module736with the corresponding candidate text representation(s). Other details of inferring a user intent based on multiple candidate actionable intents determined from multiple candidate text representations of a speech input are described in U.S. Utility application Ser. No. 14/298,725 for “System and Method for Inferring User Intent From Speech Inputs,” filed Jun. 6, 2014, the entire disclosure of which is incorporated herein by reference. Task flow processing module736is configured to receive the structured query (or queries) from natural language processing module732, complete the structured query, if necessary, and perform the actions required to “complete” the user's ultimate request. In some examples, the various procedures necessary to complete these tasks are provided in task flow models754. In some examples, task flow models754include procedures for obtaining additional information from the user and task flows for performing actions associated with the actionable intent. As described above, in order to complete a structured query, task flow processing module736may need to initiate additional dialogue with the user in order to obtain additional information, and/or disambiguate potentially ambiguous utterances. When such interactions are necessary, task flow processing module736invokes dialogue flow processing module734to engage in a dialogue with the user. In some examples, dialogue flow processing module734determines how (and/or when) to ask the user for the additional information and receives and processes the user responses. The questions are provided to and answers are received from the user through I/O processing module728. In some examples, dialogue flow processing module734presents dialogue output to the user via audio and/or visual output, and receives input from the user via spoken or physical (e.g., clicking) responses. Continuing with the example above, when task flow processing module736invokes dialogue flow processing module734to determine the “party size” and “date” information for the structured query associated with the domain “restaurant reservation,” dialogue flow processing module734generates questions such as “For how many people?” and “On which day?” to pass to the user. Once answers are received from the user, dialogue flow processing module734then populates the structured query with the missing information, or passes the information to task flow processing module736to complete the missing information from the structured query. Once task flow processing module736has completed the structured query for an actionable intent, task flow processing module736proceeds to perform the ultimate task associated with the actionable intent. Accordingly, task flow processing module736executes the steps and instructions in the task flow model according to the specific parameters contained in the structured query. 
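One plausible sketch, under stated assumptions, of how the dialogue flow could elicit the missing parameters before handing the completed structured query back for execution; the REQUIRED_PARAMS table, the prompt strings, and the function names are illustrative, not the actual dialogue flow models.

```python
REQUIRED_PARAMS = {"restaurant reservation": ["Cuisine", "Time", "Date", "Party Size"]}
PROMPTS = {"Party Size": "For how many people?", "Date": "On which day?"}

def next_question(query):
    """Return (parameter, question) for the first required parameter still missing, else None."""
    for param in REQUIRED_PARAMS.get(query["domain"], []):
        if param not in query:
            return param, PROMPTS.get(param, f"What is the {param}?")
    return None  # structured query is complete; hand it back for task flow execution

def incorporate_answer(query, param, answer):
    """Populate the structured query with the user's answer."""
    query[param] = answer
    return query

query = {"domain": "restaurant reservation", "Cuisine": "Sushi", "Time": "7 pm"}
print(next_question(query))  # -> ('Date', 'On which day?')
```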
For example, the task flow model for the actionable intent of “restaurant reservation” includes steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party size at a particular time. For example, using a structured query such as: {restaurant reservation, restaurant=ABC Café, date=3/12/2012, time=7 pm, party size=5}, task flow processing module736performs the steps of: (1) logging onto a server of the ABC Café or a restaurant reservation system such as OPENTABLE®, (2) entering the date, time, and party size information in a form on the website, (3) submitting the form, and (4) making a calendar entry for the reservation in the user's calendar. In some examples, task flow processing module736employs the assistance of service processing module738(“service processing module”) to complete a task requested in the user input or to provide an informational answer requested in the user input. For example, service processing module738acts on behalf of task flow processing module736to make a phone call, set a calendar entry, invoke a map search, invoke or interact with other user applications installed on the user device, and invoke or interact with third-party services (e.g., a restaurant reservation portal, a social networking website, a banking portal, etc.). In some examples, the protocols and application programming interfaces (APIs) required by each service are specified by a respective service model among service models756. Service processing module738accesses the appropriate service model for a service and generates requests for the service in accordance with the protocols and APIs required by the service according to the service model. For example, if a restaurant has enabled an online reservation service, the restaurant submits a service model specifying the necessary parameters for making a reservation and the APIs for communicating the values of the necessary parameters to the online reservation service. When requested by task flow processing module736, service processing module738establishes a network connection with the online reservation service using the web address stored in the service model, and sends the necessary parameters of the reservation (e.g., time, date, party size) to the online reservation interface in a format according to the API of the online reservation service. In some examples, natural language processing module732, dialogue flow processing module734, and task flow processing module736are used collectively and iteratively to infer and define the user's intent, obtain information to further clarify and refine the user intent, and finally generate a response (i.e., an output to the user, or the completion of a task) to fulfill the user's intent. The generated response is a dialogue response to the speech input that at least partially fulfills the user's intent. Further, in some examples, the generated response is output as a speech output. In these examples, the generated response is sent to speech synthesis processing module740(e.g., speech synthesizer) where it can be processed to synthesize the dialogue response in speech form. In yet other examples, the generated response is data content relevant to satisfying a user request in the speech input. 
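A minimal sketch of how reservation parameters could be mapped onto a service's API according to a service model; the endpoint URL, field names, and JSON format below are assumptions for illustration and do not describe any real reservation service.

```python
import json
from urllib import request

def submit_reservation(service_model, params):
    """Map task parameters to the field names a service expects and POST them.

    `service_model` is assumed to carry the service's endpoint URL and a mapping from
    task parameter names to API field names (both illustrative).
    """
    payload = {service_model["fields"][name]: value
               for name, value in params.items() if name in service_model["fields"]}
    req = request.Request(service_model["endpoint"],
                          data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as response:  # network call per the service's API
        return json.load(response)

abc_cafe_model = {"endpoint": "https://reservations.example.com/book",
                  "fields": {"date": "date", "time": "time", "party size": "covers"}}
# submit_reservation(abc_cafe_model, {"date": "3/12/2012", "time": "7 pm", "party size": 5})
```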
In examples where task flow processing module736receives multiple structured queries from natural language processing module732, task flow processing module736initially processes the first structured query of the received structured queries to attempt to complete the first structured query and/or execute one or more tasks or actions represented by the first structured query. In some examples, the first structured query corresponds to the highest ranked actionable intent. In other examples, the first structured query is selected from the received structured queries based on a combination of the corresponding speech recognition confidence scores and the corresponding intent confidence scores. In some examples, if task flow processing module736encounters an error during processing of the first structured query (e.g., due to an inability to determine a necessary parameter), task flow processing module736can proceed to select and process a second structured query of the received structured queries that corresponds to a lower ranked actionable intent. The second structured query is selected, for example, based on the speech recognition confidence score of the corresponding candidate text representation, the intent confidence score of the corresponding candidate actionable intent, a missing necessary parameter in the first structured query, or any combination thereof. Speech synthesis processing module740is configured to synthesize speech outputs for presentation to the user. Speech synthesis processing module740synthesizes speech outputs based on text provided by the digital assistant. For example, the generated dialogue response is in the form of a text string. Speech synthesis processing module740converts the text string to an audible speech output. Speech synthesis processing module740uses any appropriate speech synthesis technique in order to generate speech outputs from text, including, but not limited to, concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, hidden Markov model (HMM) based synthesis, and sinewave synthesis. In some examples, speech synthesis processing module740is configured to synthesize individual words based on phonemic strings corresponding to the words. For example, a phonemic string is associated with a word in the generated dialogue response. The phonemic string is stored in metadata associated with the word. Speech synthesis processing module740is configured to directly process the phonemic string in the metadata to synthesize the word in speech form. In some examples, instead of (or in addition to) using speech synthesis processing module740, speech synthesis is performed on a remote device (e.g., the server system108), and the synthesized speech is sent to the user device for output to the user. For example, this can occur in some implementations where outputs for a digital assistant are generated at a server system. Because server systems generally have more processing power or resources than a user device, it is possible to obtain higher quality speech outputs than would be practical with client-side synthesis. Additional details on digital assistants can be found in the U.S. Utility application Ser. No. 12/987,982, entitled “Intelligent Automated Assistant,” filed Jan. 10, 2011, and U.S. Utility application Ser. No. 13/251,088, entitled “Generating and Processing Task Items That Represent Tasks to Perform,” filed Sep. 
30, 2011, the entire disclosures of which are incorporated herein by reference. With reference back toFIG.7A, digital assistant module726further includes audio processing module770and latency management module780. Audio processing module770is configured to analyze a stream of audio received by digital assistant system700(e.g., at I/O processing module728and via microphone213). In some examples, audio processing module770is configured to analyze the stream of audio to identify which portions contain user speech and which portions do not contain user speech. For example, audio processing module770divides the stream of audio into a sequence of overlapping audio frames. Each audio frame has a predetermined duration (e.g., 10 ms). Audio processing module770analyzes the audio features of each audio frame (e.g., using audio and/or speech models) to determine whether or not each audio frame contains user speech. The analyzed audio features can include time domain and/or frequency domain features. Time domain features include, for example, zero-crossing rates, short-time energy, spectral energy, spectral flatness, autocorrelation, or the like. Frequency domain features include, for example, mel-frequency cepstral coefficients, linear predictive cepstral coefficients, mel-frequency discrete wavelet coefficients, or the like. In some examples, audio processing module770provides audio frame information indicating which audio frames of the stream of audio contain user speech and which audio frames of the stream of audio do not contain user speech to other components of digital assistant module726. In some examples, latency management module780receives the audio frame information from audio processing module770. Latency management module780uses this information to control the timing of various digital assistant processes to reduce latency. For example, latency management module780uses the audio frame information to detect pauses or interruptions in user speech in the stream of audio. In addition, the duration of each pause or interruption can be determined. In some examples, latency management module780applies one or more predetermined rules to determine whether a first portion of the stream of audio satisfies a predetermined condition. In some examples, the predetermined condition includes the condition of detecting, in the first portion of the stream of audio, an absence of user speech (e.g., a pause) for longer than a first predetermined duration (e.g., 50 ms, 75 ms, or 100 ms). In response to determining that the first portion of the stream of audio satisfies a predetermined condition, latency management module780initiates performance of natural language processing (e.g., at natural language processing module732), task flow processing (e.g., at task flow processing module736or836), and/or speech synthesis (e.g., at speech synthesis processing module740) based on the user utterance contained in the first portion of the stream of audio. In some examples, latency management module780initiates performance of these processes while causing the digital assistant system to continue receiving a second portion of the stream of audio. In some examples, latency management module780is configured to detect a speech end-point condition. Specifically, after determining that the first portion of the stream of audio satisfies a predetermined condition, latency management module780determines whether a speech end-point condition is detected. 
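The frame-level analysis and pause thresholds described above can be sketched as follows; the non-overlapping framing, the feature thresholds, and the 100 ms and 700 ms values are simplifications taken from the example ranges in the surrounding text (the longer threshold corresponds to the second predetermined duration discussed below), and the function names are hypothetical.

```python
import numpy as np

def frame_has_speech(frame, energy_threshold=1e-3, zcr_threshold=0.25):
    """Crude speech/non-speech decision from short-time energy and zero-crossing rate."""
    frame = frame.astype(np.float64)
    energy = float(np.mean(frame ** 2))
    signs = np.sign(frame)
    zcr = float(np.mean(signs[1:] != signs[:-1]))
    return energy > energy_threshold and zcr < zcr_threshold

def speech_flags(samples, sample_rate=16000, frame_ms=10):
    """Split a 1-D NumPy array of audio samples into fixed frames; flag which contain speech."""
    hop = int(sample_rate * frame_ms / 1000)
    return [frame_has_speech(samples[i:i + hop])
            for i in range(0, len(samples) - hop + 1, hop)]

def trailing_pause_ms(flags, frame_ms=10):
    """Length of the run of non-speech frames at the end of the analyzed audio."""
    pause = 0
    for has_speech in reversed(flags):
        if has_speech:
            break
        pause += frame_ms
    return pause

def condition_satisfied(flags, first_predetermined_ms=100, frame_ms=10):
    """Short pause: enough to start natural language / task flow processing speculatively."""
    return trailing_pause_ms(flags, frame_ms) >= first_predetermined_ms

def endpoint_detected(flags, button_pressed=False, second_predetermined_ms=700, frame_ms=10):
    """Speech end-point: a longer pause, or a predetermined non-speech input such as a button press."""
    return button_pressed or trailing_pause_ms(flags, frame_ms) >= second_predetermined_ms
```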
In some examples, latency management module780uses the audio frame information to determine whether a speech end-point condition is detected. For example, detecting the speech end-point condition can include detecting, in the second portion of the stream of audio, an absence of user speech for greater than a second predetermined duration (e.g., 600 ms, 700 ms, or 800 ms). The second predetermined duration is longer than the first predetermined duration. In some examples, detecting the speech end-point condition includes detecting a predetermined type of non-speech input from the user. For example, the predetermined type of non-speech input can be a user selection of a button (e.g., “home” or menu button304) of the electronic device or an affordance displayed on the touch screen (e.g., touch screen212) of the electronic device. In response to determining that a speech end-point condition is detected, latency management module780causes results generated by task flow processing module736or836, dialogue flow processing module734, and/or speech synthesis processing module740to be presented to the user. In some examples, the results include spoken dialogue. In some examples, latency management module780prevents the generated results from being presented to the user prior to determining that a speech end-point condition is detected. In some examples, latency management module780determines that a speech end-point condition is not detected. Instead, latency management module780detects additional speech in the second portion of the stream of audio. Specifically, for example, the additional speech is a continuation of the utterance in the first portion of the stream of audio. In these examples, latency management module780re-initiates performance of natural language processing, task flow processing, and/or speech synthesis based on the user utterance across the first and second portions of the stream of audio. The latency reducing functions of latency management module780are described in greater detail below with reference toFIGS.9and10. FIG.8is a block diagram illustrating a portion of digital assistant module800, according to various examples. In particular,FIG.8depicts certain components of digital assistant module800that can enable robust operation of a digital assistant, according to various examples. More specifically, the components depicted in digital assistant module800can function to evaluate multiple candidate task flows corresponding to a user utterance and improve the robustness and reliability of task flow processing. For simplicity, only a portion of digital assistant module800is depicted. It should be recognized that digital assistant module800can include additional components. For example, digital assistant module800can be similar or substantially identical to digital assistant module726and can reside in memory702of digital assistant system700. As shown inFIG.8, STT processing module830receives a user utterance (e.g., via I/O processing module728). STT processing module830is similar or substantially identical to STT processing module730. In an illustrative example, the received user utterance is “Directions to Fidelity Investments.” STT processing module830performs speech recognition on the user utterance to determine a plurality of candidate text representations. Each candidate text representation of the plurality of candidate text representations corresponds to the user utterance. 
STT processing module830further determines an associated speech recognition confidence score for each candidate text representation. In some examples, the determined plurality of candidate text representations are the n-best candidate text representations having the n-highest speech recognition confidence scores. In the present example, STT processing module830determines three candidate text representations for the user utterance, which include “Directions to Fidelity Investments,” “Directions to deli restaurants,” and “Directions to Italian restaurants.” The candidate text representation “Directions to Fidelity Investments” can have the highest speech recognition confidence score and the candidate text representations “Directions to deli restaurants” and “Directions to Italian restaurants” can have lower speech recognition confidence scores. STT processing module830provides the three candidate text representations and the associated speech recognition confidence scores to natural language processing module832. Natural language processing module832can be similar or substantially identical to natural language processing module732. Based on the three candidate text representations, natural language processing module832determines corresponding candidate user intents. Each candidate user intent is determined from a respective candidate text representation. Natural language processing module832further determines an associated intent confidence score for each candidate user intent. Determining a candidate user intent from a respective candidate text representation includes, for example, parsing the respective candidate text representation to determine a candidate domain and candidate parse interpretations for the candidate text representation. The determined candidate user intent can be represented in the form of a structured query based on the determined candidate domain and parse interpretations. For instance, in the present example, natural language processing module832parses the candidate text interpretation “Directions to Fidelity Investments” to determine that a candidate domain is “get directions.” In addition, natural language processing module832recognizes that “Fidelity Investments” is an entry in the user's contact list and thus interprets it as a person/entity of the contact list (contacts=“Fidelity Investments”). Thus, a first candidate user intent determined for the candidate text interpretation “Directions to Fidelity Investments” can be represented by the structured query {Get directions, location=search(contacts=“Fidelity Investments”)}. In some examples, natural language processing module832can also interpret “Fidelity Investments” as a business. Thus, a second candidate user intent determined for the candidate text interpretation “Directions to Fidelity Investments” can be represented by the structured query {Get directions, location=search(business=“Fidelity Investments”)}. The candidate text representations “Directions to deli restaurants” and “Directions to Italian restaurants” can similarly be parsed by natural language processing module832to determine respective candidate user intents. Specifically, natural language processing module832can interpret “deli” and “Italian” as types of cuisine associated with restaurants. 
Thus, candidate user intents determined for these candidate text interpretations can be represented by the structured queries {Get directions, location=search(restaurant, cuisine=“deli”)} and {Get directions, location=search(restaurant, cuisine=“Italian”)}, respectively. Therefore, in the present example, natural language processing module832determines four candidate user intents from the three candidate text representations. The four candidate user intents are represented by the following respective structured queries:
1. {Get directions, location=search(contacts=“Fidelity Investments”)}
2. {Get directions, location=search(business=“Fidelity Investments”)}
3. {Get directions, location=search(restaurant, cuisine=“deli”)}
4. {Get directions, location=search(restaurant, cuisine=“Italian”)}
Each of the four candidate user intents has an associated intent confidence score. In this example, the four candidate user intents are arranged in decreasing order of intent confidence scores, with the first candidate user intent having the highest intent confidence score and the fourth candidate user intent having the lowest intent confidence score. Although in the present example, each of the determined candidate user intents has the same inferred domain (“get directions”), it should be recognized that, in other examples, the determined candidate user intents can include a plurality of different inferred domains. Natural language processing module832provides the four candidate user intents to task flow processing module836. For example, the structured queries for the four candidate user intents are provided to task flow processing module836. In addition, the associated speech recognition confidence scores and intent confidence scores are provided to task flow processing module836. In some examples, task flow processing module836is similar or substantially identical to task flow processing module736. In some examples, task flow manager838of task flow processing module836initially only selects one of the four candidate user intents for processing. If task flow manager838determines that the initially selected candidate user intent cannot be successfully processed through task flow processing module836, task flow manager838selects another candidate user intent for processing. For example, the first candidate user intent having the highest intent confidence score is initially selected. Specifically, in the present example, the first candidate user intent represented by the structured query {Get directions, location=search(contacts=“Fidelity Investments”)} is selected by task flow manager838. Task flow manager838maps the first candidate user intent to a corresponding first candidate task flow (e.g., first candidate task flow842). Notably, the structured query for the first candidate user intent is incomplete because it does not contain any value for the “location” property. As a result, the “location” task parameter in the corresponding first candidate task flow can be missing a required value for performing the task of getting directions to a location corresponding to “Fidelity Investments.” In this example, the first candidate task flow includes procedures for resolving the “location” task parameter. Specifically, the first candidate task flow includes procedures for searching the “Fidelity Investments” entry of the user's contact list to obtain a corresponding value (e.g., address value) for the “location” task parameter. 
In some examples, task flow manager838determines a corresponding first task flow score for the first candidate task flow. The first task flow score can represent the likelihood that the corresponding candidate task flow is the correct candidate task flow to perform given the user utterance. In some examples, the first task flow score is based on a first flow parameter score. The first flow parameter score can represent a confidence of resolving one or more flow parameters for the first candidate task flow. In some examples, the first task flow score is based on any combination of a speech recognition confidence score for the corresponding candidate text representation “Directions to Fidelity Investments,” an intent confidence score for the first candidate user intent, and a first task parameter score. Task flow resolver840determines the first flow parameter score by attempting to resolve the missing “location” flow parameter for the first candidate task flow. In the present example, task flow resolver840attempts to search the “Fidelity Investments” entry of the user's contact list to obtain a value (e.g., address value) for the “location” flow parameter. If task flow resolver840successfully resolves the “location” flow parameter by obtaining a value from the “Fidelity Investments” entry of the user's contact list, task flow resolver840can determine a higher value for the first flow parameter score. However, if task flow resolver840is unable to successfully resolve the “location” flow parameter (e.g., no address value is found in the “Fidelity Investments” entry of the user's contact list), task flow resolver840can determine a lower value for the first flow parameter score. Thus, the first flow parameter score can be indicative of whether one or more flow parameters of the first candidate task flow (e.g., the “location” parameter) can be successfully resolved. In some examples, task flow resolver840can utilize context data to attempt to resolve missing flow parameters. In some examples, task flow manager838determines whether the first task flow score satisfies a predetermined criterion (e.g., greater than a predetermined threshold level). In the example where task flow resolver840successfully resolves the “location” flow parameter, the first flow parameter score can be sufficiently high to enable the first task flow score to satisfy the predetermined criterion. In this example, task flow manager838determines that the first task flow score satisfies the predetermined criterion and in response, task flow manager838executes the corresponding first candidate task flow without processing the remaining three candidate user intents. In the present example, executing the first candidate task flow can include searching for directions to the address obtained from the “Fidelity Investments” entry of the user's contact list and displaying the directions to the user on the electronic device. In some examples, executing the first candidate task flow further includes generating a dialogue text that is responsive to the user utterance and outputting a spoken representation of the dialogue text. For example, the outputted spoken representation of the dialogue text can be “OK, here are directions to Fidelity Investments.” In the alternative example where task flow resolver840is unable to successfully resolve the “location” flow parameter, the first flow parameter score can be sufficiently low to result in the first task flow score not satisfying the predetermined criterion. 
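One plausible way to combine the three confidences into a task flow score, and to derive a flow parameter score from an attempt to resolve the missing “location” parameter from the contact list, is sketched below; the weighted sum, the weights, and the function names are assumptions made for illustration. The usage lines reproduce the alternative example in which no address is stored for the contact entry, so the flow parameter score (and hence the task flow score) comes out low.

```python
def task_flow_score(speech_conf, intent_conf, flow_param_score, weights=(0.3, 0.3, 0.4)):
    """Weighted combination of the three confidences into a single task flow score."""
    w_speech, w_intent, w_param = weights
    return w_speech * speech_conf + w_intent * intent_conf + w_param * flow_param_score

def resolve_location_from_contacts(contacts, entry_name):
    """Return (flow parameter score, resolved value): high if the entry yields an address."""
    address = contacts.get(entry_name, {}).get("address")
    return (1.0, address) if address else (0.1, None)

contacts = {"Fidelity Investments": {"phone": "555-0100"}}  # no address stored
param_score, location = resolve_location_from_contacts(contacts, "Fidelity Investments")
print(task_flow_score(0.9, 0.8, param_score))  # low flow parameter score drags the total down
```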
In this example, task flow manager838forgoes executing the corresponding first candidate task flow and proceeds to select a second candidate user intent for processing. For example, the second candidate user intent having the second highest intent confidence score can be selected. In the present example, the selected second candidate user intent is represented by the second structured query {Get directions, location=search(business=“Fidelity Investments”)}. Task flow manager838maps the second candidate user intent to a corresponding second candidate task flow (e.g., second candidate task flow844). The second candidate task flow is processed in a similar manner as the first candidate task flow. Specifically, task flow resolver840determines a second flow parameter score for the second candidate task flow by attempting to resolve one or more missing task parameters for the second candidate task flow. For example, task flow resolver840attempts to search one or more “business” data sources (e.g., a business directory) to obtain an address value corresponding to the business “Fidelity Investments.” The determined second flow parameter score is based on whether task flow resolver840can successfully resolve the “location” flow parameter by searching the “business” data source. Based on the second flow parameter score, task flow manager838determines a second task flow score for the second candidate task flow. In some examples, the second task flow score is further based on the speech recognition confidence score for the corresponding candidate text representation “Directions to Fidelity Investments” and/or the intent confidence score for the second candidate user intent. If task flow manager838determines that the second task flow score satisfies the predetermined criterion, then the second candidate task flow is executed. However, if task flow manager838determines that the second task flow score does not satisfy the predetermined criterion, then task flow manager838forgoes executing the second candidate task flow and continues to evaluate the third candidate user intent (and if necessary, the fourth candidate user intent) until a corresponding candidate task flow is determined to have an associated task flow score that satisfies the predetermined criterion. Although in the examples described above, task flow processing module836evaluates the candidate user intents serially to determine a candidate task flow having an associated task flow score that satisfies the predetermined criterion, it should be recognized that, in other examples, task flow processing module836can evaluate the candidate user intents in parallel. In these examples, task flow manager838maps each of the four candidate user intents to four respective candidate task flows. Task flow manager838then determines a respective task flow score for each candidate task flow. As discussed above, task flow processing module836determines each task flow score based on the speech recognition confidence score for the respective candidate text representation, the intent confidence score for the respective candidate user intent, the respective task parameter score, or any combination thereof. Task flow processing module836determines each respective task parameter score by attempting to resolve one or more missing task parameters for the respective candidate task flow. Based on the determined task flow scores for the four candidate task flows, task flow manager838ranks the four candidate task flows and selects the highest ranking candidate task flow. 
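The two evaluation strategies just described can be contrasted in a short sketch; score_fn and execute_fn stand in for the scoring and execution machinery above, and the 0.6 threshold is an arbitrary placeholder for the predetermined criterion.

```python
def evaluate_serially(candidates, score_fn, execute_fn, threshold=0.6):
    """Process candidates in decreasing intent-confidence order; stop at the first good one."""
    for candidate in candidates:
        if score_fn(candidate) >= threshold:   # predetermined criterion
            return execute_fn(candidate)
    return None                                # no candidate task flow qualified

def evaluate_in_parallel(candidates, score_fn, execute_fn):
    """Score every candidate task flow, rank them, and execute the highest ranking one."""
    if not candidates:
        return None
    best = max(candidates, key=score_fn)
    return execute_fn(best)
```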
Task flow manager838then executes the selected candidate task flow. In some examples, the highest ranking candidate task flow is the candidate task flow with the highest task flow score. In the present example, the highest ranking candidate task flow can correspond to the second candidate user intent represented by the structured query {Get directions, location=search(business=“Fidelity Investments”)}. Thus, in the present example, task flow manager838can execute the second candidate task flow, which can include obtaining directions to a “Fidelity Investments” address obtained by searching one or more business data sources and presenting the directions to the user (e.g., by displaying a map with the directions). It should be appreciated that, in some examples, the selected highest ranking candidate task flow need not correspond to the candidate user intent with the highest intent confidence score. For instance, in the present example, the selected second candidate task flow corresponds to the second candidate user intent (e.g., represented by the structured query {Get directions, location=search(business=“Fidelity Investments”)}), which does not have the highest intent confidence score. Further, in some examples, the selected highest ranking candidate task flow need not correspond to the candidate text representation with the highest speech recognition confidence score. Because the task flow score can be based on a combination of speech recognition confidence scores, intent confidence scores, and flow parameter scores, using the task flow scores to select a suitable candidate task flow can enable selection of a candidate task flow that jointly optimizes speech recognition, natural language processing, and task flow processing. As a result, the selected candidate task flow can be more likely to coincide with the user's actual desired goal for providing the user utterance and less likely to fail (e.g., cause a fatal error) during execution. FIGS.9and10are timelines900and1000illustrating the timing for low-latency operation of a digital assistant, according to various examples. In some examples, the timing for low-latency operation of a digital assistant is controlled using a latency management module (e.g., latency management module780) of a digital assistant module (e.g., digital assistant module726).FIGS.9and10are described with reference to digital assistant system700ofFIGS.7A and7B. As shown inFIG.9, digital assistant system700begins receiving stream of audio902at first time904. For example, digital assistant system700begins receiving stream of audio902at first time904in response to receiving user input that invokes digital assistant system700. In this example, stream of audio902is continuously received from first time904to third time910. Specifically, a first portion of stream of audio902is received from first time904to second time908and a second portion of stream of audio902is received from second time908to third time910. As shown, the first portion of stream of audio902includes user utterance903. In some examples, digital assistant system700performs speech recognition as stream of audio902is being received. For example, latency management module780causes STT processing module730to begin performing speech recognition in real-time as stream of audio902is being received. STT processing module730determines one or more first candidate text representations for user utterance903. 
Latency management module780determines whether the first portion of stream of audio902satisfies a predetermined condition. For example, the predetermined condition can include the condition of detecting an absence of user speech in the first portion of stream of audio902for longer than a first predetermined duration (e.g., 50 ms, 75 ms, or 100 ms). It should be appreciated that, in other examples, the predetermined condition can include other conditions associated with the first portion of stream of audio902. In the present example, as shown inFIG.9, the first portion of stream of audio902contains an absence of user speech between first intermediate time906and second time908. If latency management module780determines that this absence of user speech between first intermediate time906and second time908satisfies the predetermined condition (e.g., duration912is longer than the first predetermined duration), latency management module780causes the relevant components of digital assistant system700to initiate a sequence of processes that include natural language processing, task flow processing, dialogue flow processing, speech synthesis, or any combination thereof. Specifically, in the present example, in response to determining that the first portion of stream of audio902satisfies the predetermined condition, latency management module780causes natural language processing module732to begin performing, at second time908, natural language processing on the one or more first candidate text representations. This can be advantageous because natural language processing, task flow processing, dialogue flow processing, or speech synthesis can be at least partially completed between second time908and third time910while digital assistant system700is awaiting the detection of a speech end-point condition. As a result, less processing can be required after the speech end-point condition is detected, which can reduce the response latency of digital assistant system700. As discussed above, latency management module780causes one or more of natural language processing, task flow processing, dialogue flow processing, and speech synthesis to be performed while the second portion of stream of audio902is being received between second time908and third time910. Specifically, between second time908and third time910, latency management module780causes natural language processing module732to determine one or more candidate user intents for user utterance903based on the one or more first candidate text representations. In some examples, latency management module780also causes task flow processing module736(or836) to determine (e.g., at least partially between second time908and third time910) one or more respective candidate task flows for the one or more candidate user intents and to select (e.g., at least partially between second time908and third time910) a first candidate task flow from the one or more candidate task flows. In some examples, latency management module780further causes task flow processing module736(or836) to execute (e.g., at least partially between second time908and third time910) the selected first candidate task flow without providing an output to a user of digital assistant system700(e.g., without displaying any result or outputting any speech/audio on the user device). In some examples, executing the first candidate task flow includes generating a text dialogue that is responsive to user utterance903and generating a spoken representation of the text dialogue. 
In these examples, latency management module780further causes dialogue flow processing module734to generate (e.g., at least partially between second time908and third time910) the text dialogue and causes speech synthesis processing module740to generate (e.g., at least partially between second time908and third time910) the spoken representation of the text dialogue. In some examples, speech synthesis processing module740receives a request (e.g., from task flow processing module736or dialogue flow processing module734) to generate the spoken representation of the text dialogue. In response to receiving the request, speech synthesis processing module740can determine (e.g., at least partially between second time908and third time910) whether the memory (e.g., memory202,470, or702) of the electronic device (e.g., server106, device104, device200, or system700) stores an audio file having a spoken representation of the text dialogue. In response to determining that the memory of the electronic device does store an audio file having a spoken representation of the text dialogue, speech synthesis processing module740awaits detection of a speech end-point condition before playing the stored audio file. In response to determining that the memory of the electronic device does not store an audio file having a spoken representation of the text dialogue, speech synthesis processing module740generates an audio file having a spoken representation of the text dialogue and stores the audio file in the memory. In some examples, generating and storing the audio file are at least partially performed between second time908and third time910. After storing the audio file, speech synthesis processing module740awaits detection of a speech end-point condition before playing the stored audio file. Latency management module780determines whether a speech end-point condition is detected between second time908and third time910. For example, detecting the speech end-point condition can include detecting, in the second portion of stream of audio902, an absence of user speech for longer than a second predetermined duration (e.g., 600 ms, 700 ms, or 800 ms). It should be recognized that, in other examples, other speech end-point conditions can be implemented. In the present example, the second portion of stream of audio902between second time908and third time910does not contain any user speech. In addition, duration914between second time908and third time910is longer than the second predetermined duration. Thus, in this example, a speech end-point condition is detected between second time908and third time910. In response to determining that a speech end-point condition is detected between the second time and the third time, latency management module780causes digital assistant system700to present (e.g., at third time910) the results obtained from executing the first candidate task flow. For example, the results can be displayed on a display of the electronic device. In some examples, latency management module780causes output of the spoken representation of the text dialogue to the user by causing a respective stored audio file to be played. Because at least a portion of natural language processing, task flow processing, dialogue flow processing, and speech synthesis is performed prior to detecting the speech end-point condition, less processing can be required after the speech end-point condition is detected, which can reduce the response latency of digital assistant system700. 
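The audio-file check described above can be sketched as follows; the cache directory, the hashing scheme, and synthesize_fn are assumptions standing in for whatever storage and text-to-speech machinery the device actually uses. The key point is that the audio is prepared ahead of time and only played once the speech end-point condition arrives.

```python
import hashlib
from pathlib import Path

AUDIO_CACHE = Path("tts_cache")  # hypothetical on-device store of synthesized audio

def audio_path_for(dialogue_text):
    return AUDIO_CACHE / (hashlib.sha256(dialogue_text.encode("utf-8")).hexdigest() + ".wav")

def prepare_spoken_response(dialogue_text, synthesize_fn):
    """Ensure an audio file with the spoken dialogue exists; playback waits for the end-point."""
    path = audio_path_for(dialogue_text)
    if not path.exists():
        AUDIO_CACHE.mkdir(exist_ok=True)
        path.write_bytes(synthesize_fn(dialogue_text))  # synthesize_fn: text -> audio bytes
    return path  # played only once a speech end-point condition is detected
```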
Specifically, the results obtained from executing the first candidate task flow can be presented more quickly after the speech end-point condition is detected. In other examples, a speech end-point condition is not detected between second time908and third time910. For example, with reference to timeline1000inFIG.10, stream of audio1002contains user speech between second time1008and third time1012. Thus, in this example, a speech end-point condition is not detected between second time1008and third time1012. Timeline1000ofFIG.10from first time1004to second time1008can be similar or substantially identical to timeline900ofFIG.9from first time904to second time908. In particular, the first portion of stream of audio1002containing user utterance1003is received from first time1004to second time1008. Latency management module780determines that the absence of user speech between first intermediate time1006and second time1008satisfies the predetermined condition (e.g., duration1018is longer than the first predetermined duration) and in response, latency management module780causes the relevant components of digital assistant system700to initiate, for the first time, a sequence of processes that include natural language processing, task flow processing, dialogue flow processing, speech synthesis, or any combination thereof. Specifically, in response to determining that the first portion of stream of audio1002satisfies the predetermined condition, latency management module780causes natural language processing module732to begin performing, at second time1008, natural language processing on one or more first candidate text representations of user utterance1003in the first portion of stream of audio1002. Timeline1000ofFIG.10differs from timeline900ofFIG.9in that user utterance1003continues from the first portion of stream of audio1002(between first time1004and second time1008) to the second portion of stream of audio1002(between second time1008and third time1012) and stream of audio1002further extends from third time1012to fourth time1014. In this example, latency management module780determines that a speech end-point condition is not detected between second time1008and third time1012(e.g., due to detecting user speech) and in response, latency management module780causes digital assistant system700to forgo presentation of any results obtained from performing task flow processing between second time1008and third time1012. In other words, the natural language processing, task flow processing, dialogue flow processing, or speech synthesis performed between second time1008and third time1012can be discarded upon detecting user speech in the second portion of stream of audio1002. In addition, upon detecting user speech in the second portion of stream of audio1002, latency management module780causes digital assistant system700to process the user speech (continuation of user utterance1003) in the second portion of stream of audio1002. Specifically, latency management module780causes STT processing module730to perform speech recognition on the second portion of stream of audio1002and determine one or more second candidate text representations. Each candidate text representation of the one or more second candidate text representations is a candidate text representation of user utterance1003across the first and second portions of stream of audio1002(e.g., from first time1004to third time1012). 
Further, upon detecting user speech in the second portion of stream of audio 1002, latency management module 780 causes digital assistant system 700 to continue receiving stream of audio 1002 from third time 1012 to fourth time 1014. Specifically, a third portion of stream of audio 1002 is received from third time 1012 to fourth time 1014. The second and third portions of stream of audio 1002 are processed in a manner similar to the first and second portions of stream of audio 902, described above with reference to FIG. 9. In particular, latency management module 780 determines whether the second portion of stream of audio 1002 satisfies the predetermined condition (e.g., an absence of user speech for longer than the first predetermined duration). In the present example, as shown in FIG. 10, the second portion of stream of audio 1002 contains an absence of user speech between second intermediate time 1010 and third time 1012. If latency management module 780 determines that this absence of user speech between second intermediate time 1010 and third time 1012 satisfies the predetermined condition (e.g., duration 1020 is longer than the first predetermined duration), latency management module 780 causes the relevant components of digital assistant system 700 to initiate, for a second time, a sequence of processes that include natural language processing, task flow processing, dialogue flow processing, speech synthesis, or any combination thereof. Specifically, in the present example, in response to determining that the second portion of stream of audio 1002 satisfies the predetermined condition, latency management module 780 causes natural language processing module 732 to begin performing, at third time 1012, natural language processing on the one or more second candidate text representations. As discussed above, latency management module 780 causes one or more of natural language processing, task flow processing, dialogue flow processing, and speech synthesis to be performed between third time 1012 and fourth time 1014. In particular, between third time 1012 and fourth time 1014, latency management module 780 causes natural language processing module 732 to determine, based on the one or more second candidate text representations, one or more second candidate user intents for user utterance 1003 in the first and second portions of stream of audio 1002. In some examples, latency management module 780 causes task flow processing module 736 (or 836) to determine (e.g., at least partially between third time 1012 and fourth time 1014) one or more respective second candidate task flows for the one or more second candidate user intents and to select (e.g., at least partially between third time 1012 and fourth time 1014) a second candidate task flow from the one or more second candidate task flows. In some examples, latency management module 780 further causes task flow processing module 736 (or 836) to execute (e.g., at least partially between third time 1012 and fourth time 1014) the selected second candidate task flow without providing an output to a user of the digital assistant system (e.g., without displaying any result or outputting any speech/audio on the user device). Latency management module 780 determines whether a speech end-point condition is detected between third time 1012 and fourth time 1014. In the present example, the third portion of stream of audio 1002 between third time 1012 and fourth time 1014 does not contain any user speech. In addition, duration 1022 between third time 1012 and fourth time 1014 is longer than the second predetermined duration.
Thus, in this example, a speech end-point condition is detected between third time 1012 and fourth time 1014. In response to determining that a speech end-point condition is detected between third time 1012 and fourth time 1014, latency management module 780 causes digital assistant system 700 to present (e.g., at fourth time 1014) the results obtained from executing the second candidate task flow. For example, the results can be displayed on a display of the electronic device. In other examples, presenting the results includes outputting spoken dialogue that is responsive to user utterance 1003. In these examples, the spoken dialogue can be at least partially generated between third time 1012 and fourth time 1014.
4. Process for Operating a Digital Assistant
FIGS. 11A-11B illustrate process 1100 for operating a digital assistant, according to various examples. Some aspects of process 1100 relate to low-latency operation of a digital assistant. In addition, some aspects of process 1100 relate to more reliable and robust operation of a digital assistant. Process 1100 is performed, for example, using one or more electronic devices implementing a digital assistant. In some examples, process 1100 is performed using a client-server system (e.g., system 100), and the blocks of process 1100 are divided up in any manner between the server (e.g., DA server 106) and a client device (e.g., user device 104). In other examples, the blocks of process 1100 are divided up between the server and multiple client devices (e.g., a mobile phone and a smart watch). Thus, while portions of process 1100 are described herein as being performed by particular devices of a client-server system, it will be appreciated that process 1100 is not so limited. In other examples, process 1100 is performed using only a client device (e.g., user device 104) or only multiple client devices. In process 1100, some blocks are, optionally, combined, the order of some blocks is, optionally, changed, and some blocks are, optionally, omitted. In some examples, additional steps may be performed in combination with process 1100. At block 1102, a stream of audio (e.g., stream of audio 902 or 1002) is received (e.g., at I/O processing module 728 via microphone 213). The stream of audio is received, for example, by activating a microphone (e.g., microphone 213) of the electronic device (e.g., user device 104) and initiating the collection of audio data via the microphone. Activating the microphone and initiating the collection of audio data can be performed in response to detecting a predetermined user input. For example, detecting activation of a "home" affordance of the electronic device (e.g., by the user pressing and holding the affordance) can invoke the digital assistant and initiate the receiving of the stream of audio. In some examples, the stream of audio is a continuous stream of audio data. The stream of audio data can be continuously collected and stored in a buffer (e.g., a buffer of audio circuitry 210). In some examples, block 1102 is performed in accordance with blocks 1104 and 1106. Specifically, in these examples, the stream of audio is received across two time intervals. At block 1104, a first portion of the stream of audio is received from a first time (e.g., first time 904 or 1004) to a second time (e.g., second time 908 or 1008). The first portion of the stream of audio contains, for example, a user utterance (e.g., user utterance 903 or 1003).
At block1106, a second portion of the stream of audio is received from the second time (e.g., second time908or1008) to a third time (e.g., third time910or1012). The first time, second time, and third time are each specific points of time. The second time is after the first time and the third time is after the second time. In some examples, the stream of audio is continuously received from the first time through to the third time, wherein the first portion of the stream of audio is continuously received from the first time to the second time and the second portion of the stream of audio is continuously received from the second time to the third time. At block1108, a plurality of candidate text representations of the user utterance are determined (e.g., using STT processing module730). The plurality of candidate text representations are determined by performing speech recognition on the stream of audio. Each candidate text representation is associated with a respective speech recognition confidence score. The speech recognition confidence scores can indicate the confidence that a particular candidate text representation is the correct text representation of the user utterance. In addition, the speech recognition confidence scores can indicate the confidence of any determined word in a candidate text representation of the plurality of candidate text representations. In some examples, the plurality of candidate text representations are the n-best candidate text representations having the n-highest speech recognition confidence scores. In some examples, block1108is performed in real-time as the user utterance is being received at block1104. In some examples, speech recognition is performed automatically upon receiving the stream of audio. In particular, words of the user utterance are decoded and transcribed as each portion of the user utterance is received. In these examples, block1108is performed prior to block1110. In other examples, block1108is performed after block1110(e.g., performed in response to determining that the first portion of the stream of audio satisfies a predetermined condition). At block1110, a determination is made (e.g., using latency management module780) as to whether the first portion of the stream of audio satisfies a predetermined condition. In some examples, the predetermined condition is a condition based on one or more audio characteristics of the stream of audio. The one or more audio characteristics include, for example, one or more time domain and/or frequency domain features of the stream of audio. Time domain features include, for example, zero-crossing rates, short-time energy, spectral energy, spectral flatness, autocorrelation, or the like. Frequency domain features include, for example, mel-frequency cepstral coefficients, linear predictive cepstral coefficients, mel-frequency discrete wavelet coefficients, or the like. In some examples, the predetermined condition includes the condition of detecting, in the first portion of the stream of audio, an absence of user speech for longer than a first predetermined duration after the user utterance. Specifically, process1100can continuously monitor the first portion of the stream of audio (e.g., from the first time to the second time) and determine the start time and end time of the user utterances (e.g., using conventional speech detection techniques). 
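For concreteness, the sketch below computes two of the time-domain features named above (short-time energy and zero-crossing rate) over fixed-length frames. The frame length and the 16 kHz sample rate are illustrative assumptions; a real system would typically combine several time- and frequency-domain features.

```python
# Illustrative computation of two time-domain features mentioned above
# (short-time energy and zero-crossing rate); frame length and the 16 kHz
# sample rate are assumptions.
import numpy as np

def short_time_energy(frame: np.ndarray) -> float:
    frame = frame.astype(np.float64)
    return float(np.sum(frame * frame) / len(frame))

def zero_crossing_rate(frame: np.ndarray) -> float:
    signs = np.sign(frame.astype(np.float64))
    signs[signs == 0] = 1.0
    return float(np.mean(signs[:-1] != signs[1:]))

def frame_features(samples: np.ndarray, frame_len: int = 320):
    """Yield (energy, zcr) per 20 ms frame of 16 kHz audio."""
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        yield short_time_energy(frame), zero_crossing_rate(frame)
```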
If an absence of user speech is detected in the first portion of the stream of audio for longer than a first predetermined duration (e.g., 50 ms, 75 ms, or 100 ms) after the end time of the user utterance, it can be determined that the first portion of the stream of audio satisfies the predetermined condition. In some examples, the presence or absence of user speech is detected based on audio energy level (e.g., energy level of the stream of audio within a frequency range corresponding to human speech, such as 50-500 Hz). In these examples, the predetermined condition includes the condition of detecting, in the first portion of the stream of audio, an audio energy level that is less than a predetermined threshold energy level for longer than a first predetermined duration after the end time of the user utterance. In some examples, the predetermined condition includes a condition that relates to a linguistic characteristic of the user utterance. For example, the plurality of candidate text representations of block1108can be analyzed to determine whether an end-of-sentence condition is detected in the one or more candidate text representations. In some examples, the end-of-sentence condition is detected if the ending portions of the one or more candidate text representations match a predetermined sequence of words. In some examples, a language model is used to detect an end-of-sentence condition in the one or more candidate text representations. In response to determining that the first portion of the stream of audio satisfies a predetermined condition, one or more of the operations of blocks1112-1126are performed. In particular, one or more of the operations of blocks1112-1126are performed automatically (e.g., without further input from the user) in response to determining that the first portion of the stream of audio satisfies a predetermined condition. Further, in response to determining that the first portion of the stream of audio satisfies a predetermined condition, one or more of the operations of blocks1112-1126are at least partially performed between the second time (e.g., second time908or1008) and the third time (e.g., third time910or1012) (e.g., while the second portion of the stream of audio is received at block1106). In response to determining that the first portion of the stream of audio does not satisfy a predetermined condition, block1110continues to monitor the first portion of the stream of audio (e.g., without performing blocks1112-1126) until it is determined that the predetermined condition is satisfied by the first portion of the stream of audio. Determining whether the first portion of the stream of audio satisfies a predetermined condition and performing, at least partially between the second time and the third time, one or more of the operations of blocks1112-1126in response to determining that the first portion of the stream of audio satisfies a predetermined condition can reduce the response latency of the digital assistant on the electronic device. In particular, the electronic device can at least partially complete these operations while waiting for the speech end-point condition to be detected. This can enhance operability of the electronic device by reducing the operations needed to be performed after detecting the speech end-point condition. In turn, this can reduce the overall latency between receiving the user utterance (block1104) and presenting the results to the user (block1130). 
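A minimal sketch of the energy-based silence test described above follows. The frequency band, energy threshold, frame length, and first predetermined duration are assumed values chosen only to make the example concrete.

```python
# A minimal sketch of the energy-based silence test described above; the band,
# threshold, frame length, and 75 ms duration are assumed illustrative values.
import numpy as np

SPEECH_BAND = (50.0, 500.0)   # Hz, band associated with voiced speech
ENERGY_THRESHOLD = 1e-4       # assumed "silence" threshold
FRAME_SECONDS = 0.02          # 20 ms frames

def band_energy(frame: np.ndarray, sample_rate: int = 16000) -> float:
    spectrum = np.abs(np.fft.rfft(frame.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    mask = (freqs >= SPEECH_BAND[0]) & (freqs <= SPEECH_BAND[1])
    return float(spectrum[mask].sum() / max(int(mask.sum()), 1))

def silence_exceeds(frames, first_predetermined_duration: float = 0.075) -> bool:
    """True once consecutive low-energy frames outlast the first duration."""
    silent = 0.0
    for frame in frames:
        if band_energy(frame) < ENERGY_THRESHOLD:
            silent += FRAME_SECONDS
            if silent >= first_predetermined_duration:
                return True
        else:
            silent = 0.0
    return False
```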
At block1112, a plurality of candidate user intents for the user utterance are determined (e.g., using natural language processing module732). In particular, natural language processing is performed on the one or more candidate text representations of block1108to determine the plurality of candidate user intents. Each candidate user intent of the plurality of candidate user intents is an actionable intent that represents one or more tasks, which when performed, would satisfy a predicted goal corresponding to the user utterance. In some examples, each candidate user intent is determined in the form of a structured query. In some examples, each candidate text representation of the one or more candidate text representations of block1108is parsed to determine one or more respective candidate user intents. In some examples, the plurality of candidate user intents determined at block1112include candidate user intents corresponding to different candidate text representations. For example, at block1112, a first candidate user intent of the plurality of candidate user intents can be determined from a first candidate text representation of block1108and a second candidate user intent of the plurality of candidate user intents can be determined from a second candidate text representation of block1108. In some examples, each candidate user intent is associated with a respective intent confidence score. The intent confidence scores can indicate the confidence that a particular candidate user intent is the correct user intent for the respective candidate text representation. In addition, the intent confidence scores can indicate the confidence of corresponding domains, actionable intents, concepts, or properties determined for the candidate user intents. In some examples, the plurality of candidate user intents are the m-best candidate user intents having the m-highest intent confidence scores. At block1114, a plurality of candidate task flows are determined (e.g., using task flow processing module736or836) from the plurality of candidate user intents of block1112. Specifically, each candidate user intent of the plurality of candidate user intents is mapped to a corresponding candidate task flow of the plurality of candidate task flows. Each candidate task flow includes procedures for performing one or more actions that fulfill the respective candidate user intent. For candidate user intents having incomplete structured queries (e.g., partial structured queries with one or more missing property values), the corresponding candidate task flows can include procedures for resolving the incomplete structured queries. For example, the candidate task flows can include procedures for determining one or more flow parameters (e.g., corresponding to the one or more missing property values) by searching one or more data sources or querying the user for additional information. Each candidate task flow further includes procedures for performing one or more actions represented by the corresponding candidate user intent (e.g., represented by the complete structured query of the candidate user intent). At block1116, a plurality of task flow scores are determined (e.g., using task flow processing module736or836) for the plurality of candidate task flows. Each task flow score of the plurality of task flow scores corresponds to a respective candidate task flow of the plurality of candidate task flows. 
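The structured-query view of candidate user intents and their mapping to candidate task flows (blocks 1112-1114) can be sketched as follows. The intent names, properties, and task-flow functions are invented for illustration and are not taken from the disclosure.

```python
# Sketch of candidate user intents as structured queries mapped to task flows,
# per the description above; domains, properties, and flows are invented.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class StructuredQuery:
    actionable_intent: str                                   # e.g., "play_media"
    properties: Dict[str, Optional[str]] = field(default_factory=dict)
    intent_confidence: float = 0.0

def play_media_flow(query: StructuredQuery) -> str:
    return f"playing {query.properties.get('title')}"

def search_media_flow(query: StructuredQuery) -> str:
    return f"searching for {query.properties.get('title')}"

# Each candidate user intent maps to a corresponding candidate task flow.
TASK_FLOWS: Dict[str, Callable[[StructuredQuery], str]] = {
    "play_media": play_media_flow,
    "search_media": search_media_flow,
}

candidate_intents = [
    StructuredQuery("play_media", {"title": "jazz radio"}, intent_confidence=0.82),
    StructuredQuery("search_media", {"title": "jazz radio"}, intent_confidence=0.61),
]
candidate_task_flows = [(q, TASK_FLOWS[q.actionable_intent]) for q in candidate_intents]
```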
The task flow score for a respective candidate task flow can represent the likelihood that the respective candidate task flow is the correct candidate task flow to perform given the user utterance. For example, the task flow score can represent the likelihood that the user's actual desired goal for providing the user utterance is fulfilled by performing the respective candidate task flow. In some examples, each task flow score is based on a flow parameter score for the respective candidate task flow. In these examples, block1116includes determining (e.g., using task flow manager838) a respective flow parameter score for each candidate task flow. The flow parameter score for a respective candidate task flow can represent a confidence of resolving one or more flow parameters for the respective candidate task flow. In some examples, determining a flow parameter score for a respective candidate task flow includes resolving one or more flow parameters for the respective candidate task flow. Specifically, for each candidate task flow, process1100determines, at block1118, whether the respective candidate task flow includes procedures for resolving one or more flow parameters. The one or more flow parameters can correspond, for example, to one or more missing property values of a corresponding incomplete structured query. In some examples, the one or more flow parameters are parameters that are not expressly specified in the user utterance. If process1100determines that the respective candidate task flow includes procedures for resolving one or more flow parameters, the procedures can be executed (e.g., using task flow resolver840) to resolve the one or more flow parameters. In some examples, executing the procedures causes one or more data sources to be searched. In particular, the one or more data sources are searched to obtain one or more values for the one or more flow parameters. In some examples, the one or more data sources correspond to one or more properties of the respective candidate user intent. If the one or more flow parameters for the respective candidate task flow can be resolved (e.g., by successfully obtaining one or more values for the one or more flow parameters from the one or more data sources), then the flow parameter score determined for the respective candidate task flow can be high. Conversely, if the one or more flow parameters for the respective candidate task flow cannot be resolved (e.g., due to a failure to obtain one or more values for the one or more flow parameters from the one or more data sources), then the flow parameter score determined for the respective candidate task flow can be low. Determining task flow scores and/or task parameter scores for the plurality of candidate task flows can be advantageous for evaluating the reliability of each candidate task flow prior to selecting and executing any candidate task flow. In particular, the task flow scores and/or task parameter scores can be used to identify candidate task flows that cannot be resolved. This allows process1100to only select (e.g., at block1122) and execute (e.g., at block1124) candidate task flows that can be resolved, which improves the reliability and robustness of task flow processing by the digital assistant. In some examples, each task flow score of the plurality of task flow scores is based on the intent confidence score of a respective candidate user intent corresponding to the respective candidate task flow. 
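The flow-parameter resolution step described above can be approximated by the following sketch, which attempts to resolve each missing parameter against a data source and returns a higher score when more parameters resolve. The data source contents and the 0-to-1 scoring scale are assumptions.

```python
# Simplified take on the flow parameter score: try to resolve each missing
# parameter from a data source and score by the fraction resolved. The data
# source contents and the 0-to-1 scale are assumptions.
from typing import Dict, Optional

DATA_SOURCE = {  # stand-in for data sources tied to intent properties
    "city": {"cupertino": "Cupertino, CA"},
    "contact": {"mom": "Jane Appleseed"},
}

def flow_parameter_score(required: Dict[str, Optional[str]]) -> float:
    """Return a score in [0, 1] for one candidate task flow."""
    if not required:
        return 1.0  # nothing left to resolve
    resolved = 0
    for name, raw_value in required.items():
        table = DATA_SOURCE.get(name, {})
        if raw_value is not None and raw_value.lower() in table:
            required[name] = table[raw_value.lower()]  # fill in the resolved value
            resolved += 1
    return resolved / len(required)
```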
Further, in some examples, each task flow score of the plurality of task flow scores is based on the speech recognition confidence score of the respective candidate text representation corresponding to the respective candidate task flow. In some examples, each task flow score is based on a combination of the flow parameter score for the respective candidate task flow, the intent confidence score of the respective candidate user intent, and the speech recognition confidence score of the respective candidate text representation. At block1120, the plurality of candidate task flows are ranked (e.g., using task flow manager838) according to the plurality of task flow scores of block1116. For example, the plurality of candidate task flows are ranked from the highest task flow score to the lowest task flow score. At block1122, a first candidate task flow of the plurality of candidate task flows is selected (e.g., using task flow manager838). In particular, the first candidate task flow of the plurality of candidate task flows is selected based on the plurality of task flow scores and the ranking of block1120. For example, the selected first candidate task flow is the highest ranked candidate task flow of the plurality of candidate task flows (e.g., having the highest task flow score). In some examples, the selected first candidate task flow has the highest task flow score, but corresponds to a candidate user intent having an intent confidence score that is not the highest intent confidence score among the plurality of candidate user intents. In some examples, the selected first candidate task flow corresponds to a text representation having a speech recognition score that is not the highest speech recognition score among the plurality of candidate text representations. In the examples described above, process1100evaluates each of the plurality of candidate task flows in parallel to select the first candidate task flow having the highest task flow score. It should be appreciated, however, that in other examples, process1100can instead evaluate the plurality of candidate task flows serially. For instance, in some examples, a first task flow score is initially determined only for a candidate task flow corresponding to a candidate user intent having the highest intent confidence score. If the first task flow score satisfies a predetermined criterion (e.g., greater than a predetermined threshold level), then the corresponding candidate task flow is selected at block1122. If, however, the first task flow score does not satisfy the predetermined criterion (e.g., less than the predetermined threshold level), then a second task flow score is determined for another candidate task flow corresponding to a candidate user intent having the next highest intent confidence score. Depending on whether or not the second task flow score satisfies the predetermined criterion, the another candidate task flow corresponding to the second task flow score can be selected at block1122, or additional task flow scores can be subsequently determined for additional candidate task flows based on the associated intent confidence scores. Selecting the first candidate task flow based on the plurality of task flow scores can enhance the accuracy and reliability of the digital assistant on the electronic device. In particular, using the plurality of task flow scores, process1100can avoid selecting candidate task flows that cannot be resolved. 
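One hedged way to combine the three signals discussed above into a single task flow score, and to select the highest-ranked candidate, is sketched below. The weights are arbitrary illustrative values, not values specified in the disclosure.

```python
# Hedged sketch of combining the three signals into one task flow score and
# selecting the top-ranked flow; the weights are arbitrary illustrative values.
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateTaskFlow:
    name: str
    speech_confidence: float     # from the candidate text representation
    intent_confidence: float     # from the candidate user intent
    flow_parameter_score: float  # from parameter resolution

def task_flow_score(c: CandidateTaskFlow,
                    w_speech: float = 0.3,
                    w_intent: float = 0.3,
                    w_params: float = 0.4) -> float:
    return (w_speech * c.speech_confidence
            + w_intent * c.intent_confidence
            + w_params * c.flow_parameter_score)

def select_first_candidate(cands: List[CandidateTaskFlow]) -> CandidateTaskFlow:
    ranked = sorted(cands, key=task_flow_score, reverse=True)
    # The winner has the highest combined score, even if its intent or speech
    # confidence alone is not the highest.
    return ranked[0]
```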
This can reduce the likelihood of task flow processing errors during execution of the selected first candidate task flow. Moreover, because candidate task flows that cannot be resolved are less likely to coincide with the user's actual goals, selecting the first candidate task flow based on the plurality of task flow scores can increase the likelihood that the selected first candidate task flow coincides with the user's actual desired goal. As a result, the accuracy and reliability of the digital assistant on the electronic device can be improved by selecting the first candidate task flow based on the plurality of task flow scores. At block1124, the first candidate task flow selected at block1122is executed (e.g., using task flow manager838). Specifically, one or more actions represented by the first candidate task flow are performed. In some examples, results are obtained by executing the first candidate task flow. The results can include, for example, information requested by the user in the user utterance. In some examples, not all actions represented by the first candidate task flow are performed at block1124. Specifically, actions that provide an output to the user of the device are not performed at block1124. For example, block1124does not include displaying, on a display of the electronic device, the results obtained by executing the first candidate task flow. Nor does block1124include providing audio output (e.g., speech dialogue or music) on the electronic device. Thus, in some examples, the first candidate task flow is executed without providing any output to the user prior to detecting a speech end-point condition at block1128. In some examples, executing the first candidate task flow at block1124can include performing the operations of block1126. At block1126, a text dialogue that is responsive to the user utterance is generated (e.g., using task flow manager838in conjunction with dialogue flow processing module734). In some examples, the generated text dialogue includes results obtained from executing the first candidate task flow. In some examples, the text dialogue is generated at block1126without outputting the text dialogue or a spoken representation of the text dialogue to the user. In some examples, block1126further includes additional operations for generating a spoken representation of the text dialogue for output (e.g., operations of blocks1202-1208in process1200, described below with reference toFIG.12). In these examples, block1126can include generating a plurality of speech attribute values for the text dialogue. The plurality of speech attribute values provide information that can be used to generate the spoken representation of the text dialogue. In some examples, the plurality of speech attribute values can include a first speech attribute value that specifies the text dialogue (e.g., a representation of the text dialogue that can be used by a speech synthesis processing module to convert the text dialogue into corresponding speech). In some examples, the plurality of speech attribute values can specify one or more speech characteristics for generating the spoken representation of the text dialogue, such as language, gender, audio quality, type (e.g., accent/localization), speech rate, volume, pitch, or the like. At block1128, a determination is made as to whether a speech end-point condition is detected between the second time (e.g., second time908or1008) and the third time (e.g., third time910or1012). 
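The plurality of speech attribute values accompanying a generated text dialogue might be represented as in the sketch below; the field names and defaults are illustrative assumptions.

```python
# Sketch of the "plurality of speech attribute values" accompanying a generated
# text dialogue; field names and defaults are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SpeechAttributes:
    text_dialogue: str                   # the dialogue to be spoken
    language: str = "en-US"
    gender: str = "female"
    voice_type: str = "en-US-localized"  # accent / localization
    audio_quality: str = "high"
    speech_rate: float = 1.0
    volume: float = 1.0
    pitch: float = 1.0

attrs = SpeechAttributes(text_dialogue="It's 72 degrees and sunny in Cupertino.")
```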
A speech end-point refers to a point in the stream of audio where the user has finished speaking (e.g., end of the user utterance). The determination of block1128is made, for example, while the stream of audio is being received from the second time to the third time at block1102. In some examples, the determination of block1128is performed by monitoring one or more audio characteristics in the second portion of the stream of audio. For instance, in some examples, detecting the speech end-point condition can include detecting, in the second portion of the stream of audio, an absence of user speech for greater than a second predetermined duration (e.g., 600 ms, 700 ms, or 800 ms). In these examples, block1128includes determining whether the second portion of the stream of audio contains a continuation of the user utterance in the first portion of the stream of audio. If a continuation of the user utterance in the first portion of the stream of audio is detected in the second portion of the stream of audio, then process1100can determine that a speech end-point condition is not detected between the second time and the third time. If a continuation of the user utterance in the first portion of the stream of audio is not detected in the second portion of the stream of audio for greater than the second predetermined duration, process1100can determine that a speech end-point condition is detected between the second time and the third time. The absence of user speech can be detected using similar speech detection techniques described above with respect to block1110. In some examples, the second predetermined duration is longer than the first predetermined duration of block1110. In some examples, detecting the speech end-point condition includes detecting a predetermined type of non-speech input from the user between the second time and the third time. For example, a user may invoke the digital assistant at the first time by pressing and holding a button (e.g., “home” or menu button304) of the electronic device. In this example, the predetermined type of non-speech input can be the user releasing the button (e.g., at the third time). In other examples, the predetermined type of non-speech input is a user input of an affordance displayed on the touch screen (e.g., touch screen212) of the electronic device. In response to determining that a speech end-point condition is detected between the second time and the third time, block1130is performed. Specifically, at block1130, results from executing the selected first candidate task flow at block1124are presented to the user. In some examples, block1130includes outputting the results on the electronic device to the user. For example, the results are displayed on a display of the electronic device. The results can include, for example, the text dialogue generated at block1126. In some examples, the results are presented to the user in the form of audio output. For example, the results can include music or speech dialogue. In some examples, presenting the results at block1130includes performing the operations of block1132. Specifically, at block1132, a spoken representation of the text dialogue generated at block1126is outputted. Outputting the spoken representation of the text dialogue includes, for example, playing an audio file having the spoken representation of the text dialogue. 
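The two end-point signals described above (prolonged silence after the utterance, or a predetermined non-speech input such as releasing the invoking button) can be captured in a few lines. The 700 ms value and the boolean event model are assumptions.

```python
# Small sketch of the two end-point signals described above; the 700 ms value
# and the boolean event model are assumptions.
SECOND_PREDETERMINED_DURATION = 0.700  # e.g., 700 ms

def end_point_detected(silence_seconds: float, button_released: bool) -> bool:
    if button_released:  # predetermined non-speech input (e.g., button release)
        return True
    return silence_seconds >= SECOND_PREDETERMINED_DURATION

assert end_point_detected(0.75, button_released=False)
assert end_point_detected(0.10, button_released=True)
assert not end_point_detected(0.10, button_released=False)
```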
In some examples, outputting the spoken representation of the text dialogue includes performing one or more of the blocks of process1200, described below with reference toFIG.12. For example, outputting the spoken representation of the text dialogue includes determining whether the memory of the electronic device stores an audio file having the spoken representation of the text dialogue (block1204). In response to determining that the memory of the electronic device stores an audio file having the spoken representation of the text dialogue, the spoken representation of the text dialogue is outputted by playing the stored audio file (block1212). In response to determining that the memory of the electronic device does not store an audio file having the spoken representation of the text dialogue, an audio file having the spoken representation of the text dialogue is generated (block1206) and stored (block1208) in the memory of the electronic device. In response to determining that the speech end-point condition is detected (block1128or1210), the stored audio file is played to output the spoken representation of the text dialogue (block1212). As discussed above, at least partially performing the operations of blocks1112-1126and/or1202-1208between the second time and the third time (prior to detecting the speech end-point condition at block1128or1210) can reduce the number of operations required to be performed upon detecting the speech end-point condition. Thus, less computation can be required upon detecting the speech end-point condition, which can enable the digital assistant to provide a quicker response (e.g., by presenting the results at block1130or outputting spoken dialogue at block1132or1212) upon detecting the speech end-point condition. With reference back to block1128, in response to determining that a speech end-point condition is not detected between the second time and the third time, process1100forgoes performance of block1130(and block1132). For example, if process1100determines that the second portion of the stream of audio contains a continuation of the user utterance, then no speech end-point condition is detected between the second time and the third time and process1100forgoes performance of blocks1130and1132. Specifically, process1100forgoes presenting results from executing the selected first candidate task flow of block1122. In examples where text dialogue is generated, process1100further forgoes output of a spoken representation of the text dialogue. Furthermore, if process1100is still performing any of the operations of blocks1112-1126or blocks1202-1208with respect to the utterance in the first portion of the stream of audio, process1100ceases to perform these operations upon determining that a speech end-point condition is not detected between the second time and the third time. In some examples, in response to determining that a speech end-point condition is not detected between the second time and the third time, process1100can return to one or more of blocks1102-1126to process the speech in the second portion of the stream of audio. Specifically, upon detecting a continuation of the user utterance in the second portion of the stream of audio, speech recognition is performed (block1108) on the continuation of the user utterance in the second portion of the stream of audio. Additionally, in some examples, process1100continues to receive the stream of audio (block1102) after the third time. 
Specifically, a third portion of the stream of audio can be received (block1102) from the third time (third time1012) to a fourth time (fourth time1014). In some examples, the speech recognition results of the continuation of the user utterance in the second portion of the stream of audio is combined with the speech recognition results of the user utterance in the first portion of the stream of audio to obtain a second plurality of candidate text representations. Each candidate text representation of the second plurality of candidate text representations is a text representation of the user utterance across the first and second portions of the stream of audio. A determination is made (block1110) as to whether the second portion of the stream of audio satisfies a predetermined condition. In response to determining that the second portion of the stream of audio satisfies a predetermined condition, one or more of the operations of blocks1112-1126are performed with respect to the second plurality of candidate text representations. In particular, in response to determining that the second portion of the stream of audio satisfies a predetermined condition, one or more of the operations of blocks1112-1130are at least partially performed between the third time (e.g., third time1012) and the fourth time (e.g., fourth time1014) (e.g., while receiving the third portion of the stream of audio at block1102). Based on the second plurality of candidate text representations, a second plurality of candidate user intents for the user utterance in the first and second portions of the stream of audio are determined (block1112). A second plurality of candidate task flows are determined (block1114) from the second plurality of candidate user intents. Specifically, each candidate user intent of the second plurality of candidate user intents is mapped to a corresponding candidate task flow of the second plurality of candidate task flows. A second candidate task flow is selected from the second plurality of candidate task flows (block1122). The selection can be based on a second plurality of task flow scores determined for the second plurality of candidate task flows (block1116). The selected second candidate task flow is executed (block1124) without providing any output to the user prior to detecting a speech end-point condition. In some examples, second results are obtained from executing the second candidate task flow. In some examples, executing the second candidate task flow includes generating a second text dialogue (block1126) that is responsive to the user utterance in the first and second portions of the stream of audio. In some examples, the second text dialogue is generated without outputting the second text dialogue or a spoken representation of the second text dialogue to the user prior to detecting a speech end-point condition. In some examples, additional operations for generating a spoken representation of the second text dialogue for output are performed (e.g., operations in blocks1202-1208of process1200, described below with reference toFIG.12). In some examples, a determination is made (block1128) as to whether a speech end-point condition is detected between the third time and the fourth time. In response to determining that a speech end-point condition is detected between the third time and the fourth time, second results from executing the selected second candidate task flow are presented to the user (block1130). 
In some examples, presenting the second results includes outputting, to the user of the device, the spoken representation of the second text dialogue by playing a stored second audio file (e.g., a stored second audio file generated at block 1206).
FIG. 12 illustrates process 1200 for operating a digital assistant to generate a spoken dialogue response, according to various examples. In some examples, process 1200 is implemented as part of process 1100 for operating a digital assistant. Process 1200 is performed, for example, using one or more electronic devices implementing a digital assistant. Implementing process 1200 in a digital assistant system can reduce the latency associated with text-to-speech processing. In some examples, process 1200 is performed using a client-server system (e.g., system 100), and the blocks of process 1200 are divided up in any manner between the server (e.g., DA server 106) and a client device (e.g., user device 104). In some examples, process 1200 is performed using only a client device (e.g., user device 104) or only multiple client devices. In process 1200, some blocks are, optionally, combined, the order of some blocks is, optionally, changed, and some blocks are, optionally, omitted. In some examples, additional operations may be performed in combination with process 1200. At block 1202, a text dialogue is received (e.g., at speech synthesis processing module 740). In some examples, the text dialogue is generated by the digital assistant system (e.g., at block 1126) in response to a received user utterance (e.g., at block 1102). In some examples, the text dialogue is received with a plurality of associated speech attribute values (e.g., the speech attribute values described above with reference to block 1126). In some examples, the plurality of speech attribute values specify one or more speech characteristics for generating the spoken representation of the text dialogue. The one or more speech characteristics include, for example, language, gender, audio quality, type (e.g., accent/localization), speech rate, volume, pitch, or the like. The combination of the text dialogue and the plurality of speech attribute values can represent a request to generate a spoken representation of the text dialogue in accordance with the one or more speech characteristics defined in the plurality of speech attribute values. In response to receiving the text dialogue at block 1202, block 1204 is performed. At block 1204, a determination is made (e.g., using speech synthesis processing module 740) as to whether the memory (e.g., memory 202, 470, or 702) of the electronic device (e.g., device 104, 200, 600, or 700) stores an audio file having the spoken representation of the text dialogue. For example, block 1204 includes searching the memory of the electronic device for an audio file having the spoken representation of the text dialogue. In some examples, the memory of the electronic device contains one or more audio files. In these examples, block 1204 includes analyzing each audio file of the one or more audio files to determine whether one of them includes a plurality of speech attribute values that match the plurality of speech attribute values for the text dialogue received at block 1202. If an audio file of the one or more audio files has a first plurality of speech attribute values that match the plurality of speech attribute values for the text dialogue, then it would be determined that the memory stores an audio file having the spoken representation of the text dialogue.
In some examples, block 1204 includes searching the file names of the one or more audio files stored in the memory of the electronic device. In these examples, the file name of each audio file is analyzed to determine whether the file name represents a plurality of speech attribute values that match the plurality of speech attribute values for the text dialogue. Specifically, each file name can encode (e.g., using an md5 hash) a plurality of speech attribute values. Thus, analyzing the file names of the one or more audio files stored in the memory can determine whether the memory stores an audio file having the spoken representation of the text dialogue. In response to determining that the memory of the electronic device stores an audio file having the spoken representation of the text dialogue, process 1200 forgoes performance of blocks 1206 and 1208 and proceeds to block 1210. In response to determining that the memory of the electronic device does not store an audio file having the spoken representation of the text dialogue, block 1206 is performed. At block 1206, an audio file having the spoken representation of the text dialogue is generated (e.g., using speech synthesis processing module 740). In particular, speech synthesis is performed using the text dialogue and the associated plurality of speech attribute values to generate the audio file of the spoken representation of the text dialogue. The spoken representation of the text dialogue is generated according to the one or more speech characteristics specified in the plurality of speech attribute values. At block 1208, the audio file generated at block 1206 is stored in the memory of the electronic device. In some examples, the audio file having the spoken representation of the text dialogue can indicate the plurality of speech attribute values for the text dialogue. Specifically, in some examples, the audio file having the spoken representation of the text dialogue is stored with a file name that encodes the plurality of speech attribute values for the text dialogue (e.g., using an md5 hash). In some examples, blocks 1202-1208 are performed without providing any output (e.g., audio or visual) to the user. Specifically, neither the text dialogue nor the spoken representation of the text dialogue is outputted to the user prior to determining that the speech end-point condition is detected at block 1210. Blocks 1202-1208 of process 1200 are performed at least partially prior to a speech end-point condition being detected at block 1210. This can be advantageous for reducing the response latency of the digital assistant on the electronic device. At block 1210, a determination is made (e.g., using latency management module 780) as to whether a speech end-point condition is detected. Block 1210 is similar or substantially identical to block 1128, described above. For example, the determination can be made between the second time (e.g., second time 908) and the third time (e.g., third time 910) while the second portion of the stream of audio is received at block 1102. In response to determining that a speech end-point condition is detected, block 1212 is performed. Specifically, at block 1212, the spoken representation of the text dialogue is outputted to the user by playing the stored audio file. Block 1212 is similar or substantially identical to block 1132. In response to determining that a speech end-point condition is not detected, process 1200 forgoes output of the spoken representation of the text dialogue (block 1214).
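A minimal sketch of the file-name-based lookup described above follows: the name encodes a hash (e.g., md5) of the speech attribute values, so a match can be found without opening or decoding any audio. The attribute serialization scheme and the cache directory are assumptions.

```python
# Sketch of the file-name-based lookup: the name encodes an md5 hash of the
# speech attribute values, so a match is found without opening any audio.
# The serialization scheme and cache directory are assumptions.
import hashlib
import os
from typing import Dict, Optional

def cache_file_name(attributes: Dict[str, str]) -> str:
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest() + ".wav"

def stored_audio(attributes: Dict[str, str],
                 cache_dir: str = "/tmp/tts_cache") -> Optional[str]:
    """Return the path of a matching stored audio file, or None if absent."""
    candidate = os.path.join(cache_dir, cache_file_name(attributes))
    return candidate if os.path.exists(candidate) else None
```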
For example, process1100can remain at block1210to await detection of the speech end-point condition. In the examples described above, a suitable candidate task flow is first selected (block1122) and the selected candidate task flow is then executed (block1124). Moreover, an audio file of spoken dialogue is generated (block1206) only for the selected candidate task flow. However, it should be recognized that, in other examples, a suitable candidate task flow can be selected at block1122while executing a plurality of candidate task flows at block1124. In certain implementations, executing the plurality of candidate task flows prior to selecting a suitable candidate task flow can be advantageous for reducing latency. Specifically, determining the task flow scores (block1116) and selecting a suitable candidate task flow (block1122) based on the determined task flow scores can be computationally intensive and thus to reduce latency, the plurality of candidate task flows can be executed in parallel while determining the task flow scores and selecting a suitable candidate task flow. In addition, a plurality of respective audio files containing spoken dialogues for the plurality of candidate task flows can be generated at block1206while determining the task flow scores and selecting a suitable candidate task flow. By performing these operations in parallel, when a suitable candidate task flow is selected, the selected candidate task flow would have been, for example, at least partially executed and the respective audio file containing spoken dialogue for the selected candidate task flow would have been, for example, at least partially generated. As a result, response latency can be further reduced. Upon detecting a speech end-point condition at block1128or1210, the result corresponding to the selected candidate task flow can be retrieved from the plurality of results and presented to the user. In addition, the audio file corresponding to the selected candidate task flow can be retrieved from the plurality of audio files and played to output the corresponding spoken dialogue for the selected candidate task flow. The operations described above with reference toFIGS.11A-11B and12are optionally implemented by components depicted inFIGS.1-4,6A-B,7A-7C, and8. For example, the operations of processes1100and1200may be implemented by I/O processing module728, STT processing module730, natural language processing module732, dialogue flow processing module734, task flow processing module736, speech synthesis processing module740, audio processing module770, latency management module780, task flow manager838, and task flow resolver840. It would be clear to a person having ordinary skill in the art how other processes are implemented based on the components depicted inFIGS.1-4,6A-B,7A-7C, and8. In accordance with some implementations, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium) is provided, the computer-readable storage medium storing one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods or processes described herein. In accordance with some implementations, an electronic device (e.g., a portable electronic device) is provided that comprises means for performing any of the methods or processes described herein. 
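The parallel variant described above (executing the candidate task flows while their scores are still being computed, then keeping only the selected flow's result) might look like the following sketch; the thread-pool approach and the callable signatures are assumptions.

```python
# Sketch of the parallel variant: execute all candidate task flows while the
# scores are still being computed, then keep only the selected flow's result.
# The callable signatures and thread-pool approach are assumptions.
from concurrent.futures import ThreadPoolExecutor

def execute_all_then_pick(candidate_flows, score_and_select):
    """candidate_flows: list of zero-argument callables returning a result.
    score_and_select: callable returning the index of the selected flow."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(flow) for flow in candidate_flows]  # run in parallel
        selected_index = score_and_select()  # scoring/selection runs concurrently
        results = [f.result() for f in futures]
    return results[selected_index]  # results of unselected flows are discarded
```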
In accordance with some implementations, an electronic device (e.g., a portable electronic device) is provided that comprises a processing unit configured to perform any of the methods or processes described herein. In accordance with some implementations, an electronic device (e.g., a portable electronic device) is provided that comprises one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the methods or processes described herein. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated. Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, the left-most digit(s) of a reference number generally identifies the drawing in which the reference number first appears.
DETAILED DESCRIPTION
Provided herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for adapting ASR systems to process voice queries involving content within dynamic domains. This adaptation involves the use of multiple ASR modules, including a second level module that is tailored to handle dynamic domain voice queries and provide domain-specific candidates in response to a voice query involving content in dynamic domains. As indicated above, voice queries may require retrieving content in dynamic domains such as the entertainment domain, which encompasses new media content from movies, songs, television shows, etc., as well as from user-generated content sites. For example, a user may submit voice queries for a movie titled "NOMADLAND" or for a kids show called "PAW PATROL." These titles are not conventional words or phrases; they are unique and novel, as most media content titles are. A conventional ASR system would likely produce a domain mismatch when attempting to process a voice query involving these titles (e.g., "Play NOMADLAND" or "Search for PAW PATROL episodes"). Domain mismatches with these titles are likely to occur because of their phonetic similarities to other words and the static nature of conventional ASR systems. For example, a conventional ASR system might translate "NOMADLAND" into "Nomad" and "Land," or perhaps even into the more well-established phrase "No Man's Land," and "PAW PATROL" into "Pop Patrol." A conventional ASR system would likely not recognize these titles as being associated with media content and would therefore provide inaccurate translations that are irrelevant to the voice query. Put another way, the translations may be phonetically correct (e.g., "PAW PATROL" vs. "Pop Patrol") but they are not relevant to the entertainment domain. The disclosure herein describes dynamic domain adaptation embodiments for ASR that more accurately process voice queries involving content in dynamic domains, such as an entertainment domain with ever-changing media content. The result is a novel two-level ASR system that involves, at the first level, an ASR engine for performing a translation of a voice query and, at the second level, a candidate generator that is linked to a domain-specific entity index that can be continuously updated in real time with new entities. Such an implementation allows new entities to be included as part of the ASR processing without having to re-train the ASR engine or collect large amounts of domain data. In order to achieve this real-time domain adaptation, the domain-specific entity index may be configured to store textual information associated with new entities, such as their phonetic representation and other relevant metadata (e.g., content type, information source, grapheme information, 3-gram information, and popularity score). In a given embodiment, the two-level ASR system may be implemented in a voice input device (also called a voice responsive device or audio responsive device) that includes a microphone capable of receiving speech. Examples of a voice input device include a remote control device or a media device.
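Before turning to example voice input devices, here is a concrete, non-authoritative illustration of the domain-specific entity index described above, storing a new title together with its phonetic representation and metadata; the field names, phonetic notation, and in-memory index are assumptions.

```python
# Non-authoritative sketch of a domain-specific entity index record; the field
# names, phonetic notation, and in-memory dict are assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EntityRecord:
    title: str               # e.g., "NOMADLAND"
    phonetic: str            # e.g., an ARPAbet-style rendering
    content_type: str        # e.g., "movie"
    source: str              # information source for the entity
    trigram_keys: List[str]  # 3-gram information for fuzzy matching
    popularity: float        # popularity score

def trigrams(text: str) -> List[str]:
    padded = f"  {text.lower()} "
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

# The index can be updated in real time as titles are released, without
# re-training the first-level ASR engine.
entity_index: Dict[str, EntityRecord] = {}
entity_index["nomadland"] = EntityRecord(
    title="NOMADLAND",
    phonetic="N OW M AE D L AE N D",
    content_type="movie",
    source="catalog-feed",
    trigram_keys=trigrams("NOMADLAND"),
    popularity=0.93,
)
```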
A remote control device may be implemented as a dedicated remote control device with physical buttons or a mobile device with an installed software application providing remote control functionality to the mobile device. A media device may be any device that has media streaming capability such as a standalone media device that externally connects to a display device or a display device that has an integrated media device. Examples of a standalone media device include a media streaming player and a sound bar. Accordingly, various embodiments of this disclosure may be implemented using and/or may be part of a multimedia environment100shown inFIG.1. It is noted, however, that multimedia environment100is provided solely for illustrative purposes, and is not limiting. Embodiments of this disclosure may be implemented using and/or may be part of environments different from and/or in addition to the multimedia environment100, as will be appreciated by persons skilled in the relevant art(s) based on the teachings contained herein. Also, the embodiments of this disclosure are applicable to any voice responsive devices, not just those related to entertainment systems such as multimedia environment100. Such voice responsive devices include digital assistants, smart phones and tablets, appliances, automobiles and other vehicles, and Internet of Things (IOT) devices, to name just some examples. An example of the multimedia environment100shall now be described. Multimedia Environment In a non-limiting example, multimedia environment100may be directed to a system for processing audio commands involving streaming media. However, this disclosure is applicable to any type of media (instead of or in addition to streaming media), as well as any mechanism, means, protocol, method and/or process for distributing media where audio commands may be processed in order to request media. The multimedia environment100may include one or more media systems104. A media system104could represent a family room, a kitchen, a backyard, a home theater, a school classroom, a library, a car, a boat, a bus, a plane, a movie theater, a stadium, an auditorium, a park, a bar, a restaurant, or any other location or space where it is desired to receive and play streaming content. User(s)102may operate with the media system104to select and consume media content by, for example, providing audio commands to request media content. Each media system104may include one or more media devices106each coupled to one or more display devices108. It is noted that terms such as “coupled,” “connected to,” “attached,” “linked,” “combined” and similar terms may refer to physical, electrical, magnetic, logical, etc., connections, unless otherwise specified herein. Media device106may be a streaming media device, DVD or BLU-RAY device, audio/video playback device, a sound bar, cable box, and/or digital video recording device, to name just a few examples. Display device108may be a monitor, television (TV), computer, smart phone, tablet, wearable (such as a watch or glasses), appliance, internet of things (IoT) device, and/or projector, to name just a few examples. In some embodiments, media device106can be a part of, integrated with, operatively coupled to, and/or connected to its respective display device108. Each media device106may be configured to communicate with network118via a communication device114. The communication device114may include, for example, a cable modem or satellite TV transceiver. 
The media device106may communicate with the communication device114over a link116, wherein the link116may include wireless (such as WiFi) and/or wired connections. In various embodiments, the network118can include, without limitation, wired and/or wireless intranet, extranet, Internet, cellular, Bluetooth, infrared, and/or any other short range, long range, local, regional, global communications mechanism, means, approach, protocol and/or network, as well as any combination(s) thereof. Media system104may include a remote control110. The remote control110can be any component, part, apparatus and/or method for controlling the media device106and/or display device108, such as a remote control, a tablet, laptop computer, smartphone, wearable, on-screen controls, integrated control buttons, audio controls, or any combination thereof, to name just a few examples. In an embodiment, the remote control110wirelessly communicates with the media device106and/or display device108using cellular, Bluetooth, infrared, etc., or any combination thereof. In an embodiment, the remote control110may be integrated into media device106or display device108. The remote control110may include a microphone112, which is further described below. Any device in media system104may be capable of receiving and processing audio commands from user(s)102. Such devices may be referred to herein as audio or voice responsive devices, and/or voice input devices. For example, any one of media device106, display device108, or remote control110may include a domain adapted audio command processing module130that receives audio commands requesting media content, processes the audio commands, and performs actions for retrieving and providing the requested media content to media system104. In an embodiment, microphone112may also be integrated into media device106or display device108, thereby enabling media device106or display device108to receive audio commands directly from user102. Additional components and operations of domain adapted audio command processing module130are described further below with regard toFIGS.2-5below. While domain adapted audio command processing module130may be implemented in each device in media system104, in practice, domain adapted audio command processing modules130may also be implemented as a single module within one of media device106, display device108, and/or remote control110. The multimedia environment100may include a plurality of content servers120(also called content providers or sources). Although only one content server120is shown inFIG.1, in practice the multimedia environment100may include any number of content servers120. Each content server120may be configured to communicate with network118. Each content server120may store content122and metadata124. Content122may include any combination of music, videos, movies, TV programs, multimedia, images, still pictures, text, graphics, gaming applications, advertisements, programming content, public service content, government content, local community content, software, and/or any other content or data objects in electronic form. In some embodiments, metadata124comprises data about content122. For example, metadata124may include associated or ancillary information indicating or related to writer, director, producer, composer, artist, actor, summary, chapters, production, history, year, trailers, alternate versions, related content, applications, and/or any other information pertaining or relating to the content122. 
Metadata124may also or alternatively include links to any such information pertaining or relating to the content122. Metadata124may also or alternatively include one or more indexes of content122, such as but not limited to a trick mode index. The multimedia environment100may include one or more system servers126. The system servers126may operate to support the media devices106from the cloud. It is noted that the structural and functional aspects of the system servers126may wholly or partially exist in the same or different ones of the system servers126. The media devices106may exist in thousands or millions of media systems104. Accordingly, the media devices106may lend themselves to crowdsourcing embodiments and, thus, the system servers126may include one or more crowdsource servers128. For example, using information received from the media devices106in the thousands and millions of media systems104, the crowdsource server(s)128may identify similarities and overlaps between closed captioning requests issued by different users102watching a particular movie. Based on such information, the crowdsource server(s)128may determine that turning closed captioning on may enhance users' viewing experience at particular portions of the movie (for example, when the soundtrack of the movie is difficult to hear), and turning closed captioning off may enhance users' viewing experience at other portions of the movie (for example, when displaying closed captioning obstructs critical visual aspects of the movie). Accordingly, the crowdsource server(s)128may operate to cause closed captioning to be automatically turned on and/or off during future streaming sessions of the movie. The system servers126may also include a domain adapted audio command processing module130.FIG.1depicts domain adapted audio command processing module130implemented in media device106, display device108, remote control110, and system server126, respectively. In practice, domain adapted audio command processing modules130may be implemented as a single module within just one of media device106, display device108, remote control110, or system server126, or in a distributed manner as shown inFIG.1. As noted above, the remote control110may include a microphone112. The microphone112may receive spoken audio data from users102(as well as other sources, such as the display device108). As noted above, the media device106may be audio responsive, and the audio data may represent audio commands (e.g., “Play a movie,” “search for a movie”) from the user102to control the media device106as well as other components in the media system104, such as the display device108. In some embodiments, the audio data received by the microphone112in the remote control110is processed by the device in which the domain adapted audio command processing module130is implemented (e.g., media device106, display device108, remote control110, and/or system server126). For example, in an embodiment where the domain adapted audio command processing module130is implemented in media device106, audio data may be received by the media device106from remote control110. The transfer of audio data may occur over a wireless link between remote control110and media device106. Also or alternatively, where voice command functionality is integrated within display device108, display device108may receive the audio data directly from user102. 
The domain adapted audio command processing module130that receives the audio data may operate to process and analyze the received audio data to recognize the user102's audio command. The domain adapted audio command processing module130may then perform an action associated with the audio command such as identifying potential candidates associated with the requested media content, forming a system command for retrieving the requested media content, or displaying the requested media content on the display device108. As noted above, the system servers126may also include the domain adapted audio command processing module130. In an embodiment, media device106may transfer audio data to the system servers126for processing using the domain adapted audio command processing module130in the system servers126. FIG.2illustrates a block diagram of an example media device106, according to some embodiments. Media device106may include a streaming module202, processing module204, storage/buffers208, and user interface module206. As described above, the user interface module206may include the domain adapted audio command processing module216. The media device106may also include one or more audio decoders212and one or more video decoders214. Each audio decoder212may be configured to decode audio of one or more audio formats, such as but not limited to AAC, HE-AAC, AC3 (Dolby Digital), EAC3 (Dolby Digital Plus), WMA, WAV, PCM, MP3, OGG, GSM, FLAC, AIFF, and/or VOX, to name just some examples. Similarly, each video decoder214may be configured to decode video of one or more video formats, such as but not limited to MP4 (mp4, m4a, m4v, f4v, f4a, m4b, m4r, f4b, mov), 3GP (3gp, 3gp2, 3g2, 3gpp, 3gpp2), OGG (ogg, oga, ogv, ogx), WMV (wmv, wma, asf), WEBM, FLV, AVI, QuickTime, HDV, MXF (OP1a, OP-Atom), MPEG-TS, MPEG-2 PS, MPEG-2 TS, WAV, Broadcast WAV, LXF, GXF, and/or VOB, to name just some examples. Each video decoder214may include one or more video codecs, such as but not limited to H.263, H.264, HEVC, MPEG1, MPEG2, MPEG-TS, MPEG-4, Theora, 3GP, DV, DVCPRO, DVCProHD, IMX, XDCAM HD, XDCAM HD422, and/or XDCAM EX, to name just some examples. Now referring to bothFIGS.1and2, in some embodiments, the user102may interact with the media device106via, for example, the remote control110. As noted above, remote control110may be implemented separately from media device106or integrated within media device106. For example, the user102may use the remote control110to verbally interact with the user interface module206of the media device106to select content, such as a movie, TV show, music, book, application, game, etc. The streaming module202of the media device106may request the selected content from the content server(s)120over the network118. The content server(s)120may transmit the requested content to the streaming module202. The media device106may transmit the received content to the display device108for playback to the user102. In streaming embodiments, the streaming module202may transmit the content to the display device108in real time or near real time as it receives such content from the content server(s)120. In non-streaming embodiments, the media device106may store the content received from content server(s)120in storage/buffers208for later playback on display device108. Domain Adapted Audio Command Processing Referring toFIG.1, the domain adapted audio command processing module130may be implemented within any device of media system104and may be configured to process audio data received from user102. 
The domain adapted audio command processing module130supports processing audio commands in the context of dynamic content domains and provides faster and more accurate translations of audio commands that involve media content in these domains. The domain adapted audio command processing module130may utilize a domain entity index, which provides information about more current entities (i.e., entities that an ASR engine would not recognize). The domain entity index may be implemented separately from an ASR engine and may be continuously updated with information about new entities (e.g., content titles) including their phonetic representations from dynamic domains. The domain entity index indexes the entities with the phonetic representations. This index allows for faster processing of audio commands because phonetic forms may be quickly searched to identify potentially relevant entities. This continuous updating of the domain entity index is in contrast to conventional systems utilizing a pre-trained ASR engine. In order to update the ASR engine, large amounts of additional domain data are needed to retrain the ASR engine. Because the domain entity index operates based on phonetic forms, new media content can be quickly indexed and ready for searching even for newly available content. The index may be continuously updated with new entities and their phonetic forms so that the index is able to provide accurate transcriptions of more current entities than conventional ASR engines. Sources of these entities may include recently released content (e.g., live events such as a presidential debate), user-upload sites where new content is uploaded on a daily basis, or other online resources for media content such as WIKIPEDIA or INTERNET MOVIE DATABASE (IMDB). The candidates provided by domain adapted audio command processing module130in response to audio commands in the dynamic domain are therefore more accurate than those of conventional systems. FIG.3illustrates an example block diagram of domain adapted audio processing module130, according to some embodiments. Domain adapted audio processing module130may include an ASR engine306, named entity recognition component308, grapheme-phoneme converter310, domain entities index312, fuzzy candidate generator314, ranker316, any other suitable hardware, software, device, or structure, or any combination thereof. In some embodiments, domain adapted audio processing module130may operate in an ingestion mode and a run-time mode. The ingestion mode may include operations when not processing a voice query, and may involve components grapheme-phoneme converter310and domain entities index312for processing entities received from entertainment domain entity source(s)304(i.e., ingesting new entities). The term “entities” is used to refer to specific items of media content such as a specific movie, song, or television show, etc., and may be associated with different types of metadata such as movie titles, music titles, actor names, music artists, titles of media content including user-generated content, and popular phrases (e.g., lyrics from songs, dialogue from movies), just to name a few examples. Now referring toFIGS.1,2, and3, in some embodiments, domain adapted audio processing module130may include an ASR engine306configured to receive voice query302, which, depending on where domain adapted audio processing module130is implemented, may be provided by another device within media system104or directly from user102. 
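To make the data flow among these components concrete, the following is a minimal Python sketch of how they might be wired together; the class name, method names, and stubbed components are illustrative assumptions rather than the implementation of domain adapted audio processing module130.

# Illustrative wiring of the FIG. 3 components; all names below are assumptions.
class DomainAdaptedAudioCommandProcessor:
    def __init__(self, asr_engine, ner, converter, candidate_generator, ranker):
        self.asr_engine = asr_engine                    # first-level, pre-trained ASR
        self.ner = ner                                  # named entity recognition over the transcription
        self.converter = converter                      # grapheme-phoneme conversion of the entity token
        self.candidate_generator = candidate_generator  # fuzzy search over the domain entities index
        self.ranker = ranker                            # ranks the fuzzy candidate list

    def process(self, audio):
        transcription = self.asr_engine(audio)           # e.g. "Play Pop Patrol"
        intent, action, token = self.ner(transcription)  # e.g. ("video request", "Play", "Pop Patrol")
        forms = self.converter(token)                    # grapheme, phoneme, 3-grams of the token
        candidates = self.candidate_generator(forms)     # fuzzy candidate list from the index
        return action, self.ranker(candidates)

# Toy usage with stubbed components, just to show the flow of data.
processor = DomainAdaptedAudioCommandProcessor(
    asr_engine=lambda audio: "Play Pop Patrol",
    ner=lambda text: ("video request", text.split()[0], " ".join(text.split()[1:])),
    converter=lambda token: {"grapheme": token.lower()},
    candidate_generator=lambda forms: ["Paw Patrol", "American Pop"],
    ranker=lambda cands: sorted(cands),
)
print(processor.process(b"<audio bytes>"))  # ('Play', ['American Pop', 'Paw Patrol'])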
ASR engine306may be implemented as a pre-trained ASR system that has been trained on public domain data available at the time of training. In an embodiment, ASR engine306may be an “off-the-shelf” engine that has not been modified, or has not received any additional training. ASR engine306may translate voice query302into a transcription or text format of the voice query. In an embodiment, voice query302includes an audio command for retrieving media content. The transcription provided by ASR engine306may not accurately reflect the media content requested by the voice query302but may nonetheless accurately reflect the phonetic form of the requested media content. For example, in response to a voice query “Play PAW PATROL,” ASR engine306may transcribe the audio command as “Play Pop Patrol.” As another example, ASR engine306may transcribe the audio command “Play THE DARK KNIGHT RISES” as “Play The Dark Night Rises.” These errors are examples of domain mismatch where the transcription may be an accurate phonetic representation of the voice query but not of the actually requested media content. Such errors by the ASR engine306are addressed by downstream components in domain adapted audio processing module130. Importantly, the transcription provided by ASR engine306does not need to be an accurate reflection of the requested media content. Named entity recognition (NER) component308is configured to receive the transcription from ASR engine306. The transcription is a textual representation of command components that form the audio command. Examples of command components include an intent, an action, and an entity. In an example where the voice query302includes the audio command “Play PAW PATROL” and the resulting transcription is “Play Pop Patrol,” then the action command component of the transcription is the “Play” action, the entity command component is “Pop Patrol,” and the intent component is a “video request.” NER308parses the transcription and performs recognition of the constituent command components within the transcription. The intent command component identifies the purpose of voice query302such as requesting media content; the action command component identifies the action to be performed on the requested media content; and the entity identifies the media content on which the action is to be performed. NER308identifies these command components—intent, action, and entity—and provides the entity as a token (text format) to grapheme-phoneme converter310. An entity may refer to the media content and the token refers to a textual form of the media content. A token is therefore merely an underspecified text and/or an erroneous ASR transcription, and one goal of the present disclosure is to link the text form (i.e., token) to a corresponding media content (i.e., entity). After transcription, the “PAW PATROL” in example audio command “Play PAW PATROL” represents a token. The token is linked to a corresponding entity, “Paw Patrol,” with a type “TV Show.” In an embodiment, tokens are derived from transcriptions while entities are derived from sources (e.g., Wikidata) of known entities in the entertainment domain. Grapheme-phoneme converter310receives the entity, identifies the language of the entity (e.g., English), and performs a language-specific conversion process which involves converting the text format of the entity into phonetic forms. Phonetic forms include the phoneme of the entity and are used to search for a matching entity in the database. 
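As a rough illustration of this parsing step, the rule-based sketch below splits a transcription into an action and an entity token and infers an intent; the disclosure does not specify how NER component308is implemented, so the keyword list and rules here are assumptions.

# Illustrative, rule-based stand-in for the NER step; not the disclosed NER component.
ACTION_WORDS = {"play", "search"}

def parse_command(transcription: str) -> dict:
    words = transcription.strip().split()
    action = words[0].capitalize() if words and words[0].lower() in ACTION_WORDS else None
    entity_token = " ".join(words[1:]) if action else " ".join(words)
    # The intent is inferred from the combination of action and entity, as described above.
    intent = "media content request" if action and entity_token else "unknown"
    return {"intent": intent, "action": action, "entity_token": entity_token}

print(parse_command("Play Pop Patrol"))
# {'intent': 'media content request', 'action': 'Play', 'entity_token': 'Pop Patrol'}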
There are different known kinds of phonetic forms: New York State Identification and Intelligence System (“NYSIIS”) and International Phonetic Alphabet (“IPA”). The phoneme represents the phonetic pronunciation of the entity (e.g., “pαp pΛtro:l” for the IPA form and “P PATRAL” for the NYSIIS phonetic form). NYSIIS is a lossy phonetic form that provides an approximation of the entity and allows for a faster method for determining a predetermined number (e.g., 100) of relevant entity candidates from a database of millions of entity candidates. IPA is a precise phonetic algorithm that may be utilized to calculate phonetic-edit-distance. In an embodiment, other orthographic forms may be used to improve the ranking. Examples of these other orthographic forms include the grapheme of the entity, the N-gram of the entity, and a popularity score of the entity. The grapheme represents the text (spelling) of the entity (e.g., “PAW PATROL”). The N-gram represents an N-letter sequence of letters of the entity; for example, a 3-gram of “PAW PATROL” represents a 3-letter sequence of letters (e.g., “paw,” “aw_,” “w_p,” “_pa,” “pat,” “atr,” “tro,” and “rol”). And the popularity score represents a value indicating the popularity of the entity with respect to other entities within the media system (e.g., reflects which entities are requested or have been viewed more often than other entities). The entities with their phonetic forms are stored as an entry within domain entities index312and, if responding to a voice query, may be provided to fuzzy candidate generator314for further processing. The receipt and processing of voice queries (such as voice query302) by domain adapted audio processing module130may be considered a run-time process. In contrast, communication between grapheme-phoneme converter310and entertainment domain entity source(s)304may occur during the ingestion process. Communication between grapheme-phoneme converter310and entertainment domain entity source(s)304may occur continuously (e.g., in the background) such that new entities are provided to grapheme-phoneme converter310, and subsequently to domain entities index312, on a continuous basis. Examples of entertainment domain entity source(s)304include user-upload sites or other media content resources such as WIKIDATA or INTERNET MOVIE DATABASE (IMDB) that are constantly updated with new media content as they are released. Information may be retrieved from these sources through automated means such as a website crawler. In an embodiment, communication between grapheme-phoneme converter310and entertainment domain entity source(s)304may occur as part of a push process where new media content entities are continuously pushed to grapheme-phoneme converter310as new entities are discovered. In another embodiment, domain adapted audio processing module130may pull new media content entities from entertainment domain entity source(s)304on an intermittent or scheduled basis. Domain entities index312receives entities and their phonetic forms from grapheme-phoneme converter310and stores them as indexed entries so that the entries can be easily searched. Grapheme-phoneme converter310continuously updates domain entities index312when grapheme-phoneme converter310receives new entities from entertainment domain entity source(s)304as part of the ingestion process. 
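A simplified sketch of this ingestion-side form computation is shown below. The 3-gram helper reproduces the “paw”/“aw_”/… example above, while the vowel-stripping “lossy key” is only a crude stand-in for NYSIIS (a real implementation would use an actual NYSIIS encoder and an IPA converter), and the dictionary is a stand-in for domain entities index312.

# Sketch of ingestion-side form computation; the lossy key below is NOT NYSIIS.
def trigrams(text: str):
    s = text.lower().replace(" ", "_")
    return [s[i:i + 3] for i in range(len(s) - 2)]

def lossy_key(text: str):
    # Crude approximation: keep the first letter of each word plus its consonants.
    # A real system would produce NYSIIS codes such as "P PATRAL" for "PAW PATROL".
    out = []
    for word in text.lower().split():
        out.append((word[0] + "".join(c for c in word[1:] if c not in "aeiou")).upper())
    return " ".join(out)

domain_entities_index = {}  # lossy key -> list of entity entries

def ingest(title: str, popularity: float) -> None:
    entry = {"grapheme": title, "trigrams": trigrams(title), "popularity": popularity}
    domain_entities_index.setdefault(lossy_key(title), []).append(entry)

ingest("Paw Patrol", 399)
print(trigrams("paw patrol"))  # ['paw', 'aw_', 'w_p', '_pa', 'pat', 'atr', 'tro', 'rol']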
These operations of the ingestion process allow new entities to be continuously stored in domain entities index312independently of training the ASR engine306, and allow those entries to be made available for fuzzy candidate generator314to generate domain specific candidates when responding to voice query302. The ingestion process of domain adapted audio processing module130provides advantages over a conventional ASR system that would rely only on an off-the-shelf ASR engine such as ASR engine306. By continuously updating entries and their associated phonetic forms in the domain entities index, the entries are available, in real-time, for responding to voice queries and domain adapted audio processing module130can generate candidates that are more relevant to voice query302in the entertainment domain. Domain adapted audio processing module130can quickly adapt to new terminology or potentially confusing content titles (“The Dark Night Rises” vs “THE DARK KNIGHT RISES”). In addition, use of a continuously updated index obviates the need to retrain the ASR engine. Yet another advantage provided by domain entities index312is that domain adapted audio processing module130may be quickly modified for different languages (language portability) because only phonetic forms of entries are required as opposed to large amounts of language-specific training data for building or customizing the language-specific speech model. Fuzzy candidate generator314is responsible for generating domain specific candidates in response to voice query302. The candidates generated by fuzzy candidate generator314may be considered fuzzy candidates because the candidates may not exactly match the entity representing the media content identified in the voice query. This is especially true when there is a domain mismatch in the transcription provided by ASR engine306such as with “The Dark Night Rises” compared with the actual audio command for “THE DARK KNIGHT RISES.” In this example, there is no media content titled “The Dark Night Rises” so any suggested candidates would not perfectly match this token; accordingly, such candidates would be considered fuzzy candidates. Fuzzy candidate generator314receives a token identifying the requested entity and its corresponding phonetic forms from grapheme-phoneme converter310, and performs a search of the domain entities index312to retrieve candidates that are similar phonetically to the token. In an embodiment, the search performed by fuzzy candidate generator314includes at least one of a grapheme search, a phoneme search, and an N-gram (e.g., 3-gram) search. In an embodiment, the search includes all three searches—a grapheme search, a phoneme search, and an N-gram search—and fuzzy candidate generator314concatenates candidates generated by each search to populate a fuzzy candidate list. The grapheme search includes a text spelling match based on matching the spelling of the token (e.g., “Pop Patrol”) to graphemes in the domain entities index312. The phoneme search includes a phonetic matching based on pronunciation where a phoneme of the token (e.g., “pαp pΛtro:l”) matches phonemes in the domain entities index312. The N-gram search is a combined grapheme-phoneme match based on matching the N-gram (e.g., “pop pat atr tro rol”) to N-grams in the domain entities index312; N-gram search may be considered a combination of the grapheme and phoneme matching of the token. Components of fuzzy candidate generator314are discussed in further detail with regard toFIG.4. 
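The following short sketch illustrates the advantage described above: a newly released title becomes searchable as soon as it is ingested, and the ASR engine itself is never retrained. The dictionary-based index and helper names are assumptions for illustration only.

# Sketch: the index can be updated at any time and queried immediately; no ASR retraining.
index = {}

def normalize(title: str) -> str:
    return "".join(c for c in title.lower() if c.isalnum())

def add_entity(title: str, content_type: str) -> None:
    index[normalize(title)] = {"title": title, "type": content_type}

def lookup(token: str):
    return index.get(normalize(token))

add_entity("Nomadland", "Movie")  # newly released title ingested at run time
print(lookup("no mad land"))      # found immediately: {'title': 'Nomadland', 'type': 'Movie'}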
In an embodiment, the phoneme search utilizes both a lossy and a precise phonetic form to determine a matching entity. An advantage of using both types of phonetic forms is that doing so increases the efficiency of fuzzy candidate generator314at run-time. A lossy phonetic form reduces the number of potential candidates (e.g., millions) to a predetermined number (e.g., 100) while the precise phonetic form further reduces the predetermined number of candidates to the most relevant candidates (e.g., the top 3 or 5 candidates). For example, fuzzy candidate generator314may first employ a lossy phonetic form (e.g., NYSIIS) to determine a number of relevant entity candidates. Next, fuzzy candidate generator314may utilize the precise phonetic algorithm (e.g., IPA) to calculate phonetic-edit-distance to rank the candidates that were generated using the lossy phonetic form. In an embodiment, the candidates with the smallest phonetic edit distance may be considered to be the most relevant. The other orthographic forms—grapheme, N-gram, spelling—may be used to improve both the candidate generation using the lossy form (to generate the predetermined number of candidates) and the ranking based on phonetic edit distance using the precise form. Fuzzy candidate generator314may index all phonetic and orthographic forms in domain entities index312. Ranker316ranks the fuzzy candidate list generated by fuzzy candidate generator314to provide a ranked candidate list identifying domain adapted transcriptions associated with the voice query. The fuzzy candidate list represents a reduced set of candidates pulled from the domain entities index312and allows ranker316to perform its operations at run-time because only a small set of candidates (as opposed to the full amount of candidates from domain entities index312) need to be processed and matched with the token. Ranker316may consider a number of different factors when ranking the candidates provided by fuzzy candidate generator314, including but not limited to phonetic edit distance, match count, longest common sequence, nospace overlap, and popularity. The ranking of each candidate may be based on one or more of these factors. In an embodiment, each factor is assigned a numerical value to indicate candidates that provide a better match in each of the factors. For example, phonetic edit distance may be represented by a numerical value that indicates the similarity between the phonemes of the token and of the candidate entity. As an example, if the transcription token is “hobs and shaw” (phonetic form “hαbz ænd ∫o”) and the entity is “Hobbs and Shaw” (phonetic form “hαbz ænd ∫o”), the phonetic edit distance between the token and the entity is 0 since both have identical phonetic forms. However, the text edit distance between them is 1 since a “b” is missing from the token. As another example, the popularity of each candidate may also be retrieved from domain entities index312. The numerical value associated with popularity may indicate the frequency that the candidate (e.g., media content) was played, requested, or otherwise involved in an action within multimedia environment100. For example, the popularity of an entity could refer to the number of streams by the thousands or millions of media systems104within multimedia environment100. Accordingly, a higher value for popularity may indicate a higher number of streams within multimedia environment100. 
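The phonetic edit distance mentioned above can be computed with a standard Levenshtein distance applied to phonetic strings, as in the sketch below; the shortlist-then-rank split mirrors the lossy/precise two-stage flow described above, with hypothetical data structures.

# Standard Levenshtein distance; applied to phonetic strings it serves as a
# phonetic edit distance. The shortlist/rank split below is a sketch only.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("hαbz ænd ∫o", "hαbz ænd ∫o"))       # 0: identical phonetic forms
print(edit_distance("hobs and shaw", "hobbs and shaw"))  # 1: one "b" missing in the text form

def rank_shortlist(token_precise: str, shortlist: list) -> list:
    # shortlist: candidates already narrowed down using the lossy phonetic form
    return sorted(shortlist, key=lambda cand: edit_distance(token_precise, cand["precise"]))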
The numerical value for the match count factor may indicate how many matching-strategies—grapheme spelling, grapheme n-gram, phoneme—indicate that the potential candidate is a quality match. For example, if all three matching-strategies indicate a quality match, then the value for the match count factor is “3.” The numerical value for the longest common sequence may be based on the grapheme search and indicates the longest common sequence of matching text between the candidate and the token. For example, for the candidates “PAW PATROL” and “PAW PATROL toy play,” the numerical values are the same since they both share the same text “Patrol” with the token “Pop Patrol.” The numerical value for nospace overlap may indicate the similarity score between the token and an entity if spaces were removed. As an example, a “melissa fent” token may match the real-world entity “maleficent” if a space is removed from the token “melissa fent,” resulting in “melissafent.” A “melissa fent” token in response to an audio command requesting “maleficent” occurs with conventional ASR systems because an off-the-shelf ASR does not have insight into the entertainment domain and may randomly inject spaces into the transcription. In this example, a conventional ASR may consider “maleficent” to be a person's name and add a space after “Melissa.” An example ranked fuzzy candidate list is reproduced below with exemplary values for each of the factors.

Candidate | Phonetic Edit Distance | Popularity | Match Count | Longest Common Sequence | NoSpace Overlap | Rank
Paw Patrol | 9 | 399 | 3 | 80 | 45 | 1
Paw Patrol toy play | 8 | 450 | 1 | 80 | 25 | 2
Patrol | 9 | 130 | 1 | 60 | 57 | 3
American Pop | 8 | 692 | 1 | 30 | 7 | 4

Ranker316may then provide the top ranked candidate within the ranked fuzzy candidate list or a certain number of the top ranked candidates to an appropriate device for retrieval (e.g., content server120), display (e.g., display device108), or additional processing (e.g., media device106). For example, the certain number of the top ranked candidates may be provided for display on display device108to allow user102to select the appropriate candidate. In another embodiment, the top ranked candidate in the ranked fuzzy candidate list may be automatically retrieved (e.g., from content server120) and played (e.g., by media device106). In another embodiment, content server120may identify all streaming services that provide the top ranked candidate and generate a new list that displays the top ranked candidate along with the streaming services, and provide that to display device108for display. FIG.4is a block diagram of a fuzzy candidate generator314, according to some embodiments. Fuzzy candidate generator314may include receiver402, grapheme search component404, N-gram search component406, phoneme search component408, and candidate generator410. Receiver402receives the identified token and the phonetic forms from grapheme-phoneme converter310, and initiates at least one search (and up to all three searches) from the grapheme search, the N-gram search, and the phoneme search. Receiver402routes the appropriate token and phonetic information to the respective components for each search. For example, receiver402routes the token and its grapheme to grapheme search component404. Grapheme search component404communicates with domain entities index312to search for graphemes that match the grapheme of the token. For example, grapheme search component404performs a search for the grapheme “Pop Patrol” (i.e., the grapheme of the token identified in the transcription provided by ASR engine306) in domain entities index312. 
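Two of the ranking factors just described, the longest common sequence and the nospace overlap, can be sketched with Python's standard difflib module as shown below; turning these raw measurements into the exemplary scores in the table above is not specified here, so the scoring is an assumption.

import difflib

def nospace_overlap(token: str, entity: str) -> float:
    # Similarity with spaces removed, e.g. "melissa fent" vs. "maleficent".
    a = token.lower().replace(" ", "")
    b = entity.lower().replace(" ", "")
    return difflib.SequenceMatcher(None, a, b).ratio()

def longest_common_sequence(token: str, entity: str) -> int:
    a, b = token.lower(), entity.lower()
    match = difflib.SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return match.size  # length of the longest contiguous block of matching text

print(round(nospace_overlap("melissa fent", "maleficent"), 2))       # high similarity once spaces are dropped
print(longest_common_sequence("pop patrol", "paw patrol toy play"))  # 7, the shared " patrol" block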
Because domain entities index312has been updated with media content from the entertainment domain, the grapheme search may produce domain specific candidates based on the grapheme such as “Paw Patrol” and “American Pop.” In an embodiment, the grapheme search component404performs a fuzzy match of the grapheme of the token with the grapheme candidates from domain entities index312. N-gram search component406may be implemented based on the number of letters to be searched. In an embodiment, n-gram search component406may be implemented as a 3-gram search component that searches for 3-grams associated with the token. N-gram search component406communicates with domain entities index312to search for n-grams that match the n-gram of the token. For example, n-gram search component406performs a search for the 3-gram “pop pat atr tro rol” (i.e., the 3-gram of the token identified in the transcription provided by ASR engine306) in domain entities index312. The n-gram search component406may then provide domain specific candidates based on the n-gram such as “PAW PATROL” and “patrol.” In an embodiment, the n-gram search component406performs a fuzzy match of the n-gram of the token with the n-gram candidates from domain entities index312. Phoneme search component408communicates with domain entities index312to search for phonemes that match the phoneme of the token. For example, phoneme search component408performs a search for the phoneme of the token identified in the transcription provided by ASR engine306, e.g., “pαp pΛtro:l” (a precise phonetic form) and/or “P PATRAL” (a lossy phonetic form) in domain entities index312. The phoneme search may produce domain specific candidates based on the phoneme such as “PAW PATROL” and “Paw Patrol toy play.” Candidate generator410may then concatenate the candidates provided by one or all of the grapheme search component404, the N-gram search component406, and the phoneme search component408to form a fuzzy candidate list that includes candidates from at least one of the grapheme search, the N-gram search, and the phoneme search. In an embodiment, the fuzzy candidate list includes at least one candidate from all three searches. Candidate generator410may then provide the fuzzy candidate list to a ranker, such as ranker316, for further ranking. 
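A compact sketch of how candidate generator410might concatenate results from the individual searches is given below; the use of difflib for the grapheme fuzzy match, the trigram-overlap ordering, and the tiny in-memory list standing in for domain entities index312are all assumptions for illustration.

import difflib

# A plain list of graphemes stands in for the domain entities index here.
INDEX_GRAPHEMES = ["Paw Patrol", "Paw Patrol toy play", "Patrol", "American Pop"]

def grapheme_search(token: str, n: int = 5):
    # Fuzzy spelling match of the token against indexed graphemes.
    return difflib.get_close_matches(token, INDEX_GRAPHEMES, n=n, cutoff=0.4)

def ngram_search(token: str, n: int = 5):
    def grams(s):
        t = s.lower().replace(" ", "_")
        return {t[i:i + 3] for i in range(len(t) - 2)}
    token_grams = grams(token)
    return sorted(INDEX_GRAPHEMES, key=lambda e: -len(token_grams & grams(e)))[:n]

def phoneme_search(token_phoneme: str, n: int = 5):
    # Placeholder: a real component would match lossy/precise phonetic forms in the index.
    return []

def generate_fuzzy_candidates(token: str, token_phoneme: str = "") -> list:
    # Concatenate candidates from the grapheme, N-gram, and phoneme searches.
    combined = grapheme_search(token) + ngram_search(token) + phoneme_search(token_phoneme)
    seen, fuzzy_list = set(), []
    for cand in combined:
        if cand not in seen:
            seen.add(cand)
            fuzzy_list.append(cand)
    return fuzzy_list

print(generate_fuzzy_candidates("Pop Patrol"))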
In504, the domain adapted audio command processing module130generates a transcription of the voice query. The transcription may be generated using automatic speech recognition (ASR) engine306. The transcription is a textual representation of the voice query including all of the components in the query such as the requested media content and the action to be performed on the requested media content. In an embodiment, the textual representation of the requested media content is an imperfect match (i.e., domain mismatch) to the requested media content. In other words, the textual representation may not exactly match the media content in the voice query. For example, a voice query may be “Play PAW PATROL” where “PAW PATROL” represents the requested media content; the textual representation of the requested media content provided by the ASR engine306may be “Pop Patrol.” In other words, ASR engine306may provide a textual representation that is phonetically similar to, but not an accurate representation of, the requested media content. In506, the domain adapted audio command processing module130may generate, based on the transcription, a token representing each media content being requested in the voice query. The voice query may include more than one entity (e.g., “Play PAW PATROL directed by Karl Bunker”) and there is one transcribed token for each entity. In an embodiment, identifying entity tokens may include parsing the transcription to identify one or more command components within the transcription where a command component may include an entity that identifies the requested media content, an identified intent of the voice query, and an identified action to be performed on the requested media content. The identified intent may be determined based on the combination of the entity and the identified action. Continuing the example above, a transcription for a voice query for “Play PAW PATROL” may include the “Play” action and the “Pop Patrol” entity. Based on the combination of these command components, domain adapted audio command processing module130may identify that the intent of the voice query is a request for media content (i.e., the content is being requested so that it may be played). Based on the command components identified in the transcription, domain adapted audio command processing module130may then generate a token corresponding to the entity. The token may be in a text form. In508, the domain adapted audio command processing module130may generate phonetic forms of the tokens via a grapheme-phoneme conversion process. In an embodiment, this step may include converting the token into a phonetic representation of the entity. Examples of the phonetic representation were discussed above with respect toFIGS.3and4and include the grapheme of the token, the phoneme of the token, and the N-gram of the token. In510, the domain adapted audio command processing module130may generate domain specific candidates based on the phonetic forms and provide the candidates in a fuzzy candidate list. The fuzzy candidate list may include fuzzy candidates that represent potential matches to the media content identified by the entity. A goal of the domain adapted audio command processing module130is to identify the requested media content in the voice query using what could be an imperfect match represented by the entity in the transcription of the voice query. Accordingly, one of the fuzzy candidates may be an imperfect match to the entity but a perfect match for the requested media content in the voice query. 
In an embodiment, the matching between the fuzzy candidates and the entity is based on the phonetic representation, including one of the grapheme of the token, the phoneme of the token, and the N-gram of the token, and the token itself. In an embodiment, generating the domain specific candidates may include at least one of a grapheme search, a phoneme search, and an N-gram search. The grapheme search may be based on the grapheme of the token that is used to identify at least one fuzzy grapheme candidate in domain entities index312. The identification of the fuzzy grapheme candidate may be based on a spelling comparison between the grapheme of the token and the spelling of the fuzzy grapheme candidates within the domain entities index. At least one of the fuzzy candidates in the fuzzy candidate list may include one fuzzy grapheme candidate. The spelling comparison may include using the grapheme of the token to search for a grapheme candidate in domain entities index312and identifying the grapheme candidate as a fuzzy grapheme candidate if there is a fuzzy match between a spelling of the grapheme and a spelling of the grapheme candidate. This identification may involve retrieving, from an entry in domain entities index312, the spelling of the grapheme candidate. The domain entities index312may be updated to include an entry associated with the grapheme candidate by populating the entry with the spelling of the grapheme candidate. This update of the domain entities index312occurs independently of ASR engine306, which allows the domain entities index312to be updated more quickly and does not require retraining of ASR engine306. The domain entities index312may include a number of entries, including the entry, associated with a plurality of grapheme candidates and the domain entities index may be updated on a continuous basis with new entries as they are received. The phoneme search may include searching the domain entities index312based on the phoneme of the token to identify a fuzzy phoneme match based on a phonetic comparison between the phoneme of the token and the fuzzy phoneme candidate. At least one of the fuzzy candidates in the fuzzy candidate list may include one fuzzy phoneme candidate. The phonetic comparison may involve using the phoneme of the token to search for a phoneme candidate in domain entities index312and identifying the phoneme candidate as the fuzzy phoneme candidate based on a phonetic matching between the phoneme of the token and the phoneme candidate by, for example, retrieving the phoneme candidate from the entry. The domain entities index may include an entry associated with the phoneme candidate and may be updated by populating the entry with the phoneme candidate independently of the automatic speech recognition engine. The N-gram search may include searching the domain entities index312based on the N-gram of the token to identify a fuzzy N-gram match based on an N-gram comparison between the token and the fuzzy N-gram candidate. At least one of the fuzzy candidates may further include the fuzzy N-gram match. The N-gram comparison may involve using the N-gram of the token to search for an N-gram candidate in the domain entities index and identifying the N-gram candidate as the fuzzy N-gram candidate based on matching the N-gram of the token to an N-gram of the N-gram candidate. The domain entities index312may include an entry associated with the N-gram candidate, and performing the search may include retrieving, from the entry, the N-gram of the N-gram candidate. 
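The N-gram comparison described above can be sketched as an overlap measure between the 3-grams of the token and those of an indexed candidate; the Jaccard-style score used below is an assumed choice for illustration and is not prescribed by this disclosure.

# Sketch of an N-gram (3-gram) comparison; Jaccard overlap is an assumed scoring choice.
def char_trigrams(text: str):
    s = text.lower().replace(" ", "_")
    return {s[i:i + 3] for i in range(len(s) - 2)}

def ngram_overlap(token: str, candidate: str) -> float:
    a, b = char_trigrams(token), char_trigrams(candidate)
    return len(a & b) / len(a | b) if (a | b) else 0.0

print(round(ngram_overlap("Pop Patrol", "Paw Patrol"), 2))    # higher overlap
print(round(ngram_overlap("Pop Patrol", "American Pop"), 2))  # lower overlap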
In512, domain adapted audio command processing module130may rank the candidates in the fuzzy candidate list to form a ranked fuzzy candidate list including a highest ranked fuzzy candidate corresponding to a best potential match for the media content. One or more of the highest ranked candidates in the ranked fuzzy candidate list may then be provided in response to the voice query. This may include performing an action associated with the highest ranked fuzzy candidate. In an embodiment, ranking the candidates may include ranking the fuzzy grapheme match, the fuzzy N-gram match, and the fuzzy phoneme match in the fuzzy candidate list to form a ranked candidate list. In an embodiment, the highest ranked fuzzy candidate in the ranked candidate list corresponds to the best potential match for the media content requested by the voice query and that is represented by the token. The highest ranked fuzzy candidate may be determined based on any number of ranking criteria, including at least one of a phonetic edit distance, a popularity score, a match count, a longest common sequence score, and a nospace overlap score, as discussed above. Providing the ranked domain adapted candidates may include performing an action such as displaying the fuzzy candidates on display device108and waiting for a user selection from a user device (e.g., remote control110, media device106). Additional actions may occur after display of the fuzzy candidates, including receiving a selection of the highest ranked fuzzy candidate from the user device, retrieving the highest ranked fuzzy candidate from a database (e.g., content server120), and sending the ranked fuzzy candidate list including the highest ranked fuzzy candidate to media device106. FIG.6is a flowchart illustrating a process for updating a domain entities index, according to some embodiments. Method600can be performed by processing logic that can include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions executing on a processing device), or a combination thereof. It is to be appreciated that not all steps may be needed to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown inFIG.6, as will be understood by a person of ordinary skill in the art. Method600shall be described with reference toFIGS.1-4. However, method600is not limited to those example embodiments. Method600relates to the ingestion process for populating domain entities index312with new entities as they are received from entertainment domain entity source(s)304. In602, domain adapted audio command processing module130may collect entertainment domain entities. In an embodiment, collection may be a push process where entertainment domain entity source(s)304automatically pushes new entities on a continuous or scheduled basis. In an embodiment, collection may be a pull process where domain adapted audio command processing module130submits requests to entertainment domain entity source(s)304to provide updated entities. In604, domain adapted audio command processing module130provides the new entities to grapheme-phoneme converter310for conversion of the entities into phonetic forms. 
In606, domain adapted audio command processing module130stores the entities along with phonetic forms in domain entities index312as index entries to facilitate searching and retrieval of information during run-time, such as by fuzzy candidate generator314when generating fuzzy candidates in response to a voice query. Example Computer System Various embodiments and/or components therein can be implemented, for example, using one or more computer systems, such as computer system700shown inFIG.7. Computer system700can be any computer or computing device capable of performing the functions described herein. For example, the media device106may be implemented using combinations or sub-combinations of computer system700. Also or alternatively, one or more computer systems700may be used to implement any of the embodiments discussed herein, as well as combinations and sub-combinations thereof. Computer system700includes one or more processors (also called central processing units, or CPUs), such as processor704. Processor704is connected to communications infrastructure706(e.g., a bus). In some embodiments, processor704can be a graphics processing unit (GPU). In some embodiments, a GPU may be a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU can have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system700also includes user input/output device(s)703, such as monitors, keyboards, pointing devices, etc., that communicate with communications infrastructure706through user input/output interface(s)702. Computer system700also includes main memory708(e.g., a primary memory or storage device), such as random access memory (RAM). Main memory708can include one or more levels of cache. Main memory708may have stored therein control logic (i.e., computer software) and/or data. Computer system700can also include one or more secondary storage devices or memories such as secondary memory710. Secondary memory710can include, for example, hard disk drive712, removable storage drive714(e.g., a removable storage device), or both. Removable storage drive714can be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup device, and/or any other storage device/drive. Removable storage drive714can interact with removable storage unit718. Removable storage unit718includes a computer usable or readable storage device having stored thereon computer software (e.g., control logic) and/or data. Removable storage unit718can be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive714may read from and/or write to removable storage unit718. In some embodiments, secondary memory710can include other means, devices, components, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system700. Such means, devices, components, instrumentalities or other approaches can include, for example, removable storage unit722and interface720. 
Examples of removable storage unit722and interface720can include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system700can further include a communications interface724(e.g., a network interface). Communications interface724may enable computer system700to communicate and interact with any combination of external or remote devices, external or remote networks, remote entities, etc. (individually and collectively referenced by reference number728). For example, communications interface724can allow computer system700to communicate with external or remote devices728over communications path726, which can be wired, wireless, or a combination thereof, and which can include any combination of LANs, WANs, the Internet, etc. Control logic and/or data can be transmitted to and from computer system700via communications path726. Computer system700may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smart phone, smart watch or other wearable, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof. Computer system700may be a client or server, accessing or hosting any applications and/or data through any delivery paradigm, including but not limited to remote or distributed cloud computing solutions; local or on-premises software (“on-premise” cloud-based solutions); “as a service” models (e.g., content as a service (CaaS), digital content as a service (DCaaS), software as a service (SaaS), managed software as a service (MSaaS), platform as a service (Paas), desktop as a service (DaaS), framework as a service (FaaS), backend as a service (Baas), mobile backend as a service (MBaaS), infrastructure as a service (IaaS), etc.); and/or a hybrid model including any combination of the foregoing examples or other services or delivery paradigms. Any applicable data structures, file formats, and schemas in computer system700may be derived from standards and specifications associated with images, audio, video, streaming (e.g., adaptive bitrate (ABR) streaming, content feeds), high-dynamic-range (HDR) video, text (e.g., closed captioning, subtitles), metadata (e.g., content metadata), data interchange, data serialization, data markup, digital rights management (DRM), encryption, any other suitable function or purpose, or any combination thereof. Alternatively, proprietary data structures, formats or schemas may be used, either exclusively or in combination with another standard or specification. Standards and specifications associated with images may include, but are not limited to, Base Index Frames (BIF), Bitmap (BMP), Graphical Interchange Format (GIF), Joint Photographic Experts Group (JPEG or JPG), Portable Network Graphics (PNG), any other suitable techniques (e.g., functionally similar representations), any predecessors, successors, and variants thereof, and any combinations thereof. 
Standards and specifications associated with audio may include, but are not limited to, Advanced Audio Coding (AAC), AAC High Efficiency (AAC-HE), AAC Low Complexity (AAC-LC), Apple Lossless Audio Codec (ALAC), Audio Data Transport Stream (ADTS), Audio Interchange File Format (AIFF), Digital Theater Systems (DTS), DTS Express (DTSE), Dolby Digital (DD or AC3), Dolby Digital Plus (DD+ or Enhanced AC3 (EAC3)), Dolby AC4, Dolby Atmos, Dolby Multistream (MS12), Free Lossless Audio Codec (FLAC), Linear Pulse Code Modulation (LPCM or PCM), Matroska Audio (MKA), Moving Picture Experts Group (MPEG)-1 Part 3 and MPEG-2 Part 3 (MP3), MPEG-4 Audio (e.g., MP4A or M4A), Ogg, Ogg with Vorbis audio (Ogg Vorbis), Opus, Vorbis, Waveform Audio File Format (WAVE or WAV), Windows Media Audio (WMA), any other suitable techniques, any predecessors, successors, and variants thereof, and any combinations thereof. Standards and specifications associated with video may include, but are not limited to, Alliance for Open Media (AOMedia) Video 1 (AV1), Audio Video Interleave (AVI), Matroska Video (MKV), MPEG-4 Part 10 Advanced Video Coding (AVC or H.264), MPEG-4 Part 14 (MP4), MPEG-4 Video (e.g., MP4V or M4V), MPEG-H Part 2 High Efficiency Video Coding (HEVC or H.265), QuickTime File Format (QTFF or MOV), VP8, VP9, WebM, Windows Media Video (WMV), any other suitable techniques, any predecessors, successors, and variants thereof, and any combinations thereof. Standards and specifications associated with streaming may include, but are not limited to, Adaptive Streaming over HTTP, Common Media Application Format (CMAF), Direct Publisher JavaScript Object Notation (JSON), HD Adaptive Streaming, HTTP Dynamic Streaming, HTTP Live Streaming (HLS), HTTP Secure (HTTPS), Hypertext Transfer Protocol (HTTP), Internet Information Services (IIS) Smooth Streaming (SMOOTH), Media RSS (MRSS), MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH or DASH), MPEG transport stream (MPEG-TS or TS), Protected Interoperable File Format (PIFF), Scalable HEVC (SHVC), any other suitable techniques, any predecessors, successors, and variants thereof, and any combinations thereof. Standards and specifications associated with HDR video may include, but are not limited to, Dolby Vision, HDR10 Media Profile (HDR10), HDR10 Plus (HDR10+), Hybrid Log-Gamma (HLG), Perceptual Quantizer (PQ), SL-HDR1, any other suitable techniques, any predecessors, successors, and variants thereof, and any combinations thereof. Standards and specifications associated with text, metadata, data interchange, data serialization, and data markup may include, but are not limited to, Internet Information Services (IIS) Smooth Streaming Manifest (ISM), IIS Smooth Streaming Text (ISMT), Matroska Subtitles (MKS), SubRip (SRT), Timed Text Markup Language (TTML), Web Video Text Tracks (WebVTT or WVTT), Comma-Separated Values (CSV), Extensible Markup Language (XML), Extensible Hypertext Markup Language (XHTML), XML User Interface Language (XUL), JSON, MessagePack, Wireless Markup Language (WML), Yet Another Markup Language (YAML), any other suitable techniques, any predecessors, successors, and variants thereof, and any combinations thereof. 
Standards and specifications associated with DRM and encryption may include, but are not limited to, Advanced Encryption Standard (AES) (e.g., AES-128, AES-192, AES-256), Blowfish (BF), Cipher Block Chaining (CBC), Cipher Feedback (CFB), Counter (CTR), Data Encryption Standard (DES), Triple DES (3DES), Electronic Codebook (ECB), FairPlay, Galois Message Authentication Code (GMAC), Galois/Counter Mode (GCM), High-bandwidth Digital Content Protection (HDCP), Output Feedback (OFB), PlayReady, Propagating CBC (PCBC), Trusted Execution Environment (TEE), Verimatrix, Widevine, any other suitable techniques, any predecessors, successors, and variants thereof, and any combinations thereof, such as AES-CBC encryption (CBCS), AES-CTR encryption (CENC). In some embodiments, a tangible, non-transitory apparatus or article of manufacture including a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system700, main memory708, secondary memory710, and removable storage units718and722, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system700), may cause such data processing devices to operate as described herein. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown inFIG.7. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described herein. Conclusion It is to be appreciated that the Detailed Description section, and not any other section, is intended to be used to interpret the claims. Other sections can set forth one or more but not all example embodiments as contemplated by the inventor(s), and thus, are not intended to limit this disclosure or the appended claims in any way. While this disclosure describes example embodiments for example fields and applications, it should be understood that the disclosure is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of this disclosure. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein. Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein. 
References herein to “one embodiment,” “an embodiment,” “an example embodiment,” or similar phrases, indicate that the embodiment described can include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. Additionally, some embodiments can be described using the expressions “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments can be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, can also mean that two or more elements are not in direct contact with each other, yet still co-operate or interact with each other. The breadth and scope of this disclosure should not be limited by any of the above-described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
66,839
11862153
DETAILED DESCRIPTION Overview This disclosure includes techniques and implementations to improve acoustic performance of an audio controlled assistant device. One way to improve acoustic performance is to personalize language models and acoustic models (used to analyze, isolate and respond to audio commands) for a given acoustic environment, such as a user's home. The audio controlled assistant is configured to detect and respond to audio commands. Audio commands include voice commands, which are words spoken by a user, and audio prompts, which are non-conversational noises. As used herein, “non-conversational noises” are sounds other than speech, which occur naturally in an environment. In one implementation, the non-conversational noises may be defined as audio signals that have no meaning within a selected vocabulary or dictionary. For instance, the audio controlled assistant may be configured for a selected language and the non-conversational noises may be discrete sounds that do not appear in a dictionary representative of the selected language. In some examples, the non-conversational noises may include door bell chimes, ring tones, footsteps, dog barks, noise related to an appliance, etc. The audio prompts are non-conversational noises, which have been designated to elicit specific responses from the audio controlled assistant. For example, the audio controlled assistant may designate a noise as an audio prompt in response to detecting the noise more than a pre-determined number of times and/or by determining the noise is within a threshold of similarity to prerecorded sounds. In one implementation, the audio prompts are configured to elicit specific responses from the audio controlled assistant, in addition to the voice commands typically associated with such devices. For example, the audio controlled assistant may be configured to mute any active audio or pause the television in response to detecting an audio prompt associated with a baby crying. In another example, the audio controlled assistant may be configured to respond in a particular way to a first user's ring tone and in another way to a second user's ring tone. In this way, the audio controlled assistant may be configured to respond to each user's phone in a separate manner. In another implementation, the language models associated with an audio controlled assistant may be configured to learn the differences between the voice profile of a first user, such as a parent, and a second user, such as a child, and to respond differently to voice commands initiated from the parent and voice commands initiated from the child. For example, the audio controlled assistant may be configured to aid the user in shopping online. In this example, the audio controlled assistant may be configured to accept a payment authorization from the parent but not from the child. In an implementation, the audio controlled assistant may be configured to capture environmental noise from a room and to provide the environmental noise to a cloud based acoustic modeling system. The acoustic modeling system may be configured to utilize feedback loops or other machine learning techniques to analyze the captured environmental noise and personalize the language models and acoustic models used to detect audio commands for the transmitting audio controlled assistant. In this manner, each audio controlled assistant has its own particular language models and acoustic models, which are customized for the acoustic environment associated with the audio controlled assistant.
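For illustration only, the designation rule described above (promote a noise to an audio prompt once it has been detected more than a pre-determined number of times and/or falls within a threshold of similarity to a prerecorded sound) might be sketched as follows. The class name, thresholds, and similarity score below are assumptions made for the sketch and are not part of the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class NoiseRecord:
    """Tracks how often a candidate noise has been observed."""
    label: str
    count: int = 0
    best_similarity: float = 0.0  # similarity to the closest prerecorded sound, in [0, 1]

def should_designate_as_prompt(record: NoiseRecord,
                               min_occurrences: int = 5,
                               similarity_threshold: float = 0.8) -> bool:
    """Designate a noise as an audio prompt if it reoccurs often enough
    and/or closely matches a prerecorded sound."""
    return record.count >= min_occurrences or record.best_similarity >= similarity_threshold

# Example: a door bell chime heard six times is promoted to an audio prompt.
door_bell = NoiseRecord(label="door_bell_chime", count=6, best_similarity=0.55)
print(should_designate_as_prompt(door_bell))  # True
```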
In one particular implementation, the acoustic modeling system may be configured to identify reoccurring or common noises and to categorize them as a particular type of noise. For example, the acoustic modeling system may identify a particular noise (such as a ring tone associated with a user's phone) and classify the particular noise as falling within a predefined category. Once the particular noise is identified and classified, the acoustic modeling system may define the particular noise as an audio prompt for the transmitting audio controlled assistant. Further, once defined as an audio prompt, future occurrences of the noise will cause the transmitting audio controlled assistant to respond in a particular manner based on the response instructions for the corresponding category. By personalizing the language models and acoustic models associated with the audio controlled assistant for the specific acoustic environment, the audio commands and corresponding responses may be tailored to the lifestyle, languages, and dialects of the users and the acoustic environment. Illustrative Environment FIG.1shows an illustrative voice interaction computing architecture100set in an acoustic environment102. The architecture100includes an audio controlled assistant104physically situated in a room of the home, and communicatively coupled to cloud-based services106over one or more networks108. In the illustrated implementation, the audio controlled assistant104is positioned on a table within the home in the acoustic environment102. In other implementations, it may be placed in any number of places (e.g., an office, store, public place, etc.) or locations (e.g., ceiling, wall, in a lamp, beneath a table, under a chair, etc.). Further, more than one audio controlled assistant104may be positioned in a single room, or one audio controlled assistant104may be used to accommodate user interactions from more than one room of the home. In one particular example, the audio controlled assistant104may be configured to communicate with other home electronic devices to capture environmental noise and perform user-requested actions. The audio controlled assistant104may be communicatively coupled to the networks108via wired technologies (e.g., wires, USB, fiber optic cable, etc.), wireless technologies (e.g., RF, cellular, satellite, Bluetooth, etc.), or other connection technologies. The networks108are representative of any type of communication network, including data and/or voice networks, and may be implemented using wired infrastructure (e.g., cable, CAT5, fiber optic cable, etc.), a wireless infrastructure (e.g., RF, cellular, microwave, satellite, Bluetooth, etc.), and/or other connection technologies. The networks108carry data, such as audio data, between the cloud services106and the audio controlled assistant104. The audio controlled assistant104is configured to respond to audio commands, including voice commands110and audio prompts112, present in the acoustic environment102. The voice commands110are specific spoken commands issued by one or more users to cause the audio controlled assistant104to perform any of various tasks. The audio prompts112are non-conversational noises occurring in the acoustic environment102, which the audio controlled assistant104is configured to respond to in addition to the voice commands110. The voice commands110and/or audio prompts112may cause the audio controlled assistant104to perform any number or type of operations.
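The category-based response behavior described above can be approximated, for illustration, by a lookup that maps each audio-prompt category to a response instruction. The category names and instruction strings below are hypothetical placeholders rather than categories recited in the disclosure.

```python
# Hypothetical mapping from audio-prompt categories to response instructions.
RESPONSE_INSTRUCTIONS = {
    "baby_crying": "mute_active_audio",
    "ring_tone": "pause_playback",
    "door_bell": "attenuate_output",
}

def respond_to_prompt(category: str) -> str:
    """Return the response instruction for a detected audio prompt category."""
    return RESPONSE_INSTRUCTIONS.get(category, "no_action")

print(respond_to_prompt("baby_crying"))   # mute_active_audio
print(respond_to_prompt("unknown_noise")) # no_action
```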
For example, the audio controlled assistant104may be configured to access cloud services106to perform database searches, locate and consume/stream entertainment (e.g., games, music, movies and/or other content, etc.), aid in personal management tasks (e.g., calendaring events, taking notes, etc.), assist in online shopping, conduct financial transactions, and so forth. The audio controlled assistant104also includes at least one microphone and at least one speaker to facilitate audio interactions with a user114and the acoustic environment102. In some instances, the audio controlled assistant104is implemented without a haptic input component (e.g., keyboard, keypad, touch screen, joystick, control buttons, etc.) or a display. In other instances, a limited set of one or more haptic input components may be employed (e.g., a dedicated button to initiate a configuration, power on/off, etc.). Generally, the audio controlled assistant104may be configured to capture environmental noises at the at least one microphone, generate corresponding audio signals116, and transmit the audio signals116to the cloud services106. The cloud services106detect and respond to voice commands110uttered by the user114and audio prompts112present in the acoustic environment102. For example, the user114may speak voice commands110(e.g., specific commands such as “Awake” or “Sleep”, or more conversational commands such as “I'd like to go to a movie. Please tell me what's playing at the local cinema.”), which cause the audio controlled assistant104to perform tasks such as locating a list of currently playing movies. The cloud services106generally refer to a network accessible platform implemented as a computing infrastructure of processors, storage, software, data access, and so forth that is maintained and accessible via a network such as the Internet. The cloud services106do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Common expressions associated with cloud services include “on-demand computing”, “software as a service (SaaS)”, “platform computing”, “network accessible platform”, and so forth. The cloud services106are implemented by one or more servers, such as servers118(1),118(2), . . . ,118(S). Additionally, the servers118(1)-(S) may host any number of cloud based services106, such as music system120or search system122, which may process the voice commands110and audio prompts112received from the audio controlled assistant104, and produce a suitable response, as discussed above. These servers118(1)-(S) may be arranged in any number of ways, such as server farms, stacks, and the like that are commonly used in data centers. The cloud services106also include an acoustic modeling system124, which is configured to select, generate, update and personalize the voice commands110and the audio prompts112, in addition to the language models126and acoustic models128used to detect the voice commands110and the audio prompts112. The acoustic modeling system124personalizes the voice commands110, the audio prompts112, the language models126and the acoustic models128for each audio controlled assistant104based on audio signals provided from the particular acoustical environment102in which the audio controlled assistant104providing the audio signals is placed.
The acoustic modeling system124is also configured to analyze the audio signals116using the language models126and the acoustic models128personalized for the audio controlled assistant104to determine if a voice command110or audio prompt112is present within the audio signals116. In some examples, the audio prompts112may be combined with the voice commands110. For instance, non-conversational noises such as laughter, crying, coughing, sneezing, etc. may be added to the language models126such that when the acoustic modeling system124analyzes the audio signals116, the acoustic modeling system124detects the audio prompts as if they were spoken words recognizable in the language models126. Further, in response to detecting a voice command110or an audio prompt112, the cloud services106, the audio controlled assistant104, or both perform corresponding actions. For example, in response to detecting a door bell chime, the acoustic modeling system124may transmit response instructions130to the audio controlled assistant104to cause the audio controlled assistant104to attenuate the audio being output by the audio controlled assistant104. In another example, the acoustic modeling system124may cause the cloud services106to contact911in response to detecting an alarm siren of the home alarm system. In one particular example, the audio controlled assistant104is introduced into a new environment (such as acoustic environment102), for instance, when the audio controlled assistant104is first installed in a room of a user's home. When first introduced into an environment, the audio controlled assistant104responds to preprogrammed voice commands110and audio prompts112based on one or more default language models126and acoustic models128tuned for the average acoustical environment. As the audio controlled assistant104operates within the particular environment, however, the audio controlled assistant104generates audio signals116based on sound captured within the environment, including one or more users' voices and reoccurring or common noises, from the acoustic environment102. The audio controlled assistant104transmits the audio signals116to a cloud based system, such as the acoustic modeling system124. The acoustic modeling system124analyzes the audio signals116and, for example, applies model training methods, such as feedback loops or other machine learning techniques, to generate, select, adjust or personalize the language models126and the acoustic models128for the acoustic environment102based on the audio signals116. For example, the acoustic modeling system124may apply speaker adaptation methods, vocal tract normalizations, or vocabulary adaptation techniques. It should be understood that, as the language models126and the acoustic models128are personalized by the acoustic modeling system124, the models126and128become more and more customized for the particular audio controlled assistant104. As the models126and128are personalized, the acoustic modeling system124becomes better able to identify voice commands110spoken by one or more users and audio prompts112occurring in the acoustic environment102associated with the particular audio controlled assistant104. While performing the model training methods, the acoustic modeling system124is also configured to identify and generate personalized audio prompts112and, in some implementations, voice commands110.
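As a rough, illustrative sketch of folding non-conversational noises into the recognizer's vocabulary so that audio prompts are detected as if they were spoken words, the snippet below keeps one token set for voice commands and another for noise-derived prompts and routes a recognized token accordingly. The token names and handler strings are assumptions, not disclosed values.

```python
# Hypothetical token sets; a real system would derive these from the
# personalized language models and acoustic models.
VOICE_COMMAND_TOKENS = {"play", "pause", "awake", "sleep"}
AUDIO_PROMPT_TOKENS = {"<laughter>", "<crying>", "<door_bell>", "<ring_tone>"}

VOCABULARY = VOICE_COMMAND_TOKENS | AUDIO_PROMPT_TOKENS

def route_recognized_token(token: str) -> str:
    """Decide how a recognized token should be handled."""
    if token in VOICE_COMMAND_TOKENS:
        return f"execute voice command: {token}"
    if token in AUDIO_PROMPT_TOKENS:
        return f"execute audio prompt response: {token}"
    return "ignore"

# A noise token produced by the recognizer is handled like any spoken word.
print(route_recognized_token("<door_bell>"))
print(route_recognized_token("play"))
```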
For example, the acoustic modeling system124may be configured to identify reoccurring noises and/or words and to define the reoccurring words as additional voice commands110and the reoccurring noises as additional audio prompts112. In at least one instance, the acoustic modeling system124may replace a given voice command110or audio prompt112with a sound signal originating in the acoustic environment102. For example, the acoustic modeling system124may identify a particular song as a ring tone of the user114and may replace the audio prompt “ring ring” corresponding to a generic ring tone with the identified song. In one particular implementation, the acoustic modeling system124may be configured to detect noises falling within predefined categories. For instance, the acoustic modeling system124may include a category for door bell rings which includes sound pattern templates for noises typically associated with door bells. The acoustic modeling system124may detect and isolate reoccurring noises from within the audio signals116. For example, the acoustic modeling system124may detect a reoccurring noise if it occurs more than a threshold number of times within a given period of time or if it occurs with a certain predefined level of periodicity. The acoustic modeling system124may then compare the sound pattern associated with the reoccurring noise to the sound pattern templates of each category. If the acoustic modeling system124determines a match, then the acoustic modeling system124defines the reoccurring noise as an audio prompt112within the matching category. In one example, a match may occur when the sound pattern of the noise and the sound pattern template are within a threshold of similarity to each other. In some examples, the reoccurring noise may be so particular to the acoustic environment102that the acoustic modeling system124is unable to match the sound patterns of the reoccurring noise to any of the templates. In this example, each category may also include sound pattern templates of sounds typically associated with noises of the category. For example, in the case of the door bell category, the acoustic modeling system124may recognize the sound pattern associated with opening a door, such as a “creaking,” or the words “hello” or “hi,” regularly found in close proximity to the reoccurring noise. Thus, the acoustic modeling system124may associate the reoccurring noise with the door bell category, even if the acoustic modeling system124is unable to match the sound pattern of the door bell ring to the sound pattern templates of the door bell category. In this way, the acoustic modeling system124is able to match customized noises to one or more categories. In another example, the audio controlled assistant104may be configured to find and play music at the user's request. The default language models126and voice commands110may cause the acoustic modeling system124to identify the voice command “play” followed by a song name as indicating that the acoustic modeling system124should cause the music system120to locate and stream the song to the audio controlled assistant104. Over time, the acoustic modeling system124may begin to identify that the user114typically says “start” followed by a song name instead of “play”. As the language models126are personalized, the acoustic modeling system124is configured to identify “start” as the voice command to play a particular song and may add it to the database of voice commands110.
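The two checks described above, reoccurrence within a period of time (or periodicity) and similarity of a sound pattern to a category's templates, might be approximated as follows using simple occurrence counting and normalized cross-correlation. The window size, thresholds, and toy waveforms are assumptions for illustration; a production system would rely on the personalized acoustic models instead.

```python
import numpy as np

def is_reoccurring(timestamps, window_s=3600.0, min_count=3):
    """True if the noise occurred at least min_count times within any single window."""
    ts = sorted(timestamps)
    for i in range(len(ts)):
        if sum(1 for t in ts[i:] if t - ts[i] <= window_s) >= min_count:
            return True
    return False

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Peak normalized cross-correlation between two waveforms, bounded by 1."""
    a = a - a.mean()
    a = a / (np.linalg.norm(a) + 1e-9)
    b = b - b.mean()
    b = b / (np.linalg.norm(b) + 1e-9)
    return float(np.abs(np.correlate(a, b, mode="full")).max())

def match_category(noise: np.ndarray, templates: dict, threshold: float = 0.8):
    """Return the best-matching category, or None if nothing is similar enough."""
    scores = {cat: max(similarity(noise, t) for t in temps) for cat, temps in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Toy example with synthetic waveforms standing in for real recordings.
rng = np.random.default_rng(0)
bell = rng.standard_normal(800)
templates = {"door_bell": [bell], "dog_bark": [rng.standard_normal(800)]}
print(is_reoccurring([0.0, 40.0, 55.0], window_s=60.0))                    # True
print(match_category(bell + 0.05 * rng.standard_normal(800), templates))  # door_bell
```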
Further, in a particular implementation, the acoustic modeling system124may also recognize that a first user, with a particular voice profile, uses the voice command “start” when requesting a song, while a second user, with another voice profile, uses the voice command “begin”. The acoustic modeling system124may then cause the song to play when the first user speaks the command “start” and the second user says the command “begin”, but not play the music if the first user speaks “begin” or the second user says “start”. Thus, the acoustic modeling system124personalizes the voice commands110applied by the audio controlled assistant104per user. Illustrative Systems FIG.2shows selected functional components of the audio controlled assistant104in more detail. Generally, the audio controlled assistant104may be implemented as a standalone device that is relatively simple in terms of functional capabilities, with limited input/output components, memory and processing capabilities, or as part of a larger electronic system. In one implementation, the audio controlled assistant104may not have a keyboard, keypad, or other form of mechanical input. The audio controlled assistant104may also be implemented without a display or touch screen to facilitate visual presentation and user touch input. Instead, the assistant104may be implemented with the ability to receive and output audio, a network interface (wireless or wire-based), power, and limited processing/memory capabilities. In the illustrated implementation, the audio controlled assistant104includes, or accesses, components such as at least one control logic circuit, central processing unit, one or more processors202, in addition to one or more computer-readable media204to perform the function of the audio controlled assistant104. Additionally, each of the processors202may itself comprise one or more processors or processing cores. Depending on the configuration of the audio controlled assistant104, the computer-readable media204may be an example of tangible non-transitory computer storage media and may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable instructions or modules, data structures, program modules or other data. Such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other computer-readable media technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, solid state storage, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store information and which can be accessed by the processors202. Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media204and configured to execute on the processor202. An operating system module206is configured to manage hardware and services (e.g., communication interfaces, microphones, and speakers) within and coupled to the audio controlled assistant104for the benefit of other modules. A recognition module208provides at least some basic recognition functionality. In some implementations, this functionality may be limited to specific commands or prompts that perform fundamental tasks like waking up the device, configuring the device, cancelling an input, and the like.
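For illustration, the per-user personalization in the “start” versus “begin” example can be sketched as a mapping from an identified speaker profile to that speaker's accepted command phrases. Speaker identification is assumed to happen elsewhere, and the profile names and phrases are hypothetical.

```python
# Hypothetical per-user command vocabularies learned over time.
PER_USER_COMMANDS = {
    "user_a": {"start": "play_song"},   # first user says "start"
    "user_b": {"begin": "play_song"},   # second user says "begin"
}

def interpret(speaker: str, phrase: str):
    """Map a phrase to an action only if that speaker uses it as a command."""
    return PER_USER_COMMANDS.get(speaker, {}).get(phrase)

print(interpret("user_a", "start"))  # play_song
print(interpret("user_a", "begin"))  # None -> not a command for this speaker
print(interpret("user_b", "begin"))  # play_song
```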
In other implementations, the functionality may be expanded to include performing at least some of the tasks described above with respect to cloud services106ofFIG.1. The amount of recognition capabilities implemented on the audio controlled assistant104is an implementation detail, but the architecture described herein supports having some recognition at the audio controlled assistant104together with more expansive recognition at the cloud services106. Various other modules212may also be stored on the computer-readable storage media204, such as a configuration module to assist in an automated initial configuration of the audio controlled assistant104, as well as to reconfigure the audio controlled assistant104at any time in the future. The computer-readable media204also stores one or more audio triggers212, in addition to at least some limited language models216and acoustic models218. In one implementation, the audio triggers216may be one or more words or noises which cause the audio controlled assistant104to “wake up” or begin transmitting audio signals to the cloud services106. For example, the audio triggers216may include specific audio prompts or voice commands which, when detected by the audio controlled assistant104, cause the audio controlled assistant104to connect and provide the audio signals116to the cloud services106. In another example, the audio triggers216may be a collection of voice commands and/or audio prompts. In at least one example, the audio triggers216may be the complete set of voice commands110and audio prompts112available to the acoustic modeling system124ofFIG.1. The audio controlled assistant104also includes one or more microphones220to capture audio, such as user voice commands and/or audio prompts. The microphones220may be implemented as a single omni-directional microphone, a calibrated microphone group, more than one calibrated microphone group, or one or more microphone arrays. The audio controlled assistant104also includes one or more speakers222to output audio signals as sounds. The audio controlled assistant104includes one or more communication interfaces224to facilitate communication between the cloud services106and the audio controlled assistant104via one or more networks. The communication interfaces224may support both wired and wireless connection to various networks, such as cellular networks, radio, WiFi networks, short-range or near-field networks (e.g., Bluetooth®), infrared signals, local area networks, wide area networks, the Internet, and so forth. For example, the communication interfaces224may allow the user114to conduct a telephone conference with one or more other individuals. Generally, the audio controlled assistant104captures environmental noise from the acoustic environment102using the microphones220, and converts the captured environmental noise into audio signals, such as the audio signals116. The audio controlled assistant104monitors the audio signals for one or more of the audio triggers216using the recognition module208, language models216and acoustic models218. For instance, in the illustrated example, the recognition module208may be configured to utilize the language models216and the acoustic models218to detect the audio triggers216, but the audio controlled assistant104, in this example, is not configured to perform the model training methods to personalize the language models216and the acoustic models218. Rather, in this example, the model training is performed by the acoustic modeling system124at the cloud services106.
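The trigger-gated behavior described above, in which audio is monitored locally and only provided to the cloud services after an audio trigger is detected, can be sketched as a small gate. The trigger tokens, frame format, and send_to_cloud callback are assumptions made for the sketch.

```python
from typing import Callable, Iterable

AUDIO_TRIGGERS = {"wake_word", "<door_bell>"}  # hypothetical trigger tokens

class TriggerGate:
    """Hold audio locally until a trigger token is recognized, then stream."""

    def __init__(self, send_to_cloud: Callable[[bytes], None]):
        self.send_to_cloud = send_to_cloud
        self.streaming = False

    def on_frame(self, frame: bytes, recognized_tokens: Iterable[str]) -> None:
        if not self.streaming and any(t in AUDIO_TRIGGERS for t in recognized_tokens):
            self.streaming = True  # a trigger was heard; start providing audio
        if self.streaming:
            self.send_to_cloud(frame)

# Example: nothing is sent until the wake word appears in the recognizer output.
gate = TriggerGate(send_to_cloud=lambda f: print(f"sent {len(f)} bytes"))
gate.on_frame(b"\x00" * 320, recognized_tokens=[])             # held locally
gate.on_frame(b"\x00" * 320, recognized_tokens=["wake_word"])  # sent
```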
In another example, the audio controlled assistant104may be configured to analyze the audio signals using one or more model training methods to personalize the language models216and the acoustic models218and generate personalized voice commands and audio prompts. In this example, the acoustic modeling is performed directly on the audio controlled assistant104rather than by the acoustic modeling system124at the cloud services106, as described above, but otherwise operates in a similar manner. In the illustrated implementation, the audio controlled assistant104begins to transmit the audio signals to the cloud services106via one or more of the communication interfaces224upon detecting one or more of the audio triggers216. For example, the audio controlled assistant104may be configured to monitor the environmental noise but not to provide the audio signals to the cloud services106until one or more audio triggers216are detected, to protect the privacy of the user114. In some instances, the audio triggers216may be the audio prompts112or voice commands110ofFIG.1. In this instance, the audio controlled assistant104may detect that an audio prompt or voice command was issued but provide the audio signals to the acoustic modeling system124to determine the identity of the specific audio prompt or voice command and to select an appropriate response. FIG.3shows selected functional components of a server118(1-S) architecture implemented as part of the cloud services106ofFIG.1. The servers118(1-S) collectively comprise processing resources, as represented by processors302, and computer-readable storage media304. The computer-readable storage media304may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. In the illustrated implementation, the acoustic modeling system124, music system120, and search system122, in addition to various other response systems306, are shown as software components or computer-executable instructions stored in the computer-readable storage media304and executed by one or more processors302. The computer-readable storage media304is also illustrated as storing voice commands110, audio prompts112, language models126and acoustic models128accessible by the acoustic modeling system124. The servers118(1-S) also include one or more communication interfaces308, which may support both wired and wireless connection to various networks, such as cellular networks, radio, WiFi networks, short-range or near-field networks (e.g., Bluetooth®), infrared signals, local area networks, wide area networks, the Internet, and so forth. For example, the communication interfaces308may allow the audio controlled assistant104to communicate with the acoustic modeling system124to process and perform various tasks, such as streaming music from the music system120. In general, the servers118(1-S) are configured to receive audio signals, such as audio signals116, from the audio controlled assistant104.
The acoustic modeling system124is configured to utilize the language models126and the acoustic models128to identify or detect one or more voice commands110and audio prompts112from the audio signals116. The acoustic modeling system124is able to cause either the audio controlled assistant104or one of the other response systems306to perform any number or types of operations to complete the task indicated by an identified voice command110or audio prompt112. For example, the acoustic modeling system124may be configured to cause the cloud services106to perform database searches via the search system122, locate and consume/stream entertainment (e.g., games, music, movies and/or other content, etc.) via the music system120, aid in personal management tasks (e.g., calendaring events, taking notes, etc.), assist in online shopping, or conduct financial transactions in response to detecting a voice command110. In another example, the acoustic modeling system124may be configured to cause the audio controlled assistant104to restart an online purchase transaction in response to detecting an audio prompt112, such as a period of silence following a phone conversation. In one particular example, the acoustic modeling system124is configured to monitor the audio signals for the voice commands110while the acoustic modeling system124identifies that the audio signals include speech, and to only monitor the audio signals for audio prompts112when the audio signals are free of speech. For instance, the acoustic modeling system124may analyze the audio signals using the language models126to identify if the audio signals include speech and, if so, to monitor the audio signals for voice commands110. However, if the acoustic modeling system124determines that the audio signals do not include speech, the acoustic modeling system124may monitor the audio signals for audio prompts112based on the acoustic models128. In another implementation, the acoustic modeling system124may utilize the language models126to detect the voice commands110as discussed above, but utilize the acoustic models128to analyze background noise to detect audio prompts112, for instance, to determine an acoustic scene (or activity that is being performed in the acoustic environment). For example, the acoustic modeling system124may monitor the background noise for clinks typically associated with silverware and dishware. This may indicate that there is a dinner party taking place in the acoustic environment. Upon detection, the servers118may select music to enhance the dinner party and cause the music to be played by the audio controlled assistant104, or cause the audio controlled assistant104to suppress incoming calls by sending them to voicemail, so as not to interrupt the party. The acoustic modeling system124may filter foreground noise out of the audio signals and monitor the foreground noise for the voice commands110using the language models126. The acoustic modeling system124may also monitor the remaining background noise using the acoustic models128to detect audio prompts112associated with acoustic scenes, such as the dinner party described above. In this example, each of the audio prompts112may represent more than one noise, such as a series of noises or a group of noises associated with a single activity. The acoustic modeling system124is also configured to select, generate, update and personalize the voice commands110, the audio prompts112, and the language models126and the acoustic models128based on the audio signals received.
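The routing described above, monitoring speech for voice commands and non-speech or background audio for audio prompts and acoustic scenes, can be illustrated with a simple dispatcher. The contains_speech, detect_voice_command, and detect_audio_prompt helpers are placeholders standing in for the language-model and acoustic-model analyses; they are assumptions, not disclosed algorithms.

```python
def contains_speech(frame: dict) -> bool:
    # Placeholder for a language-model or voice-activity based speech check.
    return frame.get("speech", False)

def detect_voice_command(frame: dict):
    # Placeholder for language-model based command recognition.
    return frame.get("command")

def detect_audio_prompt(frame: dict):
    # Placeholder for acoustic-model based prompt/scene recognition.
    return frame.get("prompt")

def route(frame: dict) -> str:
    """Monitor speech for voice commands, otherwise look for audio prompts."""
    if contains_speech(frame):
        command = detect_voice_command(frame)
        return f"voice command: {command}" if command else "no command"
    prompt = detect_audio_prompt(frame)
    if prompt == "dinner_party":
        return "play background music; send incoming calls to voicemail"
    return f"audio prompt: {prompt}" if prompt else "no prompt"

print(route({"speech": True, "command": "play"}))
print(route({"speech": False, "prompt": "dinner_party"}))
```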
For example, the acoustic modeling system124may be configured to utilize feedback loops or other machine learning techniques to analyze the environmental sounds and personalize the language models126and acoustic models128to the acoustic environment associated with the transmitting audio controlled assistant104. For instance, the acoustic modeling system124may apply speaker adaptation methods, vocal tract normalizations, or vocabulary adaptation techniques to personalize the language models126and the acoustic models128. Illustrative Processes FIGS.4,5and6are flow diagrams illustrating example processes for personalizing and detecting voice commands and audio prompts for a specific acoustic environment associated with a particular audio controlled assistant. The processes are illustrated as a collection of blocks in a logical flow diagram, which represent a sequence of operations, some or all of which can be implemented in hardware, software or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described should not be construed as a limitation. Any number of the described blocks can be combined in any order and/or in parallel to implement the process, or alternative processes, and not all of the blocks need be executed. For discussion purposes, the processes herein are described with reference to the frameworks, architectures and environments described in the examples herein, although the processes may be implemented in a wide variety of other frameworks, architectures or environments. For purposes of describing one example implementation, the blocks are arranged visually inFIGS.4,5and6in columns beneath the audio controlled assistant104and cloud services106to illustrate what parts of the architecture may perform these operations. That is, actions defined by blocks arranged beneath the audio controlled assistant may be performed by the assistant, and similarly, actions defined by blocks arranged beneath the cloud services may be performed by the cloud services. FIG.4is a flow diagram illustrating a process400for personalizing language and acoustic models for an acoustic environment, such as acoustic environment102, associated with an audio controlled assistant, such as audio controlled assistant104. At402, the audio controlled assistant104generates audio signals, such as audio signals116, based on sound captured from the acoustic environment102. The audio signals may include voice commands and/or audio prompts, which are intended to cause the audio controlled assistant to perform various tasks. At404, the audio controlled assistant104transmits the audio signals to various cloud services, such as the cloud services106. The cloud services106include at least an acoustic modeling system, such as acoustic modeling system124. The acoustic modeling system124, as described above, is configured to apply model training methods to personalize language models and acoustic models associated with the audio controlled assistant104. At406, the cloud services106receive the audio signals from the audio controlled assistant104.
At the cloud services106, various applications and/or systems may perform tasks to respond to voice commands and/or audio prompts identified within the audio signals. For example, the cloud services106may include applications or access systems to perform database searches, locate and consume/stream entertainment (e.g., games, music, movies and/or other content, etc.), aid in personal management tasks (e.g., calendaring events, taking notes, etc.), assist in online shopping, conduct financial transactions, and so forth. At408, the acoustic modeling system124of the cloud services106analyzes the audio signals. For example, the acoustic modeling system124may be configured to identify the voice commands and audio prompts based on one or more language models and/or acoustic models associated with the transmitting audio controlled assistant104. At410, the acoustic modeling system124of the cloud services106applies model training methods to personalize the language models and the acoustic models associated with the transmitting audio controlled assistant104. For example, the acoustic modeling system124may transcribe the audio signals into text and then feed the transcribed text into a machine learning model, which utilizes the transcribed text to update the acoustic models. In another example, the transcribed text may be utilized with an n-gram system to improve the recognition accuracy by reducing variability in the n-gram selection. FIG.5is a flow diagram illustrating a process500of personalizing voice commands and audio prompts to an acoustic environment, such as acoustic environment102, associated with an audio controlled assistant, such as audio controlled assistant104. At502, the audio controlled assistant104generates audio signals, such as audio signals116, from the acoustic environment102. The audio signals may include voice commands and/or audio prompts, which are intended to cause the audio controlled assistant to perform various tasks. At504, the audio controlled assistant104transmits the audio signals to various cloud services, such as the cloud services106. The cloud services106include at least an acoustic modeling system, such as acoustic modeling system124. The acoustic modeling system124, as described above, is configured to generate personalized audio prompts for the acoustic environment102associated with the audio controlled assistant104. At506, the cloud services106receive the audio signals from the audio controlled assistant104. At the cloud services106, various applications and/or systems may perform tasks to respond to voice commands and/or audio prompts identified within the audio signals. For example, the cloud services106may include applications or access systems to perform database searches, locate and consume/stream entertainment (e.g., games, music, movies and/or other content, etc.), aid in personal management tasks (e.g., calendaring events, taking notes, etc.), assist in online shopping, conduct financial transactions, and so forth. At508, the acoustic modeling system124of the cloud services106analyzes the audio signals. For example, the acoustic modeling system124may be configured to identify reoccurring or common noises within the audio signals based on language models, acoustic models and/or predefined classes or categories of noises associated with specific events. At510, the acoustic modeling system124of the cloud services106isolates the reoccurring and common noises from the audio signals.
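Step410's use of transcribed text to refine the models can be approximated, for illustration, by accumulating n-gram counts from environment transcripts and interpolating them with a default language model. The interpolation weight and the toy transcripts below are assumptions for the sketch, not values from the disclosure.

```python
from collections import Counter

def bigram_counts(transcripts):
    """Count word bigrams observed in the environment's transcripts."""
    counts = Counter()
    for text in transcripts:
        words = text.lower().split()
        counts.update(zip(words, words[1:]))
    return counts

def interpolate(default_prob, env_counts, bigram, weight=0.3):
    """Blend a default bigram probability with the environment-specific estimate."""
    total = sum(env_counts.values()) or 1
    env_prob = env_counts[bigram] / total
    return (1 - weight) * default_prob + weight * env_prob

transcripts = ["start the song", "start my playlist", "play the song"]
counts = bigram_counts(transcripts)
# The pair ("start", "the") becomes more probable for this environment.
print(interpolate(default_prob=0.01, env_counts=counts, bigram=("start", "the")))
```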
For example, the acoustic modeling system124may isolate a portion or segment of the audio signals that repeats. In another example, the acoustic modeling system124may isolate noises from the audio signals when the acoustic pattern matches predefined sound pattern templates corresponding to a class or category of noises associated with specific events. At512, the acoustic modeling system124classifies the reoccurring noises as audio prompts, which should elicit specific responses when one of the reoccurring noises is detected in the future. For example, the acoustic modeling system124may classify a particular song as a ring tone and cause the audio controlled assistant104to pause operations when the song is identified. In one particular example, the acoustic modeling system124may classify the reoccurring noises in the same manner as words are classified into voice commands. For instance, noises such as a doorbell, laughter, or even silence may be configured to resemble a word in the language models, and then the noise may be added to the list of voice commands. In this example, the acoustic models and language models may be combined, as well as the voice commands and audio prompts. At514, the acoustic modeling system124generates response instructions corresponding to the audio prompts that were defined. For example, the acoustic modeling system124may generate response instructions based on the matching class or category. In other examples, the acoustic modeling system124may cause the audio controlled assistant104to iterate through a number of user-selectable response instructions and assign the selected instructions as the response for a particular audio prompt. FIG.6is a flow diagram illustrating a process600of detecting an audio prompt in an acoustic environment, such as acoustic environment102, associated with the audio controlled assistant104. At602, the audio controlled assistant104generates audio signals from the acoustic environment102. The audio signals may include voice commands and/or audio prompts, which are intended to cause the audio controlled assistant to perform various tasks. At604, the audio controlled assistant104transmits the audio signals to various cloud services, such as the cloud services106. The cloud services106include at least an acoustic modeling system, such as acoustic modeling system124. The acoustic modeling system124, as described above, is configured to identify audio prompts and voice commands located within the audio signals and to cause the cloud services106or the audio controlled assistant104to perform various actions to respond to the audio prompt or the voice command. At606, the cloud services106receive the audio signals from the audio controlled assistant104. At608, the acoustic modeling system124of the cloud services106recognizes one or more audio prompts within the audio signals. For example, the acoustic modeling system124may be configured to identify the audio prompts based on one or more acoustic models that have been personalized for the acoustic environment102associated with the transmitting audio controlled assistant104. At610, the acoustic modeling system124of the cloud services106identifies a response corresponding to the recognized audio prompt. For instance, the acoustic modeling system124may cause the cloud services106to perform tasks to respond to the audio prompt. In one example, the acoustic modeling system124may cause one of the cloud services106to contact911in response to detecting an audio prompt associated with a home alarm.
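The alternative at514, in which the assistant iterates through user-selectable response instructions and assigns the selection to a new audio prompt, might look like the following sketch. The candidate instructions and the selection callback are hypothetical.

```python
from typing import Callable, Sequence

def assign_response(prompt: str,
                    candidates: Sequence[str],
                    choose: Callable[[Sequence[str]], int]) -> dict:
    """Present candidate response instructions and record the user's choice."""
    index = choose(candidates)           # e.g., spoken back to the user one by one
    return {prompt: candidates[index]}   # assignment stored for future detections

candidates = ["pause playback", "mute audio", "ignore"]
# Here the 'user' simply picks the first option; a real device would iterate
# the options audibly and wait for confirmation.
assignment = assign_response("<ring_tone>", candidates, choose=lambda opts: 0)
print(assignment)  # {'<ring_tone>': 'pause playback'}
```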
At612, the acoustic modeling system124transmits response instructions to the audio controlled assistant104, if the identified response indicates that the audio controlled assistant104should perform an action. For example, the acoustic modeling system124may transmit response instructions which cause the audio controlled assistant104to pause or attenuate music in response to detecting a ring tone. At614, the audio controlled assistant104executes the response instructions and performs the identified response, for example, restarting an online purchase transaction in response to determining that the user has completed an interrupting conversation. CONCLUSION Although the subject matter has been described in language specific to structural features, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features described. Rather, the specific features are disclosed as illustrative forms of implementing the claims.
42,838
11862154
DETAILED DESCRIPTION Hereinafter, various exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the exemplary embodiments and terminology used herein are not intended to limit the invention to the particular exemplary embodiments described, but to include various modifications, equivalents, and/or alternatives of the exemplary embodiment. In relation to explanation of the drawings, similar drawing reference numerals may be used for similar constituent elements. Unless otherwise defined specifically, a singular expression may encompass a plural expression. In this disclosure, expressions such as “A or B” or “at least one of A and/or B” and the like may include all possible combinations of the items listed together. Expressions such as “first” or “second,” and the like, may express their components irrespective of their order or importance and may be used to distinguish one component from another, but are not limited to these components. When it is mentioned that some (e.g., first) component is “(functionally or communicatively) connected” or “accessed” to another (second) component, the component may be directly connected to the other component or may be connected through another component (e.g., a third component). In this disclosure, “configured to (or set to)” as used herein may, for example, be used interchangeably with “suitable for”, “having the ability to”, “altered to”, “adapted to”, “capable of” or “designed to” in hardware or software. Under certain circumstances, the term “device configured to” may refer to “device capable of” doing something together with another device or components. For example, “a processor configured (or set) to perform A, B, and C” may refer to an exclusive processor (e.g., an embedded processor) for performing the corresponding operations, or a general-purpose processor (e.g., a CPU or an application processor) capable of performing the corresponding operations by executing one or more software programs stored in a memory device. Electronic devices in accordance with various exemplary embodiments of the present disclosure may include at least one of, for example, smart phones, tablet PCs, mobile phones, videophones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, a portable multimedia player (PMP), an MP3 player, a medical device, a camera, and a wearable device. A wearable device may include at least one of an accessory type (e.g., a watch, a ring, a bracelet, a necklace, a pair of glasses, a contact lens, or a head-mounted-device (HMD)), a textile or garment-integrated type (e.g., electronic clothes), a body attachment-type (e.g., skin pads or tattoos), and an implantable circuit. In some exemplary embodiments, the electronic device may, for example, include at least one of a television, a digital video disk (DVD) player, an audio player, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, and may include at least one of a panel, a security control panel, a media box (e.g., Samsung HomeSync®, Apple TV®, or Google TV™), a game console (e.g., Xbox®, PlayStation®), an electronic dictionary, an electronic key, a camcorder, and an electronic frame.
In another exemplary embodiment, the electronic device may include at least one of any of a variety of medical devices (e.g., various portable medical measurement devices (such as a blood glucose meter, a heart rate meter, a blood pressure meter, or a body temperature meter), magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), computed tomography (CT), a camera, an ultrasonic device, etc.), a navigation system, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), an automobile infotainment device, marine electronic equipment (for example, marine navigation devices, a gyro compass, etc.), avionics, security devices, head units for vehicles, industrial or domestic robots, a drone, ATMs at financial institutions, point of sales (POS) devices, or IoT devices (e.g., a light bulb, various sensors, a sprinkler device, a fire alarm, a thermostat, a streetlight, a toaster, a fitness appliance, a hot water tank, a heater, a boiler, etc.). According to some exemplary embodiments, the electronic device may include at least one of a piece of furniture, a building/structure, a part of an automobile, an electronic board, an electronic signature receiving device, a projector, and various measuring instruments (e.g., water, electricity, gas, or radio wave measuring instruments, etc.). In various exemplary embodiments, the electronic device may be flexible or a combination of two or more of the various devices described above. The electronic device according to an exemplary embodiment is not limited to the above-mentioned devices. In the present disclosure, the term “user” may refer to a person using an electronic device or a device using an electronic device (e.g., an artificial intelligence electronic device). FIGS.1A to1Care block diagrams showing a configuration of an electronic device, according to an exemplary embodiment of the present disclosure. The electronic device100ofFIG.1Amay be, for example, the above-described electronic device or a server. When the electronic device100is a server, the electronic device100may include, for example, a cloud server or a plurality of distributed servers. The electronic device100ofFIG.1Amay include a memory110and a processor120. The memory110, for example, may store a command or data regarding at least one of the other elements of the electronic device100. According to an exemplary embodiment, the memory110may store software and/or a program. The program may include, for example, at least one of a kernel, a middleware, an application programming interface (API) and/or an application program (or “application”). At least a portion of the kernel, middleware, or API may be referred to as an operating system. The kernel may, for example, control or manage system resources used to execute operations or functions implemented in other programs. In addition, the kernel may provide an interface to control or manage the system resources by accessing individual elements of the electronic device100in the middleware, the API, or the application program. The middleware, for example, can act as an intermediary for an API or an application program to communicate with the kernel and exchange data. In addition, the middleware may process one or more job requests received from the application program based on priorities. For example, the middleware may prioritize at least one of the application programs to use the system resources of the electronic device100, and may process the one or more job requests.
An API is an interface for an application to control the functions provided in the kernel or middleware and may include, for example, at least one interface or function (e.g., command) for file control, window control, image processing, or character control. Further, the memory110may include at least one of an internal memory and an external memory. The internal memory may include at least one of, for example, a volatile memory (e.g., a DRAM, an SRAM, or an SDRAM) and a nonvolatile memory (e.g., an OTPROM, a PROM, an EPROM, an EEPROM, a mask ROM, a flash ROM, a flash memory, a hard drive, or a solid state drive (SSD)). The external memory may include a flash drive, for example, a compact flash (CF), a secure digital (SD), a micro-SD, a mini-SD, an extreme digital (XD), a multi-media card (MMC), a memory stick, or the like. The external memory may be functionally or physically connected to the electronic device100via various interfaces. According to various exemplary embodiments, the memory110may store a program that controls the electronic device100to acquire voice information and image information generated from a natural language spoken by the user and the behavior of the user in association with the natural language, to set an action to be performed according to a condition based on the acquired voice information and image information, to determine an event to be detected according to the condition and a function to be executed according to the action when the event is detected, to determine at least one detection resource to detect the event, and, in response to at least one event satisfying the condition being detected using the determined detection resource, to execute a function according to the condition. The processor120may include one or more of a central processing unit (CPU), an application processor (AP), and a communication processor (CP). The processor120may also be implemented as at least one of an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), and the like. Although not shown, the processor120may further include an interface, such as a bus, for communicating with each of the configurations. The processor120may control a plurality of hardware or software components connected to the processor120, for example, by driving an operating system or an application program, and may perform various data processing and operations. The processor120, for example, may be realized as a system on chip (SoC). According to an exemplary embodiment, the processor120may further include a graphic processing unit (GPU) and/or an image signal processor. The processor120may load and process commands or data received from at least one of the other components (e.g., non-volatile memory) into volatile memory and store the resulting data in non-volatile memory. According to various exemplary embodiments, the processor120may acquire voice information and image information generated from a natural language uttered by the user and the user's actions (e.g., a user's behavior) associated with the natural language, for setting an action to be performed according to a condition. The processor120may determine an event to be detected according to the condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and the image information.
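The condition/action structure described here, an event to be detected according to the condition and a function to be executed according to the action, can be made concrete with a small rule object, sketched below for illustration. The field names and the example rule are assumptions chosen only to show the structure.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A condition-action rule derived from the user's speech and behavior."""
    event: str                      # event to detect according to the condition
    function: Callable[[], None]    # function to execute according to the action

def on_event_detected(detected_event: str, rules: list) -> None:
    """Execute the function of every rule whose event matches the detection."""
    for rule in rules:
        if rule.event == detected_event:
            rule.function()

rules = [Rule(event="door_opened", function=lambda: print("turn on the light"))]
on_event_detected("door_opened", rules)  # turn on the light
```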
The processor120may determine at least one detection resource to detect the event. When at least one event satisfying the condition is detected using the determined detection resource, the processor120may control the electronic device100so that a function according to the condition is executed. According to various exemplary embodiments, the processor120may determine an event to be detected according to the condition and a function to be executed according to the action, based on a data recognition model generated using a learning algorithm. The processor120may also use the data recognition model to determine at least one detection resource to detect the event. This will be described later in more detail with reference toFIGS.10to13. According to various exemplary embodiments, when determining at least one detection resource, the processor120may search for available resources that are already installed. The processor120may determine at least one detection resource from among the available resources to detect the event, based on the functions detectable by the retrieved available resources. In an exemplary embodiment, the detection resource may be a module included in the electronic device100or an external device located outside the electronic device100. According to various exemplary embodiments, the electronic device100may further include a communicator (not shown) that performs communication with the detection resource. An example of the communicator will be described in more detail with reference to the communicator150ofFIG.1C, and a duplicate description will be omitted. In an exemplary embodiment, the processor120may, when at least one detection resource is determined, control the communicator (not shown) such that control information requesting detection of an event is transmitted to the at least one determined resource. According to various exemplary embodiments, the processor120may search for available resources that are already installed. The processor120may determine at least one execution resource to execute the function according to the action among the available resources based on the functions that the retrieved available resources can provide. According to various exemplary embodiments, the electronic device100may further include a communicator (not shown) that communicates with the execution resource. An example of the communicator will be described in more detail with reference to the communicator150ofFIG.1C, and a duplicate description will be omitted. In an exemplary embodiment, when the processor120controls a function according to the action to be executed, the processor120may transmit the control information to the execution resource so that the determined execution resource executes the function according to the action. According to various exemplary embodiments, the electronic device100may further include a display (not shown) for displaying a user interface (UI). The display may include, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a microelectromechanical system (MEMS) display, or an electronic paper display. The display may include a touch screen, and may receive the inputs of touch, gesture, proximity, or hovering, using, for example, an electronic pen or a user's body part. 
In an exemplary embodiment, the processor120can control the display to display a notification UI informing that execution of the action according to the condition is impossible, if there is no detection resource to detect the event or if the detection resource cannot detect the event. According to various exemplary embodiments, the processor120may determine an event to be detected according to a condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and image information. The processor120applies the acquired voice information and image information to a data recognition model generated using a learning algorithm to determine the condition and the action according to the user's intention, and to determine an event to be detected according to a condition and a function to be executed according to the action. According to various exemplary embodiments, when the electronic device100further includes a display, the processor120may, when determining a condition and an action according to the user's intention, control the display to display a confirmation UI for confirming conditions and actions to the user. FIG.1Bis a block diagram showing a configuration of an electronic device100, according to another exemplary embodiment of the present disclosure. The electronic device100may include a memory110, a processor120, a camera130, and a microphone140. The processor120ofFIG.1Bmay include all or part of the processor120shown inFIG.1A. In addition, the memory110ofFIG.1Bmay include all or part of the memory110shown inFIG.1A. The camera130may capture a still image and a moving image. For example, the camera130may include one or more image sensors (e.g., front sensor or rear sensor), a lens, an image signal processor (ISP), or a flash (e.g., LED or xenon lamp). According to various exemplary embodiments, the camera130may capture image of the behavior of the user to set an action according to the condition, and generate image information. The generated image information may be transmitted to the processor120. The microphone140may receive external acoustic signals and generate electrical voice information. The microphone140may use various noise reduction algorithms for eliminating noise generated in receiving an external sound signal. According to various exemplary embodiments, the microphone140may receive the user's natural language to set the action according to the condition and generate voice information. The generated voice information may be transmitted to the processor120. According to various exemplary embodiments, the processor120may acquire image information via the camera130and acquire voice information via the microphone140. In addition, the processor120may determine an event to be detected according to a condition and a function to be executed according to the action when the event is detected, based on the acquired image information and voice information. The processor120may determine at least one detection resource to detect the determined event. In response to the at least one determined detection resource detecting at least one event satisfying the condition, the processor120may execute a function according to the condition. In an exemplary embodiment, the detection resource is a resource capable of detecting an event according to a condition among available resources, and may be a separate device external to the electronic device100or one module provided in the electronic device100. 
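As a rough illustration of the fallback described above, the sketch below checks whether every event required by a condition is detectable by some available resource and, if not, produces the text of a notification UI. The resource and event names are hypothetical, and the matching logic is deliberately simplified.

```python
from typing import Dict, List, Set

def plan_or_notify(required_events: List[str],
                   available_resources: Dict[str, Set[str]]) -> str:
    """Return a confirmation message if every required event is detectable,
    otherwise the text of a notification that the conditioned action cannot run.
    Resource names and event names are illustrative only."""
    assignment = {}
    for event in required_events:
        candidates = [name for name, capabilities in available_resources.items()
                      if event in capabilities]
        if not candidates:
            return f"Notification: the condition cannot be performed (no resource detects '{event}')."
        assignment[event] = candidates[0]
    pairs = ", ".join(f"{event} -> {resource}" for event, resource in assignment.items())
    return f"Confirmation: the action will run when all events are detected ({pairs})."

if __name__ == "__main__":
    resources = {
        "camera_310": {"drawer_opened", "unknown_person_recognized"},
        "distance_sensor_340": {"drawer_opened"},
    }
    print(plan_or_notify(["drawer_opened", "unknown_person_recognized"], resources))
    print(plan_or_notify(["window_opened"], resources))
```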
In an exemplary embodiment, the module includes units composed of hardware, software, or firmware, and may be used interchangeably with terms such as, for example, logic, logic blocks, components, or circuits. A “module” may be an integrally constructed component or a minimum unit or part thereof that performs one or more functions. In some exemplary embodiments, if the detection resource is a separate device external to the electronic device100, the detection resources may be, for example, IOT devices and may also be at least some of the exemplary embodiments of the electronic device100described above. Detailed examples of detection resources according to events to be detected will be described in detail later in various exemplary embodiments. FIG.1Cis a block diagram illustrating the configuration of an electronic device100and external devices230and240, according to an exemplary embodiment of the present disclosure. The electronic device100may include a memory110, a processor120, and a communicator150. The processor120ofFIG.1Cmay include all or part of the processor120shown inFIG.1A. In addition, the memory110ofFIG.1Cmay include all or part of the memory110shown inFIG.1A. The communicator150establishes communication between the external devices230and240, and may be connected to the network through wireless communication or wired communication so as to be communicatively connected with the external device. In an exemplary embodiment, the communicator150may communicate with the external devices230and240through a third device (e.g., a repeater, a hub, an access point, a server, or a gateway). The wireless communication may include, for example, LTE, LTE Advance (LTE-A), Code division multiple access (CDMA), Wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), Wireless Broadband (WiBro), Global System for Mobile Communications (GSM), and the like. According to an exemplary embodiment, the wireless communication may include, for example, at least one of wireless fidelity (WiFi), Bluetooth, Bluetooth low power (BLE), ZigBee, near field communication, Magnetic Secure Transmission, Radio Frequency (RF), and body area network (BAN). The wired communication may include, for example, at least one of a universal serial bus (USB), a high definition multimedia interface (HDMI), a recommended standard 232 (RS-232), a power line communication, and a plain old telephone service (POTS). The network over which the wireless or wired communication is performed may include at least one of a telecommunications network, a computer network (e.g., a LAN or WAN), the Internet, and a telephone network. According to various exemplary embodiments, the camera230may capture image or video of the behavior of the user to set an action according to the condition, and generate image information. The communicator (not shown) of the camera230may transmit the generated image information to the communicator150of the electronic device100. In an exemplary embodiment, the microphone240may receive the natural language (e.g., a phrase) uttered by the user to generate the voice information in order to set an action according to the condition. The communicator (not shown) of the microphone240may transmit the generated voice information to the communicator150of the electronic device100. The processor120may acquire image information and voice information through the communicator150. 
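The following is a minimal, transport-agnostic sketch of how a communicator such as the communicator150might hand multimodal capture messages from external devices to the processor. The CaptureMessage and Communicator names and the queue-based delivery are assumptions made for illustration; a real implementation would sit on top of one of the wireless or wired interfaces listed above.

```python
import queue
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaptureMessage:
    """Payload an external capture device might send to the electronic device."""
    device_id: str       # e.g., 'camera_230' or 'microphone_240'
    kind: str            # 'image' or 'voice'
    payload: bytes

class Communicator:
    """Transport-agnostic stand-in for a communicator (Wi-Fi, BLE, USB, ...)."""
    def __init__(self) -> None:
        self._inbox: "queue.Queue[CaptureMessage]" = queue.Queue()

    def receive(self, message: CaptureMessage) -> None:
        # Called when a message arrives from the network or bus.
        self._inbox.put(message)

    def acquire(self, timeout: float = 0.1) -> Optional[CaptureMessage]:
        # Called by the processor to pull the next available message.
        try:
            return self._inbox.get(timeout=timeout)
        except queue.Empty:
            return None

if __name__ == "__main__":
    comm = Communicator()
    comm.receive(CaptureMessage("microphone_240", "voice", b"record an image when ..."))
    comm.receive(CaptureMessage("camera_230", "image", b"<jpeg bytes>"))
    while (msg := comm.acquire()) is not None:
        print(msg.device_id, msg.kind, len(msg.payload), "bytes")
```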
In an exemplary embodiment, the processor120may determine an event to be detected according to a condition and determine a function to be executed according to the action when the event is detected, based on the acquired image information and voice information. The processor120may determine at least one detection resource to detect an event. In response to at least one event satisfying the condition being detected using the determined detection resource, the processor120may execute a function according to the condition. FIG.2is a block diagram showing a configuration of a system10including an electronic device100, according to an exemplary embodiment of the present disclosure. The system10may include an electronic device100, external devices230,240, and available resources250. The electronic device100, for example, may include all or part of the electronic device100illustrated inFIGS.1A to1C. In addition, the external devices230and240may be the camera230and the microphone240ofFIG.1C. The available resources250ofFIG.2may be resource candidates that are able to detect conditions set by the user and perform actions according to the conditions. In an exemplary embodiment, the detection resource is a resource that detects a condition-based event among the available resources250, and the execution resource may be a resource capable of executing a function according to an action among the available resources250. The available resources250may be primarily IOT devices and may also be at least some of the exemplary embodiments of the electronic device100described above. According to various exemplary embodiments, the camera230may capture an image or video of the behavior of the user to set an action according to the condition, and generate image information. The camera230may transmit the generated image information to the electronic device100. In addition, the microphone240may receive the natural language or voice uttered by the user to generate the voice information in order to set an action according to the condition. The microphone240may transmit the generated voice information to the electronic device100. The electronic device100may acquire image information from the camera230and acquire voice information from the microphone240. In an exemplary embodiment, the electronic device100may determine an event to be detected according to a condition and determine a function to be executed according to the action when the event is detected, based on the acquired image information and voice information. The electronic device100may search the available installed resources250and determine at least one detection resource, among the available resources250, to detect conditional events using the detection capabilities (i.e., a detection function) of the at least one detection resource. The electronic device100may also search for available installed resources250and determine at least one execution resource, among the available resources250, to perform a function according to the action based on the capabilities (i.e., an execution function) that the execution resource can provide. When at least one event satisfying the condition is detected using the determined detection resource, the electronic device100may control the selected execution resource to execute the function according to the condition. FIGS.3A to3Dare diagrams illustrating a situation in which an action according to a condition is executed in the electronic device100, according to an exemplary embodiment of the present disclosure.
In an exemplary embodiment, the user1may perform a specific action while speaking in a natural language in order to set an action to be executed according to a condition. The condition may be referred to as a trigger condition in that it fulfills the role of a trigger in which an action is performed. For example, the user1performs a gesture pointing at the drawer330with his or her finger, or glances toward the drawer, while saying “Record an image when another person opens the drawer over there.” In this example, the condition may be a situation where another person opens the drawer330indicated by the user1, and the action may be an image recording of a situation in which another person opens the drawer330. Peripheral devices310and320located in the periphery of the user1may generate audio information and image information from natural language uttered by the user1and an action of the user1associated with the natural language. For example, the microphone320may receive the natural language phrase “record an image when another person opens the drawer over there” to generate audio information, and the camera310may photograph or record an action of pointing at the drawer330with a finger to generate image information. In an exemplary embodiment, the peripheral devices310and320can transmit the generated voice information and image information to the electronic device100, as shown inFIG.3B. In an exemplary embodiment, the peripheral devices310and320may transmit the information to the electronic device100via a wired or wireless network. In another exemplary embodiment, in the case where the peripheral devices310and320are part of the electronic device100as shown inFIG.1B, the peripheral devices310and320may transmit the information to the processor120of the electronic device100via an interface, such as a data communication line or bus. In an exemplary embodiment, the processor120of the electronic device100may acquire voice information from a natural language through the communicator150and acquire image information from a user's action associated with the natural language. In another exemplary embodiment, when the peripheral devices310and320are part of the electronic device100as shown inFIG.1B, the processor120may acquire audio information and image information generated from the user's action through an interface such as a bus. The processor120may determine at least one event to be detected according to the condition and determine, when at least one event is detected, a function to be executed according to the action, based on the acquired voice information and image information. For example, the processor120may determine an event in which the drawer330is opened and an event in which another person is recognized as at least one event to detect conditionally. The processor120may determine the function of recording an image of a situation in which another person opens the drawer330as a function to perform according to an action. The processor120may select at least one detection resource for detecting at least one event among the available resources. In this example, the at least one detection resource may include, for example, a camera310located in the vicinity of the drawer, capable of detecting both an event in which the drawer330is opened and an event of recognizing another person, and an image recognition module (not shown) for analyzing the photographed or recorded image and recognizing an operation or a state of an object included in the image.
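A stand-in for the image recognition module mentioned above might look like the following sketch, where classify_frame is a hypothetical callable (for example, a trained vision model) and the event labels are illustrative rather than part of the disclosure.

```python
from typing import Callable, Set

class ImageRecognitionModule:
    """Turns frames into event names; classify_frame is a hypothetical classifier."""
    def __init__(self, classify_frame: Callable[[bytes], Set[str]]) -> None:
        self._classify_frame = classify_frame

    def detect_events(self, frame: bytes, events_of_interest: Set[str]) -> Set[str]:
        # Intersect whatever the classifier reports with the events we are watching for.
        labels = self._classify_frame(frame)          # e.g., {'drawer_opened', 'unknown_person_recognized'}
        return labels & events_of_interest

if __name__ == "__main__":
    fake_classifier = lambda frame: {"drawer_opened", "unknown_person_recognized"}
    module = ImageRecognitionModule(fake_classifier)
    print(module.detect_events(b"<frame>", {"drawer_opened", "unknown_person_recognized", "window_opened"}))
```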
In an exemplary embodiment, the image recognition module may be part of the camera310or part of the electronic device100. The image recognition module is described as part of the camera in this disclosure, but the image recognition module may be implemented as part of the electronic device100as understood by one of ordinary skill in the art. The camera may provide the image information to the electronic device100in a similar manner as the camera310providing the image information to the electronic device100inFIG.3C. In another exemplary embodiment, the at least one detection resource may include, for example, a distance detection sensor340for detecting an open event of the drawer330and a fingerprint recognition sensor350or iris recognition sensor for detecting an event that recognizes another person. In an exemplary embodiment, the processor120may determine at least one execution resource for executing a function according to an action among the available resources. For example, the at least one execution resource may be a camera located around the drawer330performing the function of recording. The camera may perform similar functions as the camera310providing the image information inFIG.3CandFIG.3D. Alternatively, the camera may be the same camera as the camera that detects the event. If at least one detection resource is selected, the processor120may transmit control information requesting detection of the event according to the condition to the selected detection resources340and350, as shown inFIG.3C. The detection resource receiving the control information may monitor whether or not an event according to the condition is detected. A situation may be met that satisfies the condition. For example, as shown inFIG.3D, a situation may occur in which the other person2opens the drawer330indicated by the user's finger. In an exemplary embodiment, the detection resources340and350may detect an event according to the condition. For example, the distance detection sensor340may detect an event in which a drawer is opened, and the fingerprint recognition sensor350may detect an event that recognizes another person. The detection resources340and350may transmit the detection result of the event to the processor120. The processor120may, when at least one event satisfying the condition is detected, control the function according to the action to be executed based on the received detection result. For example, when there are a plurality of events necessary for satisfying the condition, the processor120may, when all the plurality of events satisfy the condition, determine that the condition is satisfied and may control the function according to the action to be executed. The processor120may transmit the control information so that the selected execution resource executes the function according to the action. For example, the processor120may transmit control information requesting execution of the recording function to the camera310located near the drawer330. Accordingly, the camera310can record the situation in which the person2opens the drawer330as an image. As described above, when the condition according to the user's behavior is set, a visual If This Then That (IFTTT) environment using the camera310can be established. FIGS.4A to4Dare diagrams illustrating situations in which an action according to a condition is executed in the electronic device100, according to an exemplary embodiment of the present disclosure. 
InFIG.4A, the user1may utter a natural language (e.g., phrase) while performing a specific action in order to set an action to be executed according to a condition. For example, the user1may utter the natural language “turn off” while pointing at the TV430with a finger and performing a gesture to rotate the finger clockwise. In this example, the condition may be that the user1rotates his or her finger in a clockwise direction towards the TV430, and the action in accordance with the condition may be to turn off the TV430. In another exemplary embodiment, the user1may utter the natural language “turn off” while performing a gesture pointing at the TV430with a finger. In this example, the condition may be a situation where the user1speaks “turn off” while pointing a finger toward the TV430, and the action may be to turn off the TV430. Peripheral devices410and420located in the vicinity of the user1may generate image information and voice information from a behavior of the user1and a natural language associated with the behavior of the user1. For example, the camera410may photograph a gesture of pointing at a TV with a finger and rotating the finger to generate image information, and the microphone420may receive the natural language “turn off” to generate voice information. InFIG.4B, the peripheral devices410and420may transmit the generated voice information and image information to the electronic device100. In an exemplary embodiment, the peripheral devices410and420may transmit the information to the electronic device100via a wired or wireless network. In another exemplary embodiment, in the case where the peripheral devices410and420are part of the electronic device100as shown inFIG.1B, the peripheral devices410and420may transmit the information to the processor120of the electronic device100via an interface, such as a data communication line or bus. In an exemplary embodiment, the processor120of the electronic device100may acquire voice information from a natural language through the communicator150and acquire image information from a user's action associated with the natural language. In another exemplary embodiment, when the peripheral devices410and420are part of the electronic device100as shown inFIG.1B, the processor120may acquire audio information and image information generated from the user's action through an interface such as a bus. The processor120may determine at least one event to be detected according to the condition. The processor120may determine, when at least one event is detected, a function to be executed according to the action, based on the acquired voice information and image information. For example, the processor120may determine an event that recognizes a gesture that rotates a finger clockwise toward the TV430as an event to detect. The processor120may determine that the function of turning off the TV430is a function to perform according to an action. The processor120may select at least one detection resource for detecting at least one event among the available resources. In this example, the at least one detection resource may be a camera440installed on top of the TV430and an image recognition module (not shown) recognizing the gesture, which may sense the gesture of the user1. The image recognition module may be part of the camera440or part of the electronic device100.
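The control-information handshake described above, in which the processor asks a selected detection resource to monitor for an event and the resource reports back when the event is observed, might be sketched as follows. All identifiers (ControlInfo, DetectionResource, the event name) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ControlInfo:
    """Control information requesting that a resource watch for a given event."""
    event_name: str
    reply_to: Callable[[str, str], None]   # callback: (resource_id, event_name)

class DetectionResource:
    """Stand-in for a detection resource such as a camera mounted on top of a TV."""
    def __init__(self, resource_id: str) -> None:
        self.resource_id = resource_id
        self._watched: List[ControlInfo] = []

    def request_detection(self, info: ControlInfo) -> None:
        # Start monitoring for this event.
        self._watched.append(info)

    def on_observation(self, observed_event: str) -> None:
        # Report the detection result back to the processor when the event occurs.
        for info in self._watched:
            if info.event_name == observed_event:
                info.reply_to(self.resource_id, observed_event)

if __name__ == "__main__":
    def processor_callback(resource_id: str, event_name: str) -> None:
        print(f"{resource_id} reported '{event_name}' -> execute: turn off the TV")

    camera = DetectionResource("camera_440")
    camera.request_detection(ControlInfo("clockwise_finger_rotation", processor_callback))
    camera.on_observation("clockwise_finger_rotation")
```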
The image recognition module is described as part of the camera440in this disclosure, but the image recognition module may be implemented as part of the electronic device100as understood by one of ordinary skill in the art. The processor120may determine at least one execution resource for executing a function according to an action among the available resources. In this example, at least one execution resource may be the TV430itself capable of being turned off. If at least one detection resource is selected, the processor120may transmit control information requesting detection of the event according to the condition to the selected detection resource440, as shown inFIG.4C. The detection resource440receiving the control information may monitor whether or not an event according to the condition is detected. A situation satisfying the condition may occur. For example, as shown inFIG.4D, while the TV430is playing content, a situation may occur in which the user rotates a finger clockwise toward the TV430. In this case, the camera440as a detection resource may detect an event according to the condition. For example, the camera440may detect an event that recognizes a gesture that rotates a finger in a clockwise direction. The detection resource440may transmit the detection result of the event to the processor120. The processor120may, when at least one event satisfying the condition is detected, control the function according to the action to be executed based on the received detection result. For example, the processor120may transmit control information requesting the TV430to turn off. Accordingly, the TV430may turn off the screen that is being displayed. As described above, when setting conditions according to the user's behavior for a home appliance (e.g., TV, etc.), a universal remote control environment for controlling a plurality of home appliances with a unified gesture may be established. FIGS.5A to5Dare diagrams illustrating situations in which an action according to a condition is executed in the electronic device100, according to an exemplary embodiment of the present disclosure. InFIG.5A, the user1may utter a natural language (e.g., a phrase) while performing a specific action in order to set an action to be executed according to a condition. For example, the user1may create a ‘V’-like gesture with his/her finger and utter a natural language saying “take a picture when I do this”. In this example, the condition may be a situation of making a ‘V’ shaped gesture, and an action according to the condition may be that an electronic device (for example, a smartphone with a built-in camera)100photographs the user. In another exemplary embodiment, the user1may utter a natural language saying “take a picture if the distance is this much” while holding the electronic device100more than a certain distance away. In this example, the condition may be a situation in which the user1holds the electronic device100more than a certain distance away, and the action according to the condition may be that the electronic device100photographs the user1.
In another exemplary embodiment, when the subjects to be photographed including the user1are within the shooting range of the electronic device100, the user1may utter the natural language “take a picture when all of us come in.” In this example, the condition may be a situation in which the subjects to be photographed including the user1are within the shooting range of the electronic device100, and the action in accordance with the condition may be that the electronic device100photographs the subjects. In another exemplary embodiment, the subjects including the user1may jump, and the user1may utter the natural language “take a picture when all of us jump like this”. In this example, the condition may be a situation in which the subjects to be photographed including the user1jump into the shooting range of the electronic device100, and the action in accordance with the condition may be that the electronic device100photographs the subjects. In another exemplary embodiment, the user1may utter a natural language such as “take a picture when the child laughs”, “take a picture when the child cries”, or “take a picture when the child stands up”. In this example, the condition may be a situation where the child laughs, cries, or stands up, and an action according to the condition may be that the electronic device100photographs the child. In another exemplary embodiment, the user1may utter the natural language “take a picture when I go and sit” while mounting the electronic device100at a photographable position. In this example, the condition may be a situation in which the user1sits while the camera is stationary, and an action according to the condition may be that the electronic device100photographs the user. The camera130and the microphone140built in the electronic device100may generate image information and audio information from a user's behavior and a natural language related to the user's behavior. For example, the camera130may photograph a ‘V’ shaped gesture to generate image information, and the microphone140may receive the natural language “take a picture when I do this” to generate voice information. InFIG.5B, the camera130and the microphone140may transmit the generated audio information and image information to the processor120. The processor120may determine at least one event to be detected according to the condition. The processor120may determine, when at least one event is detected, a function to be executed according to the action, based on the acquired voice information and image information. For example, the processor120determines an event that recognizes a ‘V’ shaped gesture as an event to detect. The processor120determines the function of photographing as a function to be performed according to the action. The processor120selects at least one detection resource for detecting at least one event among the various types of sensing-capable modules available in the electronic device100, which are available resources. In this example, the at least one detection resource may be a camera130provided in the electronic device100and an image recognition module (not shown) recognizing the gesture. The image recognition module may be included in the camera130, or may be part of the processor120. The processor120selects at least one execution resource for executing functions according to an action among the various types of modules in the electronic device100that are capable of providing executable functions, which are the available resources.
In this example, at least one execution resource may be a camera130provided in the electronic device100. The processor120transmits control information requesting detection of the event according to the condition to the selected detection resource130, as shown inFIG.5C. The detection resource130receiving the control information monitors whether or not an event according to the condition is detected. A situation satisfying the condition occurs. For example, as shown inFIG.5D, a situation occurs in which the user1performs a ‘V’ shaped gesture toward the camera. In this example, the camera130as a detection resource detects an event according to the condition. For example, the camera130determines an event that recognizes a ‘V’ shaped gesture. The detection resource130transmits the detection result of the event to the processor120. The processor120, when at least one event satisfying the condition is detected, controls the function according to the action to be executed based on the received detection result. For example, the processor120sends control information requesting the camera130to take a picture. Accordingly, the camera130executes a function of photographing the user. In an exemplary embodiment, when the camera130automatically performs photographing in accordance with the conditions set by the user, the user's experience of using the camera130can be improved by providing the user with a natural and convenient user interface for shooting. The user may present conditions for more flexible and complex photographing or recording. The camera may automatically perform shooting when the condition is satisfied, thereby improving the user's experience with the electronic device100. FIG.6is a flowchart of executing an action according to a condition in the electronic device100, in accordance with an exemplary embodiment of the present disclosure. A user sets an action to be executed according to a condition based on a natural interface (601). The natural interface may be, for example, speech for uttering a natural language, text, or gestures. In an exemplary embodiment, a condition and an action to be executed according to the condition may be configured through a multimodal interface. In an example, the user may perform a gesture of pointing to the drawer with a finger, while saying “when the drawer here is opened”. The user may perform a gesture of pointing to the TV with a finger while saying “display a notification message on the TV there” as an action to be executed according to the condition. In an example, the user may utter “if the condition is a pleasant family atmosphere” as a condition and utter “store an image” as an action to be executed according to the condition. In an example, the user may utter “if the window is open in the evening” as a condition and utter “tell me to close the window” as an action to be performed according to the condition. In an example, the user may utter “if the child smiles” as a condition and utter “save an image” as an action to perform according to the condition. In an example, the user may, as a condition, utter “if I get out of bed in the morning and go out into the living room” and utter “tell me the weather” as an action to perform according to the condition. In an example, the user may utter “when I lift my fingers toward the TV” as a condition and utter “If the TV is turned on, turn it off, and if it is off, turn it on” as an action to perform according to the condition.
In an example, the user may utter “If I do a push-up” as a condition and utter “give an order” as an action to be executed according to the condition. In an example, the user may utter “when no one is here when a stranger comes in” as a condition and utter “record an image and contact family” as an action to perform according to the condition. In an example, the user may utter “when there is a loud sound outside the door” as condition, and may perform a gesture of pointing a finger toward the TV while uttering “turn on the camera attached to the TV and show it on the TV” as an action to be performed according to the condition. When the user sets an action to be executed according to the condition, the user's peripheral device receives the natural language that the user utters and may photograph the user's behavior (603). The processor120acquires voice information generated based on a natural language and image information generated based on shooting from peripheral devices, and the processor120processes the acquired voice information and image information (605). For example, the processor120may convert the acquired voice information into text using a natural language processing technique, and may recognize an object and peripheral environment included in the image information using a visual recognition technique. In an exemplary embodiment, the processor120analyzes or interprets the processed voice information and the video information to understand the intention of the user. For example, the processor120may analyze voice information and image information using a multimodal reasoning technique. In this example, the processor120may analyze the voice information and the image information based on a data recognition model using a learning algorithm (e.g., a neural network algorithm, a genetic algorithm, a decision tree algorithm, a support vector machine, etc.). The processor120may determine the user's intention, determine a condition and an action to be performed according to the condition, and may also determine at least one event requiring detection according to the condition. In this example, the processor120may check a condition according to the analysis result and an action to be executed according to the condition, in order to clearly identify the intention of the user. According to various exemplary embodiments, the processor120may provide a user with a confirmation user interface (UI) as feedback to confirm conditions and actions. In an example, the processor120provides a confirmation UI that “is it right to record when the second drawer is opened on the right desk” by voice or image using the electronic device100or a peripheral device. In this example, when a user input that accepts the confirmation UI is received, the processor120determines a condition and an action to be executed according to the condition. In another example, when a user input rejecting the confirmation UI is received, the processor120provides a UI requesting the user's utterance and action to set an action to be executed according to the condition using the electronic device100or peripheral device. The processor120establishes an event detection plan (609). For example, the processor120selects at least one detection resource for detecting at least one event determined (607). 
In this example, the processor120may determine at least one detection resource for detecting at least one event based on a data recognition model generated using a learning algorithm (e.g., a neural network algorithm, a genetic algorithm, a decision tree algorithm, or a support vector machine). The processor120may search for available resources that are already installed. In an exemplary embodiment, the available resources may be available resources that are located at a place where an event according to a condition is detectable or located at a place where a function according to an action is executable, in order to execute an action according to a condition set by the user. The available resources may transmit information about their capabilities to the processor120in response to a search by the processor120. The processor120may determine at least one detection resource to detect an event among the available resources based on the detectable function among the functions of the available resources. Detectable functions may include a function to measure a physical quantity, such as gesture sensing function, air pressure sensing function, magnetic sensing function, acceleration sensing function, proximity sensing function, color sensing function, temperature sensing function, humidity sensing function, distance sensing function, pressure sensing function, touch sensing function, illumination sensing function, wavelength sensing function, smell or taste sensing function, fingerprint sensing function, iris sensing function, voice input function or image shooting function, or may include a function to detect a state of a peripheral environment and convert the detected information to an electrical signal. In another exemplary embodiment, when the same functions among the detectable functions of the available resources exist, the processor120may determine the detection resources according to the priority of the function. For example, it is possible to determine at least one detection resource to detect an event in consideration of priorities such as a detection range, a detection cycle, a detection performance, or a detection period of each of the detectable functions. In an example, when the condition set by the user is “when the window is open in the room while no one is in the room”, the processor120may select a motion sensor that detects an event that an object in the room moves, a camera for detecting an event to recognize a person in the room, and a window opening sensor for detecting an event in which a window is opened, as detection resources. In this example, the processor120may establish a detection plan under which an event satisfying the condition is determined to be detected when an event of no object movement is detected by the motion sensor, an event of no person being in the room is detected by the camera, and an event of the window being open is detected. In another example, if at least one event among the events is not detected, the processor120may determine that a situation where the condition is not satisfied has occurred. The processor120may provide the situation according to the condition set by the user as an input value to the previously learned data recognition model and, according to the established detection plan, may determine whether the available resources can detect an event according to the condition. This can be defined as an event detection method based on multimodal learning.
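The multi-event detection plan described above, in which the condition is considered satisfied only when every required event reports its expected state, can be sketched as follows. The event names follow the window example and are otherwise illustrative assumptions.

```python
from typing import Dict

class DetectionPlan:
    """Condition is satisfied only when every required event reports the expected state."""
    def __init__(self, required: Dict[str, bool]) -> None:
        self.required = required                  # event name -> expected boolean state
        self.latest: Dict[str, bool] = {}

    def update(self, event: str, state: bool) -> bool:
        # Record the newest detection result and re-evaluate the condition.
        self.latest[event] = state
        return self.satisfied()

    def satisfied(self) -> bool:
        return all(self.latest.get(event) == expected
                   for event, expected in self.required.items())

if __name__ == "__main__":
    plan = DetectionPlan({"object_motion": False, "person_present": False, "window_open": True})
    plan.update("object_motion", False)
    plan.update("person_present", False)
    print(plan.update("window_open", True))   # True -> an event satisfying the condition is detected
```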
The processor120may determine at least one execution resource to execute the function according to the action among the available resources based on the functions that the available resources can provide. In an exemplary embodiment, the processor120may determine at least one execution resource for performing other functions on the action based on a data recognition model generated using a learning algorithm (e.g., a neural network algorithm, a genetic algorithm, a decision tree algorithm, or a support vector machine). For example, the executable functions include the above-described detectable functions, and may be at least one of a display function, an audio playback function, a text display function, a video shooting function, a recording function, a data transmission function, a vibration function, or a driving function for transferring power. In another exemplary embodiment, when the same functions among the executable functions of the available resources exist, the processor120may determine execution resources according to the priority of the function. For example, it is possible to determine at least one execution resource to execute a function according to an action in consideration of priority such as execution scope, execution cycle, execution performance or execution period of each of the executable functions. According to various exemplary embodiments, the processor120may provide a confirmation UI as feedback for the user to confirm the established event detection plan. In an example, the processor120may provide a confirmation UI “Recording starts when the drawer opens. Open the drawer now to test.” by voice using the electronic device100or the user's peripheral device. The processor120may display a drawer on a screen of a TV that performs a recording function as an action in response to an event detection. According to various exemplary embodiments, the processor120may analyze common conditions of a plurality of events to optimize the detection resources to detect events if there are multiple events to detect according to the condition. In an example, if the condition set by the user is “when a drawer is opened by another person”, the processor120may determine that the event to be detected according to the condition is an event in which the drawer is opened and an event in which the person is recognized. In this example, the processor120may select a distance sensing sensor attached to the drawer as a detection resource to detect a drawer opening event, and a camera around the drawer as a detection resource to detect an event that recognizes another person. The processor120may optimize the plurality of events into one event where the camera recognizes that another person opens the drawer. According to various exemplary embodiments, the processor120may substitute the available resources that detect a particular event with other available resources, depending on the situation of the available resources. In another exemplary embodiment, the processor120may determine whether to detect an event according to the condition according to the situation of the available resources, and may provide feedback to the user when the event cannot be detected. For example, if the condition set by the user is “when another person opens the drawer over there”, the processor120may replace the camera, in the vicinity of the drawer, with a fingerprint sensor, provided in the drawer, to detect an event for recognizing another person if the camera around the drawer is inoperable. 
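A simple way to express the priority-based selection and substitution of detection resources described above is sketched below; the candidate ordering stands in for priorities such as detection range or performance, and the resource names are illustrative.

```python
from typing import Dict, List, Optional

def choose_resource(event: str,
                    candidates: Dict[str, List[str]],
                    operable: Dict[str, bool]) -> Optional[str]:
    """Pick the highest-priority operable resource for an event, falling back otherwise.
    Candidate lists are assumed to be ordered by priority."""
    for resource in candidates.get(event, []):
        if operable.get(resource, False):
            return resource
    return None   # no resource available -> provide the notification UI described above

if __name__ == "__main__":
    candidates = {"unknown_person_recognized": ["camera_310", "fingerprint_sensor_350"]}
    operable = {"camera_310": False, "fingerprint_sensor_350": True}
    # The preferred camera is inoperable, so the fingerprint sensor substitutes for it.
    print(choose_resource("unknown_person_recognized", candidates, operable))
```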
In an exemplary embodiment, if there is no available resource for detecting an event recognizing another person, or if the event cannot be detected, the processor120may provide the user with a notification UI with feedback indicating that execution of the action according to the condition is difficult. For example, the processor120may provide the user with a notification UI that “a condition corresponding to another person cannot be performed”. When a situation satisfying the condition occurs, the detection resource determined by the processor120may detect the event according to the condition (611). If it is determined that an event satisfying the condition is detected based on the detection result, the processor120may execute the function according to the action set by the user. This may be referred to as the processor120triggering, at step613, the action set by the user according to the condition, in response to the trigger condition described above. FIG.7is a diagram illustrating a process of setting identification information of available resources in the electronic device100, according to an exemplary embodiment of the present disclosure. A camera710may be located near the available resources720,730and can capture the state of the available resources720,730. The camera710may capture the available resources720and730in real time, at a predetermined period, or at the time of event occurrence. During a period of time, an event or an operating state may be detected in the first available resource (e.g., a touch sensor or a distance sensor)720and the second available resource (e.g., a digital lamp)730. In an exemplary embodiment, the camera710may transmit the image information of the available resources720and730photographed or recorded for a predetermined time to the electronic device100. The available resources720and730may transmit the detected information to the electronic device100. For example, during time t1741, in which the user opens a door, the first available resource720detects (751) the door open event and sends the detection result to the electronic device100. The camera710located in the vicinity of the first available resource720acquires image information by photographing the first available resource720located at the first location during time t1741(753). The camera710transmits the acquired image information to the electronic device100. In an exemplary embodiment, the electronic device100may automatically generate identification information of the first available resource720, based on the detection result detected by the first available resource720, and the image information obtained by photographing the first available resource720. The identification information of the first available resource720may be determined based on the first location, which is the physical location of the first available resource720, and the type of the first available resource720or the attribute of the detection result. For example, when the first location is the front door and the type of the first available resource720is a touch sensor or a distance sensing sensor capable of sensing movement or detachment of an object, the electronic device100may set the identification information of the first available resource720as “front door opening sensor” (755).
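The automatic naming described above, which pairs a resource's detection result with the location and type inferred from camera footage, might be approximated as follows. The rule-based label generation here is only a placeholder for the learned data recognition model mentioned below, and all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ResourceObservation:
    """What the electronic device learns by pairing a resource's report with camera footage."""
    location: str          # inferred from the image information, e.g., 'front door'
    resource_type: str     # e.g., 'distance sensor', 'lamp'
    detected: str          # e.g., 'door opening', 'turned on'

def generate_identification(obs: ResourceObservation) -> str:
    """Naive label generation; the disclosure describes doing this with a learned model."""
    if "sensor" in obs.resource_type:
        # e.g., 'front door' + 'opening' -> 'front door opening sensor'
        return f"{obs.location} {obs.detected.split()[-1]} sensor"
    # e.g., 'living room cabinet' + 'lamp' -> 'living room cabinet lamp'
    return f"{obs.location} {obs.resource_type}"

if __name__ == "__main__":
    print(generate_identification(ResourceObservation("front door", "distance sensor", "door opening")))
    print(generate_identification(ResourceObservation("living room cabinet", "lamp", "turned on")))
```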
The electronic device100may automatically map the detection result received from the first available resource720and the image information generated by photographing the first available resource720and may automatically set a name or label for the first available resource720. In an exemplary embodiment, when the electronic device100automatically generates the identification information of the first available resource720, the electronic device100may do so using a data recognition model generated using a learning algorithm (e.g., a neural network algorithm, a genetic algorithm, a decision tree algorithm, or a support vector machine). In another exemplary embodiment, during time t2742when the user opens the door, the second available resource730may be turned on by the user's operation or turned on automatically. The second available resource730detects (761) its own on-state and sends the on-state to the electronic device100. The camera710located in the vicinity of the second available resource730acquires image information by photographing the second available resource730located at the second location during time t2742(763). The camera710transmits the acquired image information to the electronic device100. In an exemplary embodiment, the electronic device100may automatically generate the identification information of the second available resource730based on the operating state of the second available resource730and the image information of the second available resource730. The identification information of the second available resource730may be determined based on, for example, the properties of the second location, which is the physical location of the second available resource730, and the type or operating state of the second available resource730. For example, if the second location is on the cabinet of the living room and the type of the second available resource730is a lamp, the electronic device100may set the identification information of the second available resource730to “living room cabinet lamp” (765). According to various exemplary embodiments, the electronic device100may set the identification information of the available resources based on the initial installation state of the available resources and the image information obtained from the camera during installation, even when the available resources are initially installed. According to various exemplary embodiments, the electronic device100may provide a list of available resource identification information using a portable terminal provided by a user or an external device having a display in the vicinity of the user. In an exemplary embodiment, the portable terminal or the external device may provide the user with a UI capable of changing at least a part of the identification information of the available resource. When the user changes the identification information of the available resource in response to the provided UI, the electronic device100may receive the changed identification information of the available resource from the portable terminal or the external device. Based on the changed identification information of the available resource, the electronic device100may reset the identification information of the available resource. FIG.8is a flowchart of executing an action according to a condition in the electronic device100, in accordance with an exemplary embodiment of the present disclosure.
In an exemplary embodiment, the electronic device100acquires audio information and image information generated from a natural language uttered by the user and user's actions associated with the natural language, for setting an action to be performed according to a condition (801). The audio information is generated from a natural language (e.g. a phrase) uttered by the user. The image information is generated from a user's actions associated with the natural language. The electronic device100acquires the audio information and image information to set an action to be performed when a condition is met. In an exemplary embodiment, the electronic device100acquires at least one of an audio information and image information to set an action to be performed when a condition is met. The electronic device100determines an event to be detected according to a condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and image information (803). In an exemplary embodiment, the electronic device100applies the acquired voice information and image information to a data recognition model generated using a learning algorithm to determine a condition and action according to the user's intention. The electronic device100determines an event to be detected according to a condition and a function to be executed according to the action. The electronic device100determines at least one detection resource to detect a determined event (805). The detection resource may be a module included in the electronic device100or in an external device located outside the electronic device100. The electronic device100may search for available resources that are installed and may determine at least one detection resource to detect an event among the available resources based on a function detectable by the retrieved available resources. In an exemplary embodiment, if there is no resource to detect an event, or if the detection resource is in a situation in which an event cannot be detected, the electronic device100provides a notification UI informing that execution of an action according to the condition is impossible. The electronic device100may use the determined at least one detection resource to determine if at least one event satisfying the condition has been detected (decision block807). As a result of the determination, if at least one event satisfying the condition is detected, (decision block807“YES” branch), the electronic device100controls the function according to the action to be executed (809) and ends. For example, when the detection result of the event is received from the detection resource, the electronic device100may control the function according to the action to be executed based on the received detection result. FIG.9is a flowchart of executing an action according to a condition in the electronic device100, in accordance with another exemplary embodiment of the present disclosure. In an exemplary embodiment, the electronic device100acquires audio information and image information generated from a natural language uttered by the user and user's actions associated with the natural language, for setting an action to be performed according to a condition (901). The audio information is generated from a natural language (e.g. a phrase) uttered by the user. The image information is generated from a user's actions associated with the natural language. 
The electronic device100acquires the audio information and image information to set an action to be performed when a condition is met. In an exemplary embodiment, the electronic device100acquires at least one of audio information and image information to set an action to be performed when a condition is met. The electronic device100determines an event to be detected according to a condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and image information (903). The electronic device100determines at least one detection resource to detect a determined event and at least one execution resource to execute a function according to an action (905). For example, the electronic device100searches for available installed resources and determines at least one execution resource to execute a function according to an action among the available resources, based on a function that the retrieved available resources can provide. When at least one detection resource is determined, the electronic device100transmits control information, requesting detection of the event, to the determined at least one detection resource (907). The electronic device100determines whether at least one event satisfying the condition has been detected using the detection resource (decision block909). As a result of the determination, if at least one event satisfying the condition is detected (decision block909“YES” branch), the electronic device100transmits the control information to the execution resource so that the execution resource executes the function according to the action (911). The execution resource that has received the control information executes the function according to the action (913). FIGS.10to13are diagrams for illustrating an exemplary embodiment of constructing a data recognition model and recognizing data through a learning algorithm, according to various exemplary embodiments of the present disclosure. Specifically,FIGS.10to13illustrate a process of generating a data recognition model using a learning algorithm and determining a condition, an action, an event to detect according to the condition, and a function to be executed according to the action through the data recognition model. Referring toFIG.10, the processor120according to some exemplary embodiments may include a data learning unit1010and a data recognition unit1020. The data learning unit1010may generate or make the data recognition model learn so that the data recognition model has a criterion for a predetermined situation determination (for example, a condition and an action, an event according to a condition, determination of a function based on an action, etc.). The data learning unit1010may apply the learning data to the data recognition model to determine a predetermined situation and generate the data recognition model having the determination criterion. For example, the data learning unit1010according to an exemplary embodiment of the present disclosure can generate or make the data recognition model learn using learning data related to voice information and learning data associated with image information. As another example, the data learning unit1010may generate and make the data recognition model learn using learning data related to conditions and learning data associated with an action.
As another example, the data learning unit1010may generate and make the data recognition model learn using learning data related to an event and learning data related to the function. The data recognition unit1020may determine the situation based on the recognition data. The data recognition unit1020may determine the situation from predetermined recognition data using the learned data recognition model. The data recognition unit1020can acquire predetermined recognition data according to a preset reference and apply the obtained recognition data as an input value to the data recognition model to determine (or estimate) a predetermined situation based on the recognition data. The result value obtained by applying the recognition data to the data recognition model may be used to update the data recognition model. In particular, the data recognition unit1020according to an exemplary embodiment of the present disclosure applies the recognition data related to the voice information and the recognition data related to the image information to the data recognition model as the input value, and may acquire the result of the determination of the situation (for example, the condition and the action desired to be executed according to the condition) of the electronic device100. The data recognition unit1020applies recognition data related to the condition and recognition data related to the action as input values to the data recognition model to determine the state of the electronic device100(for example, an event to be detected according to a condition, and a function to perform according to an action). In addition, the data recognition unit1020may apply, to the data recognition model, the recognition data related to an event and recognition data related to a function as input values and acquire a determination result (a detection resource for detecting an event, an execution resource for executing a function) which determines a situation of the electronic device100. At least a part of the data learning unit1010and at least a part of the data recognition unit1020may be implemented in a software module or in the form of at least one hardware chip and mounted on an electronic device. For example, at least one of the data learning unit1010and the data recognition unit1020may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or as part of an existing general purpose processor (e.g., a CPU or application processor) or a graphics-only processor (e.g., a GPU), and may be mounted on the various electronic devices described above. At this time, the dedicated hardware chip for artificial intelligence is a dedicated processor specialized for probability calculation, and it has a higher parallel processing performance than conventional general purpose processors, so that it is possible to quickly process computation tasks in the field of artificial intelligence, such as machine learning. When the data learning unit1010and the data recognition unit1020are implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium. In this case, the software module may be provided by the operating system (OS) or by a predetermined application. A part of the software module may be provided by the operating system (OS) and the remaining portion may be provided by a predetermined application. 
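As a rough illustration of the split between the data learning unit1010and the data recognition unit1020described above, the following Python sketch pairs a learning component that fits a model from voice- and image-related learning data with a recognition component that applies recognition data to the learned model. The class names, the feature layout, and the use of a scikit-learn classifier as a stand-in for the data recognition model are assumptions made for illustration only and are not part of the disclosure.

import numpy as np
from sklearn.linear_model import LogisticRegression

class DataLearningUnit:
    """Builds (or updates) a data recognition model from learning data."""
    def __init__(self):
        self.model = LogisticRegression(max_iter=1000)  # stand-in for the data recognition model

    def learn(self, voice_features, image_features, labels):
        # Concatenate voice- and image-derived features into one learning example per row.
        x = np.hstack([voice_features, image_features])
        self.model.fit(x, labels)
        return self.model

class DataRecognitionUnit:
    """Applies recognition data to the learned model to estimate a situation."""
    def __init__(self, model):
        self.model = model

    def recognize(self, voice_features, image_features):
        x = np.hstack([voice_features, image_features])
        return self.model.predict(x)

# Toy usage: labels 0/1 stand in for two candidate condition/action interpretations.
rng = np.random.default_rng(0)
voice = rng.normal(size=(20, 4))
image = rng.normal(size=(20, 6))
labels = (voice[:, 0] + image[:, 0] > 0).astype(int)
learner = DataLearningUnit()
recognizer = DataRecognitionUnit(learner.learn(voice, image, labels))
print(recognizer.recognize(voice[:3], image[:3]))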
In an exemplary embodiment, the data learning unit1010and the data recognition unit1020may be mounted on one electronic device or on separate electronic devices, respectively. For example, one of the data learning unit1010and the data recognition unit1020may be included in the electronic device100, and the other may be included in an external server. The data learning unit1010may provide the model information, constructed by the data learning unit1010, to the data recognition unit1020, via wire or wirelessly. The data input to the data recognition unit1020may be provided to the data learning unit1010as additional learning data, via wire or wirelessly. FIG.11is a block diagram of a data learning unit1010according to exemplary embodiments. Referring toFIG.11, the data learning unit1010according to some exemplary embodiments may include the data acquisition unit1010-1and the model learning unit1010-4. The data learning unit1010may further include, selectively, at least one of the preprocessing unit1010-2, the learning data selection unit1010-3, and the model evaluation unit1010-5. The data acquisition unit1010-1may acquire learning data which is necessary for learning to determine a situation. The learning data may be data collected or tested by the data learning unit1010or the manufacturer of the electronic device100. Alternatively, the learning data may include voice data generated from the natural language uttered by the user via the microphone according to the present disclosure. Image data generated via the camera from the user's actions associated with the natural language uttered by the user can also be included. In this case, the microphone and the camera may be provided inside the electronic device100, but this is merely an embodiment, and voice data and image data for the action obtained through an external microphone and camera may also be used as learning data. The model learning unit1010-4may use the learning data so that the model learning unit1010-4can make the data recognition model learn to have a determination criterion as to how to determine a predetermined situation. For example, the model learning unit1010-4can make the data recognition model learn through supervised learning using at least some of the learning data as a criterion. Alternatively, the model learning unit1010-4may make the data recognition model learn through unsupervised learning in which the data recognition model learns by itself using learning data without separate guidance. The model learning unit1010-4may learn a selection criterion as to which learning data should be used to determine a situation. In particular, the model learning unit1010-4according to an exemplary embodiment of the present disclosure may generate or make the data recognition model learn using learning data related to voice information and learning data associated with image information. In this case, when the data recognition model is learned through the supervised learning method, a condition according to the user's intention and an action to be executed according to the condition may be added as learning data serving as a determination criterion. Alternatively, an event to be detected according to the condition and a function to be executed for the action may be added as learning data. Alternatively, a detection resource for detecting the event and an execution resource for executing the function may be added as learning data. 
The model learning unit1010-4may generate and make the data recognition model learn using learning data related to the conditions and learning data related to an action. In this case, when making the data recognition model learn through the supervised learning method, an event to be detected according to a condition and a function to be executed for the action can be added as learning data. Alternatively, a detection resource for detecting the event and an execution resource for executing the function may be added as learning data. The model learning unit1010-4may generate and make the data recognition model learn using learning data related to an event and learning data related to a function. In this case, when making the data recognition model learn through the supervised learning, a detection resource for detecting an event and an execution resource for executing the function can be added as learning data. In the meantime, the data recognition model may be a model which is pre-constructed and updated by learning of the model learning unit1010-4. In this case, the data recognition model may receive the basic learning data (for example, a sample image, etc.) and be pre-constructed. The data recognition model can be constructed in consideration of the application field of the recognition model, the purpose of learning, or the computer performance of the apparatus. The data recognition model may be, for example, a model based on a neural network. The data recognition model can be designed to simulate the human brain structure on a computer. The data recognition model may include a plurality of weighted network nodes that simulate neurons of a human neural network. The plurality of network nodes may each establish a connection relationship such that the neurons simulate synaptic activity of sending and receiving signals through synapses. The data recognition model may include, for example, a neural network model or a deep learning model developed from a neural network model. In the deep learning model, the plurality of network nodes are located at different depths (or layers) and can exchange data according to a convolution connection relationship. The data recognition model may be constructed considering the application field of the recognition model, the purpose of learning, or the computer performance of the device. The data recognition model may be, for example, a model based on a neural network. For example, a model such as Deep Neural Network (DNN), Recurrent Neural Network (RNN), and Bidirectional Recurrent Deep Neural Network (BRDNN) may be used as a data recognition model, but the present disclosure is not limited thereto. According to various exemplary embodiments, when a plurality of pre-built data recognition models are present, the model learning unit1010-4may determine a data recognition model for which the input learning data and the basic learning data are highly relevant as the data recognition model to learn. In an exemplary embodiment, the basic learning data may be pre-classified according to a data type, and the data recognition model may be pre-built for each data type. For example, the basic learning data may be pre-classified by various criteria such as an area where the learning data is generated, a time at which the learning data is generated, a size of the learning data, a genre of the learning data, a creator of the learning data, a kind of objects in learning data, etc. 
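The neural-network-based data recognition model described above (DNN, RNN, BRDNN, etc.) can be pictured with a minimal PyTorch sketch. The layer sizes, the choice of a small feed-forward network, and the names VoiceImageRecognitionModel and n_situations are assumptions chosen for illustration; the disclosure does not prescribe a particular architecture.

import torch
import torch.nn as nn

class VoiceImageRecognitionModel(nn.Module):
    """Maps concatenated voice/image feature vectors to situation classes
    (e.g., candidate condition/action interpretations)."""
    def __init__(self, voice_dim=40, image_dim=128, n_situations=8):
        super().__init__()
        # A stack of weighted layers whose nodes exchange signals, loosely
        # mirroring the weighted network nodes described in the text.
        self.net = nn.Sequential(
            nn.Linear(voice_dim + image_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, n_situations),
        )

    def forward(self, voice_feats, image_feats):
        x = torch.cat([voice_feats, image_feats], dim=-1)
        return self.net(x)  # unnormalized scores over situations

model = VoiceImageRecognitionModel()
scores = model(torch.randn(2, 40), torch.randn(2, 128))
print(scores.argmax(dim=-1))  # predicted situation index per example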
In another exemplary embodiment, the model learning unit1010-4may teach a data recognition model using, for example, a learning algorithm including an error back-propagation method or a gradient descent method. Also, the model learning unit1010-4may make the data recognition model learn through supervised learning using, for example, a determination criterion as an input value. Alternatively, the model learning unit1010-4may learn by itself using the necessary learning data without any supervision, for example, through unsupervised learning for finding a determination criterion for determining a situation. Also, the model learning unit1010-4may make the data recognition model learn through reinforcement learning using, for example, feedback as to whether or not the result of the situation determination based on learning is correct. In an exemplary embodiment, when the data recognition model is learned, the model learning unit1010-4may store the learned data recognition model. The model learning unit1010-4may store the learned data recognition model in the memory110of the electronic device100. The model learning unit1010-4may store the learned data recognition model in a memory of a server connected to the electronic device100via a wired or wireless network. The data learning unit1010may further include a preprocessing unit1010-2and a learning data selection unit1010-3in order to improve a recognition result of the data recognition model or save resources or time necessary for generation of the data recognition model. A preprocessor1010-2may perform preprocessing of data acquired by the data acquisition unit1010-1to be used for learning to determine a situation. For example, the preprocessing unit1010-2may process the acquired data into a predefined format so that the model learning unit1010-4may easily use data for learning of the data recognition model. For example, the preprocessing unit1010-2may process the voice data obtained by the data acquisition unit1010-1into text data, and may process the image data into image data of a predetermined format. The preprocessed data may be provided to the model learning unit1010-4as learning data. Alternatively, the learning data selection unit1010-3may selectively select learning data required for learning from the preprocessed data. The selected learning data may be provided to the model learning unit1010-4. The learning data selection unit1010-3may select learning data necessary for learning from the preprocessed data in accordance with a predetermined selection criterion. Further, the learning data selection unit1010-3may select learning data necessary for learning according to a predetermined selection criterion by learning by the model learning unit1010-4. In one exemplary embodiment of the present disclosure, the learning data selection unit1010-3may select only the voice data that has been uttered by a specific user among the inputted voice data, and may select only the region including the person excluding the background among the image data. The data learning unit1010may further include the model evaluation unit1010-5to improve a recognition result of the data recognition model. The model evaluation unit1010-5inputs evaluation data to the data recognition model. When a recognition result output from the evaluation data does not satisfy a predetermined criterion, the model evaluating unit1010-5may instruct the model learning unit1010-4to learn again. The evaluation data may be predefined data for evaluating the data recognition model. 
In an exemplary embodiment, when the number or the ratio of the evaluation data for which the recognition results from the learned data recognition model are incorrect exceeds a predetermined threshold value, the model evaluation unit1010-5may evaluate that a predetermined criterion is not satisfied. For example, in the case where a predetermined criterion is defined as a ratio of 2%, when the learned data recognition model outputs an incorrect recognition result for evaluation data exceeding 20 out of a total of 1000 evaluation data, the model evaluation unit1010-5may evaluate that the learned data recognition model is not suitable. In another exemplary embodiment, when there are a plurality of learned data recognition models, the model evaluation unit1010-5may evaluate whether each of the learned data recognition models satisfies a predetermined criterion, and determine a model satisfying the predetermined criterion as a final data recognition model. In an exemplary embodiment, when there are a plurality of models satisfying a predetermined criterion, the model evaluation unit1010-5may determine any one or a predetermined number of models previously set in descending order of an evaluation score as a final data recognition model. In another exemplary embodiment, at least one of the data acquisition unit1010-1, the preprocessing unit1010-2, the learning data selecting unit1010-3, the model learning unit1010-4, and the model evaluation unit1010-5may be implemented as a software module, or fabricated in the form of at least one hardware chip and mounted on an electronic device. For example, at least one of the data acquisition unit1010-1, the preprocessing unit1010-2, the learning data selecting unit1010-3, the model learning unit1010-4, and the model evaluation unit1010-5may be made in the form of an exclusive hardware chip for artificial intelligence (AI), or may be fabricated as part of a conventional general-purpose processor (e.g., a CPU or application processor) or a graphics-only processor (e.g., a GPU), and may be mounted on various electronic devices. The data acquisition unit1010-1, the preprocessing unit1010-2, the learning data selecting unit1010-3, the model learning unit1010-4, and the model evaluation unit1010-5may be mounted on one electronic device, or may be mounted on separate electronic devices, respectively. For example, some of the data acquisition unit1010-1, the preprocessing unit1010-2, the learning data selecting unit1010-3, the model learning unit1010-4, and the model evaluation unit1010-5may be included in an electronic device, and the rest may be included in a server. At least one of the data acquisition unit1010-1, the preprocessing unit1010-2, the learning data selecting unit1010-3, the model learning unit1010-4, and the model evaluation unit1010-5may be realized as a software module. When at least one of the data acquisition unit1010-1, the preprocessing unit1010-2, the learning data selecting unit1010-3, the model learning unit1010-4, and the model evaluation unit1010-5is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium. At least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, a part of the at least one software module may be provided by an operating system (OS), and the remaining part may be provided by a predetermined application. 
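The evaluation rule described for the model evaluation unit1010-5 (require re-learning when the share of incorrect recognition results on evaluation data exceeds a threshold such as 2%, and select the best of several candidate models by evaluation score) can be sketched as follows. The function names and the 0.02 threshold are illustrative assumptions, not interfaces defined by the disclosure.

def error_ratio(model, eval_inputs, eval_labels):
    """Fraction of evaluation examples the model gets wrong."""
    wrong = sum(1 for x, y in zip(eval_inputs, eval_labels) if model(x) != y)
    return wrong / len(eval_labels)

def satisfies_criterion(model, eval_inputs, eval_labels, threshold=0.02):
    """True if the model meets the criterion, False if it should learn again."""
    return error_ratio(model, eval_inputs, eval_labels) <= threshold

def select_final_models(models, eval_inputs, eval_labels, threshold=0.02, keep=1):
    """Keep candidates meeting the criterion and return the best `keep` models
    in descending order of evaluation score (1 - error ratio)."""
    passing = [m for m in models if satisfies_criterion(m, eval_inputs, eval_labels, threshold)]
    ranked = sorted(passing,
                    key=lambda m: 1.0 - error_ratio(m, eval_inputs, eval_labels),
                    reverse=True)
    return ranked[:keep]

# Toy usage: a model that is wrong on 21 of 1000 examples (2.1%) fails a 2% criterion.
always_one = lambda x: 1
labels = [1] * 979 + [0] * 21
inputs = list(range(1000))
print(satisfies_criterion(always_one, inputs, labels))  # False -> ask the model learning unit to learn again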
FIG.12is a block diagram of a data recognition unit1020according to some exemplary embodiments. Referring toFIG.12, the data recognition unit1020according to some exemplary embodiments may include a data acquisition unit1020-1and a recognition result providing unit1020-4. The data recognition unit1020may further include at least one of the preprocessing unit1020-2, the recognition data selecting unit1020-3, and the model updating unit1020-5selectively. The data acquisition unit1020-1may acquire recognition data which is required for determination of a situation. The recognition result providing unit1020-4can determine the situation by applying the data obtained by the data acquisition unit1020-1to the learned data recognition model as an input value. The recognition result providing unit1020-4may provide the recognition result according to the data recognition purpose. Alternatively, the recognition result providing unit1020-4may provide the recognition result obtained by applying the preprocessed data from the preprocessing unit1020-2to the learned data recognition model as an input value. Alternatively, the recognition result providing unit1020-4may apply the data selected by the recognition data selecting unit1020-3, which will be described later, to the data recognition model as an input value to provide the recognition result. The data recognition unit1020may further include the preprocessing unit1020-2and the recognition data selection unit1020-3to improve a recognition result of the data recognition model or save resources or time for providing the recognition result. The preprocessing unit1020-2may preprocess data acquired by the data acquisition unit1020-1to be used for recognition to determine a situation. The preprocessing unit1020-2may process the acquired data into a predefined format so that the recognition result providing unit1020-4may easily use the data for determination of the situation. Particularly, according to one embodiment of the present disclosure, the data acquisition unit1020-1may acquire voice data and image data for determination of a situation (determination of a condition, action, event according to a condition, a function according to an action, detection resource for detecting an event, etc.) and the preprocessing unit1020-2may preprocess the data into the predetermined format as described above. The recognition data selection unit1020-3may select recognition data required for situation determination from the preprocessed data. The selected recognition data may be provided to the recognition result providing unit1020-4. The recognition data selection unit1020-3may select the recognition data necessary for the situation determination among the preprocessed data according to a predetermined selection criterion. The recognition data selection unit1020-3may also select data according to a predetermined selection criterion by learning by the model learning unit1010-4as described above. The model updating unit1020-5may update a data recognition model based on an evaluation of a recognition result provided by the recognition result providing unit1020-4. For example, the model updating unit1020-5may provide a recognition result provided by the recognition result providing unit1020-4to the model learning unit1010-4, enabling the model learning unit1010-4to update a data recognition model. 
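The acquire, preprocess, select, recognize, and update flow of the data recognition unit1020described above can be outlined as a small pipeline. All names below (RecognitionPipeline, preprocess, select_recognition_data, on_feedback) are hypothetical placeholders rather than the patent's own interfaces; the toy rule standing in for the learned data recognition model is likewise only an assumption.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RecognitionPipeline:
    model: Callable[[List[float]], str]               # learned data recognition model
    on_feedback: Callable[[List[float], str], None]   # hook standing in for the model updating unit

    def preprocess(self, voice_data: str, image_data: bytes) -> Tuple[str, bytes]:
        # Put the acquired data into a predefined format (e.g., voice -> normalized text).
        return voice_data.strip().lower(), image_data

    def select_recognition_data(self, voice: str, image: bytes) -> List[float]:
        # Keep only what is needed for situation determination (toy feature vector here).
        return [float(len(voice)), float(len(image))]

    def recognize(self, voice_data: str, image_data: bytes) -> str:
        voice, image = self.preprocess(voice_data, image_data)
        features = self.select_recognition_data(voice, image)
        result = self.model(features)                 # recognition result providing step
        self.on_feedback(features, result)            # evaluation can drive a later model update
        return result

pipeline = RecognitionPipeline(
    model=lambda f: "record-video-when-drawer-opened" if f[0] > 10 else "unknown",
    on_feedback=lambda f, r: None,
)
print(pipeline.recognize("Record an image when another person opens the drawer", b"\x00" * 64))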
At least one of the data acquisition unit1020-1, the preprocessing unit1020-2, the recognition data selecting unit1020-3, the recognition result providing unit1020-4, and the model updating unit1020-5in the data recognition unit1020may be implemented as a software module or fabricated in the form of at least one hardware chip and mounted on an electronic device. For example, at least one among the data acquisition unit1020-1, the preprocessing unit1020-2, the recognition data selecting unit1020-3, the recognition result providing unit1020-4, and the model updating unit1020-5may be made in the form of an exclusive hardware chip for artificial intelligence (AI) or as part of a conventional general purpose processor (e.g., CPU or application processor) or a graphics only processor (e.g., GPU), and may be mounted on a variety of electronic devices. The data acquisition unit1020-1, the preprocessing unit1020-2, the recognition data selecting unit1020-3, the recognition result providing unit1020-4, and the model updating unit1020-5may be mounted on one electronic device, or may be mounted on separate electronic devices, respectively. For example, some of the data acquisition unit1020-1, the preprocessing unit1020-2, the recognition data selecting unit1020-3, the recognition result providing unit1020-4, and the model updating unit1020-5may be included in an electronic device, and some may be included in a server. At least one of the data acquisition unit1020-1, the preprocessing unit1020-2, the recognition data selecting unit1020-3, the recognition result providing unit1020-4, and the model updating unit1020-5may be implemented as a software module. When at least one of the data acquisition unit1020-1, the preprocessing unit1020-2, the recognition data selecting unit1020-3, the recognition result providing unit1020-4, and the model updating unit1020-5is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium. In an exemplary embodiment, at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, a part of the at least one software module may be provided by an operating system (OS), and the remaining part may be provided by a predetermined application. FIG.13is a diagram showing an example of learning and recognizing data by interlocking with the electronic device100and a server1300according to some exemplary embodiments. The server1300may learn a criterion for determining a situation. The electronic device100may determine a situation based on a learning result by the server1300. In an exemplary embodiment, the model learning unit1010-4of the server1300may learn what data to use to determine a predetermined situation and a criterion on how to determine the situation using the data. The model learning unit1010-4may acquire data to be used for learning, and apply the acquired data to a data recognition model, so as to learn a criterion for the situation determination. The recognition result providing unit1020-4of the electronic device100may apply data selected by the recognition data selecting unit1020-3to a data recognition model generated by the server1300to determine a situation. 
The recognition result providing unit1020-4may transmit data selected by the recognition data selecting unit1020-3to the server1300, and may request that the server1300apply the data selected by the recognition data selecting unit1020-3to a recognition model and determine a situation. In an exemplary embodiment, the recognition result providing unit1020-4may receive from the server1300information on a situation determined by the server1300. For example, when voice data and image data are transmitted from the recognition data selecting unit1020-3to the server1300, the server1300may apply the voice data and the image data to a pre-stored data recognition model to transmit information on a situation (e.g., condition and action, event according to condition, function according to action) to the electronic device100. FIGS.14A to14Care flowcharts of the electronic device100which uses the data recognition model according to an exemplary embodiment. In operation1401ofFIG.14A, the electronic device100may acquire voice information and image information generated from a natural language and actions of a user to set an action to be executed according to a condition. In operation1403, the electronic device100may apply the acquired voice information and image information to the learned data recognition model to acquire an event to detect according to a condition and a function to perform according to an action. For example, in the example shown inFIG.3A, when the user1performs a gesture indicating a drawer with his/her finger while speaking a natural language saying “record an image when another person opens the drawer over there,” the electronic device100may acquire voice information generated according to the natural language and acquire image information generated according to the action. In addition, the electronic device100may apply the audio information and the image information to the learned data recognition model as the recognition data, determine “an event in which the drawer330is opened and an event in which another user is recognized” as an event to be detected according to a condition and determine a “function of recording a situation in which another user opens the drawer330as a video” as a function to perform according to an action. In operation1405, the electronic device100may determine a detection resource to detect an event and an execution resource to execute a function based on the determined event and function. When the detection resource and execution resource are determined, in operation1407, the electronic device100may determine whether at least one event which satisfies a condition can be detected using the determined detection resource. If at least one event is detected (1407-Y), the electronic device100may control so that a function according to an action can be executed. As another exemplary embodiment, in operation1411ofFIG.14B, the electronic device100may acquire voice information and image information generated from the natural language and action to set an action to be executed according to a condition. In operation1413, the electronic device100may determine an event to detect according to a condition and a function to execute according to an action based on the acquired voice information and image information. Next, in operation1415, the electronic device100may apply the determined events and functions to the data recognition model to acquire a detection resource to detect the event and an execution resource to execute the function. 
For example, in the example shown inFIG.3A, if the determined event and functions are each an event in which “the drawer330is opened and another person is recognized”, and the function to be executed according to the action is “a function to record a situation in which another user opens the drawer330as a video”, the electronic device100can apply the determined event and function to the data recognition model as recognition data. As a result of applying the data recognition model, the electronic device100may determine a distance detection sensor that detects an open event of the drawer330as a detection resource and a fingerprint recognition sensor or an iris recognition sensor that detects an event that recognizes another person, and determine a camera located around the drawer330as an execution resource. In operations1417to1419, when at least one event to satisfy a condition is detected, the electronic device100may control so that a function according to an action is executed. As still another exemplary embodiment, in operation1421ofFIG.14C, the electronic device100may acquire voice information and image information which are generated from a natural language and an action to set an action to be executed according to a condition. In operation1423, the electronic device100may apply the acquired voice information and image information to the data recognition model to determine the detection resources to detect the event and the execution resources to execute the function. For example, in the example shown inFIG.3A, if the acquired voice information is “Record an image when another person opens a drawer over there” and the image information includes a gesture indicating a drawer with a finger, the electronic device100may apply the acquired voice information and image information to the data recognition model as recognition data. The electronic device100may then detect an open event of the drawer330as a result of applying the data recognition model, and determine the camera located around the drawer330as an execution resource. In operations1425to1427, the electronic device100, when at least one event which satisfies a condition is detected, may control so that a function according to an action is executed. FIGS.15A to15Care flowcharts of network system which uses a data recognition model according to an exemplary embodiment. InFIGS.15A to15C, the network system which uses the data recognition model may include a first component1501and a second component1502. As one example, the first component1501may be the electronic device100and the second component1502may be the server1300that stores the data recognition model. Alternatively, the first component1501may be a general purpose processor and the second component1502may be an artificial intelligence dedicated processor. Alternatively, the first component1501may be at least one application, and the second component1502may be an operating system (OS). That is, the second component1502may be more integrated than the first component1501, dedicated, less delayed, perform better, or have more resources than the first component1501. The second component1502may be a component that can process many operations required at the time of generation, update, or application more quickly and efficiently than the first component1501. In this case, interface to transmit/receive data between the first component1501and the second component1502may be defined. 
For example, an application program interface (API) having an argument value (or an intermediate value or a transfer value) of learning data to be applied to the data recognition model may be defined. The API can be defined as a set of subroutines or functions that can be called for any processing of any protocol (e.g., a protocol defined in the electronic device100) to another protocol (e.g., a protocol defined in the server1300). That is, an environment can be provided in which an operation of another protocol can be performed in any one protocol through the API. As an exemplary embodiment, in operation1511ofFIG.15A, the first component1501may acquire voice information and image information generated from the natural language and action to set an action to be executed according to a condition. In operation1513, the first component1501may transmit data (or a message) regarding the acquired voice information and image information to the second component1502. For example, when the first component1501calls the API function and inputs voice information and image information as data argument values, the API function may transmit the voice information and image information to the second component1502as the recognition data to be applied to the data recognition model. In operation1515, the second component1502may acquire an event to detect according to a condition and a function to execute according to an action by applying the received voice information and image information to the data recognition model. In operation1517, the second component1502may transmit data (or message) regarding the acquired event and function to the first component1501. In operation1519, the first component1501may determine a detection resource to detect an event and an execution resource to execute a function based on the received event and function. In operation1521, the first component1501, when at least one event is detected which satisfies a condition using the determined detection resource, may execute a function according to an action using the determined execution resource. As another exemplary embodiment, in operation1531ofFIG.15B, the first component1501may acquire voice information and image information generated from the natural language and action to set an action to be executed according to a condition. In operation1533, the first component1501may determine a detection resource to detect an event and an execution resource to execute a function based on the acquired voice information and image information. In operation1535, the first component1501may transmit data (or a message) regarding the acquired voice information and image information to the second component1502. For example, when the first component1501calls the API function and inputs event and function as data argument values, the API function may transmit the event and function to the second component1502as the recognition data to be applied to the data recognition model. In operation1537, the second component1502may acquire an event to detect according to a condition and a function to execute according to an action by applying the received event and function to the data recognition model. In operation1539, the second component1502may transmit data (or message) regarding the acquired detection resource and execution resource to the first component1501. 
In operation1541, the first component1501, when at least one event which satisfies a condition is detected using the received detection resources, may execute a function according to an action using the received execution resource. As another exemplary embodiment, in operation1551ofFIG.15C, the first component1501may acquire voice information and image information generated from the natural language and action to set an action to be executed according to a condition. In operation1553, the first component1501may transmit data (or a message) regarding the acquired voice information and image information to the second component1502. For example, when the first component1501calls the API function and inputs voice information and image information as data argument values, the API function may transmit the image information and voice information to the second component1502as the recognition data to be applied to the data recognition model. In operation1557, the second component1502may acquire a detection resource to detect an event according to the condition and an execution resource to execute a function according to the action by applying the received voice information and image information to the data recognition model. In operation1559, the second component1502may transmit data (or message) regarding the acquired detection resource and execution resource to the first component1501. In operation1561, the first component1501may execute a function according to an action using the received execution resource, if at least one event which satisfies a condition is detected using the received detection resource. In another exemplary embodiment, the recognition result providing unit1020-4of the electronic device100may receive a recognition model generated by the server1300, and may determine a situation using the received recognition model. The recognition result providing unit1020-4of the electronic device100may apply data selected by the recognition data selecting unit1020-3to a data recognition model received from the server1300to determine a situation. For example, the electronic device100may receive a data recognition model from the server1300and store the data recognition model, and may apply voice data and image data selected by the recognition data selecting unit1020-3to the data recognition model received from the server1300to determine information (e.g., condition and action, event according to condition, function according to action, etc.) on a situation. Although all the elements constituting the exemplary embodiments of the present disclosure are described as being combined into one or operated in combination, the present disclosure is not necessarily limited to these exemplary embodiments. Within the scope of the present disclosure, all of the elements may be selectively combined and operated in one or more combinations. Although all of the components may each be implemented as independent hardware, some or all of the components may be selectively combined and implemented as a computer program having a program module that performs some or all of the functions in one or a plurality of pieces of hardware. At least a portion of a device (e.g., modules or functions thereof) or method (e.g., operations) according to various exemplary embodiments may be embodied as a command stored in a non-transitory computer readable medium in the form of a program module. When a command is executed by a processor (e.g., processor120), the processor may perform a function corresponding to the command. 
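The division of labor between the first component1501(e.g., the electronic device100or a general purpose processor) and the second component1502(e.g., the server1300or an AI-dedicated processor) in FIGS.15A to15C can be sketched as a simple request/response exchange. The function names and the dictionary-based message format below are assumptions used only to illustrate the idea of an API carrying recognition data to the component that hosts the data recognition model; the trivial keyword rule stands in for the learned model.

from typing import Dict, Any

def second_component_api(message: Dict[str, Any]) -> Dict[str, Any]:
    """Stand-in for the second component: applies received recognition data to
    the data recognition model and returns the determined event/function."""
    voice = message["voice_info"]
    # A trivial rule plays the role of the learned data recognition model.
    if "drawer" in voice:
        return {"event": "drawer-opened-by-other-user", "function": "record-video"}
    return {"event": "unknown", "function": "none"}

def first_component(voice_info: str, image_info: bytes) -> None:
    """Stand-in for the first component: acquires voice/image info, calls the API,
    then maps the returned event/function to detection and execution resources."""
    reply = second_component_api({"voice_info": voice_info, "image_info": image_info})
    detection_resource = "distance-sensor" if reply["event"].startswith("drawer") else None
    execution_resource = "camera-near-drawer" if reply["function"] == "record-video" else None
    print(reply, detection_resource, execution_resource)

first_component("record an image when another person opens the drawer over there", b"gesture-frames")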
In an exemplary embodiment, the program may be stored in a computer-readable non-transitory recording medium and read and executed by a computer, thereby realizing the exemplary embodiments of the present disclosure. In an exemplary embodiment, the non-transitory readable recording medium refers to a medium that semi-permanently stores data and is capable of being read by a device, and includes a register, a cache, a buffer, and the like, but does not include transmission media such as a signal, a current, etc. In an exemplary embodiment, the above-described programs may be stored in non-transitory readable recording media such as CD, DVD, hard disk, Blu-ray disc, USB, internal memory (e.g., memory110), memory card, ROM, RAM, and the like. In addition, a method according to exemplary embodiments may be provided as a computer program product. A computer program product may include an S/W program, a computer-readable storage medium which stores the S/W program therein, or a product which is traded between a seller and a purchaser. For example, a computer program product may include an S/W program product (e.g., a downloadable APP) which is electronically distributed through an electronic device, a manufacturer of the electronic device, or an electronic market (e.g., Google Play Store, App Store). For electronic distribution, at least a portion of the software program may be stored on a storage medium or may be created temporarily. In this case, the storage medium may be a storage medium of a server of a manufacturer or an electronic market, or a relay server. While the present disclosure has been shown and described with reference to various exemplary embodiments thereof, it will be understood by one of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.
108,418
11862155
Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Ideally, when conversing with a digital assistant interface, a user should be able to communicate as if the user were talking to another person, via spoken requests directed toward their assistant-enabled device running the digital assistant interface. The digital assistant interface will provide these spoken requests to an automated speech recognizer to process and recognize the spoken request so that an action can be performed. In practice, however, it is challenging for a device to always be responsive to these spoken requests since it is prohibitively expensive to run speech recognition continuously on a resource constrained voice-enabled device, such as a smart phone or smart watch. To create user experiences supporting always-on speech, assistant-enabled devices typically run compact hotword detection models configured to recognize audio features that characterize a narrow set of phrases, that when spoken by the user, initiate full automated speech recognition (ASR) on any subsequent speech spoken by the user. Advantageously, hotword detection models can run on low power hardware such as digital signal processor (DSP) chips and may respond to various fixed-phrase commands such as “Hey Google” or “Hey living room speaker”. As the number of assistant-enabled devices within a user's environment (e.g., home or office) grows, the user may wish to trigger multiple assistant-enabled devices at the same time, e.g., to adjust a volume level across a group of assistant-enabled smart speakers or to adjust a lighting level across a group of assistant-enabled smart lights. When a user wants to trigger multiple different assistant-enabled devices, the user is presently required to issue separate queries to each device independently. For example, to turn off a kitchen light and a dining room light in the user's home, the user would have to speak separate queries such as, “Hey kitchen lightbulb, turn off” and “Hey dining room lightbulb, turn off”. Implementations herein are directed toward permitting a user to issue a single query to a group of assistant-enabled devices to allow for faster and more natural interactions with multiple different assistant-enabled devices (AEDs) the user may want to control simultaneously. Specifically, implementations are directed toward creating and assigning a group hotword to a group of two or more AEDs selected by a user such that each device will respond to a spoken query that includes the group hotword by triggering from a low-power state when the group hotword is detected in streaming audio. That is, each AED in the selected group of AEDs assigned the hotword may run a hotword detection model trained to detect the presence of the group hotword in streaming audio without performing speech recognition. In some implementations, the group hotword assigned to the selected group of AEDs is predefined such that the corresponding hotword detection model is pre-trained to detect the presence of the predefined group hotword. On the other hand, a user may also create a custom group hotword that includes any word or phrase the user wants to use for addressing a specific group of AEDs in a single query. Here, the user may be required to provide one or more training utterances of the user speaking the custom hotword to train a corresponding hotword detection model to detect the custom hotword. 
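As a rough sketch of the idea introduced above, the following Python illustrates assigning a group hotword to a selected group of assistant-enabled devices so that each member wakes from a low-power state when its detector reports the group hotword. The AssistantEnabledDevice class, the text-matching "detector", and the assign_group_hotword helper are illustrative assumptions; a real AED would run a trained hotword detection model on streaming audio features rather than matching text.

from dataclasses import dataclass, field
from typing import Set

@dataclass
class AssistantEnabledDevice:
    name: str
    hotwords: Set[str] = field(default_factory=lambda: {"hey assistant"})  # global default hotword
    awake: bool = False

    def on_streaming_audio(self, transcript_like: str) -> None:
        # Placeholder for a hotword detection model running without full ASR.
        if any(hw in transcript_like.lower() for hw in self.hotwords):
            self.awake = True  # trigger wake-up from the low-power state

def assign_group_hotword(group_hotword: str, devices) -> None:
    for device in devices:
        device.hotwords.add(group_hotword.lower())

speakers = [AssistantEnabledDevice(f"family room speaker {i}") for i in range(1, 5)]
kitchen_light = AssistantEnabledDevice("kitchen light")
assign_group_hotword("family room speakers", speakers)

for device in speakers + [kitchen_light]:
    device.on_streaming_audio("Family room speakers, play that 12-6-97 Phish show")
print([d.name for d in speakers + [kitchen_light] if d.awake])  # only the four grouped speakers wake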
In some examples, a user uses a digital assistant interface to select a group of AEDs and manually enable a group hotword (e.g., predefined or custom) to assign to the selected group of AEDs that the user wants to address simultaneously in a single query. The AEDs in the selected group of AEDs may receive an assignment instruction from the digital assistant interface assigning the group hotword to the group of AEDs, thereby configuring each AED in the selected group to wake-up from a low-power state when the group hotword is detected in streaming audio by at least one of the AEDs in the selected group of AEDs. For instance, the user may assign the group hotword “family room speakers” to a group of four smart speakers located in the family room of the user's home such that the user may address all four of these smart speakers by speaking an utterance that includes the group hotword “Family room speakers” followed by a single query, e.g., “play that 12-6-97 Phish show”, specifying an operation to perform. In this instance, at least one of the smart speakers in the group of four smart speakers detecting the group hotword “Family room speakers” in the user's utterance will trigger the corresponding smart speaker to wake-up from a low-power state and execute a collaboration routine to cause each smart speaker in the group of four smart speakers to collaborate with one another to fulfill performance of the operation specified by the query. For example, the four family room speakers may collaborate to playback music corresponding to a concert performed by the band Phish on the date Dec. 6, 1997. In this example, one of the speakers may be tasked with streaming the music from a local storage device, a network storage device, or from a remote streaming service, and then broadcasting the music to the other speakers to audibly playback the music from the speakers. Optionally, in collaborating to fulfill the operation, some of the smart speakers may perform different playback responsibilities related to the operation such as two of the smart speakers may play audio corresponding to a left channel and the other two of the smart speakers may play audio corresponding to a right channel, thereby providing a stereo arrangement. Continuing with this same example, other AEDs not in the selected group assigned to the group hotword, such as AEDs corresponding to device types other than smart speakers and smart speakers located in rooms other than the family room of the user's home, will not respond to the group hotword and will remain in a sleep state when the user speaks “Family room speakers”. Additionally, each AED may be assigned a unique device-specific hotword that only the corresponding AED is configured to detect in streaming audio when the user only wants to address the corresponding AED. For instance, a unique device-specific hotword assigned to an AED may include an identifier of the AED such as “Hey Device 1”, or could include a device type and/or other attribute associated with the AED such as “Hey Smart Speaker 1”. Furthermore, the selected group of four smart speakers located in the family room, as well as any other AED associated with the user but not assigned the group hotword, may be configured to also respond to a global default hotword such as “Hey Assistant”. 
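The collaboration routine described above, in which one grouped speaker streams the content and playback responsibilities such as left and right channels are split across the group, might look roughly like the following. Electing the first device as the streamer and alternating channels are assumptions made for illustration; the disclosure only requires that the grouped AEDs collaborate to fulfill the single query.

from typing import Dict, List

def collaboration_routine(group: List[str], query: str) -> Dict[str, Dict[str, str]]:
    """Assign playback responsibilities to each device in the group for one query."""
    plan: Dict[str, Dict[str, str]] = {}
    streamer = group[0]  # one device fetches the audio (local storage, network device, or streaming service)
    for i, device in enumerate(group):
        plan[device] = {
            "role": "streamer" if device == streamer else "playback",
            "channel": "left" if i % 2 == 0 else "right",  # simple stereo split across the group
            "operation": query,
        }
    return plan

group = ["speaker-1", "speaker-2", "speaker-3", "speaker-4"]
for device, duties in collaboration_routine(group, "play that 12-6-97 Phish show").items():
    print(device, duties)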
In some examples, it is possible that at least one AED associated with a user is assigned two or more group hotwords simultaneously such that the at least one AED will be a member of different selected groups of AEDs each assigned a corresponding one of the two or more group hotwords. In these examples, each selected group of AEDs may include a combination of AEDs assigned a corresponding group hotword that is different than the combination of AEDs assigned a different corresponding hotword. In some implementations, the user manually enables a group hotword to assign to a selected group of AEDs. For instance, the user may access a digital assistant application that displays a graphical user interface for permitting the user to configure and adjust settings of all AEDs associated with the user. Here, the graphical user interface may provide a group hotword screen that renders various graphical objects (text fields, buttons, pull-down menus) for creating and enabling group hotwords and selecting which AEDs the user wants to assign the group hotwords to. As such, a selected group of AEDs may receive an assignment instruction to assign a group hotword responsive to receiving a user input indication indicating user interaction with one or more objects displayed in the graphical user interface to instruct the digital assistant to enable the group hotword and each AED in the selected group of AEDs to be assigned the group hotword. The user may update the selected group of AEDs via the GUI by selecting one or more additional AEDs to add to the group and/or selecting one or more AEDs to remove from the group. The user may also select a group of AEDs and enable a group hotword to assign to the selected group of AEDs via a voice input corresponding to a group hotword query. Here, the user may speak a voice input requesting the digital assistant to enable the group hotword and assign the group hotword to the selected group of AEDs. For instance, the voice input spoken by the user to enable the group hotword to assign to a first AED and a second AED located in a downstairs zone of the user's home may include “Device 1 and device 2, respond to downstairs devices”. Here, the term “Device 1” spoken by the user includes a respective device-specific hotword assigned to the first AED and the term “Device 2” spoken by the user includes a different respective device-specific hotword assigned to the second AED such that each of the first and second AEDs will detect their respective device-specific hotword and wake-up to process the following audio data corresponding to the group hotword query “respond to downstairs devices”. As such, at least one of the first AED or the second AED may instruct a speech recognizer (e.g., on-device ASR or server-side ASR) to perform speech recognition on the audio data to generate an ASR result for the voice input and then perform query interpretation on the ASR result to identify the group hotword query. The group hotword query identified by the query interpretation performed on the ASR result specifies a name of the group hotword (e.g., “downstairs devices”) to enable and each AED in the selected set of AEDs to be assigned the group hotword. The user could have similarly provided the voice input corresponding to the group hotword query by invoking the digital assistant directly through a global hotword. For example, the user could speak the group hotword query “Hey Assistant, have device 1 and device 2 respond to downstairs devices”. 
In this example, any AED associated with the user may detect the predefined default hotword “Hey Assistant” and wake-up to initiate speech recognition on the audio data to generate the ASR result and perform query interpretation on the ASR result to identify the group hotword and each AED in the selected group of AEDs to be assigned the group hotword. As with the GUI example above, the user may similarly update the selected group of AEDs via subsequent voice inputs that specify one or more additional AEDs to add to the group and/or one or more AEDs to remove from the group. For instance, the user may speak “Hey downstairs devices, add device 3” to add a third AED104c(device 3) to the group of AEDs assigned the group hotword “downstairs devices”. Similarly, the user may speak “Hey device 1, leave the downstairs devices group” to remove the first AED104a(device 1) from the group so that the first AED is no longer assigned the group hotword and will not trigger when the user speaks “Hey downstairs devices”. The user may provide a spoken confirmation to confirm (or undo) an update made to the group of AEDs. Additionally, once all devices have left the selected group, the hotword may cease to exist, requiring the user to re-create or re-enable the group hotword. In additional examples, a group hotword is available implicitly. For instance, the user may speak the group hotword “Hey nearby devices” or “Hey nearby device” to only address AEDs in close proximity to the user. The hotword detector could detect both the singular and plural group hotword, or only detect the singular and rely on speech recognition to recognize the suffix “s”. This type of implicit group hotword includes a proximity-based group hotword. The user may access the digital assistant application and interact with the GUI to specify which AEDs should be assigned an implicit proximity-based group hotword. Accordingly, the group hotword in this instance provides context to specify that the user only wants to invoke one or more AEDs that are currently closest to the user in proximity without requiring the user to explicitly identify those AEDs, whether by a respective unique hotword assigned thereto or naming the AEDs in a query portion of the utterance. Notably, each AED assigned the implicit proximity-based group hotword may run a hotword detection model to detect the presence of the group hotword in streaming audio to trigger the wake-up process and initiate speech recognition on the audio. As the implicit group hotword in this instance is proximity-based, even though multiple AEDs may detect the group hotword in captured streaming audio, these AEDs may each subsequently process the audio to determine a respective proximity value relative to the user and then perform arbitration using these proximity values across the multiple AEDs to elect one or more of these AEDs to fulfill an operation specified by the user's query. Here, AEDs outside some upper distance threshold from the user may be ineligible to fulfill the query. Optionally, AEDs inside some lower distance threshold, such as a smart phone AED in the user's pocket that detected the proximity based group hotword “Hey nearby device(s)”, may also be ineligible to respond to the query. The user also has the option to add/remove AEDs from the selected group assigned the proximity-based group hotword. 
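The proximity-based arbitration described above, where devices that detected “Hey nearby device(s)” exchange proximity values and devices outside an upper distance threshold (or inside a lower one, like a phone in the user's pocket) are ruled out, can be sketched as follows. The threshold values and the elect_nearby_devices helper are assumptions chosen for illustration only.

from typing import Dict, List

def elect_nearby_devices(
    proximity_m: Dict[str, float],      # estimated distance of each detecting AED to the user, in meters
    lower_threshold_m: float = 0.3,     # e.g., a phone in the user's pocket is too close to respond
    upper_threshold_m: float = 5.0,     # devices farther than this are ineligible
    max_devices: int = 1,
) -> List[str]:
    """Arbitrate among AEDs that detected the proximity-based group hotword."""
    eligible = {
        name: dist
        for name, dist in proximity_m.items()
        if lower_threshold_m <= dist <= upper_threshold_m
    }
    # Elect the closest eligible device(s) to fulfill the query.
    return sorted(eligible, key=eligible.get)[:max_devices]

detections = {"phone-in-pocket": 0.1, "kitchen-speaker": 2.4, "hallway-display": 7.5}
print(elect_nearby_devices(detections))  # ['kitchen-speaker']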
Additionally or alternatively, the one or more AEDs elected to respond to the user's query may be based on the type of query and/or respective device properties associated with each AED so that only one or more AEDs best equipped to fulfill the query are elected. Here, the device properties associated with each AED may include processing capabilities, device type, user-configurable device settings, power usage, battery level, physical location of the AED, or network capabilities, etc. As such, when the query is a single-device query such as “Hey nearby device, set a timer”, device arbitration may determine that the closest AED to the user is ineligible to fulfill the query because the AED is a battery-powered smart speaker and the battery capacity is very low (e.g., less than 5-percent). Accordingly, a next closest AED assigned the implicit proximity-based group hotword may fulfill the query. In some implementations, implicit group hotwords are assigned to AEDs in a selected group that are associated with a same device type. For instance, an implicit device-type group hotword could include “Hey smart speakers” to address all AEDs associated with the user that include the device type of smart speakers. Similarly, another implicit device-type group hotword could include “Hey smart lights” to address all AEDs that include the device type of smart lights. Notably, device-type group hotwords provide context indicating which AEDs the user wants to address by uniquely identifying the device type associated with the selected group of AEDs. Each AED may run a hotword detection model that is pre-trained to detect the presence of device-type group hotword(s) in streaming audio without performing speech recognition on the audio data. In additional implementations, an implicit group hotword is assigned to AEDs in a selected group that share a common attribute. For instance, an implicit attribute-based group hotword could include “Hey blue devices” to address all AEDs associated with the user that are labeled as having the color blue or “Hey red devices” to address all AEDs associated with the user that are labeled as having the color red. Attribute-based group hotwords could similarly specify any other attribute such as size, e.g., “Hey large devices” or “Hey small devices”. Notably, attribute-based group hotwords can further narrow down a specific group of AEDs a user wants to address. In a non-limiting example, where the implicit device-type group hotword “Hey smart speakers” would address all smart speakers throughout the user's home and the manually-enabled group hotword “Hey family room speakers” would address only four smart speakers located in the family room of the user's home, the implicit attribute-based group hotword “Hey blue devices” could be used to address only two of the four smart speakers located in the family room of the user that are labeled as having the color appearance blue. The implicit group hotwords may be enabled/disabled via the GUI of the digital assistant application. Similarly, the group of AEDs assigned implicit group hotwords may be specified/selected via the GUI of the digital assistant application. The selected group of AEDs assigned an implicit group hotword may be updated by adding additional AEDs to the group and/or removing AEDs from the group as described above. 
In yet additional implementations, the digital assistant automatically creates and assigns a group hotword to a selected group of AEDs performing a long-standing action while the long-standing action is in progress. For instance, a user may speak a voice query/command that commands the digital assistant to perform a long-standing action on two or more AEDs. In a non-limiting example, the voice query/command “Hey Assistant, play party music playlist on speaker 1 and speaker 2” causes the digital assistant to perform the long-standing operation by streaming the user's party music playlist as audible playback from speakers 1 and 2. In this example, the digital assistant is configured to automatically create an action-specific group hotword “Party music” for the user to use in follow-up queries pertaining to the long-standing operation. As such, speaker 1 and speaker 2 each receive an assignment instruction assigning the group hotword “Party music” that was automatically created by the digital assistant. Thereafter, the user may address the long-standing operation performed on speakers 1 and 2 by simply speaking “Party music”. For instance, the user may speak utterances such as “Party music, next song” or “Party music, turn up the volume” to advance to a next track in the playlist or instruct the speakers 1 and 2 to each increase their volume. To inform the user of the action-specific group hotword created by the digital assistant, the digital assistant may output, for audible playback from one of the AEDs (e.g., speaker 1 or speaker 2), synthesized speech corresponding to a response to indicate performance of the long-standing operation is in progress and the automatically created group hotword for use in follow-up queries that pertain to the long-standing action. In the example above, the response may include synthesized speech that conveys “Got it, now playing that. In the future, you can control playback using the ‘party music’ hotword”. The digital assistant may revoke use of the automatically created group hotword when the long-standing action ends. FIGS.1A-1Cillustrate a system100for assigning a group hotword50gto a selected group of two or more assistant-enabled devices (AEDs)104associated with a user102to permit the user102to address the selected group of two or more AEDs in a single query by speaking the group hotword50g. Briefly, and as described in more detail below,FIG.1Ashows the user102manually-enabling a group hotword to assign to a selected group of two or more AEDs104,104a-cassociated with the user102by speaking an utterance106, “Hey Assistant, have device 1 and device 2 respond to downstairs speakers”. In response to the utterance106, a digital assistant105executing on the AEDs104(and optionally a remote server120in communication with the AEDs) provides assignment instructions assigning the group hotword “downstairs speakers” to the selected group of AEDs104that includes a first AED104anamed “device 1” and a second AED104bnamed “device 2”. Each AED104a,104bassigned the group hotword is configured to wake-up from a low-power state when the group hotword is detected in streaming audio by at least one of the AEDs in the selected group of AEDs104a,104b. 
For instance, when the user speaks a subsequent utterance126, “Downstairs speakers, play my playlist”, the first AED104aand the second AED104bdetect the group hotword “Downstairs speakers” in audio data corresponding to the utterance126that triggers each AED104a,104bto wake-up from a low-power state and execute a collaboration routine150to collaborate with one another to begin to play music122from the user's102playlist (e.g., Track #1). In the example shown, the system100includes three AEDs104a-cassociated with the user102and executing the digital assistant105that the user102may interact with through speech. While three AEDs104are depicted, any number of AEDs104may be located throughout a speech-enabled environment associated with the user102. While the AEDs104all correspond to smart speakers, AEDs104can include other computing devices without departing from the scope of the present disclosure, such as, without limitation, a smart phone, tablet, smart display, desktop/laptop, smart watch, smart appliance, headphones, or vehicle infotainment device. Each AED104includes data processing hardware10and memory hardware12storing instructions that when executed on the data processing hardware10cause the data processing hardware10to perform operations. Each AED104includes an array of one or more microphones16configured to capture acoustic sounds such as speech directed toward the AED104. Each AED104may also include, or be in communication with, an audio output device (e.g., speaker)18that may output audio such as music122and/or synthesized speech450(FIG.4) from the digital assistant105. FIG.1Ashows the user102speaking the utterance106, “Hey Assistant, have device 1 and device 2 respond to downstairs speakers” in the vicinity of at least the first AED104ato request the digital assistant105to enable the group hotword “downstairs speakers” and assign the group hotword to a selected group of AEDs that includes the first AED104anamed “device 1” and the second AED104bnamed “device 2”. The microphone16of the first AED104areceives the utterance106and processes audio data20that corresponds to the utterance106. The initial processing of the audio data20may involve filtering the audio data20and converting the audio data20from an analog signal to a digital signal. As the first AED104aprocesses the audio data20, the first AED104amay store the audio data20in a buffer of the memory hardware12for additional processing. With the audio data20in the buffer, the first AED104amay use a hotword detector108to detect whether the audio data20includes a predefined global hotword50“Hey Assistant” assigned to each AED associated with the user102. The hotword detector108is configured to identify hotwords that are included in the audio data20without performing speech recognition on the audio data20. The hotword detector108may include an initial hotword detection stage that coarsely listens for the presence of the hotword50, and if detected, triggers a second hotword detection stage to confirm the presence of the hotword50. The initial hotword detection stage may execute on a low-power digital signal processor (DSP) of the data processing hardware10, while the second hotword detection stage may run on a more computationally intensive application processor (AP) (e.g., system on a chip (SoC)) to provide more accurate hotword detection. In some implementations, the hotword detector108is configured to identify hotwords that are in the initial portion of the utterance106. 
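A minimal sketch of the two-stage hotword detection flow described above: a cheap first stage (e.g., on a low-power DSP) coarsely listens for the hotword, and only a coarse hit triggers the more accurate second stage (e.g., on the AP). The model callables, thresholds, and toy scoring below are assumptions for illustration only.

```python
def two_stage_hotword_detect(frames,
                             coarse_model, confirm_model,
                             coarse_threshold=0.5, confirm_threshold=0.8):
    """Return True only if both stages agree the hotword is present.
    Each model is an assumed callable returning a score in [0, 1]."""
    coarse_score = coarse_model(frames)          # runs continuously, low power
    if coarse_score < coarse_threshold:
        return False                             # stay in the low-power state
    confirm_score = confirm_model(frames)        # only now pay the expensive cost
    return confirm_score >= confirm_threshold

# Toy usage with stand-in models.
frames = [0.1, 0.9, 0.8]
coarse = lambda f: max(f)              # pretend DSP detector
confirm = lambda f: sum(f) / len(f)    # pretend AP detector
print(two_stage_hotword_detect(frames, coarse, confirm))
```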
In this example, the hotword detector108may determine that the utterance106“Hey Assistant, have device 1 and device 2 respond to downstairs speakers” includes the predefined global hotword50“Hey Assistant” if the hotword detector108detects acoustic features in the audio data20that are characteristic of the hotword50. The acoustic features may be mel-frequency cepstral coefficients (MFCCs) that are representations of short-term power spectrums of the utterance106or may be mel-scale filterbank energies for the utterance106. For example, the hotword detector108may detect that the utterance106“Hey Assistant, have device 1 and device 2 respond to downstairs speakers” includes the hotword50“Hey Assistant” based on generating MFCCs from the audio data20and classifying that the MFCCs include MFCCs that are similar to MFCCs that are characteristic of the hotword “Hey Assistant” as stored in a hotword model of the hotword detector108. As another example, the hotword detector108may detect that the utterance106“Hey Assistant, have device 1 and device 2 respond to downstairs speakers” includes the hotword50“Hey Assistant” based on generating mel-scale filterbank energies from the audio data20and classifying that the mel-scale filterbank energies include mel-scale filterbank energies that are similar to mel-scale filterbank energies that are characteristic of the hotword “Hey Assistant” as stored in the hotword model of the hotword detector108. When the hotword detector108determines that the audio data20that corresponds to the utterance106includes the predefined global hotword50, the AED104may trigger a wake-up process to initiate speech recognition on the audio data20that corresponds to the utterance106. For example, a speech recognizer116running on the AED104may perform speech recognition and/or semantic interpretation on the audio data20that corresponds to the utterance106. The speech recognizer116may perform speech recognition on the audio data20to generate an automated speech recognition (ASR) result for the utterance106and then perform query interpretation on the ASR result to identify a group hotword query118that specifies a name of the group hotword to enable and each AED104in the selected group of AEDs to be assigned the group hotword. In this example, the speech recognizer116may perform query interpretation on the ASR result that includes the phrase “have device 1 and device 2 respond to downstairs speakers” as the group hotword query118that specifies the name “downstairs speakers” of the group hotword and each AED104a,104b“device 1 and device 2” in the selected group of AEDs104to be assigned the group hotword. In some implementations, the speech recognizer116is located on a server120in addition to, or in lieu of, the AEDs104. Upon the hotword detector108triggering the AED104ato wake-up responsive to detecting the predefined global hotword50in the utterance106, the AED104amay transmit the audio data20corresponding to the utterance106to the server120via a network132. The AED104amay transmit the portion of the audio data20that includes the hotword50for the server120to confirm the presence of the global hotword50. Alternatively, the AED104amay transmit only the portion of the audio data20that corresponds to the portion of the utterance106after the global hotword50to the server120. The server120executes the speech recognizer116to perform speech recognition and returns a transcription of the audio data20to the AED104a. 
In turn, the AED104aidentifies the words in the utterance106, and the AED104aperforms semantic interpretation and identifies the group hotword query118. The AED104a(and/or the server120) may identify the group hotword query118for the digital assistant105to enable and provide assignment instructions assigning the group hotword “downstairs speakers” to the selected group of AEDs104that includes the first AED104aand the second AED104b. In the example shown, the digital assistant105begins to perform the long-standing operation of playing music122as playback audio from the speaker 18 of the AED104. The digital assistant105may stream the music122from a streaming service (not shown) or the digital assistant105may instruct the AED104to play music stored on the AED104. After the group hotword50g“downstairs speakers” is enabled and assigned to the first and second AEDs104a,104b, the respective hotword detector108running on each of the first and second AEDs104a,104bis configured to identify the group hotword50g“downstairs speakers” in audio data20corresponding to subsequent utterances126. Here, each respective hotword detector108may activate a respective group hotword model114to run on the respective AED104a,104bthat is trained to detect subsequent utterances126of the group hotword50g“downstairs speakers” in streaming audio captured by the respective AED104a,104bwithout performing speech recognition on the captured audio. The group hotword model114may be stored on the memory hardware12of the AEDs104or the server120. If stored on the server120, the AEDs104may request the server to retrieve the group hotword model114for a corresponding group hotword50gand provide the retrieved group hotword model114so that the AEDs104can activate the group hotword model114. In some examples, the group hotword50gis predefined and available as a suggested group hotword that the user102may enable and assign to the selected group of AEDs104. In these examples, the corresponding group hotword model114is pre-trained to detect the group hotword50gin streaming audio. In other examples, the group hotword50gis a custom group hotword created by the user. In these other examples, the user102may train a corresponding group hotword model114to detect the custom group hotword50gby speaking training utterances that include the user102speaking the custom group hotword50g. In additional implementations, assigning the group hotword to the selected group of AEDs104causes one or more of the AEDs104to execute the speech recognizer116in a low-power and low-fidelity state where the speech recognizer116is constrained or biased to only recognize the group hotword50gassigned to the AEDs104when spoken in subsequent utterances126captured by the AEDs104. Since the speech recognizer116is only recognizing a limited number of terms/phrases, the number of parameters of the speech recognizer116may be drastically reduced, thereby reducing the memory requirements and number of computations needed for recognizing the group hotword50gin speech. Accordingly, the low-power and low-fidelity characteristics of the speech recognizer116may be suitable for execution on a digital signal processor (DSP). In these implementations, the speech recognizer116executing on at least one of the AEDs104may recognize an utterance106of the enabled group hotword50gin streaming audio captured by the at least one AED104in lieu of using a group hotword model114. One or more of the AEDs104may store a hotword registry500locally on the memory hardware12. 
The hotword registry500contains a list of one or more hotwords50each assigned to one or more AEDs104associated with the user102. The digital assistant105and/or the AEDs104in the selected group may populate the hotword registry500to include the enabled group hotword50gin the list of one or more hotwords and identify each AED104in the selected group of AEDs104assigned the group hotword50g. Upon enabling and assigning the group hotword50g“downstairs speakers” to the first AED104anamed Device 1 and the second AED104bnamed Device 2,FIG.1Ashows the digital assistant105updating the hotword registry500to designate the assignment of the group hotword50g“downstairs speakers” to Device 1 and Device 2. In some examples, after the first and second AEDs104a,104bin the selected group of AEDs receive the assignment instruction assigning the group hotword50g, the first and second AEDs104a,104bexecute a leader election process300to elect, based on respective device properties302associated with each AED104, one or more AEDs from the selected group to listen for the presence of the group hotword50gin the streaming audio on behalf of the selected group of AEDs.FIG.3shows an example leader election process300configured to receive, as input, the respective device properties302associated with each AED104in a selected group of AEDs, and generate, as output, election instructions310electing one or more of the AEDs to listen for the presence of the group hotword50g. The device properties302associated with each AED104may include, without limitation, at least one of processing capabilities, device type, user-configurable device settings, power usage, battery level, physical location of the AED, or network capabilities. In the example ofFIG.1A, the device properties302associated with the second AED104bnamed Device 2 may indicate that the second AED104bis a portable device and is currently powered by a battery whereas the device properties302associated with the first AED104anamed Device 1 may indicate that the first AED104ais a stationary device powered by an external power source, e.g., a power outlet. As such, the election instructions310output by the leader election process300may indicate that the closest one of Device 1 or Device 2 relative to the user102speaking “downstairs speakers” should respond by performing speech recognition and semantic interpretation to identify the query unless the battery level of Device 2 is less than 5-percent (5%). That is, when the subsequent utterance126that includes the group hotword50g“downstairs speakers” is detected in streaming audio by each of the AEDs104a,104b, execution of the collaboration routine150by the AEDs104a,104bwill cause the second AED104bnamed Device 2 to not respond if the battery level is less than 5-percent even if Device 2 is closest to the user102. This would allow Device 2 to conserve power by not having to consume processing resources to perform speech recognition and/or semantic interpretation on the audio data. Otherwise, when power conservation is not a concern, the leader election process300may generally elect a closest AED104to process audio since the audio captured by that AED104is more likely to have a higher quality than the audio captured by further AEDs104, and therefore provide more accurate speech recognition. In additional examples, the leader election process300is capable of generating more granular election instructions310. 
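A minimal sketch of a leader election over the device properties described above: the closest AED is preferred, but a battery-powered AED below a battery floor is skipped so it is not woken to perform speech recognition. The data class, thresholds, and device names are illustrative assumptions, not the patented process itself.

```python
from dataclasses import dataclass

@dataclass
class DeviceProperties:
    name: str
    battery_powered: bool
    battery_level: float     # 0.0 - 1.0, ignored for mains-powered devices
    distance_to_user: float  # estimated at query time, e.g. from audio loudness

def elect_leader(group, min_battery=0.05):
    """Pick the AED that should process the query on behalf of the group."""
    eligible = [d for d in group
                if not (d.battery_powered and d.battery_level < min_battery)]
    candidates = eligible or group   # fall back to everyone if all are low-battery
    return min(candidates, key=lambda d: d.distance_to_user)

group = [
    DeviceProperties("Device 1", battery_powered=False, battery_level=1.0, distance_to_user=3.0),
    DeviceProperties("Device 2", battery_powered=True, battery_level=0.04, distance_to_user=1.0),
]
# Device 2 is closer but its battery is below 5%, so Device 1 is elected.
print(elect_leader(group).name)  # -> "Device 1"
```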
For instance, the election instructions310may elect only one of the AEDs104to trigger second stage hotword detection (i.e., using a computationally-intensive hotword detection model114or using the speech recognizer116) to confirm the presence of the group hotword50gwhen a first stage hotword detector108initially detects the group hotword50g. That is, the election instructions310may inform each AED104in the selected group of AEDs assigned the group hotword50gthat when each AED104detects the presence of the group hotword50gusing the first stage hotword detector108, that only an elected one of the AEDs104will trigger second stage hotword detection to confirm the presence of the group hotword50g. Expanding further, device properties302may indicate that one of the AEDs is battery-powered and configured to run a first stage hotword detector108on a DSP chip which consumes low power at the cost of low-fidelity to coarsely listen for the group hotword50g, and once the group hotword50gis detected by the first stage hotword detector108, an application processor (e.g., SoC chip) is triggered to wake up and run the second stage hotword detection (e.g., hotword model114or on-device ASR116) to confirm the presence of the group hotword50g. Thus, if the device properties302indicate that one or more other AEDs in the selected group of AEDs are non-battery powered devices, it may be efficient to leverage those devices for at least the task of second stage hotword detection so the battery-powered device does not waste power by triggering the AP to wake-up from a low-power state. Other scenarios may exist where device properties302for an AED in a selected group of AEDs indicate that the AED capable of performing speech recognition on-device for a limited set of common queries/commands while other AEDs in the selected group need to provide audio to the server120to perform server-side ASR. The leader election process300may generate election instructions310that cause the collaboration routine150to elect the AED that is capable of performing on-device speech recognition to attempt to perform speech recognition on captured audio data20on-device first to determine if one of the common queries/commands in the limited set is recognized in the captured audio data20. If one of the common queries/commands is not recognized, the generated election instructions310may permit the collaboration routine150to elect one of the other AEDs to provide the audio data20to the server120to perform server-side ASR on the audio data20. With continued reference toFIG.3, the AEDs104in the selected group of AEDs104may re-execute the leader election process300periodically and/or in response to specific events. In one example, re-executing the leader election process300occurs responsive to a device state change304at one of the AEDs in the selected group of AEDs104. The device state change304may include, without limitation, processing load on the AED104increasing to a level that violates a processing threshold, processing load on the AED104reducing to a level that no longer violates the processing level, a change in background noise levels, a battery capacity falling below a battery capacity threshold, a loss of network connection, the AED104powering off, etc. The device state change304allows the leader election process300to re-evaluate the respective device properties302associated with each AED104in the selected group to elect the one or more AEDs that are currently best suited to listen for the group hotword. 
In one example, re-executing the leader election process300occurs responsive to an update306to the selected group of AEDs104that adds one or more additional AEDs104to the selected group of AEDs104. For instance,FIG.1Bshows the user102speaking another utterance136, “Downstairs speakers, add device 3” in the vicinity of at least the first AED104ato request the digital assistant105to assign the group hotword50g“downstairs speakers” to the third AED104cnamed “Device 3” in addition to the first and second AEDs104a,104bnamed Device 1 and Device 2. Here, the utterance136includes the group hotword50g“downstairs speakers” that at least the first AED104a(i.e., based on the election instructions310) detects, using the hotword detection model114corresponding to the group hotword50g, in audio data20corresponding to the utterance136to trigger the first AED104ato wake-up from the low-power state. Once awake, the first AED104ainstructs a speech recognizer116to perform speech recognition on the audio data20to generate an ASR result for the utterance136and performs query interpretation on the ASR result to identify the group hotword query118that specifies a device identifier “Device 3” for an additional AED104cto add to the selected group of AEDs104assigned the group hotword50g“downstairs speakers”. Accordingly, the third AED104cmay receive an assignment instruction assigning the group hotword50g“downstairs speakers” to the selected group of AEDs that has been updated to now include the third AED104c. The third AED104cmay activate the hotword detection model114corresponding to the group hotword50gas described above with reference toFIG.1A. The digital assistant105may update the hotword registry500to add the third AED104cnamed Device 3 to the selected group of AEDs104assigned the group hotword50g“downstairs speakers”. The leader election process300ofFIG.3may re-execute to consider the respective device properties302associated with the third AED104cresponsive to the update306adding the third AED104cto the selected group of AEDs104. All three AEDs104a-cmay collaborate with one another to fulfill performance of the long-standing operation of streaming the music122from the user's playlist. Additionally or alternatively, re-executing the leader election process300may occur responsive to an update306to the selected group of AEDs104that removes one or more AEDs104from the selected group of AEDs104. For instance,FIG.1Cshows the user102speaking another utterance146, “Device 1, leave the downstairs speakers group” in the vicinity of at least the first AED104ato request the digital assistant105to remove the first AED104anamed Device 1 from the selected group of AEDs104assigned the group hotword50g“downstairs speakers”. Here, the utterance146includes a device-specific hotword50d“Device 1” uniquely assigned to the first AED104aand detected by the first AED104ain audio data20corresponding to the utterance146to trigger the first AED104ato wake-up from the low-power state and process the audio data20to identify the group hotword query118requesting the digital assistant105to remove the first AED104afrom the selected group of AEDs104assigned the group hotword50g“downstairs speakers”. Accordingly, the first AED104amay deactivate the hotword detection model114corresponding to the group hotword50gso that the first AED104ano longer listens for the presence of the group hotword50gin audio data. 
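The registry bookkeeping behind the add and remove flows of FIGS. 1B and 1C might look like the following sketch. The class and method names are illustrative assumptions rather than the assistant's actual API.

```python
class HotwordRegistry:
    """Minimal sketch of a hotword registry mapping each enabled hotword to the
    set of AED names assigned to it."""

    def __init__(self):
        self._entries = {}  # hotword -> set of device names

    def assign(self, hotword, devices):
        self._entries.setdefault(hotword, set()).update(devices)

    def remove_device(self, hotword, device):
        self._entries.get(hotword, set()).discard(device)

    def devices_for(self, hotword):
        return sorted(self._entries.get(hotword, set()))

registry = HotwordRegistry()
registry.assign("downstairs speakers", ["Device 1", "Device 2"])   # FIG. 1A
registry.assign("downstairs speakers", ["Device 3"])               # "Downstairs speakers, add device 3"
registry.remove_device("downstairs speakers", "Device 1")          # "Device 1, leave the ... group"
print(registry.devices_for("downstairs speakers"))                 # ['Device 2', 'Device 3']
```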
The digital assistant105may update the hotword registry500to remove the first AED104anamed Device 1 from the selected group of AEDs104assigned the group hotword50g“downstairs speakers”. The leader election process300ofFIG.3may re-execute to determine updated election instructions310based on Device 1 no longer being a member of the selected group of AEDs. The second and third AEDs104b,104cmay now collaborate with one another without the first AED104ato fulfill performance of the long-standing operation specified by the query128in the utterance126spoken by the user102inFIG.1A. Referring back toFIG.1A, at least the first AED104adetects, using the corresponding group hotword model114, the presence of the group hotword50g“downstairs speakers” in audio data20corresponding to a subsequent utterance126spoken by the user102that includes a query128specifying an operation to perform. Specifically, the example shows the user102speaking the subsequent utterance126“Downstairs speakers, play my playlist” and at least the first AED104ausing the group hotword model114to detect the group hotword50g“downstairs speakers” in the corresponding audio data20. Detecting the group hotword50gin the audio data20triggers the first AED104a(and optionally the second AED104b) to wake-up from the low-power state and execute the collaboration routine150to cause the first AED104aand each other AED104assigned the group hotword50gto collaborate with one another to fulfill performance of the operation specified by the query128. Here, the query128specifies a long-standing operation and the first and second AEDs104a,104bcollaborate with one another by pairing with one another for a duration of the long-standing operation and coordinating performance of sub-actions related to the long-standing operation to play back music122from the user's playlist. For instance, one AED104may perform a sub-action of connecting to a remote music streaming service to stream the playlist and broadcast the streaming playlist to the other AED104. In some examples, the collaborating AEDs104may assume different music playback responsibilities such as one of the AEDs assuming the role of a left audio channel and the other one of the AEDs assuming the role of a right audio channel to provide a stereo arrangement.FIG.1Ashows the first AED104anamed Device 1 and the second AED104bnamed Device 2 executing the collaboration routine150to collaborate with each other to fulfill performance of the long-standing operation of playing back music122(e.g., Track #1) from the user's playlist. In some examples, in response to the first AED104adetecting the group hotword50gin the audio data20, the first AED104ainvokes each other AED104in the selected group of AEDs104that did not detect the group hotword50gto wake-up from the low-power state and collaborate with the first AED104ato fulfill performance of the operation specified by the query128. In these examples, responsive to detecting the group hotword50g, the first AED104amay identify each of the one or more other AEDs104in the selected group assigned the group hotword by accessing the hotword registry500. Here, the hotword registry500containing the list of one or more hotwords includes the group hotword50g“downstairs speakers” assigned to the first AED104anamed Device 1 and the second AED104bnamed Device 2. 
Thus, the first AED104amay identify that the second AED104bnamed Device 2 is also assigned the group hotword50gto thereby invoke the second AED104bto collaborate with the first AED104ato fulfill performance of the operation (e.g., streaming music122from the user's102playlist) specified by the query128. While the query128in the example shown specifies a long-standing operation to perform, other examples may include a query specifying a device-level operation to perform on each AED in the selected group of AEDs individually. That is, during execution of the collaboration routine150, each AED in the selected group of AEDs collaborate by fulfilling performance of the device-level operation independently. For instance, if the first and second AEDs104a,104bcorresponded to smart lightbulbs assigned the same group hotword50g, a query specifying a device-level operation to turn off lights would cause each smart lightbulb to perform the operation of power off independently. Referring toFIG.2A, in some implementations, a software application205associated with the digital assistant105executes on a user device to display a user-defined group hotword selection screen200,200ain a graphical user interface (GUI)208of the user device. In the example shown, the user device includes an AED104corresponding to a smart phone (e.g., smart phone104jofFIG.4). The user-defined group hotword selection screen200apermits the user to enable and assign a group hotword50gto a group of two or more AEDs104selected by the user. The user102may use the group hotword selection screen200ato enable and assign group hotwords in addition to, or lieu of, providing voice inputs as described above with reference toFIGS.1A-1C. In the example shown, the group hotword selection screen200adisplays a plurality of objects210,210a-din the GUI208that the user may interact with to instruct the digital assistant105to enable a group hotword50gand select the group of AEDs104to be assigned the group hotword50g. The GUI208may receive a user input indication indicating user interaction with a text field object210athat allows the user to create a custom group hotword by typing a name of the custom group hotword the user wants to create. Optionally, the user102may select a voice input graphic (e.g., graphical microphone) to provide a voice input corresponding to the user102speaking the custom group hotword. When creating a custom group hotword, the group hotword selection screen200amay prompt the user to speak a number of training examples that include the custom group hotword for use in training a group hotword detection model114to detect the custom group hotword in streaming audio. On the other hand, the user102may enable a predefined group hotword by providing a user input indication indicating user interaction with a dropdown object210bthat presents a list of available predefined group hotwords to select from. The dropdown object210may present commonly used group hotwords as available predefined group hotwords to select from such as group hotwords descriptive of device type, e.g., “Smart Speakers” and “Smart Lights”, descriptive of common zones/areas in an environment, e.g. “Family Room Devices”, and descriptive of both device type and zone/area, e.g., “Bedroom Speakers”. The user can interact with the dropdown object210bto scroll through the list of available group hotwords. In some examples, custom group hotwords can be added to the list of available group hotwords. 
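The selections made on a screen like the one described above might be translated into a single assignment instruction delivered to each selected AED, roughly as in the sketch below. The dataclass fields and the `send_to` callable are hypothetical names introduced for illustration, not the application's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AssignmentInstruction:
    group_hotword: str        # typed in the text field or picked from the dropdown
    group_members: List[str]  # device names selected on the screen

def enable_group_hotword(group_hotword: str,
                         selected_devices: List[str],
                         send_to: Callable[[str, AssignmentInstruction], None]) -> AssignmentInstruction:
    """Build one assignment instruction and deliver it to every selected AED."""
    instruction = AssignmentInstruction(group_hotword, list(selected_devices))
    for device in selected_devices:
        send_to(device, instruction)   # e.g. over the local network or a cloud channel
    return instruction

# Example mirroring the selections described in the surrounding text.
sent = []
enable_group_hotword("Family Room Devices",
                     ["Speaker 1", "Speaker 2", "Speaker 3", "Speaker 4", "Smart TV"],
                     send_to=lambda device, inst: sent.append((device, inst.group_hotword)))
print(sent[0])  # ('Speaker 1', 'Family Room Devices')
```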
In the example shown, the GUI208receives a user input indication indicating user interaction with the dropdown object210bto select the predefined group hotword “Family Room Devices” from the list of available predefined group hotwords. Here, the selection of the predefined group hotword may instruct the digital assistant105to enable the predefined group hotword. The assistant may also suggest group hotwords to enable/activate for assignment to groups of AEDs104. For instance, a user may tend to query a group of devices manually (e.g., in a sequence or via their individual names) that all belong to a semantic group, in which case the digital assistant may suggest a corresponding group hotword for that group. Further, the group hotword selection screen200adisplays a plurality of selection objects210ceach corresponding to a respective one of a plurality of AEDs104associated with the user102. The user102may provide user input (e.g., touch) to select each AED104to include in a group of AEDs104to be assigned the group hotword50gcreated via the text field object210aor selected from the dropdown object210b. In the example shown, the GUI208receives user input indications indicating user interaction with selection objects210ccorresponding to the AEDs104named Speaker 1, Speaker 2, Speaker 3, Speaker 4, and Smart TV to include these AEDs in the selected group of AEDs to be assigned the group hotword “Family Room Devices”. To instruct the digital assistant105to enable and assign the group hotword “Family Room Devices” to the selected group of AEDs104that includes Speakers 1-4 and Smart TV, the user102may provide a user input indication indicating user interaction with an enable object210d. Assuming the enable object210dis selected, the digital assistant105will provide assignment instructions to the selected group of AEDs that includes Speakers 1-4 and Smart TV indicating assignment of the group hotword “Family Room Devices” to the selected group of AEDs. The digital assistant may also add the group hotword and selected group of AEDs to the hotword registry500as shown inFIG.5. Referring toFIG.2B, in some implementations, the software application205associated with the digital assistant105is configured to display an implicit group hotword selection screen200,200bin the GUI208of the AED104. The implicit group hotword selection screen200bdisplays a plurality of available implicit group hotwords and allows the user102to select groups of AEDs to be assigned to each implicit group hotword. For each implicit group hotword, the implicit group hotword selection screen200bmay list all eligible AEDs that can be assigned the implicit group hotword based on attributes associated with the AEDs. For instance, all of the AEDs104associated with the user102are listed as eligible AEDs to be assigned the proximity-based group hotwords “Hey nearby devices” and/or “Hey nearby device”. Accordingly, the user102may address, in a single query, one or more AEDs that are closest to the user102at any given time by simply speaking the proximity-based group hotword “Hey nearby devices” or “Hey nearby device” such that AEDs detecting the spoken group hotword will collaborate with one another by performing arbitration to select the device or devices which are closest to the user102for fulfilling an operation specified by the query. 
Advantageously, the proximity-based group hotword allows the user102to address only a subset of one or more AEDs that are currently closest in proximity to the user102without requiring the user to explicitly identify any particular AED in the subset of the one or more AEDs. In the example shown, the proximity-based group hotword “Hey nearby devices” is assigned to all AEDs associated with the user by default. The user may interact with selection objects to remove any AEDs from the selected group of AEDs assigned the proximity-based group hotword. For instance, the GUI208may receive a user input indication indicating user interaction with a selection object210ccorresponding to the AED104named Smart Phone to remove the Smart Phone from the group assigned the proximity-based group hotword. Accordingly, the smart phone will not detect or respond to the user speaking “Hey Nearby Devices” even if the smart phone is the closest AED relative to the user102. Other implicit group hotwords include device-type group hotwords that can be assigned to a selected group of AEDs that all share a same device type. In the example shown, the implicit group hotword selection screen200blists only Speakers 1-7 as eligible AEDs to be assigned the implicit device-type group hotword “Hey smart speakers” since the AEDs named Speakers 1-7 all include the same device type of smart speaker. Accordingly, the user102may interact with the selection objects210cdisplayed in the GUI208to select the group of AEDs (or unselect AEDs from the group) to be assigned the group hotword “Hey smart speakers” and subsequently speak utterances that include the group hotword “Hey smart speakers” to address all the AEDs associated with the user102that include the device type of smart speakers in a single query. The implicit group hotword selection screen200balso displays two different implicit attribute-based hotwords that may each be assigned to a respective selected group of AEDs104that share a common attribute. For instance, a first attribute-based group hotword includes “Blue Speakers” that the user102may assign to Speaker 1 and Speaker 2 to allow the user to address all the smart speakers that share the attribute of having a blue color (or are otherwise labeled as “Blue”) in a single query. Similarly, a second attribute-based group hotword includes “Red Speakers” that the user may assign to Speaker 3 and Speaker 4 to allow the user to address all the smart speakers that share the attribute of having a red color (or are otherwise labeled as “Red”) in a single query. As will become apparent with reference toFIG.4below, attribute-based group hotwords can further narrow down a specific group of AEDs a user wants to address. FIG.4shows an example speech-enabled environment400including a plurality of AEDs104associated with a user102. In the example shown, the speech-enabled environment400is a home of the user102having multiple rooms and zones including a family room, a kitchen, and a bedroom. While the speech-enabled environment400depicted inFIG.4is a home, the speech-enabled environment400can include any environment implementing a network of multiple AEDs such as educational environments, businesses, or automobiles. The AEDs104include seven smart speakers104a-g(SPs 1-7), a smart display104h, a smart TV104i, and a smart phone104jpositioned throughout the speech-enabled environment. 
Smart speakers SP1104a, SP2104b, SP3104c, SP4104dand the smart TV104iare positioned in the family room of the speech-enabled environment400, in addition to the smart phone104jwhich is portable/mobile and may be moved throughout the various rooms/zones in the speech-enabled environment400. The smart speakers SP1-SP4 and the smart TV104imay bond or otherwise pair together to form a respective zoned named “Family Room”. Further, the first and second smart speakers SP1, SP2 may be labeled as “Blue” devices to describe their physical attribute of being the color blue and the third and fourth smart speakers SP3, SP4 may be labeled as “Red” devices to describe their attribute of being the color red. Other attributes may be used such as size (e.g., big vs. small), type/brand (e.g., high-fidelity speakers), or any other label that the user use to identify/group AEDs within a specific zone or across multiple zones in the speech-enabled environment400. The speech-enabled environment400also depicts the smart speaker SP5104eand the smart display104hpositioned in the kitchen and bonding/pairing with one another to form a respective zone named “Kitchen”. Likewise, the smart speakers SP6104fand SP7104gmay bond/pair together to form a respective zone named “Bedroom”. Described with reference to the speech-enabled environment400ofFIG.4,FIG.5shows an example hotword registry500containing a list of hotwords50each assigned to a respective selected group of the AEDs104located in the speech-enabled environment400. One or more of the AEDs104may each store the hotword registry500on respective local memory hardware12. AEDs104that do not store the hotword registry500may discover other AEDs104in the network and access the hotword registry500there on to ascertain which hotwords are assigned to which AEDs Additionally or alternatively, the hotword registry500may be stored on a centralized device and in communication with one or more of the AEDs. For instance, the hotword registry500may be stored on a remote server, such as a remote server affiliated with the digital assistant105that associates the hotword registry with a profile for the user102. Each of the AEDs104is assigned a default hotword50“Hey Assistant” that when detected in streaming audio by one or more of the AEDs triggers the AEDs104to wake-up from a low-power state and invoke a first digital assistant105to initiate processing of one or more other terms following the default hotword50. Here, the first digital assistant may be affiliated with a first voice assistant service (e.g., GOOGLE'S Assistant). Moreover, smart speaker SP2104b, the smart display104h, and the smart phone104jare also assigned another default hotword “Other Assistant” that when detected in streaming audio by any one of the AEDs104b,104h,104jtriggers that AED to invoke a second digital assistant to initiate processing of one or more other terms following the other default hotword. Here, the second digital assistant may be affiliated with a second voice assistant service (e.g., AMAZON'S Alexa or APPLE'S Siri) different than the first voice assistant service. 
Additionally, each AED104may be assigned a unique device-specific hotword that only the corresponding AED is configured to detect in streaming audio when the user only wants to address the corresponding AED. For instance, a unique device-specific hotword assigned to the first smart speaker SP1104ain the environment400may include an identifier of the AED such as “Hey Device 1” or simply “Device 1”, or could include a device type and/or other attribute associated with the AED such as “Hey Smart Speaker 1” or simply “Smart Speaker 1”. As mentioned previously, group hotwords assigned to respective selected groups of AEDs may include manually-enabled hotwords50assigned by the user102to the respective selected group of AEDs104. The manually-enabled hotwords may be custom hotwords created by the user102and/or predefined hotwords available for selection by the user102. The predefined hotwords may be associated with pre-defined hotword models trained to detect the associated hotword. A custom hotword created by the user102, however, may require the user to train a custom hotword detection model to detect the custom hotword. For instance, the user102may speak one or more utterances that include the custom hotword. In some examples, the user102provides a voice input (e.g., utterance)106(FIG.1A) to select each AED the user wants to include in a selected group of AEDs and assigns a manually-enabled group hotword50gto the selected group of AEDs104. Similarly, the user may provide subsequent voice inputs136,146(FIGS.1B and1C) to update the selected group of AEDs104by adding one or more additional AEDs to an existing selected group of AEDs (FIG.1B) and/or removing one or more AEDs from the existing selected group of AEDs (FIG.1C). Additionally or alternatively, the user may provide user input indications indicating user interaction with one or more objects displayed in a GUI208, such as the user-defined group hotword selection screen200aofFIG.2A, to instruct the digital assistant105to enable the manual group hotword and select the group of AEDs to be assigned the group hotword. The user102may provide subsequent user interaction indications to the GUI208to update the selected group of AEDs104by adding additional AEDs and/or removing AEDs from the existing selected group. In the example hotword registry500for the speech-enabled environment400, the user102enables and assigns the manual group hotword “Family Room Devices” to the respective selected group of AEDs that includes smart speakers SP1-SP4104a-dand the smart TV104ilocated in the zone named “Family Room”. The user102also enables and assigns the manual group hotword “Kitchen Devices” to the smart speaker SP5104eand the smart display104hlocated in the zone named “Kitchen”. Likewise, the manual group hotword “Bedroom Speakers” is enabled and assigned by the user to the smart speakers SP6, SP7104f-glocated in the zone named “Bedroom”. Here, each manually-enabled group hotword may be descriptive of a location/zone within the speech-enabled environment400(e.g., the user's home) at which the respective selected group of AEDs assigned the corresponding group hotword50are located. Notably, the manually-enabled group hotword “Bedroom Speakers” assigned to smart speakers SP6, SP7 is also descriptive of the device type (e.g., smart speakers) associated with the respective selected group of AEDs. In the example shown, the user102has not assigned any manually-enabled group hotword to the smart phone104j. 
However, one or more of the selected groups of AEDs may be updated to add/include the smart phone104jto enable the smart phone104jto collaborate with the other AEDs in the respective group to fulfill an operation specified by a query when the corresponding group hotword preceding the query is detected in streaming audio. The example hotword registry500ofFIG.5also shows that a plurality of different implicit group hotwords50gare each assigned to a different respective selected group of AEDs104. As described above with reference toFIG.2B, a software application205associated with the digital assistant105may render the implicit group hotword selection screen200bin the GUI208and the user102may interact with the screen200bto view the available implicit group hotwords and select groups of AEDs to be assigned to the implicit group hotwords. For instance, the proximity-based group hotwords “Hey nearby devices” and/or “Hey nearby device” are assigned to all of the AEDs104a-jassociated with the user102that are located in the speech-enabled environment400ofFIG.4. Accordingly, the user102may address, in a single query, one or more AEDs that are closest to the user102in the speech-enabled environment400at any given time by simply speaking the proximity-based group hotword “Hey nearby devices” or “Hey nearby device” such that AEDs detecting the spoken group hotword will collaborate with one another by performing arbitration to select the device or devices that are closest to the user102for fulfilling an operation specified by the query. Advantageously, the proximity-based group hotword allows the user102to address only a subset of one or more AEDs that are currently closest in proximity to the user102without requiring the user to explicitly identify any particular AED in the subset of the one or more AEDs. Each AED104assigned the implicit proximity-based group hotword may run a hotword detection model to detect the presence of the group hotword in streaming audio to trigger the wake-up process and initiate speech recognition on the audio. As the implicit group hotword in this instance is proximity-based, even though multiple AEDs104may detect the group hotword in captured streaming audio, these AEDs104may each subsequently process the audio to determine a respective proximity value relative to the user102and then perform arbitration using these proximity values across the multiple AEDs104to elect one or more of these AEDs104to fulfill an operation specified by the user's query. Here, AEDs104outside some upper distance threshold from the user may be ineligible to fulfill the query. Optionally, AEDs104inside some lower distance threshold, such as a smart phone AED in the user's pocket that detected the proximity-based group hotword “Hey nearby device(s)”, may also be ineligible to respond to the query. The lower distance threshold could be applied depending on the type of query. For example, if the query is a search query in which the nearby device is to provide a search result as synthesized speech, then the fact that the smart phone104jis so close to the user102as to indicate that the smart phone104jis in the user's pocket would disqualify the smart phone104jfrom fulfilling the query, since the synthesized speech would be muffled and not understood/heard by the user102. The user also has the option to add/remove AEDs from the selected group assigned the proximity-based group hotword. 
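A minimal sketch of the proximity-based eligibility filtering described above, using an upper distance threshold and an optional, query-dependent lower threshold that would exclude a phone in the user's pocket. The threshold values, the distance estimates, and the device names are all illustrative assumptions.

```python
def eligible_by_proximity(devices, upper_m=5.0, lower_m=0.3, apply_lower=True):
    """Return eligible device names ordered closest-first.
    `devices` maps a device name to an estimated distance to the user in meters."""
    eligible = {}
    for name, distance in devices.items():
        if distance > upper_m:
            continue                       # too far away to respond
        if apply_lower and distance < lower_m:
            continue                       # e.g. a phone in the user's pocket would muffle TTS
        eligible[name] = distance
    return sorted(eligible, key=eligible.get)

distances = {"Smart Phone": 0.1, "SP 1": 2.0, "SP 6": 9.0}
print(eligible_by_proximity(distances, apply_lower=True))   # ['SP 1']
print(eligible_by_proximity(distances, apply_lower=False))  # ['Smart Phone', 'SP 1']
```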
Additionally, the selected device nearest the user102may perform speech recognition and query interpretation to determine whether “nearby device” was spoken by the user102to indicate that the user102only wants a single device nearest the user to fulfill a query, or whether “nearby devices” was spoken to indicate that the user wants two or more nearby devices to fulfill the query. Moreover, the example hotword registry500ofFIG.5also shows two different implicit device-type group hotwords each assigned to a respective selected group of AEDs104in the speech-enabled environment400that are associated with a same respective device type. For instance, a first device-type group hotword includes “Smart Speakers” assigned to all the smart speakers SP1-SP7 in the speech-enabled environment400to allow the user to address all the AEDs104a-gassociated with the user102that include the device type of smart speakers in a single query. Here, the device-type group hotword “Smart Speakers” addresses the four smart speakers SP1-SP4 located in the zone named “Family Room”, the smart speaker SP5 located in the zone named “Kitchen”, and the smart speakers SP6, SP7 located in the zone named “Bedroom”. Notably, the manually-enabled group hotword “Family Room Devices” is also assigned to the smart speakers SP1-SP4, the manually-enabled group hotword “Kitchen Devices” is also assigned to the smart speaker SP5, and the manually-enabled group hotword “Bedroom Speakers” is also assigned to the smart speakers SP6, SP7. Additionally, a second device-type group hotword includes “Smart Displays” assigned to the respective selected group of AEDs that includes the smart display104hlocated in the zone named “Kitchen” and the smart TV104ilocated in the zone named “Family Room”. Notably, the manually-enabled group hotword “Family Room Devices” is also assigned to the smart TV104iand the manually-enabled group hotword “Kitchen Devices” is also assigned to the smart display104h. With continued reference to the speech-enabled environment400ofFIG.4and the example hotword registry500ofFIG.5, two different implicit attribute-based hotwords are each assigned to a respective selected group of AEDs104in the speech-enabled environment400that share a common attribute. For instance, a first attribute-based group hotword includes “Blue Speakers” assigned to the first and second smart speakers SP1, SP2 located in the zone named “Family Room” of the environment400to allow the user to address all the smart speakers104a-bthat share the attribute of having a blue color (or are otherwise labeled as “Blue”) in a single query. Similarly, a second attribute-based group hotword includes “Red Speakers” assigned to the third and fourth smart speakers SP3, SP4 to allow the user to address all the smart speakers104c-dthat share the attribute of having a red color (or are otherwise labeled as “Red”) in a single query. Notably, the first and second smart speakers SP1, SP2 assigned the group hotword “Blue Speakers” and the third and fourth smart speakers SP3, SP4 assigned the group hotword “Red Speakers” are also in the selected group of seven (7) smart speakers104a-gassigned the device-type group hotword “Smart Speakers” as well as the selected group of five (5) AEDs104a-d,104iassigned the manually-enabled group hotword “Family Room Devices” that includes the smart speakers SP1-SP4 and the smart TV104i. Thus, attribute-based group hotwords can further narrow down a specific group of AEDs a user wants to address. 
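Loosely mirroring the hotword registry500ofFIG.5, the sketch below represents the registry as a plain mapping from each hotword to its assigned AEDs, together with lookups in both directions. Entries are abbreviated (device-specific, proximity-based, and second-assistant hotwords are omitted), and the structure and helper names are illustrative assumptions.

```python
# Hotword -> assigned AEDs (abbreviated illustration of FIG. 5's registry).
HOTWORD_REGISTRY = {
    "Hey Assistant":       ["SP1", "SP2", "SP3", "SP4", "SP5", "SP6", "SP7",
                            "Smart Display", "Smart TV", "Smart Phone"],
    "Family Room Devices": ["SP1", "SP2", "SP3", "SP4", "Smart TV"],
    "Kitchen Devices":     ["SP5", "Smart Display"],
    "Bedroom Speakers":    ["SP6", "SP7"],
    "Smart Speakers":      ["SP1", "SP2", "SP3", "SP4", "SP5", "SP6", "SP7"],
    "Smart Displays":      ["Smart Display", "Smart TV"],
    "Blue Speakers":       ["SP1", "SP2"],
    "Red Speakers":        ["SP3", "SP4"],
}

def devices_for(hotword):
    """Which AEDs should wake when this hotword is detected."""
    return HOTWORD_REGISTRY.get(hotword, [])

def hotwords_for(device):
    """Which hotwords a given AED should be listening for."""
    return [hw for hw, devices in HOTWORD_REGISTRY.items() if device in devices]

print(devices_for("Blue Speakers"))  # ['SP1', 'SP2']
print(hotwords_for("SP1"))           # every listed hotword that addresses SP1
```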
Referring toFIG.4, in one example, the user102located in the zone named Family Room of the speech-enabled environment400speaks the utterance406“Speaker 1 & Smart TV, Play music videos” corresponding to a command418for the digital assistant105to perform a long-standing action of streaming music videos for playback on the first smart speaker SP1104aand the smart TV104i. The digital assistant105may execute across all of the AEDs104in the speech-enabled environment400. The terms “Speaker 1” and “Smart TV” prefix the command418and correspond to the respective device-specific hotword50aassigned to the first smart speaker SP1104aand the respective device-specific hotword50bassigned to the smart TV104i. Here, the first smart speaker SP1104aexecutes a hotword detection model trained to detect the hotword50a“Speaker 1” in audio data corresponding to the utterance406to trigger the SP1104ato wake-up from a low-power state and initiate processing on the audio data. At the same time, the smart TV104iexecutes a hotword detection model trained to detect the hotword50b“Smart TV” in the audio data corresponding to the utterance406to trigger the smart TV104ito wake-up from a low-power state and initiate processing on the audio data. After processing the audio data by performing speech recognition to generate an ASR result and performing query interpretation on the ASR result to identify the command418to perform the long-standing action on the first smart speaker SP1104aand the smart TV104i, the SP1 and the smart TV collaborate with one another to fulfill the long-standing action. For instance, the smart TV104imay stream video data to display a video portion of the music videos while the SP1 may stream audio data to audibly output an audio portion of the music videos. Continuing with the example, the digital assistant105is also configured to automatically create an action-specific group hotword and assign the action-specific group hotword to the selected group of AEDs that includes the first smart speaker SP1104aand the smart TV104iperforming the long-standing action while the long-standing action is in progress. The user102may use the action-specific group hotword in follow-up queries that pertain to the long-standing action of playing back the music videos on the first smart speaker SP1104aand the smart TV104i. Accordingly, the AEDs corresponding to the first smart speaker SP1104aand the smart TV104ieach receive an assignment instruction assigning the action-specific group hotword “Music Videos” that was automatically created by the digital assistant105. Thereafter, the user102may address the long-standing action performed on the first smart speaker SP1104aand the smart TV104iby simply speaking the phrase “Music Videos” followed by a query/command for controlling the long-standing action. For instance, the user102may speak “Music Videos, next song” or “Music Videos, turn up the volume” to advance to a next music video or instruct the first smart speaker SP1 to increase the volume. In response to creating the action-specific group hotword and providing the assignment instructions to the first smart speaker SP1104aand the smart TV104i, the digital assistant105may update the hotword registry500ofFIG.5to indicate that the action-specific group hotword “Music Videos” is assigned to the selected group of AEDs that includes the first smart speaker SP1104aand the smart TV104i. 
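The lifecycle of an action-specific group hotword, created when the long-standing action starts and revoked when it ends, could be sketched as follows. The class, method names, and the exact response wording are illustrative assumptions rather than the assistant's actual implementation.

```python
class ActionHotwords:
    """Bookkeeping sketch for group hotwords automatically created for a
    long-standing action and revoked when that action ends."""

    def __init__(self, registry):
        self.registry = registry  # shared hotword -> devices mapping

    def create_for_action(self, hotword, devices):
        self.registry[hotword] = list(devices)
        return (f"Ok, playing that now. In the future, you can control "
                f"playback using the '{hotword}' hotword")

    def revoke(self, hotword):
        self.registry.pop(hotword, None)  # assigned AEDs stop responding to it

registry = {}
manager = ActionHotwords(registry)
print(manager.create_for_action("Music Videos", ["SP1", "Smart TV"]))
manager.revoke("Music Videos")      # the long-standing action has ended
print("Music Videos" in registry)   # False
```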
In some examples, the first smart speaker SP1104aoutputs, for audible playback, synthesized speech corresponding to a response from the digital assistant105to indicate performance of the long-standing action is in progress and the automatically created action-specific group hotword for use in follow-up queries that pertain to the long-standing action. For instance,FIG.4shows the SP1104aoutputting synthesized speech corresponding to a response450from the digital assistant105that includes, “Ok, playing music videos now . . . . In the future, you can control playback using the ‘Music Videos’ hotword”. The digital assistant105is configured to revoke the use of the action-specific group hotword pertaining to the long-standing action when the long-standing action ends. Thus, when the long-standing action ends, the digital assistant105may update the hotword registry500to remove the action-specific hotword and inform the selected group of AEDs to no longer respond to the action-specific group hotword. The user102may reject the use of the action-specific group hotword at any time by providing a voice input or through the GUI ofFIGS.2A and2B. FIG.6is a flowchart of an exemplary arrangement of operations for a method600of enabling and assigning group hotwords to selected groups of assistant-enabled devices (AEDs)104. At operation602, the method600includes receiving, at data processing hardware10of a first AED104a, an assignment instruction assigning a group hotword50gto a selected group of AEDs104associated with a user. The selected group of AEDs104includes the first AED104aand one or more other AEDs104b-n. Each AED in the selected group of AEDs is configured to wake-up from a low-power state when the group hotword50gis detected in streaming audio by at least one of the AEDs in the selected group of AEDs. At operation604, the method600includes receiving, at the data processing hardware10, audio data20that corresponds to an utterance126spoken by the user102. The audio data20includes a query128that specifies an operation to perform. At operation606, the method600includes detecting, by the data processing hardware10, using a hotword detection model114, the group hotword50gin the audio data20. At operation608, in response to detecting the group hotword50gin the audio data20, the method600includes triggering, by the data processing hardware10, the first AED104ato wake-up from the low-power state and executing, by the data processing hardware10, a collaboration routine150to cause the first AED104aand each other AED104in the selected group of AEDs to collaborate with one another to fulfill performance of the operation specified by the query128. A software application (i.e., a software resource) may refer to computer software that causes a computing device to perform a task. In some examples, a software application may be referred to as an “application,” an “app,” or a “program.” Example applications include, but are not limited to, system diagnostic applications, system management applications, system maintenance applications, word processing applications, spreadsheet applications, messaging applications, media streaming applications, social networking applications, and gaming applications. The non-transitory memory may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by a computing device. The non-transitory memory may be volatile and/or non-volatile addressable semiconductor memory. 
Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes. FIG.7is schematic view of an example computing device700that may be used to implement the systems and methods described in this document. The computing device700is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. The computing device700includes a processor710, memory720, a storage device730, a high-speed interface/controller740connecting to the memory720and high-speed expansion ports750, and a low speed interface/controller760connecting to a low speed bus770and a storage device730. Each of the components710,720,730,740,750, and760, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor710can process instructions for execution within the computing device700, including instructions stored in the memory720or on the storage device730to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display780coupled to high speed interface740. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices700may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory720stores information non-transitorily within the computing device700. The memory720may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory720may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device700. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes. The storage device730is capable of providing mass storage for the computing device700. In some implementations, the storage device730is a computer-readable medium. 
In various different implementations, the storage device730may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory720, the storage device730, or memory on processor710. The high speed controller740manages bandwidth-intensive operations for the computing device700, while the low speed controller760manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller740is coupled to the memory720, the display780(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports750, which may accept various expansion cards (not shown). In some implementations, the low-speed controller760is coupled to the storage device730and a low-speed expansion port790. The low-speed expansion port790, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device700may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server700aor multiple times in a group of such servers700a, as a laptop computer700b, or as part of a rack server system700c. Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. 
The processes and logic flows described in this specification can be performed by one or more programmable processors, also referred to as data processing hardware, executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
84,709
11862156
DETAILED DESCRIPTION The subject matter of aspects of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. As electronic devices become more integrated into our daily lives, so do the methods by which we can interface with them. Digital assistants have found a place in many people's homes, providing voice-activated services that can assist users with various tasks, from a basic level to a very advanced level. However, conventional digital assistants are mostly limited to the capabilities that the service provider and their developers implement. Some service providers and developers provide an open interface (e.g., an API) such that third-parties can develop custom services that can essentially “plug in” to the digital assistant and provide additional services. Typically, these digital assistants are implemented into a stationary device or mobile phone, and activated by speech detection or manual activation (e.g., a button press). Once activated, the digital assistants receive a voice command and relay the command to a remote server of the service provider (or third-party service provider) for processing. The remote server can then provide a response or an acknowledgement of the received command to the digital assistant for output to the user. For the most part, modern-day society has adopted the use of mobile computing devices, such as smart phones, tablets or other devices such as watches and connected glasses. Users generally prefer to carry portable computing devices with them, having a readily-available resource for accessing information and providing a means for communication. Users can download and install mobile applications of their choosing, and maintain settings that are customized to their personal preferences. The number of applications providing unique services to users is astounding, increasing by the thousands daily. In this regard, it is improbable to provide digital assistant services that can cater to the needs of all users, particularly based on the various services provided by the applications preferred and utilized by the users. Further, with the mass proliferation of network-connected IoT devices that are controlled by mobile applications, it is important for digital assistants to be able to control actions in these applications. As such, a digital assistant having easily customizable commands and actions that can be performed by the digital assistant, based on the receipt of a command, solves the aforementioned problems. As briefly noted above, the “Q” digital assistant, developed by Aiqudo, Inc., headquartered in San Jose, CA, has implemented customizable automation into the digital assistant. 
In other words, the “Q” digital assistant can, among many other things, perform a series of predefined tasks (e.g., “action”) based on the receipt of a predefined input (e.g., “command”) to accomplish a desired result. In addition, the “Q” digital assistant provides a plethora of additional services, such as crowd-sourced definitions of various commands and actions (e.g., talk back feature) that are quality-assured by intelligent algorithms, essentially eliminating the need for a novice user to “train” their digital assistant to work with their preferred applications. Among other things, the “Q” digital assistant receives a voice command and translates the speech to text. The digital assistant can then employ natural language processing to analyze the text for any recognized commands that may be serviced by applications already-installed or required-to-be-installed by the user. In some instances, the commands may include parameters that are recognized by the digital assistant as well. Provided that an application capable to service the command is installed on the user device, the “Q” assistant can then automate a series of predefined tasks, which can include, by way of example only: launching the application, emulating touch inputs for button presses or application navigation, passing parameters into application form fields, waiting for application or remotely-communicated responses, and many more, until the automated “action” is fully executed and the user is provided with a verbal result of the provided command. As described, when the automated action is being executed by the digital assistant, or in other words when the various steps associated with an automated action are being performed, the various steps required to complete the action are emulated by the digital assistant. In essence, and by way of example only, the user can provide a voice command to the digital assistant, such as “how much is a hotel room tonight,” the digital assistant can determine that a particular application can provide this service, determine a current location of the user and the nearest hotel, and launch a hotel booking application that the digital assistant can pass the “current location” and “date” parameters to. Any additional inputs, such as a “hotel name,” “specific location,” “room options,” or a “submit” button can also be provided by or automated by the digital assistant provided that such options or tasks are included in the predefined action corresponding to the received command. Accordingly, the digital assistant can verbally communicate the output (i.e., price of the hotel room) to the user in accordance with the parameters communicated along with the command. In this way, the user does not have to interact directly with the user interface of the application or even the application itself. In some embodiments, context or additional options may be verbally communicated to the user. For example, the output may be associated with metadata. Continuing the above example, in addition to providing output specific to the command, such as the price of the hotel room, the hotel booking application may indicate that only one room is still available. As will be explained in additional detail below, the metadata may be automatically added as context to the output by the digital assistant or may be manually added by a user customizing the command and action. Additionally, the command and action may only be part of the workflow. 
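As a rough illustration of the command-to-action flow just described, the sketch below matches a spoken command to a predefined action and runs its tasks in order. The tiny in-memory registries, function names, and placeholder parameter values are assumptions made for illustration and are not the actual "Q" digital assistant implementation.

# Illustrative sketch: map an equivalent command to a predefined action and emulate its tasks.
from typing import Dict, List

def launch_app(params): print("launching", params["app"])
def fill_field(params): print("filling", params["field"], "=", params["value"])
def press_button(params): print("pressing", params["button"])

# A predefined "action" is modeled here as an ordered list of (task, parameters) pairs.
ACTIONS: Dict[str, List] = {
    "hotel_price_tonight": [
        (launch_app, {"app": "HotelBooking"}),
        (fill_field, {"field": "location", "value": "<current location>"}),   # placeholder
        (fill_field, {"field": "date", "value": "<today>"}),                  # placeholder
        (press_button, {"button": "Search"}),
    ],
}

# Several equivalent commands can invoke the same predefined action.
COMMANDS: Dict[str, str] = {
    "how much is a hotel room tonight": "hotel_price_tonight",
    "find me a hotel room for tonight": "hotel_price_tonight",
}

def run_command(utterance_text: str) -> None:
    action_id = COMMANDS.get(utterance_text.lower())
    if action_id is None:
        print("no matching action; offer training or app installation")
        return
    for task, params in ACTIONS[action_id]:
        task(params)   # emulated touches, form fills, navigation, etc.
    print("speak the result back to the user")

run_command("How much is a hotel room tonight")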
In this way, the digital assistant may verbally communicate the output (e.g., price of the hotel room), ask the user if the user would like to book the hotel room, and based on the response verbally communicated by the user, provide confirmation of the booking or the decision not to book the hotel room. In some embodiments, the output may be provided to another device for verbal communication to the user. For example, the user may not be in proximity to the mobile device that has a particular application the user would like to access. However, the user may be in proximity to another device with a digital assistant (e.g., a digital assistant controlled speaker). In this case, a command may be communicated by the user to the other device. The other device communicates the command to a server where it is communicated to the mobile device with the particular application. The command may then be executed on the mobile device and the output is communicated back to the server where it is communicated to the other device. Accordingly, the digital assistant on the other device can verbally communicate the output to the user in accordance with the parameters communicated along with the command. In this way, the user does not even have to interact directly with the mobile device having the application installed. Turning now toFIG.1, a schematic depiction is provided illustrating an exemplary system100in which some embodiments of the present invention may be employed. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The system inFIG.1includes one or more clients110,115a,115b,115c, . . .115n, in communication with a server120via a network130(e.g., the Internet). In this example, the server120, also in communication with the network130, is in communication with each of the client devices110,115a-115n, and can also be in communication with a database140. The database can be directly coupled to the server120or coupled to the server120via the network130. The client device110, representative of client devices115a-115n, is a computing device comprising one or more applications112and a digital assistant114installed thereon. The following description in reference toFIG.1provides a high level overview of the “Q” digital assistant, described briefly herein above, with additional detail provided in U.S. Provisional Application No. 62/508,181, filed May 18, 2017, entitled “SYSTEMS AND METHODS FOR CROWDSOURCED ACTIONS AND COMMANDS,” and U.S. Provisional Application No. 
62/509,534, filed May 22, 2017, entitled “CONNECTING MULTIPLE MOBILE DEVICES TO A SMART HOME ASSISTANT ACCOUNT.” The one or more applications112include any application that is executable on the client110, and can include applications installed via an application marketplace, custom applications, web applications, side-loaded applications, applications included in the operating system of the client110, or any other application that can be reasonably considered to fit the general definition of an application. On the other hand, the digital assistant114can be an application, a service accessible via an application installed on the client110or via the network130, or implemented into a layer of an operating system of the client110. In accordance with embodiments described herein, the digital assistant114provides an interface between a client110and a user (not shown), generally via a speech-based exchange, although any other method of exchange between user and client110remains within the purview of the present disclosure. When voice commands are received by the digital assistant114, the digital assistant converts the speech command to text, analyzes the command to extract relevant keywords and/or parameters, processes the keywords and/or parameters and/or any additional contextual data obtained by the client110, identifies the command in a library of recognized commands and corresponding actions, and determines an appropriate action to perform on one or more applications112installed on the client110. By way of brief overview, a command can include one or more keywords and/or one or more parameters and parameter types, generally corresponding to a predefined action to be performed on one or more particular applications. Moreover, a plurality of commands can correspond to a single predefined action, such that there are multiple equivalent commands that can invoke the same predefined action. By way of example only, commands such as “check in,” “check into flight,” “please check in,” “check into flight now,” “check in to flight 12345,” and the like, can all invoke the same action that directs the digital assistant to open up an appropriate application and perform the predefined set of tasks to achieve the same result. The aforementioned commands, however, may lack appropriate information (e.g., the specific airline). As one of ordinary skill may appreciate, a user may have multiple applications from various vendors associated with a similar service (e.g., airlines). While not described in detail herein, the referenced “Q” digital assistant provides features that can determine contextual information associated with the user, based on historical use of the digital assistant, stored profile information, stored parameters from previous interactions or commands, searches through email folders, and a variety of other types of information stored locally or remotely on a server, such as server120, to identify an appropriate parameter and determine a complete command to invoke the appropriate action. More specific commands, such as “check into FriendlyAirline flight,” or “FriendlyAirline check in,” and the like, can be recognized by a digital assistant, such as the “Q” assistant, to invoke the appropriate action based on the complete command received thereby. 
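To make the idea of completing an under-specified command concrete, the following sketch resolves a bare "check in" command using stored context. The context keys and the resolution rule are assumptions chosen only to illustrate the behavior described above.

# Illustrative sketch: complete an under-specified command using stored contextual data.
from typing import Dict, Optional

CONTEXT: Dict[str, str] = {
    # e.g., gleaned from a confirmation email or a previous command (assumed keys)
    "upcoming_flight_airline": "FriendlyAirline",
    "upcoming_flight_number": "12345",
}

def complete_check_in_command(text: str, context: Dict[str, str]) -> Optional[str]:
    """Return a fully specified command, or None if it cannot be resolved."""
    equivalents = {"check in", "check into flight", "please check in", "check into flight now"}
    if text.lower().strip() in equivalents:
        airline = context.get("upcoming_flight_airline")
        if airline is None:
            return None                      # fall back to asking the user a follow-up question
        return f"{airline} check in"
    return text                              # already specific, e.g. "FriendlyAirline check in"

print(complete_check_in_command("check in", CONTEXT))   # -> "FriendlyAirline check in"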
One or more recognizable commands and corresponding actions can be received by the digital assistant114from the server120at any time, including upon installation, initialization, or invocation of the digital assistant114, after or upon receipt of the speech command by the digital assistant114, after or upon installation of a new application, periodically (e.g., once a day), when pushed to the client110from the server120, among many other configurations. It is contemplated that the commands and corresponding actions received by the client110are limited based at least in part on the applications112installed on the client110, although configurations where a larger or smaller set of commands and actions can be received are also contemplated. In the event a command and/or action is not available for a particular application installed on the client110, the digital assistant114can either redirect the user to a marketplace to install the appropriate application, or include a training feature that enables a user to manually perform tasks on one or more applications to achieve the desired result. The digital assistant114can also receive one or more commands from the user (e.g., via speech) to associate with the tasks manually performed or to be manually performed during training. In this way, the command is associated with at least the particular application designated by the user and also corresponds to the one or more tasks manually performed by the user, associating the received command to the task(s) and the desired result. In some instances, the server120can provide a command and/or action for the received command based on crowd-sourced commands and/or actions collected from (e.g., submitted by or received from) client devices115a-115nalso independently having a digital assistant114and applications112installed thereon. The client devices115a-115nmay have any combination of applications112installed thereon, and any training of commands and actions performed on any client device110,115a-115ncan be communicated to the server120to be analyzed and stored for mass or selective deployment. Although not described in more detail herein, the server120can include various machine-learned algorithms to provide a level of quality assurance on user-trained commands and actions before they are distributed to other users via the network130. When the digital assistant114determines an appropriate action (e.g., one or more tasks to achieve a desired result) that corresponds to the received command, the digital assistant114performs the automated tasks and an output of the application(s)112may be generated by the application(s)112and communicated verbally to the user by the digital assistant114, another digital assistant running on the client110or another device, or communicated textually or in another format to the same or another application for further action. Referring now toFIG.2, a block diagram200is provided to illustrate an exemplary implementation of a client110having one or more applications112installed thereon and a digital assistant114in accordance with some embodiments of the present disclosure. As noted herein, the client110can include a memory205for storing, among other things, a command and action library207and contextual data209associated with the client110or application(s)112and/or a profile associated with the client110. The command and action library207can include, among other things, a dataset of recognizable commands and corresponding actions. 
The commands and actions stored in the library207may be limited to the applications currently installed on the client110, or may include a collection of commonly used (e.g., popular) applications installed by a larger population of clients, such as clients115a-115nofFIG.1. In some aspects, the commands and actions can be further limited based on versions of the application or the platform (e.g., operating system) on which the applications are executed. While storage of a larger dataset of recognizable commands and corresponding actions is preferable for offline availability of the digital assistant, in some instances the command and action library207can only include a single command or a small set of commands and corresponding action(s) retrieved from a server, such as server120, based on the command(s) recently received by the digital assistant114. The contextual data209can include a variety of information including device information, application information, profile information, and historical information. The device information can include current device location data (e.g., GPS coordinates), surrounding signal data (e.g., recognized wireless signals, Bluetooth, cellular, NFC, RFID, Wi-Fi, etc.), among other things. The application information can include common phrases corresponding to outputs of the application(s) (i.e., incorporation of the output in a complete sentence or phrase as might be commonly spoken), entities corresponding to the outputs of the application(s) (e.g., name, brand, location, price, etc.), and/or units of measurement of the outputs. The profile information can include user demographic information (e.g., gender, location, income, occupation, etc.), personal preferences (e.g., foods, entertainment, sports teams, etc.), relationships (e.g., other users also having digital assistant114on their respective computing devices, social network connections, etc.), calendar information (e.g., appointments, times, locations, attendees, etc.), and the like. Historical information can include commands or portions thereof previously provided to and/or recognized by the digital assistant114, device information history, profile information history, among other things. The stored command and action library207and contextual data209stored in memory205can provide the digital assistant114with information that can be analyzed and employed to provide relevant and useful information to a client110user when automated actions are being performed. To implement various embodiments described herein, the digital assistant112can include, among other things, an application indexing component210, a speech-to-text component220, a contextual data determining component230, an automation engine240, and an output component250. The described components are not intended to be limited to the specific structure, order, or devices described herein, and can be implemented in such ways where operations described therein can be swapped, intermixed, or modified to achieve the same or similar results described within the purview of the present disclosure. The application indexing component210of the digital assistant114can scan an index of applications installed on the client110to identify a set or “index” of applications particular to the client110. In this way, in accordance with some embodiments, the digital assistant114can employ the data obtained by application indexing component210and determine the specific set of commands available to the user for the applications currently installed on the client110. 
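One way to picture the command and action library207and the contextual data209categories listed above is with simple data shapes like the following; the field names and sample values are assumptions that merely mirror the categories described, not a defined schema.

# Illustrative data shapes for a command/action library and a contextual data record.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CommandEntry:
    phrases: List[str]        # equivalent spoken commands
    app_package: str          # application the action runs against (assumed identifier)
    action_id: str            # key into the library of predefined actions

@dataclass
class ContextualData:
    device: Dict[str, str] = field(default_factory=dict)       # GPS, nearby signals, ...
    application: Dict[str, str] = field(default_factory=dict)  # output phrases, entities, units
    profile: Dict[str, str] = field(default_factory=dict)      # preferences, relationships, calendar
    history: List[str] = field(default_factory=list)           # previously recognized commands

library: List[CommandEntry] = [
    CommandEntry(["my run", "how far did I run"], "com.example.runtracker", "read_distance"),
]
context = ContextualData(device={"gps": "37.33,-121.89"}, history=["check in"])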
This information can be employed by the digital assistant114, for instance via output component250, to identify relevant suggestions for communicating output from applications currently installed on the client110. Embodiments are not necessarily limited to the foregoing, and other embodiments consider that the index of applications can be submitted to the server120, stored in contextual data209, or any combination thereof. The speech-to-text component220of the digital assistant114can receive audio input data via, by way of example, a microphone coupled to the client110. The audio data, including speech data, can then be processed by the digital assistant114and converted into a string of text. This string of text can include, among other things, keywords, parameters, fillers, or any other aspect of a spoken language that is relayed by a user to the digital assistant114via speech communication. It is contemplated that the spoken language is in any language in which the digital assistant114is capable of handling, which can be based on a command and action library207including commands in the spoken language, or a translation engine that can be employed to translate the spoken language into a command that is then interpreted by the digital assistant114in the native language of the predefined commands or translate an output into a spoken language of the user. The contextual data determining component230can, among other things, retrieve contextual data209from one or more components of or in communication with the client110, including the application(s)112. In addition, the contextual data determining component230can facilitate the interpretation or completion of the string of text generated by the speech-to-text component220or the interpretation or completion of the output generated by the output component250. As described, the speech-to-text component220merely generates a converted string of text from received speech data while the output component250generates converted speech data from talk back objects selected from a view of an application at a specific state (which may include text). In some embodiments, the contextual data determining component230can employ the contextual data209stored in memory205to facilitate the generation or completion of an appropriate command recognizable by the client110(e.g., mapped to an installed application based on application indexing component210, or available in command and action library207) or to facilitate the generation or completion of an appropriate phrase or sentence corresponding to an output of an application(s)112. The client110may itself, or employing server120via remote communications, employ machine-learned models to either replace the string of text generated by speech-to-text component220or complete the string of text to provide a recognizable command to the digital assistant114based on equivalencies determined via the machine-learned model. Similarly, the client110may itself, or employing server120via remote communications, employ machine-learned models to either replace the output of an application(s)112or complete a phrase or sentence corresponding to the output of an application(s)112to provide a recognizable response to the user in response to the command (which as described herein may be based in part on context corresponding to the output of the application(s)112and stored as contextual data209). The automation engine240can perform a series of steps or “tasks” defined in an action that corresponds to the received command. 
Each task can be performed automatically by the digital assistant114by emulating button presses, pauses, responsive inputs, conditional inputs, or other inputs typically provided by a user, accessing application deep links or URLs that can invoke specific operations of one or more applications, and other operations that are necessary to achieve the desired result of performing all tasks associated with an action. With reference now toFIG.3, the automation engine240can include, among other things, an action governing component310and a redirect component320. In various embodiments, the action governing component310can determine when to initiate a selected action (e.g., based on a determination that the command is defined in the command and action library207), determine when various operations or tasks of an action are completed so that subsequent operations or tasks of the action can be initiated, (e.g., based on expected GUI events generated by an application), and determine when an action is fully completed (e.g., also based on expected GUI events or after completion of the final task (i.e., the talk back output)) to provide a result or confirmation to the user that the desired action was successfully executed. Further, a specific element can be identified to be used as a talk back object to be retrieved and conveyed back to the user upon completion of the task. In some instances, the action governing component310can terminate a task based on a received command. The redirect component320facilitates communications between the output component250and another digital assistant running on the client110, another device, or another application(s)112. In this way, the output can be redirected such that a different digital assistant verbally communicates the output to the user or a digital assistant running on a different device verbally communicates the output to the user. In some embodiments, the output may be redirected to a different application running on the client110or a different device. Looking now toFIGS.4-10, illustrations are provided to depict an exemplary workflow for setting up talk back automation for an output corresponding to an application in accordance with various embodiments of the present disclosure. The following steps can be facilitated by digital assistant such as digital assistant114, having an automation engine such as automation engine240ofFIGS.2and3. The following description is merely exemplary and is not intended to be limiting in any way. Features and illustrations depicted in the figures are only provided to show an exemplary implementation, and are not to limit the sequence of events or layout of the graphical user interface. The illustrations ofFIGS.4-10are provided herein to provide a clear depiction of a sequence of events in talk back automation. With reference toFIG.4, illustrating a first step of the talk back automation process, a computing device410is provided having a display420. On the display420is an indicator402that the talk back feature for the digital assistant (e.g., digital assistant114) has been activated. It is contemplated that the talk back feature setup can be activated by a voice prompt (e.g., “talk back”), a physical input (e.g., a long button press), gesture, or the like. Talk back options406depicts an option to edit talk back features that have already been added by the user, an option to record a new talk back feature, or cancel out of the options and close the talk back feature setup. 
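Returning to the automation engine240and action governing component310described above, the sketch below shows one way an action's tasks might be executed in sequence, gated by expected GUI events, before a talk back object is read back. The event handling is simulated, and every name here is an assumption rather than the engine's actual API.

# Illustrative automation loop: perform each task, wait for an expected GUI event, then talk back.
import time
from typing import Callable, List, Optional, Tuple

Task = Tuple[Callable[[], None], str]   # (perform the step, GUI event that signals completion)

def wait_for_gui_event(expected: str, timeout_s: float = 5.0) -> bool:
    # Placeholder: a real engine would observe accessibility/GUI events from the application.
    time.sleep(0.01)
    return True

def execute_action(tasks: List[Task], talk_back_selector: Callable[[], Optional[str]]) -> Optional[str]:
    for perform, expected_event in tasks:
        perform()
        if not wait_for_gui_event(expected_event):
            return None                  # the governing component aborts the action
    return talk_back_selector()          # element identified during recording, read back to the user

result = execute_action(
    tasks=[(lambda: print("open run tracker"), "main_view_loaded"),
           (lambda: print("tap weekly summary"), "summary_view_loaded")],
    talk_back_selector=lambda: "59 miles",
)
print("talk back output:", result)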
Upon selecting the record option408, the user is able to navigate through the flow of an application to a desired state. Moving on toFIG.5, illustrating a second step of the talk back automation process, a home screen or “app” selection screen of the computing device510may be presented in response selecting the record option408. In other words, once the record option408has been selected, the user selects the appropriate application for the desired talk back feature. As noted the illustration is merely exemplary, and it is contemplated that an application can be activated in any number of ways, including emulated navigation and touches, deep links, API calls, and the like. InFIG.5, the user has selected a simple application that tracks the distance a user runs as indicated by the text “I ran 59 miles”512. Moving now toFIG.6, illustrating a third step of the talk back automation process, a view of the application at a specific state610is provided in response to the selection of and navigation of the application by the user. Here, the specific state identifies the distance a user runs. Although the specific state610as illustrated provides a simple display, it should be appreciated that any number of objects may be presented within the view at the specific state. The user may identify the talk back object (i.e., the desired output selected from the objects)602such as by highlighting the talk back object602and a command that is not otherwise defined by the application (e.g., a long button press). Once the talk back object602has been selected, the digital assistant may determine an entity corresponding to the talk back object602. Looking now toFIG.7, illustrating a fourth step of the talk back automation process, a command selection interface710is presented in response to the selected talk back object described inFIG.6. As depicted, a user is prompted to enter a name for the talk back feature. Continuing the example of the simple application that tracks the distance a user runs, a user may name the talk back feature “how far did I run”702. Upon naming the talk back feature, and referring now toFIGS.8-10, the user is prompted to add various commands802that, when communicated to the digital assistant, activate the talk back feature. For example, a user may create a command that activates the talk back feature for the running application when the user speaks “my run”804, “say how long did I run”902, or “give me running details”1002. Once the user has added all desired commands for the particular talk back feature, the user may select the submit button1006to save the talk back feature, which may be further stored at a server for use by other users. Turning now toFIG.11, illustrating the first step of the talk back automation process again, a computing device1110is provided having a display1120. Again, on the display1120is an indicator1102that the talk back feature for the digital assistant (e.g., digital assistant114) has been activated. Now, talk back options1106depicts an option to edit talk back features, including the illustrated “how long did I run” talk back feature1109, that have already been added by the user. Upon selecting the “how long did I run” talk back feature1109, the user is able to make any changes to the talk back feature, as desired. 
In various embodiments, the digital assistant114can determine additional context for the received command (for instance, by employing contextual data determining component230) based on keywords and/or parameters parsed out of the command string generated by speech-to-text component220ofFIG.2. In further embodiments, the additional context can be determined based on contextual data209stored in memory205. As described with regard to contextual data determining component230, the determined additional context can provide clarifying or additional context to the digital assistant114, such that a proper command to perform and/or an actual intent of the user can be determined. In this regard, in one embodiment, when a received command is recognized by the digital assistant114, an analysis of the command and action library207can be performed by the digital assistant114to identify one or more additional commands (e.g., command suggestions) that are predefined and recognizable by the digital assistant114. An additional command or "command suggestion" can be identified based on a determination by the digital assistant114that the additional command's corresponding action is performed on the same application(s) associated with the received command and talk back object, or a determination by the digital assistant114that the received command can also be invoked on different application(s) installed or available for installation on the client110. Additionally or alternatively, in various embodiments, the digital assistant114can determine additional context for the received talk back object (for instance, by employing contextual data determining component230) based on metadata, keywords, and/or parameters parsed out of the identified objects in the view of the application at the specific state. In further embodiments, the additional context can be determined based on contextual data209stored in memory205. As described with regard to contextual data determining component230, the determined additional context can provide clarifying or additional context to the digital assistant114, such that a proper talk back output is identified and communicated to the user or another application. For example, the additional context may indicate that a running application has been selected and a distance is the talk back object. Accordingly, the additional context may facilitate the output component250to verbally communicate to the user "You ran 59 miles today." Similarly, if the user has provided additional parameters in the command to ask how far the user ran last Tuesday, the additional context (indicating a distance ran last Tuesday, but indicated in a different language) may facilitate the output component250to verbally communicate to the user "You ran 59 miles last Tuesday" translated to the language of the user. In some embodiments, the additional context may facilitate the output component250to provide the talk back output in a format recognizable by the same or a different application as an input. In this regard, in one embodiment, when a received talk back object is recognized by the digital assistant114, an analysis of the talk back object, the corresponding metadata, and the command and action library207can be performed by the digital assistant114to identify one or more entities of the talk back object. 
The entities may be utilized to identify suggested sentences or phrases that are predefined and recognizable by the digital assistant114such that the talk back output can be communicated to the user in an understandable and meaningful manner. Moreover, any necessary translations or transformations of the talk back output may also be initiated based on such analysis, for example, the price of a product can be converted into the right currency based on the location of the user. Turning now toFIG.12, a flow diagram1200is provided to illustrate a method of facilitating talk back automation. As shown at block1210, a digital assistant, such as digital assistant114ofFIGS.1and2executing on a client such as client110ofFIGS.1and2, receives a talk back add command to add a talk back feature that corresponds to an output of an application installed on the mobile device. In some embodiments, the talk back feature is embodied in a JSON data structure. At block1220, a talk back recording mode based on the received talk back add command is initiated by the digital assistant. In the talk back recording mode a set of events that corresponds to a workflow of the application is recorded. At block1230, a view of the application is analyzed, by the digital assistant, at a specific state to identify objects within the view. The specific state is selected by the user, such as by the user navigating to the specific state of the application in the talk back recording mode. At block1240, a selection of a talk back object selected from the identified objects is received by the digital assistant. The talk back object may be an output of the application at the specific state and may include metadata. At block1250, the digital assistant generates the talk back output corresponding to the application based on the recorded set of events and the selected talk back object. The talk back feature may be stored on a server, where access to the talk back feature is enabled for use by other users via digital assistants corresponding to the other users. In embodiments, the digital assistant further receives one or more commands selected by a user. When the commands are communicated to the digital assistant by the user, the workflow of the application is initiated and the talk back output is provided to the user. The digital assistant may be configured to invoke a particular talk back feature responsive to a detected utterance that corresponds to a particular command of the one or more commands. In some embodiments, each command of the one or more commands includes a string representation of a detected audio input. In this way, each command of the one or more commands is generated by the mobile device employing a speech-to-text operation. Accordingly, the detected audio input may be a spoken utterance detected by the digital assistant via at least one microphone of the mobile device. In some embodiments, each command of the one or more commands includes at least one parameter to clarify what output should be provided by the application. In various embodiments, the talk back output is provided to a device other than the mobile device, to the mobile device, or is communicated as an input to another or the same application. In some embodiments, an entity type for the talk back object is determined. 
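Because the talk back feature may be embodied in a JSON data structure (as noted above) but no particular fields are specified, the following is one assumed shape that captures the recorded events, the selected talk back object, and the user-defined commands from the walkthrough.

# One possible (assumed) JSON shape for a recorded talk back feature.
import json

talk_back_feature = {
    "name": "how far did I run",
    "application": "com.example.runtracker",            # assumed package name
    "commands": ["my run", "say how long did I run", "give me running details"],
    "recorded_events": [                                 # workflow captured in recording mode
        {"type": "launch", "target": "com.example.runtracker"},
        {"type": "tap", "target": "weekly_summary_button"},
    ],
    "talk_back_object": {                                # object selected at the specific state
        "view_id": "distance_label",
        "metadata": {"entity_type": "distance", "unit": "miles"},
    },
}
print(json.dumps(talk_back_feature, indent=2))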
The entity type may be determined by the digital assistant based on the metadata (e.g., such as by employing machine learning to associate certain entity types with contents of the metadata and suggest the context to add to the talk back output). Additionally or alternatively, the digital assistant may provide suggestions to the user for selection of the appropriate entity type and/or context, e.g., recognized city, price information, zip code, etc. In other embodiments, a user may select the entity type. Context corresponding to the entity type may be added to the talk back output. For example, the metadata may provide details related to the output that might add clarification for the user. If the user asked how far was the longest run of the year for the user, the metadata might reveal a unit of measurement (e.g., miles), the day of the run (e.g., Saturday, Dec. 2, 2017), or even a starting and ending point, and detailed template as to how the talk back statement is constructed. As a result, the digital assistant may respond, “You ran 13.1 miles from your home to your office on Saturday, Dec. 2, 2017.” It is possible to associate default behaviors and templates to specific entity types in the talk back object. Referring now toFIG.13, a flow diagram is provided that illustrates a method of providing talk back output. As shown at block1310, a command from a user is received by a digital assistant. The command includes a string representation of a detected audio input and is generated by the mobile device employing a speech-to-text operation. The detected audio input is a spoken utterance detected by the digital assistant via at least one microphone of the mobile device. At block1320, in response to the received command, a workflow of an application is initiated. The workflow corresponds to a set of events. At block1330, a talk back output corresponding to the application is provided. The talk back output includes context based on an entity type. Additionally, the talk back output is based on the set of events and a selection of a talk back object selected from a view of the application at a specific state. The talk back object includes metadata that enables the entity type to be determined. Having described embodiments of the present invention, an exemplary operating environment in which embodiments of the present invention may be implemented is described below in order to provide a general context for various aspects of the present invention. Referring initially toFIG.14in particular, an exemplary operating environment for implementing embodiments of the present invention is shown and designated generally as computing device1400. Computing device1400is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device1400be interpreted as having any dependency or requirement relating to any one or combination of components illustrated. The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules including routines, programs, objects, components, data structures, etc., refer to code that perform particular tasks or implement particular abstract data types. 
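The entity-type and template behavior described above can be illustrated with a small sketch that fills a per-entity template from the talk back value and its metadata; the templates and metadata keys are assumptions, not defaults defined by this disclosure.

# Illustrative template-based construction of a talk back statement from a value and metadata.
from typing import Dict

TEMPLATES: Dict[str, str] = {
    # assumed default template per entity type
    "distance": "You ran {value} {unit} from {start} to {end} on {date}.",
    "price": "The price is {value} {currency}.",
}

def build_talk_back(entity_type: str, value: str, metadata: Dict[str, str]) -> str:
    template = TEMPLATES.get(entity_type, "{value}")
    return template.format(value=value, **metadata)

print(build_talk_back(
    "distance", "13.1",
    {"unit": "miles", "start": "your home", "end": "your office", "date": "Saturday, Dec. 2, 2017"},
))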
The invention may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. The invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. With reference toFIG.14, computing device1400includes a bus1410that directly or indirectly couples the following devices: memory1412, one or more processors1414, one or more presentation components1416, input/output (I/O) ports1418, input/output components1420, and an illustrative power supply1422. Bus1410represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks ofFIG.14are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. The inventor recognizes that such is the nature of the art, and reiterates that the diagram ofFIG.14is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “hand-held device,” etc., as all are contemplated within the scope ofFIG.14and reference to “computing device.” Computing device1400typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device1400and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device1400. Computer storage media does not comprise signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. Memory1412includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. 
Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device1400includes one or more processors that read data from various entities such as memory1412or I/O components1420. Presentation component(s)1416present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. I/O ports1418allow computing device1400to be logically coupled to other devices including I/O components1420, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The I/O components1420may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition (as described in more detail below) associated with a display of the computing device1400. The computing device1400may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, touchscreen technology, and combinations of these, for gesture detection and recognition. Additionally, the computing device1400may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device1400to render immersive augmented reality or virtual reality. As can be understood, embodiments of the present invention provide for, among other things, optimizing display engagement in action automation. The present invention has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present invention pertains without departing from its scope. From the foregoing, it will be seen that this invention is one well adapted to attain all the ends and objects set forth above, together with other advantages which are obvious and inherent to the system and method. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations. This is contemplated by and is within the scope of the claims. The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
47,919
11862157
DETAILED DESCRIPTION U.S. patent application Ser. No. 17/184,207 describes a system in which a machine learning algorithm (e.g., an artificial intelligence (AI) engine) monitors a conversation between a customer and an employee at a restaurant. As the system is monitoring the conversation, the system interacts with a point-of-sale (POS) terminal to add to, subtract from, or modify the contents of a cart, or any combination thereof. For example, if the customer is placing an order for one or more food items, the system may automatically add contents to the cart based on the customer's voice input. To illustrate, if the customer says “Two large pepperoni pizzas,” then the system automatically (e.g., without human interaction) adds two large pepperoni pizzas to the cart. Thus, the employee verbally interacts with the customer without interacting with the point-of-sale terminal, while the system interacts with the point-of-sale terminal. The employee observes the system modifying the contents of the cart while the employee is verbally interacting with the customer. The employee may interact with the point-of-sale terminal to make corrections if the system makes an error. The system may provide upsell suggestions to the employee to provide to the customer. The upsell suggestions may include increasing a size of an item ordered by the customer (e.g., “Would you like an extra-large instead of a large for just two dollars more?”), adding an item (e.g., “Would you like to add something to drink?”), or both. The upsell suggestions may be provided to the employee, for example, audibly (e.g., via an earpiece) or visually (e.g., displayed on the point-of-sale terminal). In addition, the system may be used to train new employees by prompting them as to what to say to the customer during a conversation to take an order. The conversation data that includes the verbal interaction between the employee and the customer when the customer is placing an order is archived. The conversation data is used to train an AI engine to provide a software agent (e.g., sometimes referred to as a “chat bot”). By using a large quantity of conversation data between human employees and human customers to train the software agent, the software agent is able to mimic the way in which a human employee takes an order in such a way that the human customer may be unaware that they are interacting with a software agent rather than a human employee. The term human employee refers to any human employed on behalf of the commerce company to take orders, including employees (including contractors and the like) at a call center run by the commerce site or a 3rd party. In this way, a human employee is replaced by a software agent to take an order from a customer, thereby saving the restaurant money and increasing profit margins. As a first example, a method performed by a server includes receiving, by a software agent (e.g., “chat bot”) executing on the server, a communication comprising a first utterance from a customer. The utterance may be the customer's voice or customer input received via a website or a software application (“app”). For example, receiving, by the software agent executing on the server, the communication comprising the first utterance from the customer may include receiving audio data that includes the first utterance, converting the audio data to text using a speech-to-text converter, and performing post processing on the text to create a corrected utterance. 
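A minimal sketch of the receive-utterance step in the first example (audio in, speech-to-text, then post processing into a corrected utterance) is shown below. The speech-to-text call is a stub and the correction rule is an assumption used only to illustrate the post-processing stage.

# Illustrative pipeline: audio data -> speech-to-text -> post-processed ("corrected") utterance.
import re

def speech_to_text(audio_bytes: bytes) -> str:
    # Placeholder for a real speech-to-text converter.
    return "two large peperoni pizzas"

CORRECTIONS = {"peperoni": "pepperoni"}   # e.g., menu-aware spelling fixes (assumed)

def post_process(text: str) -> str:
    words = [CORRECTIONS.get(w, w) for w in re.findall(r"[a-z']+|\d+", text.lower())]
    return " ".join(words)

def receive_utterance(audio_bytes: bytes) -> str:
    return post_process(speech_to_text(audio_bytes))

print(receive_utterance(b"<audio>"))   # -> "two large pepperoni pizzas"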
The method may include predicting, using an intent classifier, a first intent of the first utterance. For example, the intent classifier may determine whether the first intent is order-based or menu-based. Based on determining that the first intent is order-related, the method may include predicting, using a dish classifier, a cart delta vector based at least in part on the first utterance. The method includes modifying a cart associated with the customer based on the cart delta vector. Modifying the cart associated with the customer based on the cart delta vector may include: adding a new item to the cart, deleting a current item from the cart, modifying an existing item in the cart, or any combination thereof. The method includes predicting, using a dialog model, a first dialog response based at least in part on the first utterance, and providing the first dialog response, by the software agent, to the customer using a text-to-speech converter. For example, predicting, using the dialog model, the first dialog response based at least in part on the first utterance may include predicting the first dialog response based on (i) a plurality of candidate responses, (ii) a dialog policy, and (iii) an order context. The order context may include (1) an interaction history between the customer and the software agent, (2) a cart state of the cart associated with the customer, and (3) a conversation state of a conversation between the customer and the software agent. The conversation may include the first utterance and the first dialog response. The method may include receiving, by the software agent, a second utterance from the customer, predicting, using the intent classifier, a second intent of the second utterance, and based on determining that the second intent is menu-related, retrieving menu-related information based at least in part on the second utterance. The method may include predicting, using the dialog model, a second dialog response based at least in part on the second utterance and the menu-related information and providing the second dialog response to the customer using the text-to-speech converter. The method may include receiving, by the software agent, a third utterance from the customer, and predicting, using the intent classifier of the software agent, a third intent of the third utterance. Based on determining that the third intent is order-related, the method may include closing the cart. The method may include receiving payment information from the customer and initiating order fulfillment of items in the cart (e.g., preparing the items for takeout or delivery). As a second example, a server includes one or more processors and one or more non-transitory computer readable media (e.g., a memory device) to store instructions executable by the one or more processors to perform various operations. The operations include receiving, by a software agent executing on the server, a communication comprising a first utterance from a customer. The operations include predicting, using an intent classifier, a first intent of the first utterance. Based on determining that the first intent is order-related, the operations include predicting, using a dish classifier, a cart delta vector based at least in part on the first utterance. The operations include modifying a cart associated with the customer based on the cart delta vector.
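For illustration only, the first example method above can be pictured as a single turn-handling routine. The following Python sketch is not part of the disclosed embodiment; the speech-to-text, post-processor, classifier, dish-classifier, dialog-model, menu-lookup, and text-to-speech callables it accepts are hypothetical placeholders for the components described in this disclosure.

```python
# Sketch of one conversational turn, following the method described above.
# All model objects are hypothetical placeholders passed in by the caller.

def handle_turn(audio, order_context, cart, speech_to_text, post_processor,
                intent_classifier, dish_classifier, dialog_model,
                menu_lookup, text_to_speech):
    text = speech_to_text(audio)                      # audio data -> raw text
    utterance = post_processor(text)                  # corrected utterance

    intent = intent_classifier(utterance, order_context)
    if intent == "order":
        cart_delta = dish_classifier(utterance, order_context)
        cart.apply(cart_delta)                        # add/remove/modify items
    elif intent == "menu":
        order_context["menu_info"] = menu_lookup(utterance)

    response = dialog_model(utterance, order_context)
    order_context["history"].append((utterance, response))  # interaction history
    return text_to_speech(response)                   # spoken dialog response
```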
For example, modifying the cart associated with the customer based on the cart delta vector may include adding a new item to the cart, deleting a current item from the cart, modifying an existing item in the cart, or any combination thereof. The new item, the current item, and the existing item correspond to menu items in a menu associated with a restaurant. The operations include predicting, using a dialog model, a first dialog response based at least in part on the first utterance. For example, predicting, using the dialog model, the first dialog response based at least in part on the first utterance may include predicting the first dialog response from a plurality of candidate responses based on a dialog policy and an order context. The order context may include an interaction history between the customer and the software agent, a cart state of the cart associated with the customer, and a conversation state of a conversation between the customer and the software agent. The conversation may include the first utterance and the first dialog response. The operations include providing the first dialog response, by the software agent, to the customer using a text-to-speech converter. The operations include receiving, by the software agent, a second utterance from the customer. The operations include predicting, using the intent classifier, a second intent of the second utterance. Based on determining that the second intent is menu-related, the operations include retrieving menu-related information based at least in part on the second utterance. The operations include predicting, using the dialog model, a second dialog response based at least in part on the second utterance and the menu-related information. The operations include providing the second dialog response to the customer using the text-to-speech converter. The operations include receiving, by the software agent, a third utterance from the customer. The operations include predicting, using the intent classifier of the software agent, a third intent of the third utterance. Based on determining that the third intent is order-related, the operations include closing the cart. The operations include receiving payment information from the customer. The operations include initiating order fulfillment of items in the cart. As a third example, a non-transitory computer-readable storage medium, such as a memory device, may be used to store instructions executable by one or more processors to perform various operations. The operations include receiving, by a software agent executing on a server, a communication comprising a first utterance from a customer. The operations include predicting, using an intent classifier, a first intent of the first utterance. The operations include, based on determining that the first intent is order-related, predicting, using a dish classifier, a cart delta vector based at least in part on the first utterance. The operations include modifying a cart associated with the customer based on the cart delta vector. For example, modifying the cart associated with the customer based on the cart delta vector may include adding a new item to the cart; deleting a current item from the cart; modifying an existing item in the cart; or any combination thereof. The operations include predicting, using a dialog model, a first dialog response based at least in part on the first utterance. 
For example, predicting, using the dialog model, the first dialog response based at least in part on the first utterance may include predicting the first dialog response based on a plurality of candidate responses and based on a dialog policy and an order context. The order context may include (i) an interaction history between the customer and the software agent, (ii) a cart state of the cart associated with the customer, and (iii) a conversation state of a conversation between the customer and the software agent. The conversation may include the first utterance and the first dialog response. The operations include providing the first dialog response, by the software agent, to the customer using a text-to-speech converter. The operations may include receiving, by the software agent, a second utterance from the customer. The operations may include predicting, using the intent classifier, a second intent of the second utterance. The operations may include, based on determining that the second intent is menu-related, retrieving menu-related information based at least in part on the second utterance. The operations may include predicting, using the dialog model, a second dialog response based at least in part on the second utterance and the menu-related information. The operations may include providing the second dialog response to the customer using the text-to-speech converter. The operations may include receiving, by the software agent, a third utterance from the customer. The operations may include predicting, using the intent classifier of the software agent, a third intent of the third utterance. The operations may include, based on determining that the third intent is order-related, closing the cart. The operations may include receiving payment information from the customer. The operations may include initiating order fulfillment (e.g., takeout or delivery) of items in the cart. FIG.1is a block diagram of a system100that includes a server to host software, according to some embodiments. The system100includes a representative employee-assistance point-of-sale (EA-POS) device102, a consumer device104, and one or more server(s)106connected to each other via one or more network(s)108. The server106may include an AI engine110(e.g., a machine learning algorithm), a natural language processing (NLP) pipeline112, and a software agent116. A customer may use the customer device104to initiate a call to a commerce site, such as a restaurant132. A restaurant is used merely as an example and it should be understood that the systems and techniques described herein can be used for other types of commerce, such as ordering groceries, ordering non-perishable items and the like. In some cases, a human employee may receive the call and the AI engine110may monitor the conversation111, including utterances115of the customer and responses113. Initially, the responses113may be from a human employee of the restaurant132. The AI engine110may determine which items from a menu140of the restaurant132the customer is ordering. The AI engine110may monitor the conversation111between the customer and the employee and automatically (e.g., without human interaction) modify a cart126hosted by the EA-POS device102. In other cases, a human employee may receive the call, the AI engine110may monitor the conversation between the customer and the employee, and monitor what the employee enters into the EA-POS device102. 
The employee entries may be used as labels when training the AI engine110and various machine learning (ML) models in the NLP pipeline112. The AI engine110may use a dictionary118to identify words in the conversation. The AI engine110may keep a running track of an order context120associated with each particular order. The order context120may include order data associated with previously placed orders by each customer, trending items in a region in which the customer is located, specials/promotions (e.g., buy one get one free (BOGO), limited time specials, regional specials, and the like) that the restaurant132is currently promoting (e.g., on social media, television, and other advertising media), and other context-related information. The order context120may include user preferences, such as gluten allergy, vegan, vegetarian, or the like. The user may specify the preferences or the AI engines110may determine the preferences based on the customer's order history. For example, if the customer orders gluten-free products more than once, then the AI engines110may determine that the customer is gluten intolerant and add gluten intolerance to the customer's preference file. As another example, if the customer orders vegan or vegetarian items (or customizes menu items to be vegan or vegetarian) then the AI engines110may determine that the customer is vegan or vegetarian and add vegan or vegetarian to the customer's preference file. The cart126may include other information as how the order is to be fulfilled (e.g., pickup or delivery), customer address for delivery, customer contact information (e.g., email, phone number, etc.), and other customer information. The customer may use a payment means, such as a digital wallet128, to provide payment data130to complete the order. In response, the restaurant132may initiate order fulfillment134that includes preparing the ordered items for take-out, delivery, or in-restaurant consumption. Such conversations between human employees and customers may be stored as conversation data136. The conversation data136is used to train a software agent116to take orders from customers in a manner similar to a human employee, such that the customers may be unaware that they are interacting with the software agent116rather than a human employee. Subsequently (e.g., after the software agent116has been trained using the conversation data136), when the customer uses a customer device104to initiate a communication to the restaurant132to place an order, the communication may be routed to the software agent116. The customer may have a conversation111that includes utterances115of the customer and responses113by the software agent116. In most cases, the conversation111does not include an employee of the restaurant. The conversation may be routed to a human being under particular exception conditions, such as due to an inability of the software agent116to complete the conversation111or the like. The conversation111may include voice, text, touch input, or any combination thereof. For example, in some cases, the conversation111may include the voice of the customer and the responses113of the software agent116may be vocalized (e.g., converted into a synthesized voice) using text-to-speech technology. The conversation111may include text input and/or touch input in which the customer enters order information using a website, an application (“app”), a kiosk, or the like. 
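As a rough illustration of the preference inference described above (and not the actual AI engine110), a heuristic could count gluten-free and vegan/vegetarian items in the customer's order history; the tag names and the more-than-once threshold below are assumptions taken from the example in the text.

```python
# Illustrative-only heuristic for deriving dietary preferences from order
# history; tag names and thresholds are assumptions.

def infer_preferences(order_history):
    """order_history: list of orders, each order a list of item tag sets."""
    preferences = set()
    gluten_free_orders = sum(
        1 for order in order_history
        if any("gluten-free" in tags for tags in order))
    if gluten_free_orders > 1:                 # ordered gluten-free more than once
        preferences.add("gluten intolerant")
    for diet in ("vegan", "vegetarian"):
        if any(any(diet in tags for tags in order) for order in order_history):
            preferences.add(diet)
    return preferences

history = [[{"pizza", "gluten-free"}], [{"wings"}], [{"pasta", "gluten-free"}]]
print(infer_preferences(history))   # {'gluten intolerant'}
```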
One or more of the utterances115may result in the server106sending a cart update124to update a cart126at the point-of-sale device102. The AI engine110may determine (e.g., predict) recommendations114that the software agent116provides in the responses113as part of the conversation111. For example, the recommendations114may be based on items that the customer has previously ordered, items that are currently popular in the customer's region (e.g., zip code, city, county, state, country, or the like), and the like. To determine items that the customer previously ordered, the AI engine110may determine an identity of the customer based on, for example, an identifier (e.g., a phone number, an Internet protocol (IP) address, caller identifier, or the like) associated with the customer device104, voice recognition, facial recognition (e.g., in the case of a video call), or another identifying characteristic associated with the call initiated by the customer device104. After the customer has completed an order, the customer may provide payment data130, for example using an account (e.g., bank account, credit card account, debit card account, gift card account, or the like) stored in a digital wallet128. The payment data130may be sent to the point-of-sale device102to complete a checkout process for the cart126. After the payment data130has been received and the payment data processed, the restaurant132may initiate order fulfillment134, such as preparing the items in the order for take-out, delivery, in-restaurant dining, or the like. Thus, the system100includes an automated ordering system to enable customers to initiate and complete an order using voice, written text, or commands entered via a user interface (UI) provided by a website, an application (“app”) or the like. The system100is configured to enable the interactions between human customers and software agents116to be natural and human-like to such a degree that the human customers may conclude that they interacted with a human rather than a software agent. Thus, in so far as ordering food from a restaurant is concerned, the software agents116may pass the Turing test. The software agents116engage in human-like conversations in which the software agents116exhibit flexibility in the dialog. The software agents116are trained, based on the conversation data, to have an understanding of complex natural language utterances that take into account the nuances of oral and written communications, including both formal communications and informal communications. The term ‘utterance’ may include anything spoken or typed by a customer, including a word, a phrase, a sentence, or multiple sentences (including incomplete sentences that can be understood based on the context). The system100includes a voice ordering system that takes the utterances115of a customer and processes the utterances115through the Natural Language Processing (NLP) pipeline112(also referred to as a Natural Language Understanding (NLU) pipeline). The output of the NLP pipeline112are used by the server106to select: (1) a next one of the responses113that the software agent116provides the customer in the conversation111and (2) the cart updates124to update the cart126. The systems and techniques described herein provide a data-driven approach to the NLP pipeline112. 
The conversation data136includes hundreds of thousands of conversations between a human customer and a human employee and is used to train a supervised machine learning model (e.g., the software agents116) to make the responses113of the software agents116as human-like as possible. The conversation data136includes human-to-human conversations used to train a domain specific language model (e.g., the software agents116). The systems and techniques described herein take advantage of newly available language models that provide a greater capacity for leveraging contextual information over the utterances115(e.g., a word, a phrase, a sentence, or multiple sentences including incomplete sentences). Thus, an AI engine may be used to listen in on conversations between customers and human employees. The AI engine may automatically populate and modify a cart associated with an order that each customer is placing. The AI engine may automatically provide suggestions to the human employees on up-selling (e.g., adding items, increasing a size of ordered items, or both). The conversation data between customers and human employees may be stored to create a database of conversations associated with, for example, ordering food at a restaurant or another type of commerce site. The database of conversation data may be gathered over multiple months or years and used to train a machine learning algorithm, such as a software agent, to automatically take an order from a customer as if the customer was having a conversation with a restaurant employee. For example, given a conversation context and an utterance from the customer, the software agent determines and verbalizes (e.g., using text-to-speech) an appropriate and automated response using a natural language processing pipeline. FIG.2is a block diagram200of the natural language processing (NLP) pipeline112ofFIG.1, according to some embodiments. The NLP pipeline112may receive the utterances115of a customer (e.g., from the customer device104ofFIG.1). The NLP pipeline112may process audio data205that includes at least a portion of the utterances115using a speech-to-text converter206to convert the audio data205to text207. For example, the utterances115may be “I would like 2 large pizzas with pepperoni and mushrooms.” The order context120may include an interaction history222between the software agent116and the customer, a current cart state224, and a conversation state226. The interaction history222may include interactions between the customer and one of the software agents116, including the utterances115of the customer and the responses113of the software agent116. The cart state224identifies a state of the customer's cart including, for example, items in the cart, how many of each item is in the cart, a price associated with each item, a total price associated with the cart, whether payment has been received (e.g., whether the cart has been through check out), a most recent change (e.g., addition, subtraction, or modification) to one or more items in the cart, other cart related information, or any combination thereof. 
The conversation state226may indicate a state of the conversation between the customer and the software agent116, such as whether the conversation is in progress or has concluded, whether the customer is asked a question and is waiting for a response from the software agent116, whether the software agent116has asked a question and is waiting for a response from the customer, a most recent utterance from the customer, a most recent response from the software agent116, other conversation related information, or any combination thereof. The utterances115are provided by a customer that has called the restaurant132ofFIG.1to place an order. The utterances115are in the form of the audio data205. The speech-to-text converter206converts the audio205into text207. The text207is processed using an NLP post processor208that makes corrections, if applicable, to the text207to create corrected utterances211. For example, the text207may include an incorrect word that is plausible in the context and multiple similar sounding words may be equally plausible. The NLP post processor208may make corrections by identifying and correcting one or more incorrect words in the text207to create corrected utterances211. After the NLP post processor208processes the text207, the corrected utterances211are sent to the encoder210. The order context120, including the interaction history222, the cart state224, and the conversation state226, are provided to the encoder210in the form of structured data209. The structured data209includes defined data types that enable the structured data209to be easily searched. Unstructured data is raw text, such as “two pizzas with sausage and pepperoni”. Structured data may use a structured language, such as JavaScript Object Notation (JSON), Structured Query Language (SQL), or the like to represent the data. For example, “two pizzas with sausage and pepperoni” may be represented using structured data as: {“Quantity”: 2, “Item”: “Pizza”, “Modifiers”: [“Pepperoni”, “Sausage” ] }. In structured data209, each data item has an identifier or some fixed structured meaning and is not subject to natural language meaning or interpretation. The order context120captures where the customer and the software agent116are in the conversation111(e.g., what has already been said), what items are in the cart126, and the like. The encoder210of the NLP pipeline112receives the text207(in the form of the corrected utterances211) and the structured data209as input and predicts an utterance vector212. For example, the encoder210may use word2vec, a two-layer neural net, to process the text207to create the utterance vector212. The input to the NLP pipeline112is a text corpus and the output is a set of vectors, e.g., feature vectors that represent words in that corpus. The encoder210thus converts the text207into a numerical form that deep neural networks can understand. The encoder210looks for transitional probabilities between states, e.g., the likelihood that two states will co-occur. The NLP pipeline112groups vectors of similar words together in vector space to identify similarities mathematically. The vectors are distributed numerical representations of features, such as menu items. Given enough data, usage, and contexts during training, the encoder210is able to make highly accurate predictions about a word's meaning based on past appearances. 
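The structured data209could, for illustration, be serialized as JSON along the lines of the pizza example above; the field names and values below are assumptions for demonstration, not a schema mandated by this disclosure.

```python
import json

# Sketch of the structured order context supplied to the encoder alongside the
# corrected utterance; field names and values are illustrative only.
order_context = {
    "interaction_history": [
        {"speaker": "agent",    "text": "Welcome to XYZ, how can I help you?"},
        {"speaker": "customer", "text": "Two pizzas with sausage and pepperoni"},
    ],
    "cart_state": {
        "items": [{"Quantity": 2, "Item": "Pizza",
                   "Modifiers": ["Pepperoni", "Sausage"]}],
        "total": 24.00,            # made-up amount for the example
        "paid": False,
    },
    "conversation_state": {"status": "in_progress",
                           "awaiting": "customer_response"},
}

encoder_input = json.dumps(order_context)   # serialized structured data
print(encoder_input[:80])
```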
The predictions can be used to establish the word's association with other words (e.g., “man” is to “boy” what “woman” is to “girl”), or cluster utterances and classify them by topic. The clusters may form the basis of search, sentiment analysis, and recommendations. The output of the encoder210is a vocabulary in which each item has a vector attached to it, which can be fed into a deep-learning net or simply queried to detect relationships between words. For example, by using cosine as a similarity measure, no similarity is expressed as a 90 degree angle, while total similarity is a 0 degree angle, complete overlap. The encoder210may include a pre-trained language model232that predicts, based on the most recent utterances115and the current order context120, (1) how the cart126is to be modified and (2) what the software agent116provides as a response, e.g., dialog response220. The encoder210is a type of machine learning model for NLP that is a model pre-trained directly from a domain specific corpora. In some cases, the encoder210may use a Bidirectional Encoder Representations from Transformers (BERT), e.g., a transformer-based machine learning technique for natural language processing (NLP), to predict the utterance vector212. The encoder210may be a language model232that converts the text207of the utterances into a vector of numbers. The language model232may be fine-tuned to a specific domain, e.g., to ordering at a restaurant and that too, at a specific type of restaurant (e.g., pizza, wings, tacos, etc.). The training is based on the conversation data136that has been gathered over time between customers and employees who enter data in the EA-POS102. The employee entered data may be used as labels for the conversation data136when training the various machine learning models described herein. The language model232associates a specific utterance, e.g., “I want chicken wings”, with a specific action, e.g., entering a chicken wing order into the EA-POS102. The language model232predicts what items from the menu140are to be added to the cart126(e.g., based on one or more actions associated with the utterance115) and which items are to be removed from the cart126, quantities, modifiers, or other special treatments (e.g., preparation instructions such as “rare”, “medium”, “well done” or the like for cooking meat) associated with the items that are to be added and/or removed. In some aspects, the encoder210may be implemented as a multi-label classifier. Modifiers may include, for example, half pepperoni, half sausage, double cheese, and the like. In some cases, the language model232may be structured hierarchically, e.g., with pizza at a high level and modifiers at a lower level. Alternately, the language model232may use a flat system with every possible combination as a unique item. The utterance vector212may be used by three classifiers (e.g., a type of machine learning algorithm, such as a support vector machine or the like), including the dish classifier, the intent classifier213, and the dialog model218. For example, the utterance vector212may be used by the dish classifier214to predict a multiclass cart delta vector216. The multiclass cart delta vector216is used to modify the cart126. 
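As a concrete stand-in for the BERT-style encoder described above, the following sketch uses a publicly available pre-trained checkpoint (via the Hugging Face transformers library) to turn an utterance into a fixed-length vector. The disclosed system would instead use a language model232fine-tuned on the conversation data136, and the mean pooling shown here is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Generic pre-trained BERT checkpoint used only to show how an utterance can
# be converted into a numerical vector; not the domain-tuned encoder itself.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

utterance = "I would like 2 large pizzas with pepperoni and mushrooms"
inputs = tokenizer(utterance, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state      # shape: (1, tokens, 768)
utterance_vector = hidden.mean(dim=1).squeeze(0)    # simple mean pooling
print(utterance_vector.shape)                       # torch.Size([768])
```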
For example, in the cart delta vector216, the first position may indicate a size of the pizza, e.g., 1=small, 2=medium, 3=large, the second position may indicate a type of sauce, e.g., 0=no sauce, 1=1st type of sauce, 2=2nd type of sauce, the third position may indicate an amount of cheese, e.g., 0=no cheese, 1=normal cheese, 2=extra cheese, 3=double cheese, and the remaining positions may indicate the presence (e.g., 1) or the absence (e.g., 0) of various toppings, e.g., pepperoni, mushrooms, onions, sausage, bacon, olives, green peppers, pineapple, and hot peppers. Thus, (3, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0) is a vector representation of a large pizza with the first type of sauce, a normal amount of cheese, and pepperoni. If the utterances115includes “I'd like double cheese”, then the vector representation may change to (3, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0), resulting in a corresponding change to the cart126. Of course, this is merely an example and other vector representations may be created based on the number of options the restaurant offers for pizza size, types of sauces, amount of cheese, toppings, and the like. The encoder210outputs the utterance vector212which a dialog model218uses to determine a predicted dialog response220. For example, based on the order context120and the most recent utterances115, the encoder210may determine the predicted response220. The predicted response220is a prediction as to what a human employee would say at that point in the conversation (e.g., order context120) based on the customer's most recent utterances115. The encoder210is trained using the conversation data136to predict the dialog response220based on the utterances115and the order context120. The software agent116converts the predicted dialog response220to speech using a text-to-speech converter228. The dialog model may use dialog policies236, candidate responses238, and the order context120to predict the dialog response220. For example, if a customer states that they would like to order a burger, an appropriate response may be “what toppings would you like on that burger?” In some cases, a natural language generation (NLG) post processor240may modify the output of the dialog model218to create the dialog response220. For example, the NLG post processor240may modify the dialog response220to include local colloquialisms, more informal and less formal dialog, and the like. The NLG response is the translation of the dialog response220into natural language. The example is above. During training of the machine learning model used to create the software agents116, the human-to-human conversations in the conversation data136ofFIG.1are labelled to fine tune the language model232, as described in more detail inFIG.5. The utterances115and the order context120(e.g., contextual language information and current cart information up to a given point time) are encoded (e.g., into the utterance vector212) to provide the cart delta vector216(e.g., a delta relative to the cart126) as well as the next predicted dialog response220. The cart delta vector216identifies the steps to update the cart126. The codified delta over the cart indicates the steps to update the cart126and is the label that the human operator creates when handling the conversation that afterwards becomes the training dataset. For example, the encoder210is able to associate a specific utterance of the utterances115, such as “I want chicken wings”, with a specific action, e.g., entering a chicken wing order into the cart126. 
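The pizza vector described above can be worked through in a few lines; the field ordering and the small update helper below are illustrative assumptions that mirror the (3, 1, 1, 1, ...) example.

```python
# Worked version of the vector representation described above (assumed field
# order: size, sauce, cheese, then nine topping flags).

FIELDS = ["size", "sauce", "cheese", "pepperoni", "mushrooms", "onions",
          "sausage", "bacon", "olives", "green_peppers", "pineapple",
          "hot_peppers"]

cart_item = [3, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # large, sauce 1, normal cheese, pepperoni

def apply_update(item, updates):
    """Apply a {field: new_value} update predicted from the latest utterance."""
    item = list(item)
    for field, value in updates.items():
        item[FIELDS.index(field)] = value
    return item

# "I'd like double cheese" -> cheese level 3
print(apply_update(cart_item, {"cheese": 3}))
# [3, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```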
The encoder210predicts what items should be added to the cart126(e.g., based on the action associated with the utterance) and which items should be removed from the cart126, and any associated quantities. In some aspects, the encoder210may use a multi-label classifier, such as, for example, decision trees, k-nearest neighbors, neural networks, or the like. In a multi-label classifier, modifiers may include, for example, half-pepperoni, half-sausage, double cheese, and the like. In some cases, the order may use hierarchical structures, with each particular type of order, such as pizza, wings, taco, or the like, at a highest level and modifiers at a lower level in the hierarchy. For example, pizza may be at the highest level while half-pepperoni, half-sausage, double cheese, and the like may be at a lower level. In other cases, the order may use a flat system with every possible combination as a unique item. For example, (a) half-pepperoni may be a first item, (b) half-sausage may be a second item, (c) double cheese may be a third item, (d) half-pepperoni and half-sausage may be a fourth item, (e) half-pepperoni, half-sausage, and double cheese may be a fifth item, and so on. The intent classifier213takes the utterance vector212as input and creates an intent vector242that represents intent(s)244of the utterances115. Thus, the intent classifier213creates the intent vector242that is a representation of the customer's intent in the utterances115. The intent vector242, along with the utterance vector212, is used by the dialog model218to determine the dialog response220. The dialog model218uses the utterance vector212and the intents244to create the dialog response220. The dialog model218predicts the dialog response220, e.g., the response that the software agent116provides to the utterance115. In contrast, a conventional voice-response system uses a finite state machine. For example, in a conventional system, after each utterance, the system may ask for a confirmation, “Did you say ‘combo meal’?” In the system ofFIG.2, a predictive model predicts the dialog response220based on the utterance115and the order context120. The dish classifier214predicts which items from the menu140the customer is ordering and modifies the cart126accordingly. For example, in the utterance “Can I have 2 pizzas with pepperoni, 6 chicken wings, but no salad”, the dish classifier214determines which parts of this utterance refer to pizza. The dish classifier214understands the history, e.g., that there is a salad already in the cart (e.g., because it is included with the chicken wings), and predicts the cart delta vector216to reflect how many pizzas and how many wings are in the cart126. The prediction of the dish classifier214indicates what is being added to and what is being deleted from the cart126. Thus, based on the utterances115and the order context120, the NLP pipeline112predicts the cart126and the dialog response220. One or more of the classifiers213,214,218may use multiclass classification, such as a support vector machine. The intent classifier213determines intent(s)244of the utterances115, e.g., whether the intent244is a menu-related question (e.g., “What toppings are on a Supreme pizza?”) or a modification (e.g., “I'd like a large pepperoni pizza”) to the cart126. In some aspects, the menu140of the restaurant132ofFIG.1may be represented as an ontology250(e.g., a set of menu items in the menu140that shows each menu item's properties and the relationships between menu items).
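As one possible (and deliberately tiny) illustration of a multi-label classifier of the kind described above, a bag-of-words model from scikit-learn can be trained on a handful of hand-labeled utterances. The labels, training sentences, and choice of a linear support vector machine are assumptions for demonstration, not the trained dish classifier214.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy multi-label dish classifier; the tiny hand-made training set is for
# illustration only.
utterances = ["two large pepperoni pizzas",
              "six chicken wings please",
              "a pepperoni pizza and chicken wings",
              "just a garden salad"]
labels = [{"pizza:pepperoni"}, {"wings"}, {"pizza:pepperoni", "wings"}, {"salad"}]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)          # labels -> binary indicator matrix

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(utterances, y)

pred = clf.predict(["can I get chicken wings and a pepperoni pizza"])
print(binarizer.inverse_transform(pred))     # e.g. [('pizza:pepperoni', 'wings')]
```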
In some aspects, the ontology250may be represented in the form of a vector, e.g., each type of pizza may have a corresponding vector representation. In some aspects, the menu representations may be generated from unlabeled data, to enable the NLP pipeline112to handle any type of information related to ordering, dishes, and food items. The utterances115are used as input to the NLP pipeline112. The utterances115may be in the form of a concatenated string of a set of previous utterances. The number of utterances115provided to the NLP pipeline112may be based on how much latent knowledge of the conversation state226is desired to be maintained. The greater the number of utterances115, the better the conversation state226is maintained. The utterances115may be a word, a phrase, a sentence, or multiple sentences (including incomplete sentences) that the customer provides to the software agent116at each turn in the conversation. For example, an example conversation may include:
<agent> “Welcome to XYZ, how can I help you?”
<customer> “I'd like to order a large pepperoni pizza.”
<agent> “Sure, one pepperoni pizza. We have a promotion going on right now where you can get an extra large for just two dollars more. Would you be interested in getting an extra large?”
<customer> “Okay, give me an extra large pepperoni.”
<agent> “Would you like anything to drink?”
<customer> “Two bottles of water please.”
<agent> “Anything else I can get for you? Dessert perhaps?”
<customer> “No. That will do it.”
<agent> “Did you want this delivered or will you be picking up?”
<customer> “Pickup.”
<agent> “Okay. Your total is $20.12. Our address for pickup is 123 Main Street. How would you like to pay?”
<customer> “Here is my credit card information <info>.”
<agent> “Thanks. Your order will be ready in 20 minutes at 123 Main Street.”
In this conversation, the customer may be calling from home, may be at a drive-through, or may be talking to an automated (e.g., unmanned) kiosk in the restaurant. There are a total of 6 turns in this example conversation, starting with “I'd like to order a large pepperoni pizza”, with each turn including the customer's utterances115and the agent's response220. The utterances115may thus include multiple sentences. In some aspects, chunking or splitting may be performed, resulting in more than one representation corresponding to a unique utterance from the user. In some cases, the audio of the utterances115may be used as input, providing complementary features for emotion recognition, estimation of willingness to talk to AI, or for tackling issues such as sidebar conversations. The satisfaction estimation based on vocal features also serves as a signal for optimizing the dialog policy. The interaction history222includes contextual language information, such as, for example, the N previous utterances of the customer (N>0) and the M previous responses from the software agent116(M>0). The cart state224includes current cart information. In some cases, a domain specific ontology250may be added as a semantic representation of items in the knowledge base (e.g., the conversation data136). The ontology250allows the encoder210to identify specific entities with which to select the correct modification to operate on the cart126. The ontology250may be used to facilitate the onboarding of new items or whole semantic fields, alleviate the need for annotated data for each label (e.g., the entries of the employee into the EA-POS102), and improve the performance of the NLP pipeline112.
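A concatenated-string input of the kind described above might be assembled as follows; the speaker tags and the turn-window size are assumptions made only for this sketch.

```python
# Sketch of building the concatenated conversational input from the last few
# turns; window size and speaker tags are illustrative assumptions.

def build_model_input(history, current_utterance, n_turns=3):
    """history: list of (speaker, text) tuples, oldest first."""
    recent = history[-2 * n_turns:]                  # last N customer/agent turns
    lines = [f"<{speaker}> {text}" for speaker, text in recent]
    lines.append(f"<customer> {current_utterance}")
    return " ".join(lines)

history = [("agent", "Welcome to XYZ, how can I help you?"),
           ("customer", "I'd like to order a large pepperoni pizza."),
           ("agent", "Would you like anything to drink?")]
print(build_model_input(history, "Two bottles of water please."))
```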
The encoder210creates the cart delta vector216that includes corresponding actions to update the cart126based on the most recent (e.g., latest turn) of the utterances115. The cart delta vector216may be a vector, e.g., a sparse array of numbers that corresponds to a state difference. For example, for a cart that includes “Large Pepperoni Pizza”, “2 Liter Coke” and “Chicken Salad”, if the most recent utterance is “A large coke, but remove the salad”, then the encoder210may output [0, 1, −1]. In this way, both the quantity and the intent to remove are encompassed. The encoder210determines the utterance vector212, a numerical representation of each input (e.g., the utterances115and the order context120) based on the language model232. The utterance vector212is a type of encoding, e.g., a set of symbols that represent a particular entity. For example, in some aspects, the encoding may be an array of real numbers, a vector (or a higher dimensional extension, such as a tensor), that is generated by a statistical language model from a large corpus of data. In addition to using the conversation data136, the encoder210may leverage an additional corpus of data on multiple sites234(e.g., Wikipedia and the like), such as food-related sites, thereby enabling the encoder210to engage in specialized conversations, such as food-related conversations. In some cases, the encoder210may be trained to engage in conversations associated with a particular type of restaurant, e.g., a pizza restaurant, a chicken wings restaurant, a Mexican restaurant, an Italian restaurant, an Indian restaurant, a Middle Eastern restaurant, or the like. The dish classifier214may predict the cart delta vector216by passing the encoded representations in the utterance vector212through additional neural dialog layers for classification, resulting in a sparse vector that indicates the corresponding element(s) within all possible cart actions, e.g., a comprehensive array of labels of possible combinations. The classifiers213,214,218may be trained using the conversation data136. The ontology250provides information to precise the modifiers, relating cart actions that are highly related such as adding two different variations of the same dish. The utterances115(e.g., representations of the conversation111ofFIG.1), along with the order context120, may be used as the input to the encoder210to determine a particular one of the dialog policies236to select the next predicted response220of the software agent116. Each particular one of the dialog policies236may be used to predict an appropriate response220from multiple candidate responses238. In some cases, the dialog model218may use policy optimization with features such as emotion recognition, total conversation duration, or naturalness terms. The dialog response220may be fed back to the dialog model218as contextual information. In some cases, multitask learning algorithms that combine more than one similar task to achieve better results may be used with the encoder210to enable the encoder210to learn important aspects of language modeling that serve indirectly to the final downstream task, while allowing a controlled training process via the design of the learning curriculum. The multiple and auxiliary objective functions serve to leverage more error signals during training, and make the model learn proper representations of the elements involved. 
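The sparse cart delta example above ([0, 1, −1] against a three-line cart) can be applied mechanically; the dictionary representation of the cart below is an assumption used only to make the arithmetic concrete.

```python
# Worked version of the sparse cart delta example above: each position is a
# signed quantity change for the corresponding cart line.

cart = {"Large Pepperoni Pizza": 1, "2 Liter Coke": 1, "Chicken Salad": 1}
item_order = list(cart)                      # fixed ordering of cart lines
delta = [0, 1, -1]                           # "A large coke, but remove the salad"

for name, change in zip(item_order, delta):
    cart[name] += change
    if cart[name] <= 0:
        del cart[name]                       # a quantity of zero removes the item

print(cart)   # {'Large Pepperoni Pizza': 1, '2 Liter Coke': 2}
```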
Semantic and structural information about the menu140is encoded into the ontology250and used to inform the later layers of the cart prediction system (e.g., dish classifier214). In some cases, curriculum learning may be used to design the order with which tasks of different types or complexity are fed to the encoder210, the dish classifier214, the intent classifier213, the dialog model218, or any combination thereof, to assist tackling different tasks or to perform prolonged training. In addition, to improve extended training processes, the systems and techniques described here may use continual learning, in which the encoder210, the dish classifier214, the intent classifier213, the dialog model218, or any combination thereof, are retrained as new conversation data is accumulated. In some cases, the continual learning may be performed with elastic weight consolidation to modulate optimization parameters. For example, continual learning along with incremental learning may be used for new classes, e.g., new dishes, sequentially adding them to the objective though training the same model. Curriculum learning is the process of ordering the training data and tasks using logic to increase the improvement on the later, objective tasks. For example, the first training may include auto-regressive loss, then sentence classification, and then a more complex task. In this way, the model may be incrementally improved instead of tackling directly a possibly too complex task. One or more of the machine learning models (e.g.,210,213,214,218) in the NLP pipeline112may be re-trained using newly gathered conversation data136. For example, the retraining may be performed to improve an accuracy of the machine learning models, to train the models for additional products (e.g., a pizza restaurant adds chicken wings) or additional services (e.g., a pandemic causes the introduction of curbside service as a variation of takeout). The retraining may be performed periodically (to improve accuracy) or in response to the introduction of a new product or a new service. In the flow diagrams ofFIGS.3,4, and5, each block represents one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. For discussion purposes, the processes300,400and500are described with reference toFIGS.1and2as described above, although other models, frameworks, systems and environments may be used to implement this process. FIG.3is a block diagram300illustrating training multiple classifiers used in a natural language processing (NLP) pipeline, according to some embodiments.FIG.3illustrates training the encoder210, the intent classifier213, the dish classifier214, and the dialog model218. A portion of the conversation data136may be selected as training data301and used as input into the natural language pipeline112. The encoder210may create the utterance vector212based on the training data301. 
The intent classifier213may create the intent vector242based on the utterance vector212and the order context120. The server106may determine an intent accuracy302of the intent vector242by comparing the intent vector242with the intent reflected in the conversation data136. For example, the intent vector242may be compared with the employee's entry into the EA-POS102during the conversation included in the training data301to determine the intent accuracy302of the intent vector242. If the intent accuracy302is less than a desired accuracy (e.g., 90%, 95%, 98% or the like), then an algorithm of the encoder210(e.g., to improve the utterance vector212), of the intent classifier213, or both may be modified at304to improve the intent accuracy302. The process may be repeated until the intent accuracy302satisfies the desired accuracy. The dish classifier214may create the cart delta vector216based on the utterance vector212and the order context120. The server106may determine a cart accuracy306of the cart delta vector216by comparing the cart delta vector216with the cart associated with the conversation data136. If the cart accuracy306is less than a desired accuracy (e.g., 90%, 95%, 98% or the like), then an algorithm of the encoder210(e.g., to improve the utterance vector212), of the dish classifier214, or both may be modified at308to improve the cart accuracy306. The process may be repeated until the cart accuracy306satisfies the desired accuracy. The dialog model218may predict, using machine learning, the dialog response220based on the utterance vector212and the order context120. The server106may determine a dialog accuracy310of the dialog response220by comparing the dialog response220with the response of the human employee recorded in the conversation data136. If the dialog accuracy310is less than a desired accuracy (e.g., 90%, 95%, 98% or the like), then an algorithm of the encoder210(e.g., to improve the utterance vector212), of the dialog model218, or both may be modified at312to improve the dialog accuracy310. The process may be repeated until the dialog accuracy310satisfies the desired accuracy. FIG.4is a block diagram400illustrating training a dish classifier, according to some embodiments. InFIG.4, when talking to a customer while taking an order, the employee402may make one or more entries404into the EA-POS102. The entries404may include the customer's utterances, the employee's responses, what the employee402enters into the EA-POS102, or any combination thereof, with the entries404being used as labels to create labeled data406. For example, if the user utters “two pepperoni pizzas” and the employee402responds by entering two pepperoni pizzas into the EA-POS102(e.g., adding the pizzas to the customer's cart), then the utterance and the resulting cart may be labeled based on the entry of two pepperoni pizzas. The server106may determine the cart accuracy306of the labeled data406. A portion of the conversation data136may be used to create training data414that includes utterances of a customer. The training data414may be used as input to the encoder210to create the utterance vector212. The dish classifier214may create the cart delta vector216. The cart delta vector216and the cart accuracy306may be used to determine an accuracy of multi-label classification410. If the accuracy of the multi-label classification410does not satisfy a desired accuracy (e.g., 90%, 95%, 98% or the like), then the dish classifier214may be modified412to improve an accuracy of the dish classifier214.
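The accuracy-threshold retraining procedure described above can be summarized, very schematically, as a loop; the model object, its fit/predict/adjust methods, and the round limit below are placeholders rather than the disclosed training system.

```python
# Schematic train-evaluate loop mirroring the accuracy-threshold procedure
# described above; model, data, and labels are placeholders.

def train_until_accurate(model, train_data, eval_data, labels,
                         desired_accuracy=0.95, max_rounds=20):
    for _ in range(max_rounds):
        model.fit(train_data)                          # (re)train on conversation data
        predictions = model.predict(eval_data)
        correct = sum(p == y for p, y in zip(predictions, labels))
        accuracy = correct / len(labels)
        if accuracy >= desired_accuracy:               # e.g., 90%, 95%, or 98%
            return model, accuracy
        model.adjust()                                 # modify the algorithm and repeat
    return model, accuracy
```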
This process may be repeated until the multi-label classification accuracy410satisfies the desired accuracy. FIG.5is a block diagram500to create a menu embedding used in a natural language processing (NLP) pipeline, according to some embodiments. Transactions from the EA-POS device102are processed using processing502and compared with the cart delta vector216produced by the dish classifier214to determine an actual training delta504. The actual training delta504is used to determine the cart accuracy306. If the cart accuracy306does not satisfy a desired accuracy (e.g., 90%, 95%, 98% or the like), then the dish classifier214may be modified308to improve an accuracy of the dish classifier214. This process may be repeated until the dish classifier214satisfies the desired accuracy. In some cases, menu data506(associated with the menu140) may be processed using processing508to determine, for example, sentence embedding510. The sentence embedding510may be used to determine actual menu embedding512. The actual menu embedding512may be compared with the predicted or calculated menu embedding518to determine a menu accuracy514. If the menu accuracy514does not satisfy a desired accuracy (e.g., 90%, 95%, 98% or the like), then the dish classifier214may be modified516to improve an accuracy of the dish classifier214. This process may be repeated until the dish classifier214satisfies the desired accuracy. The text processing may include concatenating a dish name, concatenating a description, concatenating ingredients, concatenating tags and the like. An example embedding includes an array of numbers and the encoding process may include matrix multiplication. In the flow diagrams ofFIGS.6,7, and8, each block represents one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. For discussion purposes, the processes600,700, and800are described with reference toFIGS.1,2,3,4, and5as described above, although other models, frameworks, systems and environments may implement these processes. FIG.6is a flowchart of a process600that includes predicting a dialog response using an artificial intelligence (AI) engine, according to some embodiments. The process600may be performed by a server, such as the server106ofFIGS.1,2,3,4, and5. At602, the process determines a set of previous utterances (e.g., historical data) and create an utterance data structure (e.g., a concatenated string). At604, the utterance data structure is encoded to create a data structure representation (e.g., a vector, a tensor, or the like). For example, inFIG.2, the encoder210, based on the order context120, processes the text207of the utterances115to create the utterance vector212. At606, a first classifier is used to predict, based on the representation, a cart delta. For example, inFIG.2, the dish classifier214, based on the order context120and the utterance vector212, creates the cart delta vector216. 
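The menu-embedding comparison ofFIG.5might, for illustration, concatenate a dish's name, description, ingredients, and tags and then compare a predicted embedding with the actual embedding using cosine similarity (as noted earlier, identical embeddings correspond to an angle of 0 degrees). The embedding step itself is omitted here, and the small vectors below are made up purely for demonstration.

```python
import math

# Sketch of the menu-item text concatenation and a cosine comparison between a
# predicted and an actual embedding; the encoder that produces the embeddings
# is not shown.

def menu_item_text(item):
    return " ".join([item["name"], item["description"],
                     " ".join(item["ingredients"]), " ".join(item["tags"])])

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

item = {"name": "Supreme Pizza",
        "description": "Loaded pizza with the works",
        "ingredients": ["pepperoni", "sausage", "green peppers", "onions"],
        "tags": ["pizza", "specialty"]}
print(menu_item_text(item))

predicted, actual = [0.9, 0.1, 0.3], [1.0, 0.0, 0.25]   # toy vectors
print(round(cosine(predicted, actual), 3))              # close to 1.0 -> embeddings agree
```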
At608, a second classifier is used to predict a dialog response. For example, inFIG.2, the dialog model218, based on the order context120and the utterance vector212, creates the dialog response220. FIG.7is a flowchart of a process700that includes predicting a cart vector, according to some embodiments. The process700may be performed by a server, such as the server106ofFIGS.1,2,3,4, and5. At702, the process receives a communication from a customer and provides an initial greeting. At704, the process receives an utterance from the customer. At706, the process transcribes the utterance (e.g., using speech to text). At708, the transcribed utterance is processed using a post processor to create a corrected utterance. For example, inFIG.1, the customer initiates a communication (call, text, chat, or the like) using the customer device104. In response to receiving the customer-initiated communication, one of the software agents116provides an initial greeting. The software agent116receives a first of the customer utterances115. InFIG.2, one or more of the utterances115are received as the audio data205and processed using the speech-to-text converter206to create the text207. The NLP post processor208may be used to process the text207to create the corrected utterances211. At710, a classifier is used to predict an utterance intent based on the corrected utterance. At712, the process determines whether the utterance intent is order-related or menu-related. If the process determines that the utterance intent is menu-related, at712, then the process retrieves and provides information related to the menu-related utterance, at714, and goes back to704to receive an additional utterance from the customer. If the process determines that the utterance intent is order-related, at712, then the process predicts a cart vector (e.g., that adds, removes, and/or modifies a cart) based on the order-related utterance and updates the cart based on the cart vector. These are merely two examples of types of intents; of course, there may be other intents. Thus, at712, the process selects an appropriate class of action based on the intent classifier. At720, the process generates a natural language response based on the dialog response. At722, the process provides the natural language response as speech using a text-to-speech converter. For example, inFIG.2, the encoder210creates the utterance vector212, which the dialog model218uses to create a natural language response in the form of the dialog response220. The dialog response220is provided to the software agent116, which converts it to speech using the text-to-speech converter228. FIG.8is a flowchart of a process800to train a machine learning algorithm to create a classifier, according to some embodiments. The process800may be performed by a server, such as the server106ofFIGS.1,2,3,4, and5, to train the encoder210, the dish classifier214, the intent classifier213, and the dialog model218. At802, the machine learning algorithm (e.g., software code) may be created by one or more software designers. At804, the machine learning algorithm may be trained using training data806(e.g., a portion of the conversation data136). For example, the training data806may be a representative set of data used for self-supervised training, may have been pre-classified by humans, or a combination of both. After the machine learning has been trained using the training data806, the machine learning may be tested, at808, using test data810to determine an accuracy of the machine learning.
For example, in the case of a classifier (e.g., support vector machine), the accuracy of the classification may be determined using the test data810. If an accuracy of the machine learning does not satisfy a desired accuracy (e.g., 95%, 98%, 99% accurate), at808, then the machine learning code may be tuned, at812, to achieve the desired accuracy. For example, at812, the software designers may modify the machine learning software code to improve the accuracy of the machine learning algorithm. After the machine learning has been tuned, at812, the machine learning may be retrained, at804, using the pre-classified training data806. In this way,804,808,812may be repeated until the machine learning is able to classify the test data810with the desired accuracy. After determining, at808, that an accuracy of the machine learning satisfies the desired accuracy, the process may proceed to814, where verification data816(e.g., a portion of the conversation data136that has been pre-classified) may be used to verify an accuracy of the machine learning. After the accuracy of the machine learning is verified, at814, the machine learning130, which has been trained to provide a particular level of accuracy may be used. The process800may be used to train each of multiple machine learning algorithms (e.g., classifiers) described herein, such as the encoder210, the dish classifier214, the intent classifier213, and the dialog model218. FIG.9illustrates an example configuration of a device900that can be used to implement the systems and techniques described herein, such as, for example, the computing devices102, the consumer device104, and/or the server106ofFIG.1. For illustration purposes, the device900is illustrated inFIG.9as implementing the server106ofFIG.1. The device900may include one or more processors902(e.g., CPU, GPU, or the like), a memory904, communication interfaces906, a display device908, other input/output (I/O) devices910(e.g., keyboard, trackball, and the like), and one or more mass storage devices912(e.g., disk drive, solid state disk drive, or the like), configured to communicate with each other, such as via one or more system buses914or other suitable connections. While a single system bus914is illustrated for ease of understanding, it should be understood that the system buses914may include multiple buses, such as a memory device bus, a storage device bus (e.g., serial ATA (SATA) and the like), data buses (e.g., universal serial bus (USB) and the like), video signal buses (e.g., ThunderBolt®, DVI, HDMI, and the like), power buses, etc. The processors902are one or more hardware devices that may include a single processing unit or a number of processing units, all of which may include single or multiple computing units or multiple cores. The processors902may include a graphics processing unit (GPU) that is integrated into the CPU or the GPU may be a separate processor device from the CPU. The processors902may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, graphics processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processors902may be configured to fetch and execute computer-readable instructions stored in the memory904, mass storage devices912, or other computer-readable media. 
Memory904and mass storage devices912are examples of computer storage media (e.g., memory storage devices) for storing instructions that can be executed by the processors902to perform the various functions described herein. For example, memory904may include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like) devices. Further, mass storage devices912may include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like. Both memory904and mass storage devices912may be collectively referred to as memory or computer storage media herein and may be any type of non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processors902as a particular machine configured for carrying out the operations and functions described in the implementations herein. The device900may include one or more communication interfaces906for exchanging data via the network110. The communication interfaces906can facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., Ethernet, DOCSIS, DSL, Fiber, USB etc.) and wireless networks (e.g., WLAN, GSM, CDMA, 802.11, Bluetooth, Wireless USB, ZigBee, cellular, satellite, etc.), the Internet and the like. Communication interfaces906can also provide communication with external storage, such as a storage array, network attached storage, storage area network, cloud storage, or the like. The display device908may be used for displaying content (e.g., information and images) to users. Other I/O devices910may be devices that receive various inputs from a user and provide various outputs to the user, and may include a keyboard, a touchpad, a mouse, a printer, audio input/output devices, and so forth. The computer storage media, such as memory116and mass storage devices912, may be used to store software and data, including, for example, the dictionary118, the classifiers210,213,214,218, the NLP pipeline112, the order context120, the recommendations114, and the software agents116. The example systems and computing devices described herein are merely examples suitable for some implementations and are not intended to suggest any limitation as to the scope of use or functionality of the environments, architectures and frameworks that can implement the processes, components and features described herein. Thus, implementations herein are operational with numerous environments or architectures, and may be implemented in general purpose and special-purpose computing systems, or other devices having processing capability. Generally, any of the functions described with reference to the figures can be implemented using software, hardware (e.g., fixed logic circuitry) or a combination of these implementations. The term “module,” “mechanism” or “component” as used herein generally represents software, hardware, or a combination of software and hardware that can be configured to implement prescribed functions. For instance, in the case of a software implementation, the term “module,” “mechanism” or “component” can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors). 
The program code can be stored in one or more computer-readable memory devices or other computer storage devices. Thus, the processes, components and modules described herein may be implemented by a computer program product. Furthermore, this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to “one implementation,” “this implementation,” “these implementations” or “some implementations” means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation. Although the present invention has been described in connection with several embodiments, the invention is not intended to be limited to the specific forms set forth herein. On the contrary, it is intended to cover such alternatives, modifications, and equivalents as can be reasonably included within the scope of the invention as defined by the appended claims.
66,579
11862158
DETAILED DESCRIPTION The example embodiments will be described in detail here, and examples thereof are shown in the accompanying drawings. When the following description refers to the accompanying drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The implementations described in the following example embodiments do not represent all the implementations consistent with the present invention. Rather, they are merely examples of the apparatuses and methods consistent with some aspects of the present invention as recited in the appended claims. FIG.1is a flowchart illustrating a method for controlling a device according to an example embodiment. As illustrated inFIG.1, the method includes the following blocks. At block S11, audio data is collected. In an example, a device configured to execute a method for controlling a device according to the present disclosure may be an electronic device to be controlled. After the electronic device is powered on, an audio collecting module in the electronic device may collect in real time or periodically any audio data in the environment where the electronic device is located. In another example, a device configured to execute a method for controlling a device according to the present disclosure may be other electronic device or server than the electronic device to be controlled. After other electronic device or server is powered on, an audio collecting module may collect in real time or periodically any audio data in the environment where the electronic device is located. At block S12, it is determined whether a target frame of audio data is a first type signal for each target frame of audio data collected. In the present disclosure, the target frame of the audio data may be each frame of audio data in the collected audio data, and also may be each frame of audio data collected behind a preset number of frames, and may also be each frame of audio data in any multiple frames of audio data (for example, multiple frames of audio data extracted from the collected audio data according to a preset rule) in the collected audio data, etc., which is not limited in the present disclosure. In addition, the specific implementation of determining whether the target frame of the audio data is the first type signal will be described below. At block S13, in response to the target frame of the audio data being the first type signal, an acoustic event type represented by the first type signal is determined. In the present disclosure, the acoustic event type represented by the first type signal refers to an acoustic event that generates the first type signal. In an example, the first type signal is an impulse signal, and the impulse signal is characterized by a short duration, large amplitude energy changes and aperiodicity. Therefore, the audio data generated by clapping, the audio data generated by finger-snapping, and the audio data generated by collision belong to impulse signals. Accordingly, in the present disclosure, the acoustic event type represented by the impulse signal may be a clapping event, a finger-snapping event, a coughing event, and a collision event, etc. It should be noted that, in practical applications, the first type signal may further be a non-impulse signal, which is not specifically limited here. The specific implementation of determining an acoustic event type represented by the first type signal will be described below. 
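The overall flow of blocks S11 through S13, together with the control execution of block S14 described below, can be summarized with the following Python sketch. The is_first_type_signal and classify_acoustic_event helpers and the event-to-instruction table are hypothetical placeholders for the implementations discussed in this disclosure.

CONTROL_TABLE = {                      # illustrative mapping only
    "clapping": "turn_on_television",
    "finger_snapping": "pause_playback",
}

def control_from_audio(frames, is_first_type_signal, classify_acoustic_event, execute):
    for frame in frames:                               # block S11: collected audio frames
        if not is_first_type_signal(frame):            # block S12
            continue
        event_type = classify_acoustic_event(frame)    # block S13
        instruction = CONTROL_TABLE.get(event_type)    # block S14, described below
        if instruction is not None:
            execute(instruction)                       # no voice-assistant wake-up required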
At block S14, the device is controlled to execute control instructions corresponding to the acoustic event type. In the present disclosure, control instructions corresponding to different acoustic event types may be preset. In response to determining that the target frame of the audio data is the first type signal, an acoustic event type represented by the first type signal is further determined and the device is controlled to execute the control instructions corresponding to the acoustic event type. Different acoustic event types correspond to different control instructions. For example, when the acoustic event type of the first type signal is the clapping event, the corresponding control instructions may be control instructions configured to represent turning on the television. For another example, when the acoustic event type of the first type signal is the snapping event, the corresponding control instructions may be control instructions configured to represent pausing playing. In an example, controlling the device to execute the control instructions corresponding to the acoustic event type may include: controlling the device to execute the control operations corresponding to the acoustic event type without waking up a smart voice assistant of the device. For example, when the preset control operation corresponding to the first type signal representing the clapping event is to turn on the television, the device or the server executing the method for controlling a device controls the television to perform the operation of turning on the television without waking up the smart voice assistant of the television in response to determining that the target frame of the audio data is the first type signal representing the clapping event. With the above technical solution, in response to determining that the target frame of the audio data is the first type signal, the acoustic event type represented by the first type signal is further determined, and the device is further controlled to execute the control instructions corresponding to the acoustic event type. In this way, not only the generation of the first type signal may be detected, but also the acoustic event type represented by the first type signal may be further judged and the instructions for controlling the device corresponding to different acoustic event types may be differentiated, thereby improving the robustness of controlling the device. Moreover, in response to determining the acoustic event type represented by the first type signal, the device may be directly controlled to execute the control instructions corresponding to the acoustic event type, which reduces the calculation amount and resource consumption of device operation, improves the control efficiency of the device, and improves the user experience. FIG.2is a flowchart illustrating another method for controlling a device according to an example embodiment. As illustrated inFIG.2, block S12inFIG.1may include block S121. At block S121, it is determined whether each target frame of audio data is the first type signal according to the target frame of the audio data and at least part of frames of historical audio data collected before collecting the target frame of the audio data. 
Taking the target frame of the audio data being each frame of audio data collected as an example, it is explained in regards to determining whether the target frame of the audio data being the first type signal according to the target frame of the audio data and the at least part of frames of the historical audio data collected before the target frame of the audio data. First, it is determined whether at least third preset number of frames of the historical audio data have been collected before collecting the target frame of the audio data; in response to determining that the at least third preset number of frames of the historical audio data have been collected before collecting the target frame of the audio data, it is determined whether the target frame of the audio data is the first type signal according to at least third preset number of frames of the historical audio data and the target frame of the audio data; or in response to determining that the at least third preset number of frames of the historical audio data have not been collected before collecting the target frame of the audio data, it is determined whether the target frame of the audio data is the first type signal according to the target frame of the audio data and the collected historical audio data. In an example, assuming that the third preset number is 3, when a first frame of audio data collected is not the first type signal in default, it is determined whether a second frame of audio data is the first type signal according to the first frame of audio data and the second frame of audio data, and it is determined whether a third frame of audio data is the first type signal according to the first frame of audio data, the second frame of audio data and the third frame of audio data. For each target frame of audio data behind the third frame of audio data, it is determined whether the target frame of the audio data is the first type signal according to the target frame of the audio data and the third preset number of frames of the historical audio data before the target frame of the audio data. For example, it is determined whether a fourth frame of audio data is the first type signal according to the first frame of audio data, the second frame of audio data and the third frame of audio data. It may be determined whether the target frame of the audio data is the first type signal with reference to the above-described way of determining whether the target frame of the audio data is the first type signal, which is not repeated here. In this way, it may refer to different numbers of frames of the historical audio data to improve the flexibility of determining whether the target frame of the audio data is the first type signal, and due to determining whether the frame of audio data is the first type signal with reference to the historical audio data before the frame of audio data, the accuracy of determining whether the frame of audio data is the first type signal is improved. In an embodiment, taking the first type signal being an impulse signal as an example, it may be determined whether the first type signal is the impulse signal in the following way. In an example,FIG.3is a diagram illustrating a method for determining whether audio data is impulse signal data in the embodiment of the present disclosure. As illustrated inFIG.3, m(n) represents the target frame of the audio data corresponding to an nth sampling point. 
Firstly, the target frame of the audio data m(n) corresponding to the nth sampling point is input to a first down sampling module to obtain audio data x(n), the audio data x(n) is input to a first linear prediction module to obtain audio data y(n), the audio data y(n) is input to a first excitation extraction module to extract e(n), and the e(n) is input to a dynamic component analysis module to analyze whether the audio data is data with large dynamic component changes. Meanwhile, the audio data x(n) is input to a second down sampling module to obtain audio data z(n), the audio data z(n) is input to a second linear prediction module and a second excitation extraction module in sequence to obtain audio data v(n), and the audio data v(n) is input to a periodic analysis module to determine whether the audio data is a periodic signal. Finally, the respective results output by the dynamic component analysis module and the periodic analysis module are input to a fast changing signal judgment module to determine whether the audio data is an impulse signal by the fast changing signal judgment module. The specific analysis process of the dynamic component analysis module is as follows: First, an envelope signal env(n) is analyzed by a first low pass filter. For example, the envelope signal env(n) may be determined by the following formula, where β decides a cut-off frequency of the first low pass filter. env(n)=env(n−1)+β(|e(n)|−env(n−1)) where env(n−1) is an envelope signal of the audio data corresponding to a (n−1)th sampling point, and β is a value within the range of 0 to 1 set empirically. Then, env(n) passes through a second low pass filter to obtain a low frequency signal flr(n). For example, the low frequency signal flr(n) may be determined by the following formula, where γ decides a cut-off frequency of the second low pass filter; flr(n)=flr(n−1)+γ(env(n)−flr(n−1)) where flr(n−1) is a low frequency signal determined based on the audio data corresponding to the (n−1)th sampling point in a way as illustrated inFIG.3, and γ is a value within the range of 0 to 1 set empirically. Next, a relationship among env(n), flr(n) and a preset threshold is analyzed to determine whether the audio data is the data with large dynamic component changes. For example, a relationship between env(n) and a product of flr(n) and the preset threshold is determined. In response to env(n) being greater than the product of flr(n) and the preset threshold, the audio data is determined as the data with large dynamic component changes, otherwise, the audio data is determined as data with small dynamic component changes. The specific analysis process of the periodic analysis module is as follows: when the audio data is periodic data, its autocorrelation is also periodic. Therefore, in the embodiments, the periodicity of audio data may be judged by autocorrelation calculation of the audio data v(n). For example, an autocorrelation coefficient of the audio data v(n) may be calculated by the following formula, and it is determined whether the audio data is the periodic data according to the autocorrelation coefficient. pi=Σn=0M−1 v(n)·v(n+i) where pi represents an autocorrelation coefficient between the audio data v(n) and v(n+i) at a distance of i sampling points, in which n is the nth sampling point and M is the total number of sampling points. When the audio data is data with large dynamic component changes and is aperiodic data, it is determined that the target frame of the audio data is the impulse signal. 
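A minimal numpy sketch of the two tests described above follows; beta, gamma, the threshold, the warm-up length, and the lag range are illustrative values rather than the empirically chosen ones referenced in this disclosure.

import numpy as np

def has_large_dynamic_changes(e, beta=0.3, gamma=0.02, threshold=2.0, warmup=64):
    env = flr = abs(float(e[0])) if len(e) else 0.0
    for idx, sample in enumerate(e):
        env = env + beta * (abs(sample) - env)     # envelope via the first low pass filter
        flr = flr + gamma * (env - flr)            # low frequency floor via the second low pass filter
        if idx >= warmup and env > threshold * flr:
            return True
    return False

def is_periodic(v, lags=range(20, 200), rel_threshold=0.5):
    v = np.asarray(v, dtype=float)
    energy = float(np.dot(v, v)) + 1e-12
    for i in lags:
        if i >= len(v):
            break
        p_i = float(np.dot(v[:-i], v[i:]))         # autocorrelation coefficient at lag i
        if p_i / energy > rel_threshold:
            return True
    return False

def is_impulse(e, v):
    # large dynamic component changes and aperiodic -> impulse signal
    return has_large_dynamic_changes(e) and not is_periodic(v)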
In another embodiment, it may be determined whether the target frame of the audio data is the impulse signal in the following way: First, respective initial spectral values (onset values) of the target frame of the audio data and at least part of frames of the audio data are obtained. For example, for each target frame of audio data, a Mel spectrum of the target frame of the audio data is obtained by a short-time Fourier transform, the Mel spectrum of the previous frame of audio data is subtracted from the Mel spectrum of the target frame of the audio data to obtain difference values, and a mean value of the obtained difference values is determined as the initial spectral value of the target frame of the audio data. In this way, the initial spectral value of each target frame of audio data may be calculated. Then, in response to the initial spectral value of the target frame of the audio data meeting a preset condition, it is determined that the target frame of the audio data is the impulse signal. The preset condition is: the initial spectral value of the target frame of the audio data is a maximum value of the initial spectral values of the at least part of frames of the historical audio data, and the initial spectral value of the target frame of the audio data is greater than or equal to a mean value of the initial spectral values of the at least part of frames of the historical audio data and the target frame of audio data. That is, when the initial spectral value of the target frame of the audio data is the maximum value of the initial spectral values of the at least part of frames of the historical audio data, and the initial spectral value of the target frame of the audio data is greater than or equal to the mean value of the initial spectral values of the at least part of frames of the historical audio data and the initial spectral value of the target frame of the audio data, it is determined that the target frame of the audio data is the impulse signal, otherwise it is not the impulse signal. FIG.4is a flowchart illustrating determining an acoustic event type represented by the first type signal according to an example embodiment. As illustrated inFIG.4, block S13inFIG.1may further include blocks S131to S134. At block S131, in response to the target frame of the audio data being the first type signal, it is determined whether the target frame of the audio data is the first of the first type signals. In an example, in response to the target frame of the audio data being the first type signal, it is determined whether the first type signals have occurred within a preset duration before collecting the target frame of the audio data; in response to determining that the first type signals have not occurred within the preset duration, it is indicated that a time interval between the first type signal determined this time and the first type signal determined last time is greater than or equal to the preset duration, the first type signal determined this time is considered as first impulse signal data, that is, the target frame of the audio data is the first of the first type signals. 
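Returning to the onset-value test just described (the FIG.4flow continues below), a short numpy sketch is given here; mel_frames is assumed to be a precomputed array of shape (frames, Mel bins), and the eight-frame history window is an illustrative choice.

import numpy as np

def onset_values(mel_frames):
    diffs = np.diff(mel_frames, axis=0)            # current Mel spectrum minus the previous frame's
    onsets = diffs.mean(axis=1)                    # mean of the difference values
    return np.concatenate(([0.0], onsets))         # the first frame has no previous frame

def is_onset_impulse(onsets, t, history=8):
    window = onsets[max(0, t - history):t + 1]     # historical frames plus the target frame
    return onsets[t] == window.max() and onsets[t] >= window.mean()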
In response to determining that the first type signals have occurred within the preset duration, it is indicated that the time interval between the first type signal determined this time and the first type signal determined last time is less than the preset duration, the first type signal determined this time is not considered as the first impulse signal data, that is, the target frame of the audio data is not the first of the first type signal. In response to determining that the target frame of the audio data is the first impulse signal data, block S132is executed, otherwise block S133is executed. At block S132, a first preset number of frames of audio data behind the target frame of the audio data in the audio data are determined as target audio data. The target audio data includes a second preset number of first type signals, and the target frame of the audio data is a first frame of audio data in the target audio data. In the present disclosure, the first preset number is related to the second preset number, which may be set according to the requirements and the experiment results. In an example, it may be determined in advance by the experiments how many frames of audio data behind the audio data corresponding to the first of first type signals need to be collected, to ensure that the collected audio data include the second preset number of first type signals, thereby determining the first preset number. For example, assuming that the second preset number is 2, when 48 frames of audio data are collected behind the audio data corresponding to the first of the first type signals, to ensure that the collected 48 frames of audio data include two first type signals, the first preset number is 48. The control instructions corresponding to the second preset number of first type signals are preset by users, for example, the second preset number may be 1, 2, 3, etc. Assuming that the second preset number is 2, the determined target audio data need to include two first type signals. It should be noted that, in practical applications, the larger the second preset number is, the lower the probability of the device mistakenly executing control instructions is, and the greater the first preset number is. It should be noted that, when the first preset number is determined, in addition to enabling the second preset number of first type signals to be included in the first preset number of frames of audio data, the first preset number needs to be minimized as much as possible to avoid that there is audio data with interference in the target audio data. For example, assuming the second preset number is 2 and the first of first type signals is denoted as the 1st frame of the audio data, the three experiments performed in advance respectively show that, the 48th frame of audio data behind the first of first type signals is the second of first type signals, the 49th frame of audio data behind the first of first type signals is the second of first type signals, and the 50th frame of audio data behind the first of first type signals is the second of first type signals, then the first preset number should be greater than or equal to 48. The first preset number may be 48 so that the determined number of the target audio data is as small as possible, thereby reducing the computation amount of the system operation. 
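The window selection of blocks S131 and S132 can be sketched as follows; the first preset number of 48 frames and the preset gap (the preset duration expressed in frames) are illustrative values, not the patent's chosen parameters.

def select_target_window(frames, impulse_flags, first_preset=48, preset_gap=100):
    window_start = last_impulse = None
    for idx, flagged in enumerate(impulse_flags):
        if not flagged:
            continue
        if last_impulse is None or idx - last_impulse >= preset_gap:
            window_start = idx                     # first of the first type signals (block S131)
        last_impulse = idx
    if window_start is None:
        return None
    return frames[window_start:window_start + first_preset + 1]   # target audio data (blocks S132/S133)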
At block S133, the first preset number of frames of audio data behind the historical audio data corresponding to the first of the first type signals in the audio data are determined as target audio data. When the target frame of the audio data is not the first of the first type signals, a certain frame of audio data before the target frame of the audio data in the collected audio data has already been determined as the first of the first type signals. In this case, the first preset number of frames of audio data behind the historical audio data corresponding to the first of the first type signals in the audio data may be taken as target audio data. The historical audio data corresponding to the first of first type signals is a first frame of audio data in the target audio data. After the target audio data is determined according to block S132or block S133, block S134is executed. At block S134, an acoustic event type represented by the first type signal is determined according to the target audio data. In the present disclosure, the acoustic event type represented by the first type signal included in the target audio data may be determined by deep learning. In an example, spectral feature data of the target audio data is firstly extracted, and the spectral feature data of the target audio data is input to a trained neural network model, to obtain the acoustic event type represented by the first type signal output by the neural network model. In an example, after the target audio data is determined, the Mel spectral feature of each frame of audio data in the target audio data may be obtained and input to the trained neural network model to determine the acoustic event type represented by the first type signal. The neural network model may extract a deep feature based on the Mel spectral feature of each frame of audio data, and the acoustic event type represented by the first type signal is determined based on the deep feature. In this way, the acoustic event type represented by the first type signal may be determined based on the deep feature of the target audio data extracted by the neural network model, to further improve the robustness of determining the acoustic event type represented by the first type signal. In the present disclosure, the neural network model may be trained in the following way: First, sample audio data of different acoustic event types are obtained. The acoustic event type of each frame of sample audio data is known. For example, sample audio data generated by the finger-snapping event, sample audio data generated by the collision event, sample audio data generated by the clapping event, etc. are obtained respectively. It should be noted that the number of sample audio data is greater than or equal to a preset number. Then, a Mel spectral feature of each frame of sample audio data is obtained. Finally, during each training, Mel spectral features of a preset number of frames of sample audio data are taken as model input parameters, and tag data corresponding to the known acoustic event types of the first preset number of frames of sample audio data are taken as model output parameters to train a neural network model, further to obtain the trained neural network model. The neural network model may be a time domain convolution structure that is characterized by fewer parameters and a quicker operating speed than other conventional neural network structures. 
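A minimal PyTorch sketch of a time-domain (1-D) convolutional classifier over Mel spectral features is shown below; the layer sizes, the number of Mel bins, and the set of event classes are assumptions for illustration, not the trained model of this disclosure.

import torch
import torch.nn as nn

class AcousticEventNet(nn.Module):
    def __init__(self, mel_bins=40, num_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(mel_bins, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, mel):                        # mel: (batch, mel_bins, frames)
        return self.fc(self.conv(mel).squeeze(-1))

def train_step(model, optimizer, mel_batch, labels):
    # Mel features are the model inputs; event-type tags are the training targets.
    loss = nn.functional.cross_entropy(model(mel_batch), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)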
In an embodiment, a corresponding relationship between a number of first type signals for representing the acoustic event type and control instructions may be further preset, for example, when the number of first type signals for representing the clapping event is 2, the corresponding control instructions are configured to represent control instructions starting playing, and when the number of first type signals for representing the clapping event is 3, the corresponding control instructions are configured to represent control instructions pausing playing. In the embodiment, the neural network model may recognize the number of first type signals for representing the acoustic event type included in the target audio data in addition to the acoustic event type represented by the first type signal. In this way, after a target number of first type signals for representing the acoustic event type, the control instructions corresponding to the target number are determined, and the device is controlled to execute the corresponding control operations according to the corresponding relationship between the preset number of first type signals for representing the acoustic event type and control instructions. The disclosure further provides an apparatus for controlling a device based on the same invention concept.FIG.5is a block diagram illustrating an apparatus for controlling a device according to an example embodiment. As illustrated inFIG.5, the apparatus500for controlling a device may include: a collecting module501, a first determining module502, a second determining module503and a control module504. The collecting module501is configured to collect audio data. The first determining module502is configured to for each target frame of audio data collected, determine whether the target frame of the audio data is a first type signal. The second determining module503is configured to determine an acoustic event type represented by the first type signal in response to the target frame of the audio data being the first type signal. The control module504is configured to control the device to execute control instructions corresponding to the acoustic event type. In at least one embodiment, the first determining module502is configured to: for each target frame of audio data, determine whether the target frame of the audio data is the first type signal according to the target frame of the audio data and at least part of frames of historical audio data collected before the target frame of the audio data. In at least one embodiment, the first type signal is an impulse signal. The first determining module502may include: an obtaining submodule and a first determining submodule. The obtaining submodule is configured to obtain respective initial spectral values of the target frame of the audio data and the multiple frames of the historical audio data. The first determining submodule is configured to, in response to the initial spectral value of the target frame of the audio data meeting a preset condition, determine that the target frame of the audio data is the impulse signal. The preset condition is: the initial spectral value of the target frame of the audio data is a maximum value of the initial spectral values of the at least part of frames of the historical audio data, and the initial spectral value of the target frame of the audio data is greater than or equal to a mean value of the initial spectral values of the at least part of frames of the historical audio data and the target frame of the audio data. 
In at least one embodiment, in response to the target frame of the audio data being the first type signal, the second determining module503may include: a second determining submodule, a third determining submodule, a fourth determining submodule and a fifth determining submodule. The second determining submodule is configured to, in response to the target frame of the audio data being the first type signal, determine whether the target frame of the audio data is the first of the first type signals. The third determining submodule is configured to, in response to the target frame of the audio data being the first impulse signal, determine a first preset number of frames of audio data behind the target frame of the audio data in the audio data as target audio data, in which, the target audio data include a second preset number of first type signals. The fourth determining submodule is configured to, in response to the target frame of the audio data not being the first of the first type signals, determine the first preset number of frames of audio data behind historical audio data corresponding to the first of first type signals in the audio data as target audio data. The fifth determining submodule is configured to determine the acoustic event type represented by the first type signal according to the target audio data. In at least one embodiment, the fifth determining submodule is configured to extract spectral feature data of the target audio data; and input spectral feature data of the target audio data into a neural network model, to obtain the acoustic event type represented by the first type signal output by the neural network model, in which the neural network model is trained according to the spectral feature data of sample audio data of a plurality of acoustic event types. In at least one embodiment, the second determining submodule is configured to: in response to the target frame of the audio data being the first type signal, determine whether the first type signals have occurred within a preset duration before collecting the target frame of the audio data; in response to determining that the first type signals have not occurred within the preset duration, determine the target frame of the audio data is the first of the first type signals; and in response to determining that the first type signals have occurred within the preset duration, determine the target frame of the audio data is not the first of the first type signals. In at least one embodiment, the target frame of the audio data is each frame of audio data collected; the first determining module502may include: a sixth determining submodule, a seventh determining submodule and a eighth determining submodule. The sixth determining submodule is configured to determine whether at least third preset number of frames of the historical audio data have been collected before collecting the target frame of the audio data. The seventh determining submodule is configured to, in response to determining that the at least third preset number of frames of the historical audio data have been collected before collecting the target frame of the audio data, determine whether the target frame of the audio data is the first type signal according to the target frame of the audio data and the third preset number of frames of the historical audio data before collecting the target frame of the audio data. 
The eighth determining submodule is configured to, in response to determining that the at least third preset number of frames of the historical audio data have not been collected before collecting the target frame of the audio data, determine whether the target frame of the audio data is the first type signal according to the target frame of the audio data and the collected historical audio data. In at least one embodiment, the control module504is configured to control the device to execute control operations corresponding to the acoustic event type without waking up a smart voice assistant of the device. With regard to the apparatus in the above embodiments, the specific implementation in which each module performs the operation has been described in detail in the embodiments of the method and will not be elaborated here. The present disclosure provides a computer readable storage medium having computer program instructions stored thereon, in which the computer instructions are executed by a processor to implement the steps of the method for controlling a device according to the present disclosure. FIG.6is a block diagram illustrating an apparatus for controlling a device according to an example embodiment. For example, an apparatus800may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical equipment, a fitness equipment, a personal digital assistant, etc. Referring toFIG.6, the apparatus800may include one or more components: a processing component802, a memory804, a power supply component806, a multimedia component808, an audio component810, an input/output (I/O) interface812, a sensor component814, and a communication component816. The processing component802generally controls the overall operation of the apparatus800, such as the operations related to display, phone calls, data communications, camera operations and recording operations. The processing component802may include one or more processors820for executing instructions to complete all or part of steps of the method for controlling a device. In addition, the processing component802may include one or more modules for the convenience of interactions between the processing component802and other components. For example, the processing component802may include a multimedia module for the convenience of interactions between the multimedia component808and the processing component802. The memory804is configured to store various types of data to support the operation of the apparatus800. Examples of such data include the instructions for any applications or methods operating on apparatus800, contact data, phone book data, messages, pictures, videos, etc. The memory804may be implemented by any type of volatile or non-volatile storage devices or their combination, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk. The power supply component806may provide power supply for various components of the apparatus800. The power supply component806may include a power supply management system, one or more power supplies, and other components related to generating, managing and distributing power for the apparatus800. The multimedia component808includes a screen that provides an output interface between the apparatus800and the user. 
In some embodiments, a screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense a boundary of the touch or swipe action, but also sense a duration and a pressure related to the touch or swipe operation. In some embodiments, the multimedia component808includes a front camera and/or a rear camera. When the apparatus800is in operation mode, such as shooting mode or video mode, the front camera or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or an optical lens system with focal length and optical zoom capability. The audio component810is configured to output and/or input audio signals. For example, the audio component810includes a microphone (MIC). When the apparatus800is in operation mode, such as a call mode, a recording mode, and a speech recognition mode, the microphone is configured to receive external audio signals. The audio signals received may be further stored in the memory804or sent via the communication component816. In some embodiments, the audio component810further includes a speaker to output an audio signal. The I/O interface812provides an interface for the processing component802and the peripheral interface module, and the peripheral interface module may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to, a home button, a volume button, a start button and a lock button. The sensor component814includes one or more sensors, configured to provide various aspects of state evaluation for the apparatus800. For example, the sensor component814may detect an on/off state of the apparatus800and relative positioning of the component, such as a display and a keypad of the apparatus800. The sensor component814may further detect a location change of the apparatus800or a component of the apparatus800, a presence or absence of user contact with the apparatus800, an orientation or an acceleration/deceleration of the apparatus800, and a temperature change of the apparatus800. The sensor component814may include a proximity sensor, which is configured to detect the presence of objects nearby without any physical contact. The sensor component814may further include a light sensor such as a CMOS or a CCD image sensor for use in imaging applications. In some embodiments, the sensor component814may further include an acceleration transducer, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor. The communication component816is configured to facilitate wired or wireless communication between the apparatus800and other devices. The apparatus800may access wireless networks based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component816receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component816further includes a near field communication (NFC) module to facilitate short-range communication. 
For example, the NFC module may be implemented based on a radio frequency identification (RFID) technology, an infrared data association (IRDA) technology, an ultra-wideband (UWB) technology, a bluetooth (BT) technology and other technologies. In an embodiment, the apparatus800may be implemented by one or more application specific integrated circuits(ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronics components, which is configured to perform the method for controlling a device. In an embodiment, a non-transitory computer readable storage medium is further provided, such as the memory804including instructions. The instructions may be executed by the processor820of the apparatus800to complete the method for controlling a device. For example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc. In another embodiment, a computer program product is further provided. The computer program product includes computer programs that may be executed by a programmable apparatus, and the computer program possesses a code part configured to execute the above method for controlling a device when executed by the programmable apparatus. FIG.7is a block diagram illustrating an apparatus for controlling a device according to an example embodiment. For example, the apparatus1900may be provided as a server. Referring toFIG.7, the apparatus1900includes a processing component1922, which further includes one or more processors, and memory resources represented by the memory1932, which are configured to store instructions executed by the processing component1922, for example, an application. The applications stored in the memory1932may include one or more modules each of which corresponds to a set of instructions. In addition, the processing component1922is configured to execute instructions, to implement a method for controlling a device described above. The apparatus1900may further include one power supply component1926configured to execute power management of the apparatus1900, and one wired or wireless network interface1950configured to connect the apparatus1900to a network, and one input/output (I/O) interface1958. The apparatus1900may operate an operating system stored in the memory1932, for example, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc. After considering the specification and practicing the disclosure herein, those skilled in the art will easily think of other implementations. The present application is intended to cover any variations, usages, or adaptive changes of the present disclosure. These variations, usages, or adaptive changes follow the general principles of the present disclosure and include common knowledge or conventional technical means in the technical field not disclosed by the present disclosure. The description and the embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are given by the appended claims. It should be understood that the present invention is not limited to the precise structure described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present application is only limited by the appended claims. 
41,223
11862159
DETAILED DESCRIPTION Automatic speech recognition (ASR) is a field of computer science, artificial intelligence, and linguistics concerned with transforming audio data associated with speech into text data representative of that speech. Similarly, natural language understanding (NLU) is a field of computer science, artificial intelligence, and linguistics concerned with enabling computers to derive meaning from text data corresponding to natural language. ASR and NLU are often used together as part of a speech-processing system. Text-to-speech (TTS) is a field of computer science concerning transforming text data into audio data that resembles human speech. Certain systems may be configured to perform actions responsive to user inputs. For example, for the user input of “Alexa, play that one song by Toto,” a system may output a song called “Africa” performed by a band named Toto. For further example, for the user input of “Alexa, what is the weather,” a system may output synthesized speech representing weather information for a geographic location of the user. In a further example, for the user input of “Alexa, make me a restaurant reservation,” a system may book a reservation with an online reservation system of the user's favorite restaurant. In a still further example, the user input may include “Alexa, call mom” where the system may identify a contact for “mom” in the user's contact list, identify a device corresponding to the contact for “mom,” and initiate a call between the devices. In various embodiments of the present disclosure, an autonomously mobile device may be used to communicate, using audio and/or video communication, with a user of the device. The user of the device may request to initiate the communication or the device may receive, from another device, a request to initiate the communication. One or more server(s) may receive the request(s) and determine that the device is autonomously mobile. As the term is used herein, autonomously mobile refers to the device moving a portion of the device, such as a screen, camera, or microphone or movement of the device itself through an environment by receiving input data from one or more sensors and determining the movement based thereon. In some embodiments, the autonomously mobile device receives input data from one or more sensors, such as a camera, microphone, and/or other input sensor; presence of a user proximate the device is determined based thereon. The determination of the presence of the user may be made by the device; in some embodiments, the device transmits the input data to one or more remote devices, such as servers, which determine presence of the user. The server(s) may determine the identity of the user based on the input data by comparing the input data to other data associated with the user, such as vocal and/or facial characteristics stored in a user profile associated with the user. Based on determining the presence and/or identity of the user, the autonomously mobile device may determine that the user has moved in the environment and maintain the user in a field of observation using one or more input and/or output devices. For example, the autonomously mobile device may move a camera to track movement of the user or may configure a microphone array to reflect movement of the user. 
The autonomously mobile device may distinguish between the user and another person proximate the device; the autonomously mobile device may continue to track the user even if, for example, the user has stopped speaking and/or the other person has begun speaking. FIG.1illustrates a system configured to establish communications with an autonomously mobile device in accordance with embodiments of the present disclosure. Although the figures and discussion herein illustrate certain operational steps of the system in a particular order, the steps described may be performed in a different order (as well as certain steps removed or added) without departing from the intent of the disclosure. As illustrated inFIG.1, the system may include one or more autonomously mobile voice-controlled devices110in an environment (such as home or office) of a user5and one or more speech/command processing server(s)120connected to the device110using one or more networks199. Further, one or more communication servers125may also be connected to the server(s)120and/or device110. An autonomously mobile device110may be used to facilitate communication between a first user and a second user As explained in greater detail below, when referring to the autonomously mobile device110in the context of other devices, the autonomously mobile device110may be referred to as a “device110a,” a second voice-controlled device such as a smartphone may be referred to as a “device110b,” and a third voice-controlled device such as a smart speaker may be referred to as a “device110c.” In various embodiments, the autonomously mobile device110is capable of movement using one or motors powering one or more wheels, treads, robotic limbs, or similar actuators. The device110may further include one or more display screens for displaying information to a user5and/or receiving touch input from a user. The device110may further include a microphone or microphone array (which may include two or more microphones) and one or more loudspeakers. The microphone/microphone array and loudspeakers may be used to capture audio11(e.g., an utterance from the user5); the utterance may be, for example, a command or request. The device110may be used to output audio to the user5, such as audio related to receipt of the command or audio related to a response to the request. The device110may further include one or more sensors; these sensors may include, but are not limited to, an accelerometer, a gyroscope, a magnetic field sensor, an orientation sensor, a weight sensor, a temperature sensor, and/or a location sensor (e.g., a global-positioning system (GPS) sensor or a Wi-Fi round-trip time sensor). The device may further include a computer memory, a computer processor, and one or more network interfaces. The device110may be, in some embodiments, a robotic assistant that may move about a room or rooms to provide a user with requested information or services. In other embodiments, the device110may be a smart speaker, smart phone, or other such device. The disclosure is not, however, limited to only these devices or components, and the autonomously mobile device110may include additional components without departing from the disclosure. FIG.1illustrates a system and method for establishing a communication connection between a first device of a first user and a second device of a second user. 
The server(s)120/125receive (130), from a first device associated with a first user profile, request data corresponding to a request to establish a communication connection with a second device corresponding to a second user profile. The server(s)120/125may determine that the user profile is associated with the second user and that a second device is associated with the second user profile. The server(s)120/125determine (132) that the second device is an autonomously mobile device. Based on determining that the second device is the autonomously mobile device, the server(s)120/125send (134), to the second device, a second command to search for the second user. The server(s) may120/125receive, from the second device, further data indicating that the second user is proximate the second device. The server(s) determine (138) that the first data corresponds to presence of the user and send (140) a command to maintain the user in a field of observation. The server(s)120/125then establish (142) the communication connection between the first device and the second device without receiving the authorization from the second device. The system may operate using various components as illustrated inFIG.2. The various components may be located on same or different physical devices. Communication between various components may occur directly or across a network(s)199. An audio capture component(s), such as a microphone or array of microphones disposed on or in the device110a, captures audio11. The device110aprocesses audio data, representing the audio11, to determine whether speech is detected. The device110amay use various techniques to determine whether audio data includes speech. In some examples, the device110amay apply voice activity detection (VAD) techniques. Such techniques may determine whether speech is present in audio data based on various quantitative aspects of the audio data, such as the spectral slope between one or more frames of the audio data; the energy levels of the audio data in one or more spectral bands; the signal-to-noise ratios of the audio data in one or more spectral bands; or other quantitative aspects. In other examples, the device110amay implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees. In still other examples, the device110amay apply Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM) techniques to compare the audio data to one or more acoustic models in storage, which acoustic models may include models corresponding to speech, noise (e.g., environmental noise or background noise), or silence. Still other techniques may be used to determine whether speech is present in audio data. Once speech is detected in audio data representing the audio11, the device110amay use a wakeword detection component220to perform wakeword detection to determine when a user intends to speak an input to the device110a. An example wakeword is “Alexa.” Wakeword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, the audio data representing the audio11is analyzed to determine if specific characteristics of the audio data match preconfigured acoustic waveforms, audio signatures, or other data to determine if the audio data “matches” stored audio data corresponding to a wakeword. 
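Returning to the voice activity detection techniques listed above, the simplest of them, a per-frame energy check, can be sketched in a few lines of numpy; the frame length and threshold are illustrative, and practical systems typically combine several of the listed features or use a trained classifier.

import numpy as np

def voice_activity(samples, frame_len=320, threshold=1e-3):
    samples = np.asarray(samples, dtype=float)
    n_frames = max(1, len(samples) // frame_len)
    frames = np.array_split(samples, n_frames)
    # a frame is flagged as speech when its mean energy exceeds the threshold
    return [float(np.mean(f ** 2)) > threshold for f in frames if len(f)]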
Thus, the wakeword detection component220may compare audio data to stored models or data to detect a wakeword. One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode audio signals, with wakeword searching being conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources. Another approach for wakeword detection builds HMMs for each wakeword and non-wakeword speech signals, respectively. The non-wakeword speech includes other spoken words, background noise, etc. There can be one or more HMMs built to model the non-wakeword speech characteristics, which are named filler models. Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on wakeword presence. This approach can be extended to include discriminative information by incorporating a hybrid DNN-HMM decoding framework. In another example, the wakeword detection component220may be built on deep neural network (DNN)/recurrent neural network (RNN) structures directly, without HMMs being involved. Such an architecture may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for DNN, or using RNN. Follow-on posterior threshold tuning or smoothing is applied for decision making. Other techniques for wakeword detection, such as those known in the art, may also be used. Once the wakeword is detected, the device110amay "wake" and begin transmitting audio data211, representing the audio11, to the server(s)120. The audio data211may include data corresponding to the wakeword, or the portion of the audio data211corresponding to the wakeword may be removed by the device110prior to sending the audio data211to the server(s)120. Upon receipt by the server(s)120, the audio data211may be sent to an orchestrator component230. The orchestrator component230may include memory and logic that enables the orchestrator component230to transmit various pieces and forms of data to various components of the system, as well as perform other operations as described herein. The orchestrator component230sends the audio data211to an ASR component250. The ASR component250transcribes the audio data211into text data. The text data output by the ASR component250represents one or more than one (e.g., in the form of an N-best list) ASR hypotheses representing speech represented in the audio data211. The ASR component250interprets the speech in the audio data211based on a similarity between the audio data211and pre-established language models. For example, the ASR component250may compare the audio data211with models for sounds (e.g., subword units, such as phonemes, etc.) and sequences of sounds to identify words that match the sequence of sounds of the speech represented in the audio data211. The ASR component250sends the text data generated thereby to an NLU component260, for example via the orchestrator component230. The text data output by the ASR component250may include a top scoring ASR hypothesis or may include an N-best list including multiple ASR hypotheses. An N-best list may additionally include a respective score associated with each ASR hypothesis represented therein. Each score may indicate a confidence of ASR processing performed to generate the ASR hypothesis with which the score is associated. The device110bmay send text data213to the server(s)120. 
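Returning to the posterior threshold tuning or smoothing step mentioned above, the following sketch assumes per-frame wakeword posteriors already produced by a DNN or RNN; the smoothing window and threshold are illustrative values.

import numpy as np

def wakeword_detected(posteriors, window=30, threshold=0.8):
    p = np.asarray(posteriors, dtype=float)
    if len(p) < window:
        return False
    smoothed = np.convolve(p, np.ones(window) / window, mode="valid")  # moving-average smoothing
    return bool(smoothed.max() >= threshold)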
Upon receipt by the server(s)120, the text data213may be sent to the orchestrator component230, which may send the text data213to the NLU component260. The NLU component260attempts to make a semantic interpretation of the phrase(s) or statement(s) represented in the text data input therein. That is, the NLU component260determines one or more meanings associated with the phrase(s) or statement(s) represented in the text data based on words represented in the text data. The NLU component260determines an intent representing an action that a user desires be performed as well as pieces of the text data that allow a device (e.g., the device110, the server(s)120, a skill component290, a skill server(s)225, etc.) to execute the intent. For example, if the text data corresponds to "play a song by Toto," the NLU component260may determine an intent that the system output music and may identify "Toto" as an artist. For further example, if the text data corresponds to "what is the weather," the NLU component260may determine an intent that the system output weather information associated with a geographic location of the device110b. In another example, if the text data corresponds to "turn off the lights," the NLU component260may determine an intent that the system turn off lights associated with the device110or the user5. The NLU component260may send NLU results data (which may include tagged text data, indicators of intent, etc.) to the orchestrator component230. The orchestrator component230may send the NLU results data to a skill component(s)290configured to perform an action at least partially responsive to the user input. The NLU results data may include a single NLU hypothesis, or may include an N-best list of NLU hypotheses. A "skill component" may be software running on the server(s)120that is akin to a software application running on a traditional computing device. That is, a skill component290may enable the server(s)120to execute specific functionality in order to provide data or produce some other requested output. The server(s)120may be configured with more than one skill component290. For example, a weather service skill component may enable the server(s)120to provide weather information, a car service skill component may enable the server(s)120to book a trip with respect to a taxi or ride sharing service, a restaurant skill component may enable the server(s)120to order a pizza with respect to the restaurant's online ordering system, etc. A skill component290may operate in conjunction between the server(s)120and other devices, such as the device110, in order to complete certain functions. Inputs to a skill component290may come from speech processing interactions or through other interactions or input sources. A skill component290may include hardware, software, firmware, or the like that may be dedicated to a particular skill component290or shared among different skill components290. In addition or alternatively to being implemented by the server(s)120, a skill component290may be implemented at least partially by a skill server(s)225. Such may enable a skill server(s)225to execute specific functionality in order to provide data or perform some other action requested by a user.
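A toy sketch of how NLU results data (an intent plus tagged slots) might be routed to a skill component, in the spirit of the examples above; the intent names, slot names, and registry structure are invented for illustration and are not the actual system's interfaces.

```python
def weather_skill(slots: dict) -> str:
    return "It is sunny today."   # placeholder skill behavior

def music_skill(slots: dict) -> str:
    return f"Playing music by {slots.get('artist', 'an unknown artist')}."

# Registry mapping intents to skill handlers (illustrative only).
SKILL_REGISTRY = {
    "GetWeather": weather_skill,
    "PlayMusic": music_skill,
}

def route_nlu_result(nlu_result: dict) -> str:
    """Dispatch a single NLU hypothesis to the skill registered for its intent."""
    handler = SKILL_REGISTRY.get(nlu_result["intent"])
    if handler is None:
        return "Sorry, I can't handle that yet."
    return handler(nlu_result.get("slots", {}))

# Example: route_nlu_result({"intent": "PlayMusic", "slots": {"artist": "Toto"}})
# -> "Playing music by Toto."
```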
Types of skills include home automation skills (e.g., skills that enable a user to control home devices such as lights, door locks, cameras, thermostats, etc.), entertainment device skills (e.g., skills that enable a user to control entertainment devices such as smart televisions), video skills, flash briefing skills, as well as custom skills that are not associated with any pre-configured type of skill. The server(s)120may be configured with a single skill component290dedicated to interacting with more than one skill server225. Unless expressly stated otherwise, reference to a skill, skill device, or skill component may include a skill component290operated by the server(s)120and/or the skill server(s)225. Moreover, the functionality described herein as a skill may be referred to using many different terms, such as an action, bot, app, or the like. The server(s)120may include a TTS component280that generates audio data (e.g., synthesized speech) from text data using one or more different methods. Text data input to the TTS component280may come from a skill component290, the orchestrator component230, or another component of the system. In one method of speech synthesis called unit selection, the TTS component280matches text data against a database of recorded speech. The TTS component280selects matching units of recorded speech and concatenates the units together to form audio data. In another method of synthesis called parametric synthesis, the TTS component280varies parameters such as frequency, volume, and noise to create audio data including an artificial speech waveform. Parametric synthesis uses a computerized voice generator, sometimes called a vocoder. A user input may be received as part of a dialog between a user and the system. A dialog may correspond to various user inputs and system outputs. When the server(s)120receives a user input, the server(s)120may associate the data (e.g., audio data or text data) representing the user input with a session identifier. The session identifier may be associated with various speech processing data (e.g., an intent indicator(s), a category of skill to be invoked in response to the user input, etc.). When the system invokes the skill, the system may send the session identifier to the skill in addition to NLU results data. If the skill outputs data for presentment to the user, the skill may associate the data with the session identifier. The foregoing is illustrative and, thus, one skilled in the art will appreciate that a session identifier may be used to track data transmitted between various components of the system. A session identifier may be closed (e.g., a dialog between a user and the system may end) after a skill performs a requested action (e.g., after the skill causes content to be output to the user). The server(s)120may include profile storage270. The profile storage270may include a variety of information related to individual users, groups of users, devices, etc. that interact with the system. A "profile" refers to a set of data associated with a user, device, etc. The data of a profile may include preferences specific to the user, device, etc.; input and output capabilities of the device; internet connectivity information; user bibliographic information; subscription information; as well as other information. The profile storage270may include one or more user profiles, with each user profile being associated with a different user identifier. Each user profile may include various user identifying information.
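The session-identifier bookkeeping described above might look roughly like the following sketch; the class and method names are assumptions, and storage is simplified to an in-memory dictionary.

```python
import uuid

class DialogSessionTracker:
    """Associate user inputs and skill outputs with a session identifier."""

    def __init__(self):
        self._sessions = {}  # session_id -> list of (component, payload) records

    def open_session(self) -> str:
        session_id = str(uuid.uuid4())
        self._sessions[session_id] = []
        return session_id

    def record(self, session_id: str, component: str, payload: dict) -> None:
        """Track data a component (ASR, NLU, skill, etc.) produced for this dialog."""
        self._sessions[session_id].append((component, payload))

    def close_session(self, session_id: str) -> list:
        # Closing a session corresponds to the end of a dialog, e.g. after a
        # skill has caused the requested content to be output to the user.
        return self._sessions.pop(session_id, [])
```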
Each user profile may also include preferences of the user and/or one or more device identifiers, representing one or more devices of the user. The profile storage270may include one or more group profiles. Each group profile may be associated with a different group profile identifier. A group profile may be specific to a group of users. That is, a group profile may be associated with two or more individual user profiles. For example, a group profile may be a household profile that is associated with user profiles associated with multiple users of a single household. A group profile may include preferences shared by all the user profiles associated therewith. Each user profile associated with a group profile may additionally include preferences specific to the user associated therewith. That is, each user profile may include preferences unique from one or more other user profiles associated with the same group profile. A user profile may be a stand-alone profile or may be associated with a group profile. A group profile may include one or more device identifiers representing one or more devices associated with the group profile. The user profile may further include information regarding which other users, devices, and/or groups a given user associated with the user profile has granted authorization to establish a communication connection with one or more devices associated with the user profile—without necessarily granting said authorization again at the time of establishment of the communication connection. For example, a user profile may grant permission to a spouse to establish, using the spouse's smartphone, a communication connection with an autonomously mobile device associated with the user profile without again granting permission at the time of establishment of the communication connection. In another example, a user profile may grant similar permission to a doctor, caregiver, therapist, or similar medical or care professional. Similarly, a parent may configure a child's user profile to authorize communication requests from the parent to a device associated with the child. The profile storage270may include one or more device profiles. Each device profile may be associated with a different device identifier, and each device profile may include various device-identification information. For example, the device-identification information may include a device name, device type, device serial number, and/or device address. In some embodiments, the device-identification information indicates whether a given device includes a voice interface, a touchscreen interface, and/or a keyboard/mouse interface. The device-identification information may also indicate whether the device is autonomously mobile and/or capable of moving a camera and video screen. Thus, as described in greater detail below, if a first device requests establishment of a communication connection with a second device, a server(s)120may determine, using a device profile associated with the second device, that the second device is an autonomously mobile device and may enable functionality specific to the autonomously mobile device, such as searching an environment for a user of the autonomously mobile device. Each device profile may also include one or more user identifiers, representing one or more users associated with the device. For example, a household device's profile may include the user identifiers of users of the household. 
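A small sketch of the device-profile check described above, in which the callee's device profile determines whether behavior specific to an autonomously mobile device (such as searching the environment for its user) is enabled before the communication connection is established; the field names are illustrative rather than the actual profile schema.

```python
def plan_callee_handling(device_profile: dict) -> list:
    """Return the ordered steps to run for the callee device, based on its profile."""
    steps = []
    if device_profile.get("autonomously_mobile", False):
        # The device can move on its own, so it can be told to search for its user.
        steps.append("send_search_command")
    if device_profile.get("movable_camera", False):
        steps.append("maintain_user_in_field_of_observation")
    steps.append("establish_communication_connection")
    return steps

# Example:
# plan_callee_handling({"device_type": "robot", "autonomously_mobile": True,
#                       "movable_camera": True})
# -> ["send_search_command", "maintain_user_in_field_of_observation",
#     "establish_communication_connection"]
```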
The system may be configured to incorporate user permissions and may only perform activities disclosed herein if approved by a user. As such, the systems, devices, components, and techniques described herein would be typically configured to restrict processing where appropriate and only process user information in a manner that ensures compliance with all appropriate laws, regulations, standards, and the like. The system and techniques can be implemented on a geographic basis to ensure compliance with laws in various jurisdictions and entities in which the components of the system and/or user are located. The server(s)120may include a user locator component295that recognizes one or more users associated with data input to the system. The user locator component295may take as input the audio data211. The user locator component295may perform user recognition by comparing audio characteristics in the audio data211to stored audio characteristics of users. The user locator component295may also or alternatively perform user recognition by comparing biometric data (e.g., fingerprint data, iris data, etc.), received by the system in correlation with the present user input, to stored biometric data of users. The user locator component295may also or alternatively perform user recognition by comparing image data (e.g., including a representation of at least a feature of a user), received by the system in correlation with the present user input, with stored image data including representations of features of different users. The user locator component295may perform additional user recognition processes, including those known in the art. For a particular user input, the user locator component295may perform processing with respect to stored data of users associated with the device that captured the user input. The user locator component295determines whether user input originated from a particular user. For example, the user locator component295may generate a first value representing a likelihood that the user input originated from a first user, a second value representing a likelihood that the user input originated from a second user, etc. The user locator component295may also determine an overall confidence regarding the accuracy of user recognition operations. The user locator component295may output a single user identifier corresponding to the most likely user that originated the user input. Alternatively, the user locator component295may output an N-best list of user identifiers with respective values representing likelihoods of respective users originating the user input. The output of the user locator component295may be used to inform NLU processing, processing performed by a skill component290, as well as processing performed by other components of the system. The user locator component295may determine the location of one or more users using a variety of data. As illustrated inFIG.3, the user locator component295may include one or more components including a vision component308, an audio component310, a biometric component312, a radio frequency component314, a machine learning component316, and a location confidence component318. In some instances, the user locator component295may monitor data and determinations from one or more components to determine an identity of a user and/or a location of a user in an environment302. 
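The per-user likelihood values and N-best output described above reduce to a simple ranking step, sketched below; how the likelihoods themselves are computed (from audio, biometric, or image comparisons) is outside the sketch, and the function names and threshold are assumptions.

```python
from typing import Optional

def n_best_users(likelihoods: dict, n: int = 3) -> list:
    """Rank candidate user identifiers by likelihood, highest first."""
    return sorted(likelihoods.items(), key=lambda item: item[1], reverse=True)[:n]

def best_user(likelihoods: dict, minimum: float = 0.5) -> Optional[str]:
    """Return the single most likely user, or None if no candidate is confident enough."""
    ranked = n_best_users(likelihoods, n=1)
    if ranked and ranked[0][1] >= minimum:
        return ranked[0][0]
    return None

# Example: n_best_users({"alice": 0.82, "bob": 0.11, "carol": 0.07})
# -> [("alice", 0.82), ("bob", 0.11), ("carol", 0.07)]
```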
The user locator component295may output user location data395which may include a user identifier matched with location information as to where the system believes the particular user of the user identifier is located. The location information may include geographic information (such as an address, city, state, country, geo-position (e.g., GPS coordinates), velocity, latitude, longitude, altitude, or the like). The location information may also include a device identifier, zone identifier or environment identifier corresponding to a device/zone/environment the particular user is nearby/within. Output of the user locator component295may be used to inform natural language component260processes as well as processing performed by skills290, routing of output data, permission access to further information, etc. The details of the vision component308, the audio component310, the biometric component312, the radio frequency component314, the machine learning component316, and the location confidence component318are provided below following a description of the environment302. In some instances, the environment302may represent a home or office associated with a user320"Alice" and/or a user322"Bob." In some instances, the user320"Alice" may be associated with a computing device324, such as a smartphone. In some instances, the user322"Bob" may be associated with a radio frequency device326, such as a wearable device (e.g., a smartwatch) or an identifier beacon. The environment302may include, but is not limited to, a number of devices that may be used to locate a user. For example, within zone301(1), the environment302may include an imaging device328, an appliance330, a smart speaker110c, and a computing device334. Within zone301(2), the environment302may include a microphone336and a motion sensor338. Within zone301(3), the environment may include an imaging device340, a television342, a speaker344, a set-top box346, a smartphone110b, a television350, and an access point352. Within zone301(4), the environment302may include an appliance354, an imaging device356, a speaker358, a device110a, and a microphone360. Further, in some instances, the user-locator component295may have information regarding the layout of the environment302, including details regarding which devices are in which zones, the relationship between zones (e.g., which rooms are adjacent), and/or the placement of individual devices within each zone. In some instances, the user locator component295can leverage knowledge of the relationships between zones and the devices within each zone to increase a confidence level of user identity and location as a user moves about the environment302. For example, in a case in which the user322is in zone301(3) and subsequently moves beyond a field of view of the imaging device340into the zone301(2), the user locator component295may infer a location and/or identity of the user to determine with a high confidence level (in combination with data from one or more other devices) that any motion detected by the motion sensor338corresponds to movement by the user322. In some instances, the vision component308may receive data from one or more sensors capable of providing images (e.g., such as the imaging devices328,340,356and the computing devices324and334) or sensors indicating motion (e.g., such as the motion sensor338).
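The zone-adjacency inference described above (motion in a zone adjacent to where a user was last seen is likely that user) can be sketched as a small heuristic over a zone graph; the adjacency map and confidence values below are invented for illustration.

```python
# Illustrative zone layout: which zones are directly reachable from which.
ZONE_ADJACENCY = {
    "zone1": {"zone2"},
    "zone2": {"zone1", "zone3"},
    "zone3": {"zone2", "zone4"},
    "zone4": {"zone3"},
}

def motion_attribution_confidence(last_seen_zone: str, motion_zone: str) -> float:
    """Heuristic confidence that motion in `motion_zone` was caused by the user
    last observed in `last_seen_zone`."""
    if motion_zone == last_seen_zone:
        return 0.9
    if motion_zone in ZONE_ADJACENCY.get(last_seen_zone, set()):
        return 0.7   # the user plausibly walked into an adjacent zone
    return 0.2       # non-adjacent zones make the inference much weaker

# Example: motion_attribution_confidence("zone3", "zone2") -> 0.7
```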
In some instances, the vision component308can perform facial recognition or image analysis to determine an identity of a user and to associate that identity with a user profile associated with the user. In some instances, when a user (e.g., the user322“Bob”) is facing the imaging device340, the vision component308may perform facial recognition and identify the user322with a high degree of confidence. In some instances, the vision component308may have a low degree of confidence of an identity of a user, and the user locator component295may utilize determinations from additional components to determine an identity and/or location of a user. In some instances, the vision component308can be used in conjunction with other components to determine when a user is moving to a new location within the environment302. In some instances, the vision component308can receive data from one or more imaging devices to determine a layout of a zone or room, and/or to determine which devices are in a zone and where they are located. In some instances, data from the vision component308may be used with data from the audio component310to identify what face appears to be speaking at the same time audio is captured by a particular device the user is facing for purposes of identifying a user who spoke an utterance. In some instances, the environment302may include biometric sensors that may transmit data to the biometric component312. For example, the biometric component312may receive data corresponding to fingerprints, iris or retina scans, thermal scans, weights of users, a size of a user, pressure (e.g., within floor sensors), etc., and may determine a biometric profile corresponding to a user. In some instances, the biometric component312may distinguish between a user and sound from a television, for example. Thus, the biometric component312may incorporate biometric information into a confidence level for determining an identity and/or location of a user. In some instances, the biometric information from the biometric component312can be associated with a specific user profile such that the biometric information uniquely identifies a user profile of a user. In some instances, the radio frequency (RF) component314may use RF localization to track devices that a user may carry or wear. For example, as discussed above, the user320(and a user profile associated with the user) may be associated with a computing device324. The computing device324may emit RF signals (e.g., Wi-Fi, Bluetooth®, etc.), which are illustrated as signals362and364. As illustrated, the appliance354may detect the signal362and the access point352may detect the signal364. In some instances, the access point352and the appliance354may indicate to the RF component314the strength of the signals364and362(e.g., as a received signal strength indication (RSSI)), respectively. Thus, the RF component314may compare the RSSI for various signals and for various appliances and may determine an identity and/or location of a user (with an associated confidence level). In some instances, the RF component314may determine that a received RF signal is associated with a mobile device that is associated with a particular user. In some instances, a device (e.g., the access point352) may be configured with multiple antennas to determine a location of a user relative to the device using beamforming or spatial diversity techniques. In such a case, the RF component314may receive an indication of the direction of the user relative to an individual device. 
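The RSSI comparison described above can be sketched as picking the user whose device reports the strongest signal at a given sensing device, with confidence scaled by the margin over the runner-up; the margin scaling below is an assumption made for illustration.

```python
def more_likely_user(rssi_by_user: dict, margin_db: float = 6.0):
    """Pick the user whose carried device shows the strongest RSSI at a sensing
    device, scaling confidence by how far ahead of the runner-up it is."""
    ranked = sorted(rssi_by_user.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0], 0.9
    (best_user, best_rssi), (_, second_rssi) = ranked[0], ranked[1]
    gap_db = best_rssi - second_rssi                 # RSSI difference in dB
    confidence = 0.5 + 0.45 * min(gap_db / margin_db, 1.0)
    return best_user, confidence

# Example: signal strengths (dBm) reported for two users' devices at one appliance.
# more_likely_user({"alice": -48.0, "bob": -63.0}) -> ("alice", 0.95)
```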
As illustrated, the appliance330may receive a signal366from the RF device326associated with the user and a user profile, while the access point352may receive a signal368. Further, the appliance354can receive a signal370from the RF device326. In an example where there is some uncertainty about an identity of the users in zones301(3) and301(4), the RF component314may determine that the RSSI of the signals362,364,366,368, and/or370increases or decreases a confidence level of an identity and/or location of the users, such as the users320and322. For example, if an RSSI of the signal362is higher than the RSSI of the signal370, the RF component314may determine that it is more likely that a user in the zone301(4) is the user320than the user322. In some instances, a confidence level of the determination may depend on a relative difference of the RSSIs, for example. In some instances, a voice-controlled device110, or another device proximate to the voice-controlled device110, may include some RF or other detection processing capabilities so that a user who speaks an utterance may scan, tap, or otherwise acknowledge his/her personal device (such as a phone) to a sensing device in the environment302. In this manner the user may "register" with the system for purposes of the system determining who spoke a particular utterance. Such a registration may occur prior to, during, or after speaking of an utterance. In some instances, the machine-learning component316may track the behavior of various users in the environment as a factor in determining a confidence level of the identity and/or location of the user. For example, the user320may adhere to a regular schedule, such that the user320is outside the environment302during the day (e.g., at work or at school). In this example, the machine-learning component316may factor in past behavior and/or trends in determining the identity and/or location. Thus, the machine-learning component316may use historical data and/or usage patterns over time to increase or decrease a confidence level of an identity and/or location of a user. In some instances, the location-confidence component318receives determinations from the various components308,310,312,314, and316, and may determine a final confidence level associated with the identity and/or location of a user. In some embodiments, the confidence level may determine whether an action is performed. For example, if a user request includes a request to unlock a door, a confidence level may need to be above a threshold that may be higher than a confidence level needed to perform a user request associated with playing a playlist or resuming a location in an audiobook, for example. The confidence level or other score data may be included in user location data395. In some instances, the audio component310may receive data from one or more sensors capable of providing an audio signal (e.g., the devices110a-c, the microphones336,360, the computing devices324,334, and/or the set-top box346) to facilitate locating a user. In some instances, the audio component310may perform audio recognition on an audio signal to determine an identity of the user and an associated user profile. Further, in some instances, the imaging devices328,340,356may provide an audio signal to the audio component310. In some instances, the audio component310is configured to receive an audio signal from one or more devices and may determine a sound level or volume of the source of the audio.
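Comparing the level at which different devices capture the same sound gives a coarse location cue, as the surrounding passages describe. A minimal sketch, with invented device identifiers and a simple loudest-wins rule:

```python
import numpy as np

def rms_level(samples: np.ndarray) -> float:
    """Root-mean-square level of an audio buffer (float samples in [-1, 1])."""
    return float(np.sqrt(np.mean(samples ** 2)))

def loudest_capture(device_levels: dict) -> str:
    """Return the device that captured the source at the highest level,
    i.e. the device the sound source (and likely the user) is nearest to."""
    return max(device_levels, key=device_levels.get)

# Example: loudest_capture({"kitchen_speaker": 0.12, "living_room_tv": 0.04})
# -> "kitchen_speaker"
```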
In some instances, if multiple sources of audio are available, the audio component310may determine that two audio signals correspond to the same source of sound, and may compare the relative amplitudes or volumes of the audio signal to determine a location of the source of sound. In some instances, individual devices may include multiple microphone and may determine a direction of a user with respect to an individual device. In some instances, aspects of the server(s)120may be configured at a computing device (e.g., a local server) within the environment302. Thus, in some instances, the audio component310operating on a computing device in the environment302may analyze all sound within the environment302(e.g., without requiring a wake word) to facilitate locating a user. In some instances, the audio component310may perform voice recognition to determine an identity of a user. The audio component310may also perform user identification based on information relating to a spoken utterance input into the system for speech processing. For example, the audio component310may take as input the audio data211and/or output data from the speech recognition component250. The audio component310may determine scores indicating whether the command originated from particular users. For example, a first score may indicate a likelihood that the command originated from a first user, a second score may indicate a likelihood that the command originated from a second user, etc. The audio component310may perform user recognition by comparing speech characteristics in the audio data211to stored speech characteristics of users. FIG.4illustrates the audio component310of the user-locator component295performing user recognition using audio data, for example input audio data211corresponding to an input utterance. The ASR component250performs ASR on the audio data211as described herein. ASR output (i.e., text data) is then processed by the NLU component260as described herein. The ASR component250may also output ASR confidence data402, which is passed to the user-locator component295. The audio component310performs user recognition using various data including the audio data211, training data404corresponding to sample audio data corresponding to known users, the ASR confidence data402, and secondary data406. The audio component310may output user recognition confidence data408that reflects a certain confidence that the input utterance was spoken by one or more particular users. The user recognition confidence data408may include an indicator of a verified user (such as a user ID corresponding to the speaker of the utterance) along with a confidence value corresponding to the user ID, such as a numeric value or binned value as discussed below. The user-recognition confidence data408may be used by various components, including other components of the user locator component295, to recognize and locate a user, for example nearby to a particular device, in a particular environment, or the like for purposes of performing other tasks as described herein. The training data404may be stored in a user-recognition data storage410. The user-recognition data storage410may be stored by the server(s)120or may be a separate device. Further, the user-recognition data storage410may be part of a user profile in the profile storage270. The user-recognition data storage410may be a cloud-based storage. The training data404stored in the user-recognition data storage410may be stored as waveforms and/or corresponding features/vectors. 
The training data404may correspond to data from various audio samples, each audio sample associated with a known user and/or user identity. For example, each user known to the system may be associated with some set of training data404for the known user. The audio component310may then use the training data404to compare against incoming audio data211to determine the identity of a user speaking an utterance. The training data404stored in the user-recognition data storage410may thus be associated with multiple users of multiple devices. Thus, the training data404stored in the storage410may be associated with both a user who spoke a respective utterance as well as the voice-controlled device110that captured the respective utterance. To perform user recognition, the audio component310may determine the voice-controlled device110from which the audio data211originated. For example, the audio data211may include a tag indicating the voice-controlled device110. Either the voice-controlled device110or the server(s)120may tag the audio data211as such. The user-locator component295may send a signal to the user-recognition data storage410; the signal may request the training data404associated with known users of the voice-controlled device110from which the audio data211originated. This request may include accessing a user profile(s) associated with the voice-controlled device110and then inputting only training data404associated with users corresponding to the user profile(s) of the device110. This inputting limits the universe of possible training data that the audio component310considers at runtime when recognizing a user, and thus decreases the amount of time needed to perform user recognition by decreasing the amount of training data404needed to be processed. Alternatively, the user-locator component295may access all (or some other subset of) training data404available to the system. Alternatively, the audio component310may access a subset of training data404of users potentially within the environment of the voice-controlled device110from which the audio data211originated, as may otherwise have been determined by the user locator component295. If the audio component310receives training data404as an audio waveform, the audio component310may determine features/vectors of the waveform(s) or otherwise convert the waveform into a data format that can be used by the audio component310to actually perform the user recognition. The audio component310may then identify the user that spoke the utterance in the audio data211by comparing features/vectors of the audio data211to training features/vectors (either received from the storage410or determined from training data404received from the storage410). The audio component310may include a scoring component412that determines a score indicating whether an input utterance (represented by audio data211) was spoken by a particular user (represented by training data404). The audio component310may also include a confidence component414which determines an overall confidence as the accuracy of the user recognition operations (such as those of the scoring component412) and/or an individual confidence for each user potentially identified by the scoring component412. The output from the scoring component412may include scores for all users with respect to which user recognition was performed (e.g., all users associated with the voice-controlled device110). 
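A minimal sketch of the scoring step described above: the candidate set is first restricted to users associated with the capturing device's profile, and each remaining user's stored feature vector is compared against the input feature vector. Cosine similarity is used here purely as a stand-in for PLDA or other scoring, and all names are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_candidate_users(input_vector: np.ndarray,
                          training_vectors: dict,
                          device_user_ids: set) -> dict:
    """Score only the users associated with the capturing device's profile.

    training_vectors maps user identifiers to stored voice feature vectors;
    restricting to device_user_ids shrinks the search space, which is what
    reduces runtime in the description above.
    """
    return {
        user_id: cosine_similarity(input_vector, vector)
        for user_id, vector in training_vectors.items()
        if user_id in device_user_ids
    }
```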
For example, the output may include a first score for a first user, a second score for a second user, and a third score for a third user, etc. Although illustrated as two separate components, the scoring component412and confidence component414may be combined into a single component or may be separated into more than two components. The scoring component412and confidence component414may implement one or more trained machine learning models (such as neural networks, classifiers, etc.) as known in the art. For example, the scoring component412may use probabilistic linear discriminant analysis (PLDA) techniques. PLDA scoring determines how likely it is that an input audio data feature vector corresponds to a particular training data feature vector for a particular user. The PLDA scoring may generate similarity scores for each training feature vector considered and may output the list of scores and users and/or the user ID of the speaker whose training data feature vector most closely corresponds to the input audio data feature vector. The scoring component412may also use other techniques such as GMMs, generative Bayesian models, or the like, to determine similarity scores. The confidence component414may input various data including information about the ASR confidence402, utterance length (e.g., number of frames or time of the utterance), audio condition/quality data (such as signal-to-interference data or other metric data), fingerprint data, image data, or other factors to consider how confident the user locator component295is with regard to the scores linking users to the input utterance. The confidence component414may also consider the similarity scores and user IDs output by the scoring component412. Thus, the confidence component414may determine that a lower ASR confidence represented in the ASR confidence data402, or poor input audio quality, or other factors, may result in a lower confidence of the audio component310, whereas a higher ASR confidence represented in the ASR confidence data402, or better input audio quality, or other factors, may result in a higher confidence of the audio component310. Precise determination of the confidence may depend on configuration and training of the confidence component414and the models used therein. The confidence component414may operate using a number of different machine learning models/techniques such as GMM, neural networks, etc. For example, the confidence component414may be a classifier configured to map a score output by the scoring component412to a confidence. The audio component310may output user-recognition confidence data408specific to a single user or specific to multiple users in the form of an N-best list. For example, the audio component310may output user-recognition confidence data408with respect to each user indicated in the profile associated with the voice-controlled device110from which the audio data211was received. The audio component310may also output user-recognition confidence data408with respect to each user potentially in the location of the voice-controlled device110from which the audio data211was received, as determined by the user locator component295. The user recognition confidence data408may include particular scores (e.g., 0.0-1.0, 0-1000, or whatever scale the system is configured to operate). Thus, the system may output an N-best list of potential users with confidence scores (e.g., John=0.2, Jane=0.8). Alternatively or in addition, the user-recognition confidence data408may include binned recognition indicators.
For example, a computed recognition score of a first range (e.g., 0.0-0.33) may be output as “low,” a computed recognition score of a second range (e.g., 0.34-0.66) may be output as “medium,” and a computed recognition score of a third range (e.g., 0.67-1.0) may be output as “high.” Thus, the system may output an N-best list of potential users with binned scores (e.g., John=low, Jane=high). Combined binned and confidence score outputs are also possible. Rather than a list of users and their respective scores and/or bins, the user-recognition confidence data408may only include information related to the top scoring user as determined by the audio component310. The scores and bins may be based on information determined by the confidence component414. The audio component310may also output a confidence value that the scores and/or bins are correct, in which the confidence value indicates how confident the audio component310is in the output results. This confidence value may be determined by the confidence component414. The confidence component414may determine individual user confidences and differences between user confidences when determining the user recognition confidence data408. For example, if a difference between a first user's confidence score and a second user's confidence score is large, and the first user's confidence score is above a threshold, then the audio component310is able to recognize the first user as the user that spoke the utterance with a much higher confidence than if the difference between the users' confidences were smaller. The audio component310may perform certain thresholding to avoid incorrect user recognition results being output. For example, the audio component310may compare a confidence score output by the confidence component414to a confidence threshold. If the confidence score is not above the confidence threshold (for example, a confidence of “medium” or higher), the user locator component295may not output user recognition confidence data408, or may only include in that data408an indication that a user speaking the utterance could not be verified. Further, the audio component310may not output user recognition confidence data408until enough input audio data211is accumulated and processed to verify the user above a threshold confidence. Thus, the audio component310may wait until a sufficient threshold quantity of audio data of the utterance has been processed before outputting user recognition confidence data408. The quantity of received audio data may also be considered by the confidence component414. The audio component310may default to output-binned (e.g., low, medium, high) user-recognition confidence data408. Such binning may, however, be problematic from the command processor(s)290perspective. For example, if the audio component310computes a single binned confidence for multiple users, an application290may not be able to determine which user to determine content with respect to. In this situation, the audio component310may be configured to override its default setting and output user recognition confidence data408including values (e.g., 0.0-1.0) associated with the users associated with the same binned confidence. This enables the application290to select content associated with the user associated with the highest confidence value. The user recognition confidence data408may also include the user IDs corresponding to the potential user(s) who spoke the utterance. 
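The binning behavior and the override to numeric values described above could be sketched as follows; the bin boundaries come from the example ranges given above, while the function names and the ambiguity rule are assumptions.

```python
def bin_score(score: float) -> str:
    """Map a recognition score in [0, 1] to the binned indicator described above."""
    if score <= 0.33:
        return "low"
    if score <= 0.66:
        return "medium"
    return "high"

def recognition_output(scores: dict) -> dict:
    """Return binned confidences, falling back to numeric values when several
    users land in the same top bin (so a downstream skill can pick one)."""
    binned = {user: bin_score(s) for user, s in scores.items()}
    top_bin = max(binned.values(), key=["low", "medium", "high"].index)
    if list(binned.values()).count(top_bin) > 1:
        return dict(scores)          # ambiguity: expose the raw numeric scores
    return binned

# Example: recognition_output({"John": 0.2, "Jane": 0.8}) -> {"John": "low", "Jane": "high"}
# Example: recognition_output({"John": 0.7, "Jane": 0.8}) -> {"John": 0.7, "Jane": 0.8}
```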
The user locator component295may combine data from components308-318to determine the location of a particular user. As part of its audio-based user recognition operations, the audio component310may use secondary data406to inform user recognition processing. Thus, a trained model or other component of the audio component310may be trained to take secondary data406as an input feature when performing recognition. Secondary data406may include a wide variety of data types depending on system configuration and may be made available from other sensors, devices, or storage such as user profile data504, etc. The secondary data406may include a time of day at which the audio data was captured, a day of a week in which the audio data was captured, the text data, NLU results, or other data. In one example, secondary data406may include image data or video data. For example, facial recognition may be performed on image data or video data received corresponding to the received audio data211. Facial recognition may be performed by the vision component308of the user locator component295, or by another component of the server(s)120. The output of the facial recognition process may be used by the audio component. That is, facial recognition output data may be used in conjunction with the comparison of the features/vectors of the audio data211and training data404to perform more accurate user recognition. The secondary data406may also include location data of the voice-controlled device110. The location data may be specific to a building within which the voice-controlled device110is located. For example, if the voice-controlled device110is located in user A's bedroom, such location may increase user recognition confidence data associated with user A, but decrease user recognition confidence data associated with user B. The secondary data406may also include data related to the profile of the device110. For example, the secondary data406may further include type data indicating a type of the autonomously mobile device110. Different types of speech-detection devices may include, for example, an autonomously mobile device, a smart speaker, a smart watch, a smart phone, a tablet computer, and a vehicle. The type of device110may be indicated in the profile associated with the device110. For example, if the device110from which the audio data211was received is an autonomously mobile device110belonging to user A, the fact that the autonomously mobile device110belongs to user A may increase user recognition confidence data associated with user A, but decrease user recognition confidence data associated with user B. Alternatively, if the device110from which the audio data211was received is a public or semi-public device, the system may use information about the location of the device to cross-check other potential user locating information (such as calendar data, etc.) to potentially narrow the potential users to be recognized with respect to the audio data211. The secondary data406may additionally include geographic coordinate data associated with the device110. For example, a profile associated with an autonomously mobile device may indicate multiple users (e.g., user A and user B). The autonomously mobile device may include a global positioning system (GPS) indicating latitude and longitude coordinates of the autonomously mobile device when the audio data211is captured by the autonomously mobile device. 
As such, if the autonomously mobile device is located at a coordinate corresponding to a work location/building of user A, such location may increase user recognition confidence data associated with user A, but decrease user recognition confidence data of all other users indicated in the profile associated with the autonomously mobile device. Global coordinates and associated locations (e.g., work, home, etc.) may be indicated in a user profile associated with the autonomously mobile device110. The global coordinates and associated locations may be associated with respective users in the user profile. The secondary data406may also include other data/signals about activity of a particular user that may be useful in performing user recognition of an input utterance. For example, if a user has recently entered a code to disable a home security alarm, and the utterance corresponds to a device at the home, signals from the home security alarm about the disabling user, time of disabling, etc., may be reflected in the secondary data406and considered by the audio component310. If a mobile device (such as a phone, Tile, dongle, or other device) known to be associated with a particular user is detected proximate to (for example physically close to, connected to the same WiFi network as, or otherwise nearby) the voice-controlled device110, this may be reflected in the secondary data406and considered by the user locator component295. The user-recognition confidence data408output by the audio component310may be used by other components of the user-locator component295and/or may be sent to one or more applications290, to the orchestrator230, or to other components. The skill290that receives the NLU results and the user recognition confidence score data408(or other user location results as output by the user locator component295) may be determined by the server(s)120as corresponding to content responsive to the utterance in the audio data211. For example, if the audio data211includes the utterance "Play my music," the NLU results and user-recognition confidence data408may be sent to a music playing skill290. A user identified using techniques described herein may be associated with a user identifier, user profile, or other information known about the user by the system. As part of the user recognition/user location techniques described herein the system may determine the user identifier, user profile, or other such information. The profile storage270may include data corresponding to profiles that may be used by the system to perform speech processing. Such profiles may include a user profile that links various data about a user such as user preferences, user owned speech controlled devices, user owned other devices (such as speech-controllable devices), address information, contacts, enabled skills, payment information, etc. Each user profile may be associated with a different user identifier (ID). A profile may be an umbrella profile specific to a group of users. That is, an umbrella profile encompasses two or more individual user profiles, each associated with a unique respective user ID. For example, a profile may be a household profile that encompasses user profiles associated with multiple users of a single household. Such a profile may include preferences shared by all the user profiles encompassed thereby. Each user profile encompassed under a single umbrella profile may include preferences specific to the user associated therewith.
That is, each user profile may include preferences unique with respect to one or more other user profiles encompassed by the same profile. A user profile may be a stand-alone profile or may be encompassed under a group profile. A profile may also be a device profile corresponding to information about a particular device, for example a device identifier, location, owner entity, whether the device is in a public, semi-public, or private location, the device capabilities, device hardware, or the like. A profile may also be an entity profile, for example belonging to a business, organization or other non-user entity. Such an entity profile may include information that may otherwise be found in a user and/or device profile, only such information is associated with the entity. The entity profile may include information regarding which users and/or devices are associated with the entity. The user identification/location techniques described herein (as well as others) may be operated by a device remote from the environment302, for example by the server(s)120. In other configurations, the user identification/location techniques may be wholly or partially operated by a device within the environment302. For example, the device110may be an autonomously mobile device and may include a motor, actuators, and/or other components that allow the device110to move within the environment302without necessarily receiving a user command to do so. The device110may also include a user-locator component295(or portions thereof) and may be capable of locating a user using techniques described herein (as well as others) either on its own or in combination with data received by another device such as server(s)120, another device local to the environment302, or the like. In various embodiments, the device110and the user-locator component295continually or periodically scan the environment for users, and the device110maintains a list of users currently in the environment. In other embodiments, the device110and the user-locator component295determine which users, if any, are in the environment only upon certain actions or requests, such as determination of receipt of an incoming call, detection of a wakeword, or similar actions. Similarly, the device110may locate users using the user-locator component295and may also, apart from determining user location, further determine user identity using the various techniques described above. Determination of the user identity may occur continually, periodically, or only upon certain actions or requests, such as determination of receipt of an incoming call, detection of a wakeword, or similar actions. The device110may be controlled by a local user using voice input, a touch screen, gestures, or other such input. The device110may be also controlled using remote commands from another device110and/or from server(s)120and/or125. In various embodiments, one or more application programming interfaces (APIs) may be defined to send and receive commands and/or data to or from the device110. The APIs may be used by other devices110and/or the server(s)120and/or125. For example, one or more APIs may be defined to implement a “look” command in which the device110scans its proximate area for presence of users. When the device110receives the look command from, e.g., server(s)120, it may, for example, use its camera and/or microphone to detect the presence or absence of a user in image and/or video data or determine if audio captured by the microphone contains an utterance corresponding to the user. 
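A sketch of what a "look" request and its response might look like over such an API; the payload fields, parameter names, and helper functions are assumptions made for illustration, not the actual API definition.

```python
import json
from typing import Optional

def build_look_command(duration_seconds: int = 10,
                       expected_users: Optional[list] = None,
                       use_video: bool = True,
                       use_audio: bool = True) -> str:
    """Build a 'look' command payload asking the device to scan its proximate area."""
    return json.dumps({
        "command": "look",
        "duration_seconds": duration_seconds,
        "expected_users": expected_users or [],
        "monitor": {"video": use_video, "audio": use_audio},
    })

def summarize_look_response(response: dict) -> str:
    """Summarize the device's reply: found users with confidence scores, or nobody."""
    found = response.get("users_found", [])
    if not found:
        return "no users proximate the device"
    return ", ".join(f"{u['user_id']} ({u['confidence']:.2f})" for u in found)

# Example response the device might return after scanning its surroundings:
# {"users_found": [{"user_id": "alice", "confidence": 0.84}]}
```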
The device110may rotate its camera and/or microphone array to scan its proximate area. In some embodiments, the device110uses the user-locator component295and corresponding components and techniques to determine presence or absence of one or more users. The device110may determine a confidence score for each user regarding his or her absence or presence. The look command may be sent with (or may include) additional commands or parameters. For example, the look command may specify a length of time to spend looking, information about users that may be proximate the device110, whether to monitor video and/or audio, or other such parameters. In response to receiving the look command, the device110may transmit, to the server(s)120, information regarding any users found to be proximate the device110, the confidence score corresponding to each user, or, if no users are found, an indication that no users are proximate the device110. Another command related to the API may be a "find" command. When the device110receives the find command, it may search its environment for users, particularly if no users are initially found to be proximate the device110. As described above, the device110may travel about its environment into different rooms, hallways, spaces, etc., in search of users. As described above, the device110may shorten its search by inferring placement of users in the environment—if, e.g., the device110observed a user leave its proximate area through a doorway, the device110may begin its search by traveling through that doorway. The find command may, like the look command, be sent with (or may include) additional parameters, such as a length of time to spend finding, rooms or spaces to travel to or avoid, inferences regarding the placement of unseen users, or similar parameters. The device110may transmit, in response to the receipt of the find command, information to the server(s)120, such as number and placement of found users or indication that no users were found. Other API commands may include a "follow" command, in which the device110is commanded to follow a particular user or users as the user or users move throughout the environment, and a "hangout" command, in which the device110is commanded to maintain proximity to a number of users. These commands, like the commands above, may specify additional parameters, such as a length of time to spend following or hanging out, which users to follow, how many users to hang out with, and other similar parameters. The device110may similarly send relevant information to the server(s), such as success or failure at following or hanging out and a number of successfully followed users. FIGS.5A-5Cillustrate one embodiment of an autonomously mobile device110.FIG.5Aillustrates a front view of the autonomously mobile device110according to various embodiments of the present disclosure. The device110includes wheels502disposed on left and right sides of a lower structure. The wheels502may be canted inwards toward an upper structure. In other embodiments, however, the wheels502may be mounted vertically. A caster504(i.e., a smaller wheel) may be disposed along a midline of the device110. The front section of the device110may include a variety of external sensors. A first set of optical sensors506may be disposed along the lower portion of the front, and a second set of optical sensors508may be disposed along an upper portion of the front. A microphone array510may be disposed between or near the second set of optical sensors508.
One or more cameras512may be mounted to the front of the device110; two cameras512may be used to provide for stereo vision. The distance between two cameras512may be, for example, 5-15 centimeters (cm); in some embodiments, the distance is 10 cm. In some embodiments, the cameras512may exhibit a relatively wide horizontal field-of-view (HFOV). For example, the HFOV may be between 90° and 110°. A relatively wide FOV may provide for easier detection of moving objects, such as users or pets, which may be in the path of the device110. Also, the relatively wide FOV may provide for the device110to more easily detect objects when rotating or turning. Cameras512used for navigation may be of different resolution from, or sensitive to different wavelengths than, other cameras512used for other purposes, such as video communication. For example, navigation cameras512may be sensitive to infrared light allowing the device110to operate in darkness or semi-darkness, while a camera516mounted above a display514may be sensitive to visible light and may be used to generate images suitable for viewing by a person. A navigation camera512may have a resolution of at least 300 kilopixels each, while the camera516mounted above the display514may have a resolution of at least 10 megapixels. In other implementations, navigation may utilize a single camera512. The cameras512may operate to provide stereo images of the environment, the user, or other objects. For example, an image from the camera516disposed above the display514may be accessed and used to generate stereo-image data corresponding to a face of a user. This stereo-image data may then be used for facial recognition, user identification, gesture recognition, gaze tracking, and other uses. In some implementations, a single camera516may be disposed above the display514. The display514may be mounted on a movable mount. The movable mount may allow the display to move along one or more degrees of freedom. For example, the display514may tilt, pan, change elevation, and/or rotate. In some embodiments, the display514may be approximately 8 inches as measured diagonally from one corner to another. An ultrasonic sensor518may be mounted on the front of the device110and may be used to provide sensor data that is indicative of objects in front of the device110. One or more speakers520may be mounted on the device110, and the speakers520may have different audio properties. For example, low-range, mid-range, and/or high-range speakers520may be mounted on the front of the device110. The speakers520may be used to provide audible output such as alerts, music, human speech such as during a communication session with another user, and so forth. Other output devices522, such as one or more lights, may be disposed on an exterior of the device110. For example, a running light may be arranged on a front of the device110. The running light may provide light for operation of one or more of the cameras, a visible indicator to the user that the device110is in operation, or other such uses. One or more floor optical motion sensors (FOMS)524,526may be disposed on the underside of the device110. The FOMS524,526may provide indication indicative of motion of the device110relative to the floor or other surface underneath the device110. In some embodiments, the FOMS524,526comprise a light source, such as light-emitting diode (LED) and/or an array of photodiodes. In some implementations, the FOMS524,526may utilize an optoelectronic sensor, such as an array of photodiodes. 
Several techniques may be used to determine changes in the data obtained by the photodiodes and translate this into data indicative of a direction of movement, velocity, acceleration, and so forth. In some implementations, the FOMS524,526may provide other information, such as data indicative of a pattern present on the floor, composition of the floor, color of the floor, and so forth. For example, the FOMS524,526may utilize an optoelectronic sensor that may detect different colors or shades of gray, and this data may be used to generate floor characterization data. FIG.5Billustrates a side view of the autonomously mobile device110according to various embodiments of the present disclosure. In this side view, the left side of the device110is illustrated. An ultrasonic sensor528and an optical sensor530may be disposed on either side of the device110. The disposition of components of the device110may be arranged such that a center of gravity (COG)532is located between a wheel axle534of the front wheels502and the caster504. Such placement of the COG532may result in improved stability of the device110and may also facilitate lifting by a carrying handle128. In this illustration, the caster504is shown in a trailing configuration, in which the caster504is located behind or aft of the wheel axle534and the center of gravity532. In another implementation (not shown), the caster504may be in front of the axle of the wheels502. For example, the caster504may be a leading caster504positioned forward of the center of gravity532. The device110may encounter a variety of different floor surfaces and transitions between different floor surfaces during the course of its operation. A contoured underbody536may transition from a first height538at the front of the device110to a second height540that is proximate to the caster504. This curvature may provide a ramp effect such that, if the device110encounters an obstacle that is below the first height538, the contoured underbody536helps direct the device110over the obstacle without lifting the driving wheels502from the floor. FIG.5Cillustrates a rear view of the autonomously mobile device110according to various embodiments of the present disclosure. In this view, as with the front view, a first pair of optical sensors542are located along the lower edge of the rear of the device110, while a second pair of optical sensors544are located along an upper portion of the rear of the device110. An ultrasonic sensor546may provide proximity detection for objects that are behind the device110. Charging contacts548may be provided on the rear of the device110. The charging contacts548may include electrically conductive components that may be used to provide power (to, e.g., charge a battery) from an external source such as a docking station to the device110. In other implementations, wireless charging may be utilized. For example, wireless inductive or wireless capacitive charging techniques may be used to provide electrical power to the device110. In some embodiments, the wheels502may include an electrically conductive portion550and provide an electrically conductive pathway between the device110and a charging source disposed on the floor. One or more data contacts552may be arranged along the back of the device110. The data contacts552may be configured to establish contact with corresponding base data contacts within the docking station. The data contacts552may provide optical, electrical, or other connections suitable for the transfer of data.
Other output devices126, such as one or more lights, may be disposed on an exterior of the back of the device110. For example, a brake light may be arranged on the back surface of the device110to provide users an indication that the device110is slowing or stopping. The device110may include a modular payload bay554. In some embodiments, the modular payload bay554is located within the lower structure. The modular payload bay554may provide mechanical and/or electrical connectivity with the device110. For example, the modular payload bay554may include one or more engagement features such as slots, cams, ridges, magnets, bolts, and so forth that are used to mechanically secure an accessory within the modular payload bay554. In some embodiments, the modular payload bay554includes walls within which the accessory may sit. In other embodiments, the modular payload bay554may include other mechanical engagement features such as slots into which the accessory may be slid and engage. The device110may further include a mast556, which may include a light558. The machine-learning model(s) may be trained and operated according to various machine learning techniques. Such techniques may include, for example, neural networks (such as deep neural networks and/or recurrent neural networks), inference engines, trained classifiers, etc. Examples of trained classifiers include Support Vector Machines (SVMs), neural networks, decision trees, AdaBoost (short for “Adaptive Boosting”) combined with decision trees, and random forests. Focusing on SVM as an example, SVM is a supervised learning model with associated learning algorithms that analyze data and recognize patterns in the data, and which are commonly used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. More complex SVM models may be built with the training set identifying more than two categories, with the SVM determining which category is most similar to input data. An SVM model may be mapped so that the examples of the separate categories are divided by clear gaps. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gaps they fall on. Classifiers may issue a “score” indicating which category the data most closely matches. The score may provide an indication of how closely the data matches the category. In order to apply the machine-learning techniques, the machine-learning processes themselves need to be trained. Training a machine-learning component such as, in this case, one of the first or second models, requires establishing a “ground truth” for the training examples. In machine learning, the term “ground truth” refers to the accuracy of a training set's classification for supervised learning techniques. Various techniques may be used to train the models including backpropagation, statistical learning, supervised learning, semi-supervised learning, stochastic learning, or other known techniques. A voice-controlled system may be configured to receive and process a call request from one device110and route that call request to communication servers125in order to establish an asynchronous call from one device110to another device110where the call may be placed and potentially answered using voice commands. 
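As a concrete illustration of the SVM workflow described above, the sketch below trains a two-category linear classifier on a toy set of labeled examples and reports both the predicted category and a score (the signed distance from the decision boundary) for a new example. scikit-learn is an assumed dependency; the disclosure does not name a particular library.

```python
# Illustrative sketch of the SVM training and scoring described above, using
# scikit-learn (an assumed dependency). Two labeled categories are used for
# training, and a new example is assigned to whichever side of the separating
# boundary it falls on.
from sklearn import svm

# Toy training set: each example is a feature vector, each label a category.
training_examples = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = ["category_a", "category_a", "category_b", "category_b"]

classifier = svm.SVC(kernel="linear")
classifier.fit(training_examples, labels)

# Predict the category of a new example and report a score indicating how
# closely it matches (signed distance from the decision boundary).
new_example = [[0.85, 0.75]]
print(classifier.predict(new_example))            # -> ['category_b']
print(classifier.decision_function(new_example))  # signed "score" for the match
```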
Certain aspects of speech command processing may be handled by server(s)120while certain aspects of call routing and management may be handled by communication server(s)125. Calls may be audio calls, video and audio calls, or other such combinations. FIG.6illustrates an example of signaling to initiate a communication session according to examples of the present disclosure. In one example configuration, the server(s)120are configured to enable voice commands (e.g., perform ASR, NLU, etc. to identify a voice command included in audio data), whereas the communication server(s)125are configured to enable communication sessions (e.g., using session initiation protocol (SIP)). For example, the server(s)125may send SIP messages to endpoints (e.g., adapter, device110, remote devices, etc.) in order to establish a communication session for sending and receiving audio data and/or video data. The communication session may use network protocols such as real-time transport protocol (RTP), RTP Control Protocol (RTCP), Web Real-Time communication (WebRTC) and/or the like. For example, the server(s)125may send SIP messages to initiate a single RTP media stream between two endpoints (e.g., direct RTP media stream between two devices device110) and/or to initiate and facilitate RTP media streams between the two endpoints (e.g., RTP media streams between devices110and the server(s)125). During a communication session, the server(s)125may initiate multiple media streams, with a first media stream corresponding to incoming audio data from the device110to an adapter and a second media stream corresponding to outgoing audio data from an adapter to the device110, although for ease of explanation this may be illustrated as a single RTP media stream. As illustrated inFIG.6, the device110may send (502) audio data to the server(s)120and the server(s)120may determine (504) call information using the audio data and may send (506) the call information to the server(s)125. The server(s)120may determine the call information by performing ASR, NLU, etc., as discussed above with regard toFIG.2, and the call information may include a data source name (DSN), a number from which to call, a number to which to call, encodings and/or additional information. For example, the server(s)120may identify from which phone number the user would like to initiate the telephone call, to which phone number the user would like to initiate the telephone call, from which device110the user would like to perform the telephone call, etc. WhileFIG.6illustrates the server(s)120sending the call information to the server(s)125in a single step (e.g.,606), the disclosure is not limited thereto. Instead, the server(s)120may send the call information to the device110and the device110may send the call information to the server(s)125in order to initiate the telephone call without departing from the disclosure. Thus, the server(s)120may not communicate directly with the server(s)125in step606, but may instead instruct the device110to connect to the server(s)125in order to initiate the telephone call. The server(s)125may include an outbound SIP translator632, an inbound SIP translator634and a call state database640. The outbound SIP translator632may include logic to convert commands received from the server(s)120into SIP requests/responses and may handle sending outgoing SIP requests and sending responses to incoming SIP requests. 
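The call information that the server(s) 120 derive from speech processing and hand to the communication server(s) 125 can be pictured as a small structured record. The sketch below is illustrative only; the field names, slot names, and transport callback are assumptions rather than the actual interface.

```python
# Illustrative sketch (not the actual server implementation) of assembling the
# call information described above from NLU output before forwarding it to
# the communication server(s) 125. Field and function names are assumptions.

def build_call_information(nlu_result: dict) -> dict:
    """Map NLU slots from the utterance to the call information fields."""
    return {
        "dsn": nlu_result.get("data_source_name"),
        "from_number": nlu_result["calling_number"],   # number from which to call
        "to_number": nlu_result["called_number"],      # number to which to call
        "source_device_id": nlu_result["device_id"],   # device 110 placing the call
        "encodings": nlu_result.get("encodings", ["opus"]),
    }

def route_call(nlu_result: dict, send_to_communication_server) -> None:
    """Send the call information on to the communication server(s) 125."""
    call_info = build_call_information(nlu_result)
    send_to_communication_server(call_info)

# Example usage with a stubbed transport.
route_call(
    {"calling_number": "+15550100", "called_number": "+15550111", "device_id": "110-b"},
    send_to_communication_server=print,
)
```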
After receiving the call information by the outbound SIP translator632, the outbound SIP translator632may persist (508) a SIP dialog using the call state database640. For example, the DSN may include information such as the name, location and driver associated with the call state database640(and, in some examples, a user identifier (ID) and password of the user) and the outbound SIP translator632may send a SIP dialog to the call state database640regarding the communication session. The call state database640may persist the call state if provided a device ID and one of a call ID or a dialog ID. The outbound SIP translator632may send (510) a SIP Invite to an Endpoint650, a remote device, a Session Border Controller (SBC) or the like. In some examples, the endpoint650may be a SIP endpoint, although the disclosure is not limited thereto. The inbound SIP translator634may include logic to convert SIP requests/responses into commands to send to the server(s)120and may handle receiving incoming SIP requests and incoming SIP responses. The endpoint650may send (512) a100TRYING message to the inbound SIP translator634and may send (514) a180RINGING message to the inbound SIP translator634. The inbound SIP translator634may update (516) the SIP dialog using the call state database640and may send (518) a RINGING message to the server(s)120, which may send (520) the RINGING message to the device110. When the communication session is accepted by the endpoint650, the endpoint650may send (522) a200OK message to the inbound SIP translator634, the inbound SIP translator645may send (524) a startSending message to the server(s)120and the server(s)120may send (526) the startSending message to the device110. The startSending message may include information associated with an internet protocol (IP) address, a port, encoding or the like required to initiate the communication session. Using the startSending message, the device110may establish (528) an RTP communication session with the endpoint650via the server(s)125. WhileFIG.6illustrates the server(s)125sending the RINGING message and the StartSending message to the device110via the server(s)120, the disclosure is not limited thereto. Instead, steps618and620may be combined into a single step and the server(s)125may send the RINGING message directly to the device110without departing from the disclosure. Similarly, steps624and626may be combined into a single step and the server(s)125may send the StartSending message directly to the device110without departing from the disclosure. Thus, the server(s)125may communicate with the device110directly without using the server(s)120as an intermediary. For ease of explanation, the disclosure illustrates the system100using SIP. However, the disclosure is not limited thereto and the system100may use any communication protocol for signaling and/or controlling communication sessions without departing from the disclosure. Similarly, while some descriptions of the communication sessions refer only to audio data, the disclosure is not limited thereto and the communication sessions may include audio data, video data and/or any other multimedia data without departing from the disclosure. FIG.7A-8Billustrate examples of signaling to end a communication session according to examples of the present disclosure. 
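Before turning to call teardown, the handling of the 100 TRYING, 180 RINGING, and 200 OK responses by the inbound SIP translator 634 can be sketched as a small state handler. The class shape, method names, and the media parameters in the startSending payload are assumptions for illustration.

```python
# Hedged sketch of the inbound response handling described above. The call
# state store and message names mirror the text (RINGING, startSending), but
# the class layout and method names are assumptions, not the actual design.

class InboundSipTranslator:
    def __init__(self, call_state_db, notify_speech_server):
        self.call_state_db = call_state_db        # persists the SIP dialog
        self.notify_speech_server = notify_speech_server

    def on_sip_response(self, dialog_id: str, status_code: int) -> None:
        if status_code == 100:           # TRYING: endpoint received the INVITE
            self.call_state_db.update(dialog_id, state="trying")
        elif status_code == 180:         # RINGING: alert the calling device
            self.call_state_db.update(dialog_id, state="ringing")
            self.notify_speech_server({"dialog_id": dialog_id, "type": "RINGING"})
        elif status_code == 200:         # OK: call accepted, start media
            self.call_state_db.update(dialog_id, state="accepted")
            self.notify_speech_server({
                "dialog_id": dialog_id,
                "type": "startSending",
                # IP address, port, and encoding needed to open the RTP session
                # (placeholder values for illustration only)
                "media": {"ip": "198.51.100.7", "port": 49170, "encoding": "opus"},
            })
```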
After establishing the RTP communication session628between the device110and the endpoint650, the RTP communication session may be ended by the user inputting a command to end the telephone call to the device110, as illustrated inFIG.7A, or a remote party inputting a command to end the telephone call to the endpoint650, as illustrated inFIG.7B. The device110may initiate the end of the communication session. As illustrated inFIG.7A, the device110may send a state change message702to the server(s)120, which may determine704that a user of the device110wishes to end the session. The server(s)120may corresponding send an end message706to the server(s)125. The outbound SIP translator632may, in response, update the session using the call state database640and may send a SIP BYE708message to the endpoint650. The endpoint650may send a200OK message710to the inbound SIP translator634and the inbound SIP translator634may update the session using the call state database640. In some examples, the inbound SIP translator634may send the200OK message710to the device110to confirm that the communication session has been ended. Thus, the RTP communication session628may be ended between the device110and the endpoint650. The endpoint650may instead or in addition initiate the end of the session. As illustrated inFIG.7B, the endpoint650may send a SIP BYE message752to the inbound SIP translator634and the inbound SIP translator634may update the session using the call state database640. The inbound SIP translator634may send a stopSending message756to the server(s)120and the server(s)120may send a corresponding stopSending message758to the device110. The device110may send a state change message760to the server(s)120and the server(s)120may confirm762the end of the session and send an End message764to the outbound SIP translator632; the End message764may include a DSN. The outbound SIP translator632may then update the session using the call state database640, and send a200OK766message to the endpoint650. Thus, the RTP communication session628may be ended between the device110and the endpoint650. WhileFIGS.7A and7Billustrate the server(s)120acting as an intermediary between the device110and the server(s)125, the disclosure is not limited thereto. Instead, the device110may directly send the state change message(s)702/760and/or the End message(s)706/764to the server(s)125without departing from the disclosure. Similarly, the server(s)125may send the StopSending message756directly to the device110without departing from the disclosure, and/or the device110may directly send the state change message(s)702/760and/or the End message(s)706/764to the server(s)125without departing from the disclosure. WhileFIGS.6,7A and7Billustrate the RTP communication session628being established between the device110and the endpoint650, the disclosure is not limited thereto and the RTP communication session628may be established between an adapter (such as an adapter to a packet switched telephone network, not shown). FIG.8A-8Billustrate examples of establishing communication sessions and/or media streams between devices according to examples of the present disclosure. In some examples, the device110may have a publicly accessible IP address and may be configured to establish the RTP communication session directly with the endpoint650. 
To enable the device110to establish the RTP communication session, the server(s)125may include Session Traversal of User Datagram Protocol (UDP) Through Network Address Translators (NATs) server(s) (e.g., STUN server(s)810). The STUN server(s)810may be configured to allow NAT clients (e.g., device110behind a firewall) to setup telephone calls to a VoIP provider hosted outside of the local network by providing a public IP address, the type of NAT they are behind and a port identifier associated by the NAT with a particular local port. With reference toFIG.8A, the device110may perform IP discovery using the STUN server(s)810and may use this information to set up an RTP communication session814(e.g., UDP communication) between the device110and the endpoint650to establish a telephone call. In some examples, the device110may not have a publicly accessible IP address. For example, in some types of NAT the device110cannot route outside of the local network. To enable the device110to establish an RTP communication session, the server(s)125may include Traversal Using relays around NAT (TURN) server(s)820. The TURN server(s)820may be configured to connect the device110to the endpoint650when the client110is behind a NAT. With reference toFIG.8B, the device110may establish an RTP session with the TURN server(s)820and the TURN server(s)820may establish an RTP session with the endpoint650. Thus, the device110may communicate with the endpoint650via the TURN server(s)820. For example, the device110may send outgoing audio data to the server(s)125and the server(s)125may send the outgoing audio data to the endpoint650. Similarly, the endpoint650may send incoming audio data to the server(s)125and the server(s)125may send the incoming audio data to the device110. In some examples, the system100may establish communication sessions using a combination of the STUN server(s)810and the TURN server(s)820. For example, a communication session may be more easily established/configured using the TURN server(s)820, but may benefit from latency improvements using the STUN server(s)810. Thus, the system100may use the STUN server(s)810when the communication session may be routed directly between two devices and may use the TURN server(s)820for all other communication sessions. Additionally or alternatively, the system100may use the STUN server(s)810and/or the TURN server(s)820selectively based on the communication session being established. For example, the system100may use the STUN server(s)810when establishing a communication session between two devices (e.g., point to point) within a single network (e.g., corporate LAN and/or WLAN), but may use the TURN server(s)820when establishing a communication session between two devices on separate networks and/or three or more devices regardless of network(s). When the communication session goes from only two devices to three or more devices, the system100may need to transition from the STUN server(s)810to the TURN server(s)820. Thus, the system100may anticipate three or more devices being included in the communication session and may establish the communication session using the TURN server(s)820. FIGS.9A-9Ddescribe various embodiments of the present disclosure. Referring first toFIG.9A, in various embodiments, a calling device (here, a smart speaker110c) requests that a communications connection be established with a receiving device (here, an autonomously mobile device110a). 
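Before walking through FIG. 9A, the STUN/TURN selection policy described above (a direct, STUN-assisted session only for two devices on a single network; a relayed TURN session otherwise, or when more participants are anticipated) can be sketched as a simple rule. The function name and parameters are assumptions.

```python
# Sketch of the STUN/TURN selection policy described above. The rule is a
# paraphrase of the text, not an exact implementation.

def select_traversal_method(num_devices: int, same_network: bool,
                            more_participants_expected: bool = False) -> str:
    """Choose NAT traversal for a new communication session.

    STUN (direct peer-to-peer RTP) is preferred only for a two-device session
    on a single network; TURN (relayed RTP) is used for everything else,
    including sessions expected to grow to three or more devices.
    """
    if num_devices == 2 and same_network and not more_participants_expected:
        return "STUN"
    return "TURN"

print(select_traversal_method(2, same_network=True))    # -> STUN
print(select_traversal_method(2, same_network=False))   # -> TURN
print(select_traversal_method(2, True, more_participants_expected=True))  # -> TURN
```

The anticipation check reflects the trade-off noted above: a TURN session is easier to keep when a third device joins, at the cost of the latency benefit a direct STUN-assisted session would provide.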
The calling device110creceives (902) audio corresponding to an utterance by a first user and may detect a wakeword in the utterance. The utterance may include the request to create the communication connection and may further include information regarding the receiving device110a, such as the name, type, or address of the receiving device110aand/or the name, nickname, or address of a second user of the receiving device110a. The calling device110cgenerates audio data corresponding to the audio and sends (904) the audio data to server(s)120(in some embodiments, based on detecting the wakeword). The server(s)120may perform (906) speech processing on the audio data, as described above, to determine that the utterance corresponds to the call request to establish a communication connection with the receiving device110a. The server(s)120may further determine that the audio data includes the name, type, or address of the receiving device110aand/or the name, nickname, or address of a second user of the receiving device110a. Based on the speech processing, the server(s)120may identify (908) the second user as the recipient of the call request. As described in greater detail below, the server(s)120may make further determinations, such as whether the first user requires authorization from the second user before establishing the communication connect (as specified in a user profile associated with the second user, as described above), if the receiving device110ais an autonomously mobile device, or similar determinations. The server(s)120may thereafter send (910) a call request to communication server(s)125. If the receiving device110ais not an autonomously mobile device, the communication server(s) may simply send an indication (911) to the receiving device110ato ring or otherwise indicate that a call is incoming. If the receiving device110ais an autonomously mobile device, however, the server(s)125may send commands, such as the API commands described above, to the receiving device110ato locate and/or find (914) the second user. If the receiving device locates and/or finds the second user, it may request (916) permission from the second user to open the call (using, for example, an audio or touchscreen interface). If the receiving device receives (918) such permission (using, for the example, the audio or touchscreen interface), it may send (920) an indication of finding the user and an indication that the call was accepted to the server(s)125. In response, the server(s)125may send a first start-call instruction922ato the calling device110cand a second start-call instruction922bto the receiving device110a, which may then create an ongoing call924over the communication connection. FIG.9B, likeFIG.9Adescribed above, illustrates creating a communication connection in accordance with embodiments of the present disclosure; in these embodiments, however, the first user of the calling device110cis authorized to create the communication connect without requesting permission from the second user. As above, the calling device110crequests that a communications connection be established with a receiving device110a. The calling device110creceives (902) audio corresponding to an utterance by a first user and may detect a wakeword in the utterance. 
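On the receiving side of the FIG. 9A flow, the autonomously mobile device is asked to locate the second user, request permission over its audio or touchscreen interface, and report back before the call starts. A hedged sketch of that sequence follows; the device interface and the shape of the indication sent to the communication server(s) 125 are assumptions.

```python
# Hedged sketch of the receiving-device behavior in the FIG. 9A flow: locate
# the second user, ask permission, and report acceptance to the communication
# server(s). All interfaces here are assumed stand-ins.

def handle_incoming_call_request(device, report_to_server) -> bool:
    found = device.locate_user()                 # step 914: find the second user
    if not found:
        return False
    accepted = device.request_permission(        # step 916: ask to open the call
        prompt="You have an incoming call. Accept?")
    if accepted:                                 # step 918: permission received
        report_to_server({"user_found": True, "call_accepted": True})  # step 920
    return accepted
```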
The utterance may include the request to create the communication connection and may further include information regarding the receiving device110a, such as the name, type, or address of the receiving device110aand/or the name, nickname, or address of a second user of the receiving device110a. The calling device110cgenerates audio data corresponding to the audio and sends (904) the audio data to server(s)120(in some embodiments, based on detecting the wakeword). The server(s)120may perform (906) speech processing on the audio data, as described above, to determine that the utterance corresponds to the call request to establish a communication connection with the receiving device110a. The server(s)120may further determine that the audio data includes the name, type, or address of the receiving device110aand/or the name, nickname, or address of a second user of the receiving device110a. Based on the speech processing, the server(s)120may identify (908) the second user as the recipient of the call request. In these embodiments, the server(s)120determine (926) that the calling device110cis authorized to start the call (without, for example, additional authorization from the second user). As explained above, the server(s)120may identify a user account associated with the second user and determine that the user account includes authorization for the first user to start the call. The server(s)120may therefore send (910) a call request, which may also indicate that said authorization is granted, to the server(s)125, which may send an indication (911) and a corresponding request (912) to the receiving device110a. The receiving device110amay thereafter initiate (914) the user locate and/or find operation and, if the user is located and/or found, send (928) a corresponding indication to the server(s)125. In response, the server(s)125may send a first start-call instruction922ato the calling device110cand a second start-call instruction922bto the receiving device110a, which may then create an ongoing call924over the communication connection. FIG.9Cillustrates further embodiments of the present disclosure. In these embodiments, the first user may control movement of the receiving device110a(e.g., movement in an environment, pan/tilt of a camera of the receiving device110a, or other such movement). In various embodiments, for example, the first user is a doctor, therapist, or other caregiver, and may wish to move the receiving device110ato better check in on the second user, who may have difficulty moving or may be prone to falling, bouts of unconsciousness, or other affectations that may affect his or her ability to respond to a call. In other embodiments, the first user is the owner of the receiving device110aand wishes to operate it remotely to, for example, check in on a status of a small child, pet, or residence. As described above, the second user may authorize the first user to do so. An ongoing call924is established as described above. The calling device110creceives (930) movement input from the first user. If the calling device110cis a smart speaker, as illustrated, the movement input may be voice input, such as “pan right” or “move forward.” If the calling device is, for example, a smartphone110b, the movement input may be touch input. The calling device110csends (932) movement data corresponding to the movement input to the server(s)120. The server(s)120may determine (934) that the calling device110cis authorized to move the receiving device110a. 
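Both FIG. 9B (starting the call without asking the second user) and FIG. 9C (moving the receiving device) hinge on an authorization recorded against the second user's profile. The sketch below combines that profile lookup with a small table translating the voice movement input ("pan right", "move forward") into a movement request; the profile fields, command vocabulary, and step sizes are all assumptions.

```python
# Illustrative sketch of the profile-based authorization checks in FIG. 9B/9C
# and of translating voice movement input into a movement request. The profile
# structure, command vocabulary, and step sizes are assumptions.

user_profiles = {
    "second_user": {
        "auto_accept_callers": {"first_user"},   # may start a call unprompted
        "authorized_movers": {"first_user"},     # may move the receiving device
    }
}

MOVEMENT_COMMANDS = {
    "move forward": {"action": "translate", "distance_m": 0.5},
    "move backward": {"action": "translate", "distance_m": -0.5},
    "pan right": {"action": "pan_camera", "degrees": 15},
    "pan left": {"action": "pan_camera", "degrees": -15},
}

def may_start_call(caller: str, recipient: str) -> bool:
    profile = user_profiles.get(recipient, {})
    return caller in profile.get("auto_accept_callers", set())

def build_movement_request(caller: str, recipient: str, utterance: str):
    """Return a movement request dict, or None if unauthorized or unrecognized."""
    profile = user_profiles.get(recipient, {})
    if caller not in profile.get("authorized_movers", set()):
        return None
    return MOVEMENT_COMMANDS.get(utterance.lower().strip())

print(may_start_call("first_user", "second_user"))                       # -> True
print(build_movement_request("first_user", "second_user", "Pan right"))
# -> {'action': 'pan_camera', 'degrees': 15}
```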
As described above, a user profile may be associated with the receiving device110a, and the user profile may include authorization for the first user of the calling device110cto move the receiving device110a. In other embodiments (not illustrated), the server(s)120send a request to the receiving device110ato authorization for the first user to move the receiving device110a, and the second user of the receiving device110amay send said authorization back to the server(s)120. The server(s)120may send (934) a movement request to the server(s)125and/or directly to the receiving device110a; the server(s)125may send (938) the movement request to the receiving device110a, if necessary. The receiving device110amay move (940) in accordance with the movement request. FIG.9Dillustrates embodiments of the present disclosure in which the autonomously mobile device is the calling device110amay be used to initiate communications with a user of a receiving device110c. For example, the first user may wish to communicate with the receiving device110cand may know that the receiving device110cis located in the same environment as the calling device110a, but the first user may not be able to move to the second user. The calling device110amay be used to find the second user; thereafter, the calling device110amay output audio asking the second user to contact the first user or may send a request to communicate with the second device. The calling device110areceives (950) a request to call the second device (via, as described above, voice, touch, or other input) and sends (952) request data corresponding to the request to the server(s)120. The server(s)120may determine (954) that the calling device954is authorized to call the third device and, if so, send a call request (910) to the server(s)125and/or directly to the sending device110a. The sending device110ainitiates (960) a locate and/or a find operation to locate and/or find the receiving device and/or second user. Once the second user and/or receiving device is found, the sending device110amay output audio corresponding to a request for the third user to establish communication with the calling device110a. Alternatively or in addition, the receiving device110may send an indication of finding the third device to the server(s)125, which may then initiate communication between the third device and the calling device110c. As illustrated inFIG.9E, in various embodiments, the receiving device110amay track (964) movement of the second user to maintain the second user in an area of observation if and when the second user moves in the environment while participating in a video, audio, or other call. The receiving device110amay determine a location of the second user by one or more of the techniques described above (e.g., by analyzing video and/or audio data). Based on the location, the receiving device110amay configure an input device (e.g., a video camera or microphone array) to receive input from the location. The area of observation of a camera may correspond to objects, scenes, or other environmental elements in the field of view of the camera; the area of observation of a microphone array may correspond to a direction in which the microphone array amplifies received audio (e.g., a beamforming direction). For example, the receiving device110amay pan or tilt the camera to face the location or may configure the microphone array such that a target beam corresponds to the location. 
If the receiving device110adetermines that the second user has moved from a first location to a second location (by, e.g., determining movement of the user), the receiving device110configures the input device to receive input from the second location. Data from one input device may be used to configure a position of a second input device. For example, if data from the microphone array indicates that a source of audio lies in a particular direction, the device110amay rotate or otherwise move a camera to point in that direction; the camera may then be used to collect data for facial recognition. In another example, if data from the camera indicates that a face lies in a particular direction, the device110amay configure the microphone array to amplify sounds from that direction. If an input device does not detect a user at a particular location, the device110amay search for the user at other locations. In some embodiments, the second user may wish that the receiving device110anot capture the second user on video during a video call or similar communication. In these embodiments, the second user may issue an utterance expressing this wish (e.g., “stop looking at me”). The receiving device110amay determine that the utterance corresponds to a request to stop the video (and/or may send audio data corresponding to the utterance to the server(s)120, which may instead or in addition determine that the utterance corresponds to the request to stop the video). Based on the determination, the receiving device110amay cease capturing video or may pan/tilt its video camera such that the video does not include the second user. In other embodiments, the second user may first receive the call from the first user on another device, such as a second smart speaker. The server(s)120may determine that a communication connection exists between the sending device110cand the second smart speaker and determine a location at which the second smart speaker is disposed. The server(s)120may thus send a command to the receiving device110ato move to the location of the second smart speaker, after which time the receiving device110ato establish its own communication connection with the sending device110c. In some embodiments, the receiving device110amay track the user by keeping the user in a field of observation. In some embodiments, the user grants authorization to start or continue the call by interacting with the receiving device110a; in these embodiments, the receiving device110amay track the user based on the interaction. In other embodiments, the receiving device110astarts or continues the call based on a property of the user, such as a level of interaction between the user and the receiving device110a. In some embodiments, more than one user (i.e., the second user and another, third user) may be proximate the receiving device110a. The receiving device110aand/or the servers120may initially select the second user as the recipient of the call, but the third user may have been the intended recipient. If the receiving device110aand/or the server(s)120determine that the third user is interacting with the receiving device110a(i.e., facing it and/or speaking to it) and if the second user is not interacting with the receiving device110a, the receiving device110amay configure an input device (e.g., a video camera or microphone array) to receive input corresponding to the third user. FIG.10is a flow diagram of a process to provide communication, according to embodiments of the present invention. 
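Before the FIG. 10 flow, the tracking behavior described above (re-aiming the camera and the microphone array beam at the user's current location, and searching when the user is not detected) can be sketched as a short control step; the device interface names are assumptions.

```python
import math

# Sketch of the tracking step described above. pan_camera_to, set_beam_direction,
# and search_for_user stand in for the device's actual control interfaces.

def bearing_degrees(target_xy, device_xy=(0.0, 0.0)) -> float:
    """Bearing from the device to the target location, in degrees."""
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    return math.degrees(math.atan2(dy, dx))

def track_user(device, user_location_xy) -> None:
    if user_location_xy is None:
        device.search_for_user()              # user not detected: look elsewhere
        return
    bearing = bearing_degrees(user_location_xy)
    device.pan_camera_to(bearing)             # keep the user in the field of view
    device.set_beam_direction(bearing)        # amplify audio from that direction
```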
The process may be implemented at least in part by one or more of the device(s)110and/or the servers120,125. The server(s)120receives (1002) a request from, for example, the first device110bto establish a communication connection with a second device110a. The request may include, for example, a communication type, such as a video communication and/or audio communication, and information identifying the second device110aand/or the second user, such as a device name, device address, user name, and/or user address. The server(s)120determines (1004) that a user account of the second user indicates that the second device110cis an autonomously mobile device. The user account may indicate, for example, a device type, device name, or device properties of the second device110athat indicates that the second device110ais autonomously mobile device. The information in the user account may be provided by the first user or second user or determined from the second device110a. The server(s)120determines (1006) that the second device110ais disposed in a first location in an environment, such as a house, room, yard, or other such environment. For example, the server(s)120may send a query to the second device110arequesting its location; the second device110amay, in response determine its location using, for example, sensor data, video data, network data, or other such data, and may send its location to the server(s)120. The server(s)120may determine (1008) that the second user is disposed at the first location. The server(s)120may, for example, send a query to the second device110ato monitor the first location to determine if the user is present using, for example, a camera and/or microphone. The second device110amay send audio and/or video data to the server(s)120, and the server(s)120may determine that the second user is present at the first location. In other embodiments, the second device110adetermines that the second user is present at the first location. If the second user is present at the first location, the communication server125may establish (1010) a communication connection between the first device and the second device, as described above. The communication server125may instruct the second device110ato ask the second user whether they would like to accept the incoming call; in other embodiments, the first user is authorized to establish the communication connection without the second device110aasking the second user. If, however, the server(s)120and/or second device110adetermines that the second user is not at the first location, the second device110asearches the environment to locate the second user. For example, the second device110amay move to a different room or different area in an effort to locate the second user. If the server(s)120and/or second device110adetermines (1014) that the second user is at a next location, the communication server125establishes the communication connect as described above. If not, the second device continues searching (1012). FIG.11is a block diagram conceptually illustrating a device110that may be used with the system.FIG.12is a block diagram conceptually illustrating example components of a remote device, such as the server(s)120, which may assist with ASR processing, NLU processing, etc., and the skill server(s)225. The term “server” as used herein may refer to a traditional server as understood in a server/client computing structure but may also refer to a number of different computing components that may assist with the operations discussed herein. 
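The FIG. 10 flow described above reduces to a short search loop on the second device: establish the connection if the user is at the current location, otherwise move through candidate locations until the user is found. The helper names (user_present_at, candidate_locations, move_to) are assumptions.

```python
# Compact sketch of the FIG. 10 flow: establish the connection if the second
# user is at the current location, otherwise search the environment. The
# helper methods are assumed stand-ins for the device's own capabilities.

def locate_user_and_connect(device, establish_connection) -> bool:
    if device.user_present_at(device.current_location()):    # steps 1006-1008
        establish_connection()                                # step 1010
        return True
    for location in device.candidate_locations():            # step 1012: search
        device.move_to(location)
        if device.user_present_at(location):                 # step 1014
            establish_connection()
            return True
    return False   # user never found; no communication connection is made
```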
For example, a server may include one or more physical computing components (such as a rack server) that are connected to other devices/components either physically and/or over a network and is capable of performing computing operations. A server may also include one or more virtual machines that emulates a computer system and is run on one or across multiple devices. A server may also include other combinations of hardware, software, firmware, or the like to perform operations discussed herein. The server(s) may be configured to operate using one or more of a client-server model, a computer bureau model, grid computing techniques, fog computing techniques, mainframe techniques, utility computing techniques, a peer-to-peer model, sandbox techniques, or other computing techniques. Multiple servers (120/125/225) may be included in the system, such as one or more servers120for performing ASR processing, one or more servers120for performing NLU processing, one or more server(s)125for routing and managing communications, one or more skill server(s)225for performing actions responsive to user inputs, etc. In operation, each of these devices (or groups of devices) may include computer-readable and computer-executable instructions that reside on the respective device (120/125/225), as will be discussed further below. Each of these devices (110/120/125/225) may include one or more controllers/processors (1104/1204), which may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory (1106/1206) for storing data and instructions of the respective device. The memories (1106/1206) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive memory (MRAM), and/or other types of memory. Each device (110/120/125/225) may also include a data storage component (1108/1208) for storing data and controller/processor-executable instructions. Each data storage component (1108/1208) may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each device (110/120/125/225) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces (1102/1202). Computer instructions for operating each device (110/120/125/225) and its various components may be executed by the respective device's controller(s)/processor(s) (1104/1204), using the memory (1106/1206) as temporary “working” storage at runtime. A device's computer instructions may be stored in a non-transitory manner in non-volatile memory (1106/1206), storage (1108/1208), or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software. Each device (110/120/125/225) includes input/output device interfaces (1102/1202). A variety of components may be connected through the input/output device interfaces (1102/1202), as will be discussed further below. Additionally, each device (110/120/125/225) may include an address/data bus (1124/1224) for conveying data among components of the respective device. Each component within a device (110/120/125/225) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus (1124/1224). 
Referring toFIG.11, the device110may include input/output device interfaces1102that connect to a variety of components such as an audio output component such as a speaker1112, a wired headset or a wireless headset (not illustrated), or other component capable of outputting audio. The device110may also include an audio capture component. The audio capture component may be, for example, a microphone1120or array of microphones, a wired headset or a wireless headset (not illustrated), etc. If an array of microphones is included, approximate distance to a sound's point of origin may be determined by acoustic localization based on time and amplitude differences between sounds captured by different microphones of the array. The device110may additionally include a display1116for displaying content. The device110may further include a camera1118. Via antenna(s)1114, the input/output device interfaces1102may connect to one or more networks199via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, 4G twork, 5G network, etc. A wired connection such as Ethernet may also be supported. Through the network(s)199, the system may be distributed across a networked environment. The I/O device interface (1102/1202) may also include communication components that allow data to be exchanged between devices such as different physical servers in a collection of servers or other components. The components of the device(s)110, the server(s)120, or the skill server(s)225may include their own dedicated processors, memory, and/or storage. Alternatively, one or more of the components of the device(s)110, the server(s)120, or the skill server(s)225may utilize the I/O interfaces (1102/1202), processor(s) (1104/1204), memory (1106/1206), and/or storage (1108/1208) of the device(s)110server(s)120, or the skill server(s)225, respectively. Thus, the ASR component250may have its own I/O interface(s), processor(s), memory, and/or storage; the NLU component260may have its own I/O interface(s), processor(s), memory, and/or storage; and so forth for the various components discussed herein. As noted above, multiple devices may be employed in a single system. In such a multi-device system, each of the devices may include different components for performing different aspects of the system's processing. The multiple devices may include overlapping components. The components of the device110, the server(s)120, and the communication server(s)25, as described herein, are illustrative, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system. As illustrated inFIG.13, multiple devices (110a-110g,120,225) may contain components of the system and the devices may be connected over a network(s)199. The network(s)199may include a local or private network or may include a wide network such as the Internet. Devices may be connected to the network(s)199through either wired or wireless connections. For example, an autonomously mobile device110a, a smartphone110b, a voice-controlled device110c, a tablet computer110d, a vehicle110e, a display device110f, and/or a smart television110gmay be connected to the network(s)199through a wireless service provider, over a WiFi or cellular network connection, or the like. 
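The acoustic localization mentioned above (estimating where a sound originates from time and amplitude differences between microphones of the array) can be illustrated by estimating the time difference of arrival between two microphones with a cross-correlation. NumPy is an assumed dependency and the sample rate is illustrative.

```python
import numpy as np

# Sketch of acoustic localization by time difference of arrival (TDOA): the
# delay between two microphones is found from the peak of their cross-
# correlation. NumPy is an assumed dependency; the sample rate is illustrative.

SAMPLE_RATE_HZ = 16_000

def estimate_tdoa_seconds(mic_a: np.ndarray, mic_b: np.ndarray) -> float:
    """Return how much later the sound arrives at mic_b than at mic_a."""
    correlation = np.correlate(mic_b, mic_a, mode="full")
    lag_samples = int(np.argmax(correlation)) - (len(mic_a) - 1)
    return lag_samples / SAMPLE_RATE_HZ

# Example: the same burst reaches mic_b 20 samples (1.25 ms) after mic_a.
burst = np.random.default_rng(0).standard_normal(256)
mic_a = np.concatenate([burst, np.zeros(40)])
mic_b = np.concatenate([np.zeros(20), burst, np.zeros(20)])
print(round(estimate_tdoa_seconds(mic_a, mic_b) * 1000, 2), "ms")  # -> 1.25 ms
```

Combining such pairwise delays across the array, together with the known microphone geometry, yields an approximate direction (and, with more microphones, distance) to the sound's point of origin.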
Other devices are included as network-connected support devices, such as the server(s)120, the skill server(s)225, and/or others. The support devices may connect to the network(s)199through a wired connection or wireless connection. Networked devices may capture audio using one-or-more built-in or connected microphones or other audio capture devices, with processing performed by ASR components, NLU components, or other components of the same device or another device connected via the network(s)199, such as the ASR component250, the NLU component260, etc. of one or more servers120. The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, and distributed computing environments. The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers and speech processing should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art, that the disclosure may be practiced without some or all of the specific details and steps disclosed herein. Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, components of system may be implemented as in firmware or hardware, such as an acoustic front end (AFE), which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)). Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. 
Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
11862160
DESCRIPTION OF EXEMPLARY EMBODIMENTS A display system1according to a first embodiment has a projector2, a smart speaker3, and a server device4, as shown inFIG.1. The projector2is an example of a display device. The smart speaker3is an example of a voice processing device. The projector2, the smart speaker3, and the server device4are connected to each other via a network5. The projector2displays an image on a screen7or the like, based on image data supplied from an image supply device6shown inFIG.2. The image supply device6is a disk-type recording medium playback device, a television tuner device, a personal computer or the like. The data supplied from the image supply device6to the projector2is not limited to image data and also includes data relating to a voice. The data relating to a voice is, for example, data in which a voice is played back along with a dynamic image of a movie that is displayed. The dynamic image is not limited to a movie and includes various dynamic images such as a television program or a video distributed via the internet. Also, the image data supplied from the image supply device6is not limited to data relating to a dynamic image and also includes a still image. The data supplied from the image supply device6includes data in which a voice is played back along with a still image that is displayed. Referring back toFIG.1, the smart speaker3is a device implementing a voice assistant function. The voice assistant function is a function of implementing an operation in response to a question or a request uttered by a user. The server device4provides various kinds of information, data, an operation command or the like to a device connected thereto via the network5. In this embodiment, the projector2can be operated, based on a voice of an utterance of the user. The smart speaker3generates voice data based on the voice of the utterance of the user. The smart speaker3transmits the voice data to the server device4. The server device4analyzes the voice data received from the smart speaker3and transmits, to the projector2, a command to execute an operation to the projector2. The projector2executes the operation in response to the command received from the server device4. The voice assistant function is thus implemented. Various operations to the projector2are classified into a plurality of types. Of the plurality of types of operations, an operation belonging to a first type, which is a part of the types, is referred to as a first-type operation. Of the plurality of types of operations, an operation belonging to a second type, which is a different type from the first type, is referred to as a second-type operation. When the content of an utterance of the user of the display system1is to request a first-type operation to the projector2, the voice of the utterance of the user is defined as a first voice. Voice data based on a first voice is referred to as first voice data. When the content of an utterance of the user of the display system1is to request a second-type operation to the projector2, the voice of the utterance of the user is defined as a second voice. Voice data based on a second voice is referred to as second voice data. In this embodiment, when the voice of an utterance of the user is a first voice requesting a first-type operation, the server device4analyzes voice data received from the smart speaker3and transmits, to the projector2, a command to execute the first-type operation to the projector2. 
A second-type operation is performed by the projector2recognizing a second voice of an utterance of the user, without going through the server device4. The second-type operation includes adjustment of the volume of the projector2and adjustment of the image quality of a display image displayed by the projector2. The adjustment of the image quality of the display image includes adjustment of the brightness of the display image, adjustment of the contrast of the display image, enlargement and reduction of the display image, and the like. That is, in this embodiment, the projector2can perform these adjustment operations by recognizing a second voice of an utterance of the user, without going through the server device4. As shown inFIG.2, the projector2has a first control unit10, an interface unit11, a frame memory12, an image processing unit13, an OSD processing unit14, a voice input-output unit15, a first communication unit16, a projection unit17, and a drive unit18. These units are communicatively coupled to the first control unit10via a bus19. The first control unit10has a first processor21and a first storage unit22. The first control unit10comprehensively controls the operations of the projector2. The first processor21reads out a control program23saved in the first storage unit22and executes various kinds of processing. The first control unit10executes various kinds of processing by the cooperation of hardware and software. In the first control unit10, the first processor21executes processing based on the control program23and thus functions as a voice data acquisition unit31, a voice recognition unit32, an operation processing unit33, and a projection control unit34. The first storage unit22stores setting data36and voice dictionary data37in addition to the control program23. The first storage unit22has a non-volatile storage area and a volatile storage area. The control program23, the setting data36, and the voice dictionary data37are saved in the non-volatile storage area of the first storage unit22. The volatile storage area forms a work area for temporarily storing a program executed by the first processor21and various data. The setting data36includes a set value relating to the operation of the projector2. The set value included in the setting data36is, for example, a set value representing the volume level of a voice outputted from a speaker38, described later, a set value representing the content of processing executed by the image processing unit13and the OSD processing unit14, a parameter used for the processing by the image processing unit13and the OSD processing unit14, or the like. The voice dictionary data37is data for converting a voice of the user detected by a microphone39, described later, into data that can be recognized by the voice recognition unit32. For example, the voice dictionary data37includes dictionary data for converting digital data of a voice of the user into text data in Japanese, English or another language. The voice dictionary data37also includes data representing the content of the foregoing second-type operation. The interface unit11has communication hardware such as a connector and an interface circuit conforming to a predetermined communication standard. The interface unit11transmits and receives image data, voice data, control data and the like to and from the image supply device6, under the control of the first control unit10. 
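The setting data 36 described above holds set values such as the speaker volume and image-processing parameters, and the second-type operations adjust exactly these values. A minimal sketch of applying a locally recognized second-type adjustment to such settings follows; the field names, wordings, and step sizes are assumptions, not the actual data layout.

```python
# Minimal sketch (field names and step sizes are assumptions) of applying a
# second-type operation, recognized locally by the projector, to the kind of
# set values held in the setting data 36.
from dataclasses import dataclass

@dataclass
class ProjectorSettings:
    volume: int = 5          # speaker output level
    brightness: int = 50     # display-image brightness
    contrast: int = 50       # display-image contrast

SECOND_TYPE_ADJUSTMENTS = {
    "volume up":      ("volume", +1),
    "volume down":    ("volume", -1),
    "brighter":       ("brightness", +5),
    "darker":         ("brightness", -5),
    "more contrast":  ("contrast", +5),
}

def apply_second_type_operation(settings: ProjectorSettings, wording: str) -> bool:
    """Apply a recognized second-type adjustment; return False if not recognized."""
    entry = SECOND_TYPE_ADJUSTMENTS.get(wording)
    if entry is None:
        return False
    field, delta = entry
    setattr(settings, field, getattr(settings, field) + delta)
    return True

settings = ProjectorSettings()
apply_second_type_operation(settings, "volume up")
print(settings.volume)   # -> 6
```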
The frame memory12, the image processing unit13, and the OSD processing unit14are formed, for example, of an integrated circuit. In the frame memory12, image data received from the image supply device6is temporarily loaded. The image processing unit13performs various kinds of image processing on the image data loaded in the frame memory12, based on an instruction from the first control unit10. The image processing is, for example, resolution conversion, resizing, correction of a distortion, shape correction, digital zoom, adjustment of the color tone and luminance of an image, or the like. The image processing unit13reads out the image data that has been processed from the frame memory12and outputs the image data to the OSD processing unit14. The OSD processing unit14, under the control of the first control unit10, superimposes various OSD (on-screen display) images on an image represented by the image data inputted from the image processing unit13. The OSD image is a menu image for various settings of the projector2, a message image for giving various messages, or the like. The OSD processing unit14combines the image data inputted from the image processing unit13with the image data of the OSD image, under the control of the first control unit10. The combined image data is outputted to the drive unit18. When the first control unit10gives no instruction to superimpose an OSD image, the OSD processing unit14outputs the image data inputted from the image processing unit13, directly to the drive unit18without processing the image data. The voice input-output unit15has the speaker38, the microphone39, and a signal processing unit40. When digital voice data is inputted to the signal processing unit40from the first control unit10, the signal processing unit40converts the inputted digital voice data into analog voice data. The signal processing unit40outputs the converted analog voice data to the speaker38. The speaker38outputs a voice based on the inputted voice data. The voice outputted from the speaker38includes a voice supplied from the image supply device6, a voice giving various messages, or the like. The microphone39detects a voice in the peripheries of the projector2. Analog voice data is inputted to the signal processing unit40via the microphone39. The signal processing unit40converts the analog voice data inputted from the microphone39into digital voice data. The signal processing unit40outputs the digital voice data to the first control unit10. The first communication unit16has communication hardware conforming to a predetermined communication standard and communicates with a device connected to the network5according to the predetermined communication standard under the control of the first control unit10. The first communication unit16in this embodiment can communicate with the server device4via the network5. The communication standard used by the first communication unit16may be a wireless communication standard or a wired communication standard. The projection unit17has a light source unit41, a light modulation device42, and a projection system43. The drive unit18has a light source drive circuit44and a light modulation device drive circuit45. The light source drive circuit44is coupled to the first control unit10via the bus19. The light source drive circuit44is also coupled to the light source unit41. The light source drive circuit44controls the light emission of the light source unit41under the control of the first control unit10. 
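The OSD processing unit 14 described above either composites an OSD image (a menu or message image) over the processed frame or passes the frame through untouched. A sketch of that pass-through-or-composite decision follows; the array-based alpha blending and the NumPy dependency are assumptions for illustration, not the unit's actual compositing method.

```python
import numpy as np

# Sketch of the OSD pass-through-or-superimpose behavior described above,
# using simple alpha blending over image arrays. NumPy and the alpha-blend
# formulation are assumptions.

def osd_process(frame: np.ndarray, osd_image=None, osd_alpha=None) -> np.ndarray:
    """Return the frame with the OSD image blended on top, or the frame as-is
    when no OSD superimposition has been requested."""
    if osd_image is None:
        return frame                     # no instruction from the control unit
    alpha = osd_alpha if osd_alpha is not None else np.ones(frame.shape[:2])
    alpha = alpha[..., np.newaxis]       # broadcast over the color channels
    return (alpha * osd_image + (1.0 - alpha) * frame).astype(frame.dtype)

# Example: blend a half-transparent menu image over a 720p frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
menu = np.full((720, 1280, 3), 255, dtype=np.uint8)
out = osd_process(frame, menu, osd_alpha=np.full((720, 1280), 0.5))
print(out[0, 0])   # -> [127 127 127]
```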
The control of the light emission includes not only a control to turn on and off the light source unit41but also a control on the intensity of the light emission of the light source unit41. The light modulation device drive circuit45is coupled to the first control unit10via the bus19. The light modulation device drive circuit45is also coupled to the light modulation device42. The light modulation device drive circuit45, under the control of the first control unit10, drives the light modulation device42and draws an image on a frame basis at a light modulation element provided in the light modulation device42. Image data corresponding to each of the primary colors of R, G, and B is inputted to the light modulation device drive circuit45from the image processing unit13. The light modulation device drive circuit45converts the inputted image data into a data signal suitable for the operation of a liquid crystal panel that is the light modulation element provided in the light modulation device42. The light modulation device drive circuit45applies a voltage to each pixel in each liquid crystal panel, based on the converted data signal, and thus draws an image on each liquid crystal panel. The light source unit41is formed of a lamp such as a halogen lamp, a xenon lamp, or an ultra-high-pressure mercury lamp, or a solid-state light source such as an LED or a laser light source. The light source unit41turns on with electric power supplied from the light source drive circuit44and emits light toward the light modulation device42. The light modulation device42has, for example, three liquid crystal panels corresponding to the three primary colors of R, G, and B. R represents red. G represents green. B represents blue. The light emitted from the light source unit41is separated into color lights of the three colors of R, G, and B, which then become incident on the corresponding liquid crystal panels. Each of the three liquid crystal panels is a transmission-type liquid crystal panel, and modulates light transmitted therethrough and thus generates image light. The image lights passed and modulated through the respective liquid crystal panels are combined together by a light combining system such as a cross dichroic prism and emitted to the projection system43. While a case where the light modulation device42has transmission-type liquid crystal panels as light modulation elements is described as example in this embodiment, the light modulation element may be a reflection-type liquid crystal panel or a digital micromirror device. The projection system43has a lens, a mirror, and the like for causing the image light modulated by the light modulation device42to form an image on the screen7. The projection system43may have a zoom mechanism for enlarging or reducing an image projected on the screen7, a focus adjustment mechanism for adjusting the focus, and the like. The voice data acquisition unit31acquires voice data representing a voice detected by the microphone39, from the voice input-output unit15. The voice data acquisition unit31outputs the acquired voice data to the voice recognition unit32. The voice recognition unit32recognizes the voice detected by the microphone39, based on the voice data inputted from the voice data acquisition unit31. The voice recognition unit32outputs the result of the voice recognition to the operation processing unit33. The recognition of a voice by the voice recognition unit32is performed in the following manner. 
The voice recognition unit32converts a voice collected by the microphone39into a text. The voice recognition unit32analyzes the voice data of the text, referring to the voice dictionary data37. At this point, the voice recognition unit32determines whether a wording that matches the wording represented by the voice data acquired from the microphone39is included in the voice dictionary data37or not. For example, the voice recognition unit32performs character string search through the voice data of the text and thus determines whether the wording represented by the voice data acquired from the microphone39is included in the voice dictionary data37or not. When the wording represented by the voice data acquired from the microphone39is included in the voice dictionary data37, the voice recognition unit32converts the voice data into a text and thus generates second voice data. The voice recognition unit32outputs the second voice data to the operation processing unit33as the result of the voice recognition. More specifically, the voice recognition unit32outputs the voice data to the operation processing unit33with a flag indicating that the voice data converted into the text is second voice data, as the result of the voice recognition. When the wording represented by the voice data acquired from the microphone39is not included in the voice dictionary data37, the voice recognition unit32outputs the voice data converted into a text to the operation processing unit33as the result of the voice recognition. More specifically, the voice recognition unit32outputs the voice data to the operation processing unit33without a flag indicating that the voice data converted into the text is second voice data, as the result of the voice recognition. The operation processing unit33executes processing to implement an operation to the projector2, in response to a command from the server device4. The operation to the projector2includes a first-type operation and a second-type operation, as described above. An operation executed based on a command from the server device4is a first-type operation. Meanwhile, a second-type operation is performed by the projector2recognizing a second voice of an utterance of the user, without going through the server device4. A second-type operation is executable when the projector2receives a command permitting the execution of the second-type operation from the server device4. That is, a second-type operation is executable during a period permitted by the server device4. When the user makes an utterance requesting the permission for the execution of a second-type operation, the smart speaker3transmits first voice data generated based on the first voice of the utterance, to the server device4. The permission for the execution of a second-type operation is a first-type operation. When the projector2receives a command permitting the execution of the second-type operation from the server device4, the operation processing unit33executes processing to implement the second-type operation to the projector2in response to a command from the first control unit10. That is, the operation processing unit33executes processing to the projector2in response to a command from the first control unit10during a period when the execution of the second-type operation is permitted. In this way, the operation processing unit33executes the processing to implement a first-type operation and the processing to implement a second-type operation. 
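The dictionary lookup performed by the voice recognition unit 32 can be sketched as a simple check against the voice dictionary data 37. This is a minimal illustration: the transcription step is assumed to have already happened, the dictionary is a plain set of wordings, and the names RecognitionResult and recognize are hypothetical.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class RecognitionResult:
    """Result passed from the voice recognition unit 32 to the operation processing unit 33."""
    text: str
    is_second_voice: bool  # the flag indicating the text is second voice data

def recognize(transcribed_text: str, voice_dictionary: Set[str]) -> RecognitionResult:
    """Determine whether the transcribed wording is included in the voice dictionary data 37.

    A substring search stands in for the character-string search described above;
    a real implementation would likely normalize and tokenize the text first.
    """
    matched = any(word in transcribed_text for word in voice_dictionary)
    return RecognitionResult(text=transcribed_text, is_second_voice=matched)

# "higher" is registered as a volume-adjustment wording in this toy dictionary; "hello" is not.
dictionary = {"higher", "lower", "more"}
print(recognize("higher", dictionary).is_second_voice)  # True -> output with the flag
print(recognize("hello", dictionary).is_second_voice)   # False -> output without the flag
```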
The projection control unit34controls the image processing unit13, the OSD processing unit14, the drive unit18, and the like, to display an image on the screen7. At this point, the projection control unit34controls the image processing unit13and causes the image processing unit13to process image data loaded in the frame memory12. The projection control unit34also controls the OSD processing unit14and causes the OSD processing unit14to process the image data inputted from the image processing unit13. The projection control unit34also controls the light source drive circuit44and causes the light source drive circuit44to turn on the light source unit41. The projection control unit34also controls the light modulation device drive circuit45to drive the light modulation device42and causes the projection unit17to project image light and display an image on the screen7. The projection control unit34also controls the driving of the projection system43to adjust the zoom and the focus of the projection system43. As shown inFIG.3, the smart speaker3has a second control unit50, a second communication unit51, and a voice input-output unit52. The second control unit50has a second processor53and a second storage unit54. The second control unit50comprehensively controls the operations of the smart speaker3. The second processor53reads out a second control program55saved in the second storage unit54and executes various kinds of processing. The second control unit50executes various kinds of processing by the cooperation of hardware and software. In the second control unit50, the second processor53executes processing based on the second control program55and thus functions as a voice data acquisition unit56, a wake word determination unit58, and a response unit59. The second storage unit54stores second setting data61and wake word data62in addition to the second control program55. The second setting data61includes a set value relating to the operation of the smart speaker3. The wake word data62is data representing a wake word, which is a predetermined wording. The wake word is a wording that uniquely specifies the smart speaker3and can include any word. The second storage unit54has a non-volatile storage area and a volatile storage area. The second control program55, the second setting data61, and the wake word data62are saved in the non-volatile storage area of the second storage unit54. The volatile storage area forms a work area for temporarily storing a program executed by the second processor53and various data. The second communication unit51has communication hardware conforming to a predetermined communication standard and communicates with a device connected to the network5according to the predetermined communication standard under the control of the second control unit50. The second communication unit51in this embodiment can communicate with the server device4via the network5. The communication standard used by the second communication unit51may be a wireless communication standard or a wired communication standard. The voice input-output unit52has a second speaker63, a second microphone64, and a second signal processing unit65. When digital voice data is inputted to the second signal processing unit65from the second control unit50, the second signal processing unit65converts the inputted digital voice data into analog voice data. The second signal processing unit65outputs the converted analog voice data to the second speaker63. The second speaker63outputs a voice based on the inputted voice data. 
The voice outputted from the second speaker63includes a voice supplied from the server device4, a voice giving various messages, or the like. The second microphone64detects a voice in the peripheries of the smart speaker3. Analog voice data is inputted to the second signal processing unit65via the second microphone64. The second signal processing unit65converts the analog voice data inputted from the second microphone64into digital voice data. The second signal processing unit65outputs the digital voice data to the second control unit50. The voice data acquisition unit56acquires voice data representing a voice detected by the second microphone64, from the voice input-output unit52. The voice data acquisition unit56outputs the acquired voice data to the wake word determination unit58. The wake word determination unit58determines whether the voice data includes the wake word or not, based on the voice data inputted from the voice data acquisition unit56. The wake word determination unit58outputs the result of the determination to the server device4. The determination of the wake word by the wake word determination unit58is performed in the following manner. The wake word determination unit58converts a voice collected by the second microphone64into a text. The wake word determination unit58analyzes the voice data of the text, referring to the wake word data62. At this point, the wake word determination unit58determines whether the voice data of the text includes a wording that matches the wake word or not. The wake word is represented by the wake word data62. The wake word determination unit58determines whether the voice data of the text includes the wake word or not, referring to the wake word data62. For example, the wake word determination unit58performs character string search through the voice data of the text and thus determines whether the voice data of the text includes the wake word or not. The wake word included in the voice detected by the second microphone64is a first voice. The wake word determination unit58outputs wake word detection information representing whether the voice includes the wake word or not, to the server device4, as the result of the determination. When the voice detected by the second microphone64includes the wake word, the voice data acquisition unit56outputs the voice data following the wake word to the server device4. At this point, the voice data following the wake word is first voice data based on a first voice. The server device4executes a voice assistant function. The voice assistant function is a function of processing an operation corresponding to the voice following the wake word. The voice assistant function is, for example, to turn on and off the power of the projector2, to start displaying an image, to switch image sources, to project an OSD image, to search for or output information of a video or music, and the like. These operations are classified as the first-type operation of the plurality of types of operations. The voice requesting a first-type operation following the wake word is a first voice. Based on the first voice requesting the first-type operation, first voice data is generated. The server device4gives the projector2a command to execute the processing of the first-type operation corresponding to the first voice following the wake word. The operation processing unit33of the projector2shown inFIG.2executes the processing of the first-type operation in response to a processing execution command inputted from the server device4. 
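The wake word determination and the forwarding of the wording that follows the wake word can be modeled compactly. In the sketch below, the wake word "hey speaker" and the function name handle_detected_speech are illustrative assumptions; a real wake word engine operates on audio features rather than transcribed text.

```python
from typing import Optional, Tuple

def handle_detected_speech(transcribed_text: str, wake_word: str) -> Tuple[bool, Optional[str]]:
    """Sketch of the wake word determination unit 58 plus the forwarding behavior.

    If the transcription contains the wake word, the wording that follows it is
    treated as first voice data (to be transmitted to the server device 4), and the
    wake word detection information is reported as True.
    """
    index = transcribed_text.find(wake_word)
    if index == -1:
        return False, None
    first_voice_data = transcribed_text[index + len(wake_word):].strip()
    return True, first_voice_data

detected, request = handle_detected_speech("hey speaker start the projector", "hey speaker")
print(detected, request)  # True  start the projector
```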
Referring back to FIG. 3, the server device 4 outputs response data to the effect that the voice assistant function is to be executed, to the response unit 59 of the smart speaker 3. The response unit 59 outputs a response signal to the effect that a request for the first-type operation is accepted, to the voice input-output unit 52, based on the response data inputted from the server device 4. The voice input-output unit 52 outputs a voice to the effect that the request is accepted, from the second speaker 63, based on the inputted response data. Thus, the user can recognize that the request is accepted. A flow of starting the projector 2 as an example of the first-type operation and adjusting the volume as an example of the second-type operation will now be described. This operation is started by the user uttering the wake word and a request to start the projector 2 in step S1, as shown in FIG. 4. For example, the user utters "start the projector" following the wake word. Based on the first voice of this utterance, the smart speaker 3 recognizes the wake word. In step S2, the smart speaker 3 transmits first voice data having the content "start the projector" following the wake word, to the server device 4. In step S3, the server device 4 transmits a command to turn on the power of the projector 2 to the projector 2, based on the first voice data having the content "start the projector". In step S4, the projector 2 sends a response to the effect that the command to turn on the power is accepted, to the server device 4. In step S5, the projector 2 turns on the power of the projector 2 in response to the command from the server device 4. The startup of the projector 2, which is an example of the first-type operation, is thus executed. More specifically, the first control unit 10 of the projector 2 shown in FIG. 2 turns on the power of the projector 2 and sends the response to the effect that the command to turn on the power is accepted, to the server device 4. Referring back to FIG. 4, in step S6, the server device 4 transmits response data representing that the startup of the projector 2 is accepted, to the smart speaker 3. In step S7, the smart speaker 3 notifies by voice that the request by the user is accepted, based on the response data from the server device 4. At this point, for example, the smart speaker 3 gives a voice notification "OK". A flow of volume adjustment of the projector 2, which is an example of the second-type operation, will now be described. The processing of volume adjustment of the projector 2 is started by the user uttering the wake word and a request for volume adjustment of the projector 2. In step S8, for example, the user utters "projector volume adjustment" following the wake word. In step S9, the smart speaker 3 transmits first voice data having the content "projector volume adjustment" following the wake word, to the server device 4. In step S10, the server device 4 transmits a command permitting the execution of the second-type operation, to the projector 2, based on the first voice data having the content "projector volume adjustment". In step S11, the projector 2 sends a response to the effect that the volume adjustment of the projector 2 is accepted, to the server device 4, in response to the command permitting the execution of the second-type operation received from the server device 4. 
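The request/response exchange in steps S1 through S11 can be summarized with a toy model of the server device 4 and the projector 2. The command strings and class names below are assumptions made for illustration; they simply mirror the power-on, permit, and prohibit commands described in this flow.

```python
def server_handle_first_voice(first_voice_data: str) -> str:
    """Toy model of the server device 4 mapping first voice data to a projector command."""
    if "end volume adjustment" in first_voice_data:
        return "PROHIBIT_SECOND_TYPE"   # steps S23-S24 later in the flow
    if "volume adjustment" in first_voice_data:
        return "PERMIT_SECOND_TYPE"     # step S10: permit second-type operations
    if "start the projector" in first_voice_data:
        return "POWER_ON"               # step S3: power-on command (a first-type operation)
    return "UNSUPPORTED"

class ProjectorModel:
    """Toy model of the projector 2 acting on commands and returning its response."""
    def __init__(self) -> None:
        self.powered_on = False
        self.second_type_permitted = False

    def execute(self, command: str) -> str:
        if command == "POWER_ON":
            self.powered_on = True
        elif command == "PERMIT_SECOND_TYPE":
            self.second_type_permitted = True
        elif command == "PROHIBIT_SECOND_TYPE":
            self.second_type_permitted = False
        return "accepted"  # the response sent back to the server device 4

projector = ProjectorModel()
print(projector.execute(server_handle_first_voice("start the projector")))  # accepted
print(projector.powered_on)                                                 # True
```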
In step S12, the server device4transmits response data representing that the volume adjustment, which is the second-type operation, is accepted, to the smart speaker3. In step S13, the smart speaker3notifies by voice that the request by the user is accepted, based on the response data from the server device4. At this point, for example, the smart speaker3gives a voice notification “volume adjustment is available”. In step S14, the projector2starts executing the volume adjustment, which is the second-type operation, in response to the command permitting the execution of the second-type operation received from the server device4. At this point, the first control unit10of the projector2shown inFIG.2starts executing the second-type operation based on the second voice recognized by the voice recognition unit32. In step S15, for example, the user utters “higher”, requesting an increase in the volume. In step S16, the projector2executes an operation of increasing the volume, based on the second voice “higher”. In step S17, for example, the user utters “more”, requesting a further increase in the volume. In step S18, the projector2executes an operation of increasing the volume, based on the second voice “more”. The second-type operation based on the second voice “more” is the same operation as the second-type operation executed immediately before by the projector2. That is, the second-type operation based on the second voice “more” is an operation of repeating the second-type operation executed immediately before by the projector2. In step S19, for example, the user utters “lower”, requesting a decrease in the volume. In step S20, the projector2executes an operation of decreasing the volume, based on the second voice “lower”. As the wording that means an increase in the volume, in addition to “higher”, various other words such as “up”, “larger”, “increase”, “large”, and “high” are saved in the voice dictionary data37shown inFIG.2. As the wording that means a decrease in the volume, in addition to “lower”, various other words such as “down”, “smaller”, “decrease”, “small”, and “low” are saved in the voice dictionary data37. The voice recognition unit32determines whether such words used in association with the volume adjustment are included in the voice dictionary data37or not, referring to the voice dictionary data37. When the word used is included in the voice dictionary data37, the voice recognition unit32outputs the result of the voice recognition to the operation processing unit33, as second voice data. The operation processing unit33executes the volume adjustment of the projector2, based on the second voice data. When the word uttered by the user is not included in the voice dictionary data37, the voice recognition unit32outputs the result of the voice recognition to the effect that the voice is not equivalent to second voice data, to the operation processing unit33. In this case, the operation processing unit33does not execute any processing. As the wording that means repeating the second-type operation executed immediately before by the projector2, in addition to “more”, various other words such as “further”, “once more”, “once again”, and “again” are saved in the voice dictionary data37. The voice recognition unit32determines whether such words used in association with the second-type operation including the volume adjustment are included in the voice dictionary data37or not, referring to the voice dictionary data37. 
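The mapping from recognized second voices to volume operations, including the repeat wordings, can be sketched as a dictionary lookup. The sets below list the example wordings given above; the function name and return values are illustrative only.

```python
from typing import Optional

# Example wordings from the voice dictionary data 37 for volume adjustment.
VOLUME_UP_WORDS = {"higher", "up", "larger", "increase", "large", "high"}
VOLUME_DOWN_WORDS = {"lower", "down", "smaller", "decrease", "small", "low"}
REPEAT_WORDS = {"more", "further", "once more", "once again", "again"}

def interpret_second_voice(wording: str, last_action: Optional[str]) -> Optional[str]:
    """Map a recognized second voice to a volume action ("up" or "down").

    Returns None when the wording is not in the dictionary, in which case the
    operation processing unit 33 would not execute any processing. Repeat
    wordings re-issue the second-type operation executed immediately before.
    """
    if wording in VOLUME_UP_WORDS:
        return "up"
    if wording in VOLUME_DOWN_WORDS:
        return "down"
    if wording in REPEAT_WORDS:
        return last_action
    return None

last = None
for utterance in ["higher", "more", "lower", "banana"]:
    action = interpret_second_voice(utterance, last)
    print(utterance, "->", action)  # higher -> up, more -> up, lower -> down, banana -> None
    if action:
        last = action
```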
The voice recognition unit32then outputs the result of the determination to the operation processing unit33. Referring back toFIG.4, the second-type operation is ended by the user uttering the wake word and a request to end the volume adjustment of the projector2in step S21. For example, the user utters “end volume adjustment” following the wake word. Based on the first voice of the utterance, the smart speaker3recognizes the wake word. In step S22, the smart speaker3transmits first voice data having the content “end volume adjustment” following the wake word, to the server device4. In step S23, the server device4transmits a command prohibiting the execution of the second-type operation, to the projector2, based on the first voice data having the content “end volume adjustment”. In step S24, the projector2sends a response to the effect that the volume adjustment of the projector2is to end, to the server device4, in response to the command prohibiting the execution of the second-type operation received from the server device4. In step S25, the server device4transmits response data representing that the volume adjustment, which is the second-type operation, is to end, to the smart speaker3. In step S26, the smart speaker3notifies by voice that the request by the user is accepted, based on the response data from the server device4. At this point, for example, the smart speaker3gives a voice notification “volume adjustment ends”. In step S27, the projector2ends the execution of the volume adjustment, which is the second-type operation, in response to the command to end the execution of the second-type operation received from the server device4. In the display system1according to the first embodiment, based on a first voice requesting a permission for the execution of a second-type operation, the projector2receives a command permitting the execution of the second-type operation from the server device4. The first control unit10starts executing the second-type operation based on a second voice recognized by the voice recognition unit32, in response to the command permitting the execution of the second-type operation. Thus, in the execution of the second-type operation, the communication from the smart speaker3to the server device4and the communication from the server device4to the projector2can be omitted. Therefore, the time taken for the execution of the second-type operation can be reduced. A display system1according to a second embodiment will now be described. The display system1according to the second embodiment has a configuration similar to the configuration of the display system1according to the first embodiment except for not having the smart speaker3of the display system1according to the first embodiment and instead having the projector2provided with the functions of the smart speaker3. In the description below of the display system1according to the second embodiment, the same components as those of the display system1according to the first embodiment are denoted by the same reference signs as in the first embodiment and are not described further in detail. In the display system1according to the second embodiment, the projector2has the wake word determination unit58, the response unit59, and the wake word data62, as shown inFIG.5. 
The projector2in the second embodiment has the functions of the second control unit50, the functions of the second communication unit51, the functions of the voice input-output unit52, the functions of the second processor53, and the functions of the voice data acquisition unit56of the smart speaker3shown inFIG.3. In the projector2in the second embodiment, the functions of the second control unit50of the smart speaker3shown inFIG.3are included in the functions of the first control unit10. Similarly, in the projector2in the second embodiment, the functions of the second communication unit51are included in the functions of the first communication unit16. The functions of the voice input-output unit52are included in the functions of the voice input-output unit15. The functions of the second processor53are included in the functions of the first processor21. The functions of the voice data acquisition unit56are included in the functions of the voice data acquisition unit31. The projector2in the second embodiment also includes the second control program55and the second setting data61of the smart speaker3shown inFIG.3. In the projector2in the second embodiment, the second control program55shown inFIG.3is included in the control program23shown inFIG.5. Similarly, the second setting data61shown inFIG.3is included in the setting data36shown inFIG.5. The flow of operation processing in the second embodiment is similar to the flow of operation processing in the first embodiment except that the projector2performs the processing of the smart speaker3. Therefore, the flow of operation processing in the second embodiment will not be described further in detail. The display system1according to the second embodiment with the above configuration has effects similar to those of the display system1according to the first embodiment. In the first and second embodiments, when the end of the execution of a second-type operation is requested during a period when the second-type operation is permitted, the projector2receives a command prohibiting the execution of the second-type operation from the server device4. The first control unit10ends the execution of the second-type operation based on the second voice recognized by the voice recognition unit32, in response to the command prohibiting the execution of the second-type operation. Thus, the execution of the second-type operation can be ended, based on the first voice requesting the end of the execution of the second-type operation. The condition for ending the execution of the second-type operation is not limited to that the end of the execution of the second-type operation is requested. The condition for ending the execution of the second-type operation can be that the execution of a first-type operation is requested during the period when the execution of the second-type operation is permitted. For example, the execution of the second-type operation can be ended, based on the user requesting to switch image sources, which is an example of the first-type operation, during the period when the second-type operation is permitted. In this case, the execution of the second-type operation can be ended, based on the projector2receiving a command to switch images sources from the server device4during the period when the second-type operation is permitted. That is, during the period when the second-type operation is permitted, first voice data generated based on a first voice requesting a first-type operation is transmitted to the server device4. 
Based on this, the projector2receives a command prohibiting the execution of the second-type operation from the server device4. The first control unit10ends the execution of the second-type operation based on the second voice recognized by the voice recognition unit32, in response to the command prohibiting the execution of the second-type operation. Thus, the execution of the second-type operation can be ended, based on the first voice requesting the first-type operation. In the display system1according to the first embodiment, the smart speaker3has the second speaker63. The smart speaker3receives, from the server device4, response data representing a permission in response to the first voice data requesting a permission for the execution of a second-type operation. Based on the response data, the smart speaker3notifies by voice that the execution of the second-type operation is to start, from the second speaker63. Thus, it can be notified by voice that the execution of the second-type operation is to start. In the display system1according to the second embodiment, the projector2has the speaker38. The projector2receives, from the server device4, response data representing a permission in response to first voice data requesting a permission for the execution of a second-type operation. Based on the response data, the projector2notifies by voice that the execution of the second-type operation is to start, from the speaker38. Thus, it can be notified by voice that the execution of the second-type operation is to start. The notification that the execution of the second-type operation is to start is not limited to a voice notification from the smart speaker3or a voice notification from the speaker38. The notification that the execution of the second-type operation is to start can be a notification via characters displayed by the projector2. The notification via characters displayed by the projector2can be implemented by the OSD processing unit14. Under the control of the first control unit10, the OSD processing unit14displays characters showing that the second-type operation is executable, when the projector2receives a command permitting the execution of the second-type operation from the server device4. Thus, it can be shown by characters that the second-type operation is executable. In each of the first and second embodiments, when second voices requesting a second-type operation occur successively, a plurality of successive second-type operations can be recognized as one request. For example, when the user utters “higher, higher” during a period when volume adjustment, which is an example of the second-type operation, is permitted, the voice recognition unit32includes the two successive second voices into one second voice data as one request. The operation processing unit33then executes processing to implement the plurality of second-type operations included in the one second voice data, in response to a command from the first control unit10. Thus, since a plurality of second-type operations recognized as one request can be executed, the time taken for the execution of a plurality of second-type operations can be made shorter than when only one second-type operation is executed in response to one request. 
In other words, since a plurality of second-type operations recognized as one request can be executed, the time taken for the execution of a plurality of second-type operations can be made shorter than when a plurality of second-type operations are recognized one by one and executed one by one. In each of the first and second embodiments, when a plurality of second-type operations recognized as one request include the same content twice successively, the amount of operation in the second second-type operation can be made greater than a prescribed amount of operation. For example, in volume adjustment, which is an example of the second-type operation, the amount of operation to increase the volume is prescribed to a predetermined amount of operation. For example, when the user utters “higher, higher”, that is, an operation of increasing the volume twice successively, the amount of operation in the second operation can be made greater than the prescribed amount of operation. This can be achieved by setting the amount of the operation in the second operation to a higher value than the prescribed amount of operation. Thus, a greater amount of operation can be achieved at a time and therefore the time taken for the operation is reduced further. In each of the first and second embodiments, when a plurality of second-type operations recognized as one request include successive contradictory contents, the amount of operation in the second second-type operation can be made smaller than a prescribed amount of operation. For example, in volume adjustment, which is an example of the second-type operation, the amount of operation to decrease the volume is prescribed to a predetermined amount of operation. For example, when the user successively utters “higher, lower”, that is, an operation of increasing the volume and an operation of decreasing the volume, the amount of operation in the second operation can be made smaller than the prescribed amount of operation. This can be achieved by setting the amount of operation in the second operation to a lower value than the prescribed amount of operation. Thus, fine adjustment can be achieved as in the case where the user wants to increase the volume by more than the prescribed amount of operation but wants to have a volume level lower than the volume acquired by increasing the volume twice successively.
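A possible way to implement the amount-of-operation scaling described above is sketched below. The prescribed step and the scaling factors (doubling for repeated content, halving for contradictory content) are assumptions chosen for illustration; the description does not prescribe particular values.

```python
PRESCRIBED_STEP = 2  # hypothetical prescribed amount of volume change per operation

def apply_compound_request(words, volume: int) -> int:
    """Apply a plurality of successive second-type operations recognized as one request.

    When the same content occurs twice in succession ("higher, higher"), the second
    operation uses a greater amount than the prescribed one; when successive contents
    contradict each other ("higher, lower"), the second operation uses a smaller
    amount, allowing fine adjustment.
    """
    previous = None
    for word in words:
        direction = +1 if word == "higher" else -1 if word == "lower" else 0
        if direction == 0:
            continue  # wording not relevant to volume adjustment
        amount = PRESCRIBED_STEP
        if previous is not None:
            if previous == direction:
                amount = PRESCRIBED_STEP * 2   # same content twice: greater amount
            else:
                amount = PRESCRIBED_STEP // 2  # contradictory contents: smaller amount
        volume += direction * amount
        previous = direction
    return volume

print(apply_compound_request(["higher", "higher"], 10))  # 10 + 2 + 4 = 16
print(apply_compound_request(["higher", "lower"], 10))   # 10 + 2 - 1 = 11
```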
43,965
11862161
The drawings are for purposes of illustrating example embodiments, but it should be understood that the inventions are not limited to the arrangements and instrumentality shown in the drawings. In the drawings, identical reference numbers identify at least generally similar elements. To facilitate the discussion of any particular element, the most significant digit or digits of any reference number refer to the Figure in which that element is first introduced. For example, element 103a is first introduced and discussed with reference to FIG. 1A.
DETAILED DESCRIPTION
I. Overview
Example techniques described herein involve toggling voice input processing via a cloud-based voice assistant service ("VAS"). An example network microphone device ("NMD") may enable or disable processing of voice inputs via a cloud-based voice assistant service based on the physical orientation of the NMD. While processing of voice inputs via the cloud-based VAS is disabled, the NMD may process voice inputs via a local natural language unit (NLU). An NMD is a networked computing device that typically includes an arrangement of microphones, such as a microphone array, that is configured to detect sound present in the NMD's environment. NMDs may facilitate voice control of smart home devices, such as wireless audio playback devices, illumination devices, appliances, and home-automation devices (e.g., thermostats, door locks, etc.). NMDs may also be used to query a cloud-based VAS for information such as search queries, news, weather, and the like. Some users are apprehensive about sending their voice data to a cloud-based VAS for privacy reasons. One possible advantage of processing voice inputs via a local NLU is increased privacy. By processing voice utterances locally, a user may avoid transmitting voice recordings to the cloud (e.g., to servers of a voice assistant service). Further, in some implementations, the NMD may use a local area network to discover playback devices and/or smart devices connected to the network, which may avoid providing personal data relating to a user's home to the cloud. Also, the user's preferences and customizations may remain local to the NMD(s) in the household, perhaps only using the cloud as an optional backup. Other advantages are possible as well. On the other hand, a cloud-based VAS is relatively more capable than a local NLU. In contrast to an NLU implemented in one or more cloud servers, which is capable of recognizing a wide variety of voice inputs, local NLUs are, as a practical matter, capable of recognizing a relatively small library of keywords (e.g., 10,000 words and phrases). Moreover, the cloud-based VAS may support additional features (such as querying for real-time information) relative to a local NLU. In addition, the cloud-based VAS may integrate with other cloud-based services to provide voice control of those services. Given these competing interests, a user may desire to selectively disable voice input processing via a cloud-based VAS (in favor of voice input processing via a local NLU). When voice input processing via the cloud-based VAS is disabled, the user has the benefit of increased privacy. Conversely, when voice input processing via the cloud-based VAS is enabled, the user may take advantage of the relatively more capable cloud-based VAS. Example NMDs may selectively disable voice input processing via a cloud-based VAS based on the physical orientation of their housings. 
For instance, an example NMD may be implemented with a cylindrical-shaped housing (e.g., similar in shape to a hockey puck). The first and second ends of the housing may carry a first set of microphones and a second set of microphones, respectively. When the cylindrical-shaped housing is placed on its first end (i.e., in a first orientation), the NMD disables voice input processing via a cloud-based VAS. Conversely, when the cylindrical-shaped housing is placed on its second end (i.e., in a second orientation), the NMD enables voice input processing via a cloud-based VAS. Disabling cloud-based processing using physical orientation may instill confidence in a user that their privacy is being protected, as the microphones associated with the cloud-based VAS are partially covered by whatever surface the housing of the network microphone device is resting upon. Other example network microphone devices may implement different toggling techniques. For instance, the NMD may include a physical switch or other hardware control to toggle voice processing via the cloud-based VAS. Alternatively, a graphical user interface (GUI) or voice user interface (VUI) may be used to toggle voice processing via the cloud-based VAS. Example NMDs that selectively disable voice input processing via a cloud-based VAS using physical orientation may include two or more sets of microphones. For instance, a first set of microphones may be utilized to capture audio in a first orientation while a second set of microphones is utilized in a second orientation. Continuing the puck-shaped housing example above, the housing may carry one or more first microphones near its first end and one or more second microphones in its second end. Then, when the housing is sitting on its first end, the NMD captures voice inputs using the one or more second microphones. Conversely, when the housing is sitting on its second end, the NMD captures voice inputs using the one or more first microphones. Some cloud-based voice assistant services are triggered based on a wake word. In such examples, a voice input typically includes a wake word followed by an utterance comprising a user request. In practice, a wake word is typically a predetermined nonce word or phrase used to “wake up” an NMD and cause it to invoke a particular voice assistant service (“VAS”) to interpret the intent of voice input in detected sound. For example, a user might speak the wake word “Alexa” to invoke the AMAZON® VAS, “Ok, Google” to invoke the GOOGLE® VAS, or “Hey, Siri” to invoke the APPLE® VAS, among other examples. In practice, a wake word may also be referred to as, for example, an activation-, trigger-, wakeup-word or -phrase, and may take the form of any suitable word, combination of words (e.g., a particular phrase), and/or some other audio cue. To identify whether sound detected by the NMD contains a voice input that includes a particular wake word, NMDs often utilize a wake-word engine, which is typically onboard the NMD. The wake-word engine may be configured to identify (i.e., “spot” or “detect”) a particular wake word in recorded audio using one or more identification algorithms. Such identification algorithms may include pattern recognition trained to detect the frequency and/or time domain patterns that speaking the wake word creates. 
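The orientation-based toggling can be summarized with a small state model. The field names are assumptions, and keeping the local NLU enabled in the second orientation is also an assumption (the description allows either behavior); the sketch follows the puck-shaped example, in which the microphones at the end resting on the surface are not used.

```python
from dataclasses import dataclass

@dataclass
class NMDState:
    """Illustrative mode state for an NMD with microphones at both ends of its housing."""
    cloud_vas_enabled: bool
    local_nlu_enabled: bool
    active_microphones: str  # which set of microphones captures voice inputs

def on_orientation_change(resting_end: str) -> NMDState:
    """Toggle voice input processing based on which end the housing is resting on.

    Resting on the first end: processing via the cloud-based VAS is disabled and the
    local NLU is used, capturing sound with the second microphones (the first
    microphones face the surface). Resting on the second end: cloud-based VAS
    processing is enabled and the first microphones are used.
    """
    if resting_end == "first":
        return NMDState(cloud_vas_enabled=False, local_nlu_enabled=True,
                        active_microphones="second")
    return NMDState(cloud_vas_enabled=True, local_nlu_enabled=True,
                    active_microphones="first")

print(on_orientation_change("first"))
print(on_orientation_change("second"))
```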
This wake-word identification process is commonly referred to as “keyword spotting.” In practice, to help facilitate keyword spotting, the NMD may buffer sound detected by a microphone of the NMD and then use the wake-word engine to process that buffered sound to determine whether a wake word is present in the recorded audio. When a wake-word engine detects a wake word in recorded audio, the NMD may determine that a wake-word event (i.e., a “wake-word trigger”) has occurred, which indicates that the NMD has detected sound that includes a potential voice input. The occurrence of the wake-word event typically causes the NMD to perform additional processes involving the detected sound. With a VAS wake-word engine, these additional processes may include extracting detected-sound data from a buffer, among other possible additional processes, such as outputting an alert (e.g., an audible chime and/or a light indicator) indicating that a wake word has been identified. Extracting the detected sound may include reading out and packaging a stream of the detected-sound according to a particular format and transmitting the packaged sound-data to an appropriate VAS for interpretation. In turn, the VAS corresponding to the wake word that was identified by the wake-word engine receives the transmitted sound data from the NMD over a communication network. A VAS traditionally takes the form of a remote service implemented using one or more cloud servers configured to process voice inputs (e.g., AMAZON's ALEXA, APPLE's SIRI, MICROSOFT's CORTANA, GOOGLE'S ASSISTANT, etc.). In some instances, certain components and functionality of the VAS may be distributed across local and remote devices. When a VAS receives detected-sound data, the VAS processes this data, which involves identifying the voice input and determining intent of words captured in the voice input. The VAS may then provide a response back to the NMD with some instruction according to the determined intent. Based on that instruction, the NMD may cause one or more smart devices to perform an action. For example, in accordance with an instruction from a VAS, an NMD may cause a playback device to play a particular song or an illumination device to turn on/off, among other examples. In some cases, an NMD, or a media system with NMDs (e.g., a media playback system with NMD-equipped playback devices) may be configured to interact with multiple VASes. In practice, the NMD may select one VAS over another based on the particular wake word identified in the sound detected by the NMD. Within examples, local processing of a voice input may be trigged based on detection of one or more keywords in sound captured by the NMD. Example NMDs may include a local voice input engine to detect “local keywords” and generate events to process voice inputs when a local keyword is detected. These local keywords may take the form of a nonce keyword (e.g., “Hey, Sonos”) or a keyword that invokes a command (referred to herein as a “command keyword”). A command keyword is a word or phrase that functions as a command itself, rather than being a nonce word that merely triggers a wake word event. As noted above, a detected local keyword event may cause one or more subsequent actions, such as local natural language processing of a voice input. For instance, when a local voice input engine detects a local keyword in recorded audio, the NMD may determine that a local keyword event has occurred and responsively process the voice input locally using a local NLU. 
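Keyword spotting over buffered sound can be illustrated with a rolling buffer and a trigger check. In this sketch, transcribed words stand in for detected-sound data and the wake word "alexa" is used as in the example above; a production wake-word engine would apply trained pattern recognition to buffered audio frames instead, and the buffer size here is arbitrary.

```python
from collections import deque

class WakeWordEngine:
    """Minimal sketch of onboard keyword spotting over buffered detected sound."""

    def __init__(self, wake_word: str, buffer_size: int = 50):
        self.wake_word = wake_word
        self.buffer = deque(maxlen=buffer_size)  # rolling buffer of detected sound

    def process(self, word: str) -> bool:
        """Buffer the incoming sound and report whether a wake-word event occurred."""
        self.buffer.append(word)
        if word == self.wake_word:
            # Wake-word event: subsequent processing (e.g. extracting the buffered
            # sound data and streaming it to the selected VAS) would start here.
            return True
        return False

engine = WakeWordEngine("alexa")
for w in ["play", "alexa", "play", "hey", "jude"]:
    if engine.process(w):
        print("wake-word event; buffered sound so far:", list(engine.buffer))
```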
Processing the input may involve the local NLU determining an intent from one or more keywords in the voice input. In some implementations, voice input processing via the local NLU may remain enabled when the voice input processing via the cloud-based VAS is enabled. In such embodiments, a user may target the cloud-based VAS for processing a voice input by speaking a VAS wake word. The user may target the local NLU for processing of the voice input by speaking a local wake word or by speaking a voice command without a VAS wake word. Alternatively, the NMD may disable voice input processing via the local NLU when voice input processing via the cloud-based VAS is enabled. As noted above, example techniques relate to toggling a cloud-based VAS between enabled and disabled modes. An example implementation involves a network microphone device including one or more first microphones, one or more second microphones, a network interface, one or more processors, and a housing carrying the one or more first microphones, the one or more second microphones, the network interface, the one or more processors, and data storage having stored therein instructions executable by the one or more processors. The network microphone device detects that the housing is in a first orientation. After detecting that the housing is in the first orientation, the device enables a first mode. Enabling the first mode includes (i) disabling voice input processing via a cloud-based voice assistant service and (ii) enabling voice input processing via a local natural language unit. While the first mode is enabled, the network microphone device (i) captures sound data associated with a first voice input via the one or more first microphones and (ii) detects, via a local natural language unit, that the first voice input comprises sound data matching one or more keywords from a local natural language unit library of the local natural language unit. The network microphone device determines, via the local natural language unit, an intent of the first voice input based on at least one of the one or more keywords and performs a first command according to the determined intent of the first voice input. The network microphone device may detect that the housing is in a second orientation different than the first orientation. After detecting that the housing is in the second orientation, the network microphone device enables a second mode. Enabling the second mode includes enabling voice input processing via the cloud-based voice assistant service. While some embodiments described herein may refer to functions performed by given actors, such as "users" and/or other entities, it should be understood that this description is for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves. Moreover, some functions are described herein as being performed "based on" or "in response to" another element or function. "Based on" should be understood to mean that one element or function is related to another function or element. "In response to" should be understood to mean that one element or function is a necessary result of another function or element. For the sake of brevity, functions are generally described as being based on another function when a functional link exists; however, such disclosure should be understood as disclosing either type of functional relationship. II. 
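Local processing in the first mode, matching a voice input against the local NLU library and deriving a command, might look like the following. The library contents and intent names are hypothetical, chosen only to illustrate the keyword-matching flow; an actual local NLU determines intent from keyword combinations rather than exact phrase lookup.

```python
from typing import Optional

# Hypothetical local NLU library: keyword phrases mapped to intents.
LOCAL_NLU_LIBRARY = {
    "turn up the volume": "volume_up",
    "turn down the volume": "volume_down",
    "pause the music": "pause",
}

def process_locally(voice_input: str) -> Optional[str]:
    """First-mode handling: match the voice input against the local NLU library and
    return a command to perform, or None when no keyword in the library matches."""
    for phrase, intent in LOCAL_NLU_LIBRARY.items():
        if phrase in voice_input.lower():
            return intent
    return None

print(process_locally("Please turn up the volume a little"))  # volume_up
print(process_locally("What's the weather tomorrow?"))        # None (would need the cloud VAS)
```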
Example Operation Environment FIGS.1A and1Billustrate an example configuration of a media playback system100(or “MPS100”) in which one or more embodiments disclosed herein may be implemented. Referring first toFIG.1A, the MPS100as shown is associated with an example home environment having a plurality of rooms and spaces, which may be collectively referred to as a “home environment,” “smart home,” or “environment101.” The environment101comprises a household having several rooms, spaces, and/or playback zones, including a master bathroom101a, a master bedroom101b, (referred to herein as “Nick's Room”), a second bedroom101c, a family room or den101d, an office101e, a living room101f, a dining room101g, a kitchen101h, and an outdoor patio101i. While certain embodiments and examples are described below in the context of a home environment, the technologies described herein may be implemented in other types of environments. In some embodiments, for example, the MPS100can be implemented in one or more commercial settings (e.g., a restaurant, mall, airport, hotel, a retail or other store), one or more vehicles (e.g., a sports utility vehicle, bus, car, a ship, a boat, an airplane), multiple environments (e.g., a combination of home and vehicle environments), and/or another suitable environment where multi-zone audio may be desirable. Within these rooms and spaces, the MPS100includes one or more computing devices. Referring toFIGS.1A and1Btogether, such computing devices can include playback devices102(identified individually as playback devices102a-102o), network microphone devices103(identified individually as “NMDs”103a-102i), and controller devices104aand104b(collectively “controller devices104”). Referring toFIG.1B, the home environment may include additional and/or other computing devices, including local network devices, such as one or more smart illumination devices108(FIG.1B), a smart thermostat110, and a local computing device105(FIG.1A). In embodiments described below, one or more of the various playback devices102may be configured as portable playback devices, while others may be configured as stationary playback devices. For example, the headphones102o(FIG.1B) are a portable playback device, while the playback device102don the bookcase may be a stationary device. As another example, the playback device102con the Patio may be a battery-powered device, which may allow it to be transported to various areas within the environment101, and outside of the environment101, when it is not plugged in to a wall outlet or the like. With reference still toFIG.1B, the various playback, network microphone, and controller devices102,103, and104and/or other network devices of the MPS100may be coupled to one another via point-to-point connections and/or over other connections, which may be wired and/or wireless, via a network111, such as a LAN including a network router109. For example, the playback device102jin the Den101d(FIG.1A), which may be designated as the “Left” device, may have a point-to-point connection with the playback device102a, which is also in the Den101dand may be designated as the “Right” device. In a related embodiment, the Left playback device102jmay communicate with other network devices, such as the playback device102b, which may be designated as the “Front” device, via a point-to-point connection and/or other connections via the NETWORK111. As further shown inFIG.1B, the MPS100may be coupled to one or more remote computing devices106via a wide area network (“WAN”)107. 
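One way to picture the household configuration described above is as a small data structure of zones and devices joined over a LAN. The classes below are purely illustrative; the device identifiers and names are taken from the figures only as examples.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Device:
    device_id: str  # e.g. "102j"
    name: str       # e.g. "Left"
    kind: str       # "playback", "nmd", or "controller"

@dataclass
class Zone:
    name: str       # e.g. "Den", "Kitchen", "Patio"
    devices: List[Device] = field(default_factory=list)

@dataclass
class MediaPlaybackSystem:
    """Illustrative representation of an MPS: zones containing playback devices,
    NMDs, and controllers coupled over a local network."""
    zones: List[Zone] = field(default_factory=list)

mps = MediaPlaybackSystem(zones=[
    Zone("Den", [Device("102j", "Left", "playback"),
                 Device("102a", "Right", "playback"),
                 Device("102b", "Front", "playback")]),
    Zone("Kitchen", [Device("103f", "Island", "nmd")]),
])
print([zone.name for zone in mps.zones])  # ['Den', 'Kitchen']
```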
In some embodiments, each remote computing device106may take the form of one or more cloud servers. The remote computing devices106may be configured to interact with computing devices in the environment101in various ways. For example, the remote computing devices106may be configured to facilitate streaming and/or controlling playback of media content, such as audio, in the home environment101. In some implementations, the various playback devices, NMDs, and/or controller devices102-104may be communicatively coupled to at least one remote computing device associated with a VAS and at least one remote computing device associated with a media content service (“MCS”). For instance, in the illustrated example ofFIG.1B, remote computing devices106are associated with a VAS190and remote computing devices106bare associated with an MCS192. Although only a single VAS190and a single MCS192are shown in the example ofFIG.1Bfor purposes of clarity, the MPS100may be coupled to multiple, different VASes and/or MCSes. In some implementations, VASes may be operated by one or more of AMAZON, GOOGLE, APPLE, MICROSOFT, SONOS or other voice assistant providers. In some implementations, MCSes may be operated by one or more of SPOTIFY, PANDORA, AMAZON MUSIC, or other media content services. As further shown inFIG.1B, the remote computing devices106further include remote computing device106cconfigured to perform certain operations, such as remotely facilitating media playback functions, managing device and system status information, directing communications between the devices of the MPS100and one or multiple VASes and/or MCSes, among other operations. In one example, the remote computing devices106cprovide cloud servers for one or more SONOS Wireless HiFi Systems. In various implementations, one or more of the playback devices102may take the form of or include an on-board (e.g., integrated) network microphone device. For example, the playback devices102a-einclude or are otherwise equipped with corresponding NMDs103a-e, respectively. A playback device that includes or is equipped with an NMD may be referred to herein interchangeably as a playback device or an NMD unless indicated otherwise in the description. In some cases, one or more of the NMDs103may be a stand-alone device. For example, the NMDs103fand103gmay be stand-alone devices. A stand-alone NMD may omit components and/or functionality that is typically included in a playback device, such as a speaker or related electronics. For instance, in such cases, a stand-alone NMD may not produce audio output or may produce limited audio output (e.g., relatively low-quality audio output). The various playback and network microphone devices102and103of the MPS100may each be associated with a unique name, which may be assigned to the respective devices by a user, such as during setup of one or more of these devices. For instance, as shown in the illustrated example ofFIG.1B, a user may assign the name “Bookcase” to playback device102dbecause it is physically situated on a bookcase. Similarly, the NMD103fmay be assigned the named “Island” because it is physically situated on an island countertop in the Kitchen101h(FIG.1A). Some playback devices may be assigned names according to a zone or room, such as the playback devices102e,1021,102m, and102n, which are named “Bedroom,” “Dining Room,” “Living Room,” and “Office,” respectively. Further, certain playback devices may have functionally descriptive names. 
For example, the playback devices102aand102bare assigned the names “Right” and “Front,” respectively, because these two devices are configured to provide specific audio channels during media playback in the zone of the Den101d(FIG.1A). The playback device102cin the Patio may be named portable because it is battery-powered and/or readily transportable to different areas of the environment101. Other naming conventions are possible. As discussed above, an NMD may detect and process sound from its environment, such as sound that includes background noise mixed with speech spoken by a person in the NMD's vicinity. For example, as sounds are detected by the NMD in the environment, the NMD may process the detected sound to determine if the sound includes speech that contains voice input intended for the NMD and ultimately a particular VAS. For example, the NMD may identify whether speech includes a wake word associated with a particular VAS. In the illustrated example ofFIG.1B, the NMDs103are configured to interact with the VAS190over a network via the network111and the router109. Interactions with the VAS190may be initiated, for example, when an NMD identifies in the detected sound a potential wake word. The identification causes a wake-word event, which in turn causes the NMD to begin transmitting detected-sound data to the VAS190. In some implementations, the various local network devices102-105(FIG.1A) and/or remote computing devices106cof the MPS100may exchange various feedback, information, instructions, and/or related data with the remote computing devices associated with the selected VAS. Such exchanges may be related to or independent of transmitted messages containing voice inputs. In some embodiments, the remote computing device(s) and the MPS100may exchange data via communication paths as described herein and/or using a metadata exchange channel as described in U.S. application Ser. No. 15/438,749 filed Feb. 21, 2017, and titled “Voice Control of a Media Playback System,” which is herein incorporated by reference in its entirety. Upon receiving the stream of sound data, the VAS190determines if there is voice input in the streamed data from the NMD, and if so the VAS190will also determine an underlying intent in the voice input. The VAS190may next transmit a response back to the MPS100, which can include transmitting the response directly to the NMD that caused the wake-word event. The response is typically based on the intent that the VAS190determined was present in the voice input. As an example, in response to the VAS190receiving a voice input with an utterance to “Play Hey Jude by The Beatles,” the VAS190may determine that the underlying intent of the voice input is to initiate playback and further determine that intent of the voice input is to play the particular song “Hey Jude.” After these determinations, the VAS190may transmit a command to a particular MCS192to retrieve content (i.e., the song “Hey Jude”), and that MCS192, in turn, provides (e.g., streams) this content directly to the MPS100or indirectly via the VAS190. In some implementations, the VAS190may transmit to the MPS100a command that causes the MPS100itself to retrieve the content from the MCS192. In certain implementations, NMDs may facilitate arbitration amongst one another when voice input is identified in speech detected by two or more NMDs located within proximity of one another. 
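The "Play Hey Jude by The Beatles" example can be condensed into a toy intent-and-dispatch sketch. The naive string parsing, the dictionary layout, and the function names are assumptions made for illustration only; a real VAS performs full natural language understanding and coordinates with an MCS for content retrieval.

```python
def vas_handle_voice_input(utterance: str) -> dict:
    """Toy model of the cloud VAS determining intent and returning an instruction."""
    if utterance.lower().startswith("play "):
        content = utterance[5:]  # everything after "play " is treated as the content
        return {"intent": "initiate_playback", "content": content}
    return {"intent": "unknown"}

def mps_execute(instruction: dict) -> str:
    """Toy model of the media playback system acting on the VAS response."""
    if instruction["intent"] == "initiate_playback":
        # In practice the content would be retrieved (e.g. streamed) from an MCS,
        # either directly or indirectly via the VAS.
        return f"Retrieving and playing: {instruction['content']}"
    return "No action taken"

print(mps_execute(vas_handle_voice_input("Play Hey Jude by The Beatles")))
```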
For example, the NMD-equipped playback device102din the environment101(FIG.1A) is in relatively close proximity to the NMD-equipped Living Room playback device102m, and both devices102dand102mmay at least sometimes detect the same sound. In such cases, this may require arbitration as to which device is ultimately responsible for providing detected-sound data to the remote VAS. Examples of arbitrating between NMDs may be found, for example, in previously referenced U.S. application Ser. No. 15/438,749. In certain implementations, an NMD may be assigned to, or otherwise associated with, a designated or default playback device that may not include an NMD. For example, the Island NMD103fin the Kitchen101h(FIG.1A) may be assigned to the Dining Room playback device102l, which is in relatively close proximity to the Island NMD103f. In practice, an NMD may direct an assigned playback device to play audio in response to a remote VAS receiving a voice input from the NMD to play the audio, which the NMD might have sent to the VAS in response to a user speaking a command to play a certain song, album, playlist, etc. Additional details regarding assigning NMDs and playback devices as designated or default devices may be found, for example, in previously referenced U.S. patent application No. Further aspects relating to the different components of the example MPS100and how the different components may interact to provide a user with a media experience may be found in the following sections. While discussions herein may generally refer to the example MPS100, technologies described herein are not limited to applications within, among other things, the home environment described above. For instance, the technologies described herein may be useful in other home environment configurations comprising more or fewer of any of the playback, network microphone, and/or controller devices102-104. For example, the technologies herein may be utilized within an environment having a single playback device102and/or a single NMD103. In some examples of such cases, the NETWORK111(FIG.1B) may be eliminated and the single playback device102and/or the single NMD103may communicate directly with the remote computing devices106-d. In some embodiments, a telecommunication network (e.g., an LTE network, a 5G network, etc.) may communicate with the various playback, network microphone, and/or controller devices102-104independent of a LAN. a. Example Playback & Network Microphone Devices FIG.2Ais a functional block diagram illustrating certain aspects of one of the playback devices102of the MPS100ofFIGS.1A and1B. As shown, the playback device102includes various components, each of which is discussed in further detail below, and the various components of the playback device102may be operably coupled to one another via a system bus, communication network, or some other connection mechanism. In the illustrated example ofFIG.2A, the playback device102may be referred to as an “NMD-equipped” playback device because it includes components that support the functionality of an NMD, such as one of the NMDs103shown inFIG.1A. As shown, the playback device102includes at least one processor212, which may be a clock-driven computing component configured to process input data according to instructions stored in memory213. The memory213may be a tangible, non-transitory, computer-readable medium configured to store instructions that are executable by the processor212. 
For example, the memory213may be data storage that can be loaded with software code214that is executable by the processor212to achieve certain functions. In one example, these functions may involve the playback device102retrieving audio data from an audio source, which may be another playback device. In another example, the functions may involve the playback device102sending audio data, detected-sound data (e.g., corresponding to a voice input), and/or other information to another device on a network via at least one network interface224. In yet another example, the functions may involve the playback device102causing one or more other playback devices to synchronously playback audio with the playback device102. In yet a further example, the functions may involve the playback device102facilitating being paired or otherwise bonded with one or more other playback devices to create a multi-channel audio environment. Numerous other example functions are possible, some of which are discussed below. As just mentioned, certain functions may involve the playback device102synchronizing playback of audio content with one or more other playback devices. During synchronous playback, a listener may not perceive time-delay differences between playback of the audio content by the synchronized playback devices. U.S. Pat. No. 8,234,395 filed on Apr. 4, 2004, and titled “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices,” which is hereby incorporated by reference in its entirety, provides in more detail some examples for audio playback synchronization among playback devices. To facilitate audio playback, the playback device102includes audio processing components216that are generally configured to process audio prior to the playback device102rendering the audio. In this respect, the audio processing components216may include one or more digital-to-analog converters (“DAC”), one or more audio preprocessing components, one or more audio enhancement components, one or more digital signal processors (“DSPs”), and so on. In some implementations, one or more of the audio processing components216may be a subcomponent of the processor212. In operation, the audio processing components216receive analog and/or digital audio and process and/or otherwise intentionally alter the audio to produce audio signals for playback. The produced audio signals may then be provided to one or more audio amplifiers217for amplification and playback through one or more speakers218operably coupled to the amplifiers217. The audio amplifiers217may include components configured to amplify audio signals to a level for driving one or more of the speakers218. Each of the speakers218may include an individual transducer (e.g., a “driver”) or the speakers218may include a complete speaker system involving an enclosure with one or more drivers. A particular driver of a speaker218may include, for example, a subwoofer (e.g., for low frequencies), a mid-range driver (e.g., for middle frequencies), and/or a tweeter (e.g., for high frequencies). In some cases, a transducer may be driven by an individual corresponding audio amplifier of the audio amplifiers217. In some implementations, a playback device may not include the speakers218, but instead may include a speaker interface for connecting the playback device to external speakers. 
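For purposes of illustration only, the following Python sketch shows one way an audio signal might be split into the driver bands described above (subwoofer, mid-range, and tweeter) before amplification; it is not the disclosed implementation, and the crossover frequencies, sample rate, and function names are assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # sample rate in Hz (assumed for illustration)

# Fourth-order Butterworth sections, one per assumed driver band.
low_sos = butter(4, 200, btype="lowpass", fs=FS, output="sos")            # subwoofer
mid_sos = butter(4, [200, 2_000], btype="bandpass", fs=FS, output="sos")  # mid-range driver
high_sos = butter(4, 2_000, btype="highpass", fs=FS, output="sos")        # tweeter

def split_for_drivers(samples: np.ndarray) -> dict[str, np.ndarray]:
    """Return one filtered signal per driver band prior to amplification."""
    return {
        "subwoofer": sosfilt(low_sos, samples),
        "midrange": sosfilt(mid_sos, samples),
        "tweeter": sosfilt(high_sos, samples),
    }

# Usage: route one second of a test tone through the assumed crossover.
tone = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)
bands = split_for_drivers(tone)

In an actual playback device, comparable processing would typically run in the audio processing components216or a DSP rather than in application code.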
In certain embodiments, a playback device may include neither the speakers218nor the audio amplifiers217, but instead may include an audio interface (not shown) for connecting the playback device to an external audio amplifier or audio-visual receiver. In addition to producing audio signals for playback by the playback device102, the audio processing components216may be configured to process audio to be sent to one or more other playback devices, via the network interface224, for playback. In example scenarios, audio content to be processed and/or played back by the playback device102may be received from an external source, such as via an audio line-in interface (e.g., an auto-detecting 3.5 mm audio line-in connection) of the playback device102(not shown) or via the network interface224, as described below. As shown, the at least one network interface224, may take the form of one or more wireless interfaces225and/or one or more wired interfaces226. A wireless interface may provide network interface functions for the playback device102to wirelessly communicate with other devices (e.g., other playback device(s), NMD(s), and/or controller device(s)) in accordance with a communication protocol (e.g., any wireless standard including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standard, and so on). A wired interface may provide network interface functions for the playback device102to communicate over a wired connection with other devices in accordance with a communication protocol (e.g., IEEE 802.3). While the network interface224shown inFIG.2Ainclude both wired and wireless interfaces, the playback device102may in some implementations include only wireless interface(s) or only wired interface(s). In general, the network interface224facilitates data flow between the playback device102and one or more other devices on a data network. For instance, the playback device102may be configured to receive audio content over the data network from one or more other playback devices, network devices within a LAN, and/or audio content sources over a WAN, such as the Internet. In one example, the audio content and other signals transmitted and received by the playback device102may be transmitted in the form of digital packet data comprising an Internet Protocol (IP)-based source address and IP-based destination addresses. In such a case, the network interface224may be configured to parse the digital packet data such that the data destined for the playback device102is properly received and processed by the playback device102. As shown inFIG.2A, the playback device102also includes voice processing components220that are operably coupled to one or more microphones222. The microphones222are configured to detect sound (i.e., acoustic waves) in the environment of the playback device102, which is then provided to the voice processing components220. More specifically, each microphone222is configured to detect sound and convert the sound into a digital or analog signal representative of the detected sound, which can then cause the voice processing component220to perform various functions based on the detected sound, as described in greater detail below. In one implementation, the microphones222are arranged as an array of microphones (e.g., an array of six microphones). 
In some implementations, the playback device102includes more than six microphones (e.g., eight microphones or twelve microphones) or fewer than six microphones (e.g., four microphones, two microphones, or a single microphones). In operation, the voice-processing components220are generally configured to detect and process sound received via the microphones222, identify potential voice input in the detected sound, and extract detected-sound data to enable a VAS, such as the VAS190(FIG.1B), to process voice input identified in the detected-sound data. The voice processing components220may include one or more analog-to-digital converters, an acoustic echo canceller (“AEC”), a spatial processor (e.g., one or more multi-channel Wiener filters, one or more other filters, and/or one or more beam former components), one or more buffers (e.g., one or more circular buffers), one or more wake-word engines, one or more voice extractors, and/or one or more speech processing components (e.g., components configured to recognize a voice of a particular user or a particular set of users associated with a household), among other example voice processing components. In example implementations, the voice processing components220may include or otherwise take the form of one or more DSPs or one or more modules of a DSP. In this respect, certain voice processing components220may be configured with particular parameters (e.g., gain and/or spectral parameters) that may be modified or otherwise tuned to achieve particular functions. In some implementations, one or more of the voice processing components220may be a subcomponent of the processor212. As further shown inFIG.2A, the playback device102also includes power components227. The power components227include at least an external power source interface228, which may be coupled to a power source (not shown) via a power cable or the like that physically connects the playback device102to an electrical outlet or some other external power source. Other power components may include, for example, transformers, converters, and like components configured to format electrical power. In some implementations, the power components227of the playback device102may additionally include an internal power source229(e.g., one or more batteries) configured to power the playback device102without a physical connection to an external power source. When equipped with the internal power source229, the playback device102may operate independent of an external power source. In some such implementations, the external power source interface228may be configured to facilitate charging the internal power source229. As discussed before, a playback device comprising an internal power source may be referred to herein as a “portable playback device.” On the other hand, a playback device that operates using an external power source may be referred to herein as a “stationary playback device,” although such a device may in fact be moved around a home or other environment. The playback device102further includes a user interface240that may facilitate user interactions independent of or in conjunction with user interactions facilitated by one or more of the controller devices104. In various embodiments, the user interface240includes one or more physical buttons and/or supports graphical interfaces provided on touch sensitive screen(s) and/or surface(s), among other possibilities, for a user to directly provide input. 
The user interface240may further include one or more of lights (e.g., LEDs) and the speakers to provide visual and/or audio feedback to a user. As an illustrative example,FIG.2Bshows an example housing230of the playback device102that includes a user interface in the form of a control area232at a top portion234of the housing230. The control area232includes buttons236a-cfor controlling audio playback, volume level, and other functions. The control area232also includes a button236dfor toggling the microphones222to either an on state or an off state. As further shown inFIG.2B, the control area232is at least partially surrounded by apertures formed in the top portion234of the housing230through which the microphones222(not visible inFIG.2B) receive the sound in the environment of the playback device102. The microphones222may be arranged in various positions along and/or within the top portion234or other areas of the housing230so as to detect sound from one or more directions relative to the playback device102. By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices that may implement certain of the embodiments disclosed herein, including a “PLAY:1,” “PLAY:3,” “PLAY:5,” “PLAYBAR,” “CONNECT:AMP,” “PLAYBASE,” “BEAM,” “CONNECT,” and “SUB.” Any other past, present, and/or future playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein. Additionally, it should be understood that a playback device is not limited to the examples illustrated inFIG.2A or2Bor to the SONOS product offerings. For example, a playback device may include, or otherwise take the form of, a wired or wireless headphone set, which may operate as a part of the MPS100via a network interface or the like. In another example, a playback device may include or interact with a docking station for personal mobile media playback devices. In yet another example, a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use. FIG.2Cis a diagram of an example voice input280that may be processed by an NMD or an NMD-equipped playback device. The voice input280may include a keyword portion280aand an utterance portion280b. The keyword portion280amay include a wake word or a command keyword. In the case of a wake word, the keyword portion280acorresponds to detected sound that caused a wake-word event. The utterance portion280bcorresponds to detected sound that potentially comprises a user request following the keyword portion280a. The utterance portion280bcan be processed by the NMD to identify the presence of any words in the detected-sound data in response to the event caused by the keyword portion280a. In various implementations, an underlying intent can be determined based on the words in the utterance portion280b. In certain implementations, an underlying intent can also be based or at least partially based on certain words in the keyword portion280a, such as when the keyword portion280aincludes a command keyword. In any case, the words may correspond to one or more commands, as well as certain keywords. A keyword in the voice utterance portion280bmay be, for example, a word identifying a particular device or group in the MPS100.
For instance, in the illustrated example, the keywords in the voice utterance portion280bmay be one or more words identifying one or more zones in which the music is to be played, such as the Living Room and the Dining Room (FIG.1A). In some cases, the utterance portion280bmay include additional information, such as detected pauses (e.g., periods of non-speech) between words spoken by a user, as shown inFIG.2C. The pauses may demarcate the locations of separate commands, keywords, or other information spoken by the user within the utterance portion280b. Based on certain command criteria, the NMD and/or a remote VAS may take actions as a result of identifying one or more commands in the voice input. Command criteria may be based on the inclusion of certain keywords within the voice input, among other possibilities. Additionally, or alternatively, command criteria for commands may involve identification of one or more control-state and/or zone-state variables in conjunction with identification of one or more particular commands. Control-state variables may include, for example, indicators identifying a level of volume, a queue associated with one or more devices, and playback state, such as whether devices are playing a queue, paused, etc. Zone-state variables may include, for example, indicators identifying which, if any, zone players are grouped. In some implementations, the MPS100is configured to temporarily reduce the volume of audio content that it is playing upon detecting a certain keyword, such as a wake word, in the keyword portion280a. The MPS100may restore the volume after processing the voice input280. Such a process can be referred to as ducking, examples of which are disclosed in U.S. patent application Ser. No. 15/438,749, incorporated by reference herein in its entirety. FIG.2Dshows an example sound specimen. In this example, the sound specimen corresponds to the sound-data stream (e.g., one or more audio frames) associated with a spotted wake word or command keyword in the keyword portion280aofFIG.2C. As illustrated, the example sound specimen comprises sound detected in an NMD's environment (i) immediately before a wake or command word was spoken, which may be referred to as a pre-roll portion (between times t0and t1), (ii) while a wake or command word was spoken, which may be referred to as a wake-meter portion (between times t1and t2), and/or (iii) after the wake or command word was spoken, which may be referred to as a post-roll portion (between times t2and t3). Other sound specimens are also possible. In various implementations, aspects of the sound specimen can be evaluated according to an acoustic model which aims to map mels/spectral features to phonemes in a given language model for further processing. For example, automatic speech recognition (ASR) may include such mapping for command-keyword detection. Wake-word detection engines, by contrast, may be precisely tuned to identify a specific wake-word, and a downstream action of invoking a VAS (e.g., by targeting only nonce words in the voice input processed by the playback device). ASR for command keyword detection may be tuned to accommodate a wide range of keywords (e.g., 5, 10, 100, 1,000, 10,000 keywords). Command keyword detection, in contrast to wake-word detection, may involve feeding ASR output to an onboard, local NLU, which together with the ASR determines when command word events have occurred.
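As a purely illustrative sketch of the command-keyword flow described above (and not the claimed implementation), the following Python fragment scans ASR output for a command keyword and applies a very small keyword-based NLU to pull out an intent and zone names; the keyword tables, function name, and intent labels are assumptions.

# Illustrative keyword tables; a real local NLU would be far richer.
COMMAND_KEYWORDS = {"play": "playback.start", "pause": "playback.pause",
                    "skip": "playback.skip", "volume": "volume.set"}
ZONE_KEYWORDS = ("living room", "dining room", "kitchen", "patio", "office")

def detect_command(asr_text: str) -> dict | None:
    """Return an intent and any zone keywords found in the ASR output, or None."""
    text = asr_text.lower()
    for keyword, intent in COMMAND_KEYWORDS.items():
        if keyword in text.split():
            zones = [zone for zone in ZONE_KEYWORDS if zone in text]
            return {"intent": intent, "zones": zones, "text": asr_text}
    return None  # no command keyword event

print(detect_command("play jazz in the living room and the kitchen"))
# -> {'intent': 'playback.start', 'zones': ['living room', 'kitchen'], 'text': '...'}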
In some implementations described below, the local NLU may determine an intent based on one or more other keywords in the ASR output produced by a particular voice input. In these or other implementations, a playback device may act on a detected command keyword event only when the playback device determines that certain conditions have been met, such as environmental conditions (e.g., low background noise). b. Example Playback Device Configurations FIGS.3A-3Eshow example configurations of playback devices. Referring first toFIG.3A, in some example instances, a single playback device may belong to a zone. For example, the playback device102c(FIG.1A) on the Patio may belong to Zone A. In some implementations described below, multiple playback devices may be “bonded” to form a “bonded pair,” which together form a single zone. For example, the playback device102f(FIG.1A) named “Bed1” inFIG.3Amay be bonded to the playback device102g(FIG.1A) named “Bed2” inFIG.3Ato form Zone B. Bonded playback devices may have different playback responsibilities (e.g., channel responsibilities). In another implementation described below, multiple playback devices may be merged to form a single zone. For example, the playback device102dnamed “Bookcase” may be merged with the playback device102mnamed “Living Room” to form a single Zone C. The merged playback devices102dand102mmay not be specifically assigned different playback responsibilities. That is, the merged playback devices102dand102mmay, aside from playing audio content in synchrony, each play audio content as they would if they were not merged. For purposes of control, each zone in the MPS100may be represented as a single user interface (“UI”) entity. For example, as displayed by the controller devices104, Zone A may be provided as a single entity named “Portable,” Zone B may be provided as a single entity named “Stereo,” and Zone C may be provided as a single entity named “Living Room.” In various embodiments, a zone may take on the name of one of the playback devices belonging to the zone. For example, Zone C may take on the name of the Living Room device102m(as shown). In another example, Zone C may instead take on the name of the Bookcase device102d. In a further example, Zone C may take on a name that is some combination of the Bookcase device102dand Living Room device102m. The name that is chosen may be selected by a user via inputs at a controller device104. In some embodiments, a zone may be given a name that is different than the device(s) belonging to the zone. For example, Zone B inFIG.3Ais named “Stereo” but none of the devices in Zone B have this name. In one aspect, Zone B is a single UI entity representing a single device named “Stereo,” composed of constituent devices “Bed1” and “Bed2.” In one implementation, the Bed1device may be playback device102fin the master bedroom101b(FIG.1A) and the Bed2device may be the playback device102galso in the master bedroom101b(FIG.1A). As noted above, playback devices that are bonded may have different playback responsibilities, such as playback responsibilities for certain audio channels. For example, as shown inFIG.3B, the Bed1and Bed2devices102fand102gmay be bonded so as to produce or enhance a stereo effect of audio content. In this example, the Bed1playback device102fmay be configured to play a left channel audio component, while the Bed2playback device102gmay be configured to play a right channel audio component.
In some implementations, such stereo bonding may be referred to as “pairing.” Additionally, playback devices that are configured to be bonded may have additional and/or different respective speaker drivers. As shown inFIG.3C, the playback device102bnamed “Front” may be bonded with the playback device102knamed “SUB.” The Front device102bmay render a range of mid to high frequencies, and the SUB device102kmay render low frequencies as, for example, a subwoofer. When unbonded, the Front device102bmay be configured to render a full range of frequencies. As another example,FIG.3Dshows the Front and SUB devices102band102kfurther bonded with Right and Left playback devices102aand102j, respectively. In some implementations, the Right and Left devices102aand102jmay form surround or “satellite” channels of a home theater system. The bonded playback devices102a,102b,102j, and102kmay form a single Zone D (FIG.3A). In some implementations, playback devices may also be “merged.” In contrast to certain bonded playback devices, playback devices that are merged may not have assigned playback responsibilities, but may each render the full range of audio content that each respective playback device is capable of. Nevertheless, merged devices may be represented as a single UI entity (i.e., a zone, as discussed above). For instance,FIG.3Eshows the playback devices102dand102min the Living Room merged, which would result in these devices being represented by the single UI entity of Zone C. In one embodiment, the playback devices102dand102mmay playback audio in synchrony, during which each outputs the full range of audio content that each respective playback device102dand102mis capable of rendering. In some embodiments, a stand-alone NMD may be in a zone by itself. For example, the NMD103hfromFIG.1Ais named “Closet” and forms Zone I inFIG.3A. An NMD may also be bonded or merged with another device so as to form a zone. For example, the NMD device103fnamed “Island” may be bonded with the playback device102iKitchen, which together form Zone F, which is also named “Kitchen.” Additional details regarding assigning NMDs and playback devices as designated or default devices may be found, for example, in previously referenced U.S. patent application Ser. No. 15/438,749. In some embodiments, a stand-alone NMD may not be assigned to a zone. Zones of individual, bonded, and/or merged devices may be arranged to form a set of playback devices that playback audio in synchrony. Such a set of playback devices may be referred to as a “group,” “zone group,” “synchrony group,” or “playback group.” In response to inputs provided via a controller device104, playback devices may be dynamically grouped and ungrouped to form new or different groups that synchronously play back audio content. For example, referring toFIG.3A, Zone A may be grouped with Zone B to form a zone group that includes the playback devices of the two zones. As another example, Zone A may be grouped with one or more other Zones C-I. The Zones A-I may be grouped and ungrouped in numerous ways. For example, three, four, five, or more (e.g., all) of the Zones A-I may be grouped. When grouped, the zones of individual and/or bonded playback devices may play back audio in synchrony with one another, as described in previously referenced U.S. Pat. No. 8,234,395. 
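The zone, bonded-zone, and zone-group relationships described above can be pictured with the following minimal Python sketch; it is offered only as an illustration of the relationships, not as the data model of the MPS100, and the class names, channel labels, and device names are assumptions drawn from the examples.

from dataclasses import dataclass, field

@dataclass
class PlaybackDevice:
    name: str
    channel: str | None = None  # e.g., "left", "right", or "sub" when bonded

@dataclass
class Zone:
    name: str                                        # single UI entity, e.g., "Stereo"
    devices: list[PlaybackDevice] = field(default_factory=list)
    merged: bool = False                             # merged devices keep full-range playback

@dataclass
class ZoneGroup:
    zones: list[Zone] = field(default_factory=list)

    def name(self) -> str:
        return "+".join(zone.name for zone in self.zones)  # e.g., "Dining Room+Kitchen"

# Zone B: a bonded stereo pair presented as one entity named "Stereo".
zone_b = Zone("Stereo", [PlaybackDevice("Bed1", "left"), PlaybackDevice("Bed2", "right")])
# Zone C: merged devices with no assigned channel responsibilities.
zone_c = Zone("Living Room", [PlaybackDevice("Bookcase"), PlaybackDevice("Living Room")], merged=True)
# A zone group that plays back audio in synchrony across its zones.
group = ZoneGroup([zone_b, zone_c])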
Grouped and bonded devices are example types of associations between portable and stationary playback devices that may be caused in response to a trigger event, as discussed above and described in greater detail below. In various implementations, the zones in an environment may be assigned a particular name, which may be the default name of a zone within a zone group or a combination of the names of the zones within a zone group, such as “Dining Room+Kitchen,” as shown inFIG.3A. In some embodiments, a zone group may be given a unique name selected by a user, such as “Nick's Room,” as also shown inFIG.3A. The name “Nick's Room” may be a name chosen by a user over a prior name for the zone group, such as the room name “Master Bedroom.” Referring back toFIG.2A, certain data may be stored in the memory213as one or more state variables that are periodically updated and used to describe the state of a playback zone, the playback device(s), and/or a zone group associated therewith. The memory213may also include the data associated with the state of the other devices of the MPS100, which may be shared from time to time among the devices so that one or more of the devices have the most recent data associated with the system. In some embodiments, the memory213of the playback device102may store instances of various variable types associated with the states. Variables instances may be stored with identifiers (e.g., tags) corresponding to type. For example, certain identifiers may be a first type “a1” to identify playback device(s) of a zone, a second type “b1” to identify playback device(s) that may be bonded in the zone, and a third type “c1” to identify a zone group to which the zone may belong. As a related example, inFIG.1A, identifiers associated with the Patio may indicate that the Patio is the only playback device of a particular zone and not in a zone group. Identifiers associated with the Living Room may indicate that the Living Room is not grouped with other zones but includes bonded playback devices102a,102b,102j, and102k. Identifiers associated with the Dining Room may indicate that the Dining Room is part of Dining Room+Kitchen group and that devices103fand102iare bonded. Identifiers associated with the Kitchen may indicate the same or similar information by virtue of the Kitchen being part of the Dining Room+Kitchen zone group. Other example zone variables and identifiers are described below. In yet another example, the MPS100may include variables or identifiers representing other associations of zones and zone groups, such as identifiers associated with Areas, as shown inFIG.3A. An Area may involve a cluster of zone groups and/or zones not within a zone group. For instance,FIG.3Ashows a first area named “First Area” and a second area named “Second Area.” The First Area includes zones and zone groups of the Patio, Den, Dining Room, Kitchen, and Bathroom. The Second Area includes zones and zone groups of the Bathroom, Nick's Room, Bedroom, and Living Room. In one aspect, an Area may be used to invoke a cluster of zone groups and/or zones that share one or more zones and/or zone groups of another cluster. In this respect, such an Area differs from a zone group, which does not share a zone with another zone group. Further examples of techniques for implementing Areas may be found, for example, in U.S. application Ser. No. 15/682,506 filed Aug. 21, 2017 and titled “Room Association Based on Name,” and U.S. Pat. No. 8,483,853 filed Sep. 
11, 2007, and titled “Controlling and manipulating groupings in a multi-zone media system.” Each of these applications is incorporated herein by reference in its entirety. In some embodiments, the MPS100may not implement Areas, in which case the system may not store variables associated with Areas. The memory213may be further configured to store other data. Such data may pertain to audio sources accessible by the playback device102or a playback queue that the playback device (or some other playback device(s)) may be associated with. In embodiments described below, the memory213is configured to store a set of command data for selecting a particular VAS when processing voice inputs. During operation, one or more playback zones in the environment ofFIG.1Amay each be playing different audio content. For instance, the user may be grilling in the Patio zone and listening to hip hop music being played by the playback device102c, while another user may be preparing food in the Kitchen zone and listening to classical music being played by the playback device102i. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the Office zone where the playback device102nis playing the same hip-hop music that is being playing by playback device102cin the Patio zone. In such a case, playback devices102cand102nmay be playing the hip-hop in synchrony such that the user may seamlessly (or at least substantially seamlessly) enjoy the audio content that is being played out-loud while moving between different playback zones. Synchronization among playback zones may be achieved in a manner similar to that of synchronization among playback devices, as described in previously referenced U.S. Pat. No. 8,234,395. As suggested above, the zone configurations of the MPS100may be dynamically modified. As such, the MPS100may support numerous configurations. For example, if a user physically moves one or more playback devices to or from a zone, the MPS100may be reconfigured to accommodate the change(s). For instance, if the user physically moves the playback device102cfrom the Patio zone to the Office zone, the Office zone may now include both the playback devices102cand102n. In some cases, the user may pair or group the moved playback device102cwith the Office zone and/or rename the players in the Office zone using, for example, one of the controller devices104and/or voice input. As another example, if one or more playback devices102are moved to a particular space in the home environment that is not already a playback zone, the moved playback device(s) may be renamed or associated with a playback zone for the particular space. Further, different playback zones of the MPS100may be dynamically combined into zone groups or split up into individual playback zones. For example, the Dining Room zone and the Kitchen zone may be combined into a zone group for a dinner party such that playback devices102iand102lmay render audio content in synchrony. As another example, bonded playback devices in the Den zone may be split into (i) a television zone and (ii) a separate listening zone. The television zone may include the Front playback device102b. The listening zone may include the Right, Left, and SUB playback devices102a,102j, and102k, which may be grouped, paired, or merged, as described above. 
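The Den split described above can be illustrated with the short Python sketch below; the dictionary-based state and the split_zone helper are assumptions used only to make the regrouping concrete, not part of the disclosed system.

zones = {"Den": ["Front", "Right", "Left", "SUB"]}

def split_zone(zones: dict, source: str, new_zones: dict) -> None:
    """Replace one zone with several zones, each taking a subset of its devices."""
    members = set(zones.pop(source))
    for name, devices in new_zones.items():
        missing = set(devices) - members
        if missing:
            raise ValueError(f"{missing} are not members of zone {source!r}")
        zones[name] = devices

# Split the bonded Den devices into a television zone and a listening zone.
split_zone(zones, "Den", {"Television": ["Front"],
                          "Listening": ["Right", "Left", "SUB"]})
# zones -> {"Television": ["Front"], "Listening": ["Right", "Left", "SUB"]}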
Splitting the Den zone in such a manner may allow one user to listen to music in the listening zone in one area of the living room space, and another user to watch the television in another area of the living room space. In a related example, a user may utilize either of the NMD103aor103b(FIG.1B) to control the Den zone before it is separated into the television zone and the listening zone. Once separated, the listening zone may be controlled, for example, by a user in the vicinity of the NMD103a, and the television zone may be controlled, for example, by a user in the vicinity of the NMD103b. As described above, however, any of the NMDs103may be configured to control the various playback and other devices of the MPS100. c. Example Controller Devices FIG.4is a functional block diagram illustrating certain aspects of a selected one of the controller devices104of the MPS100ofFIG.1A. Such controller devices may also be referred to herein as a “control device” or “controller.” The controller device shown inFIG.4may include components that are generally similar to certain components of the network devices described above, such as a processor412, memory413storing program software414, at least one network interface424, and one or more microphones422. In one example, a controller device may be a dedicated controller for the MPS100. In another example, a controller device may be a network device on which media playback system controller application software may be installed, such as for example, an iPhone™, iPad™ or any other smart phone, tablet, or network device (e.g., a networked computer such as a PC or Mac™). The memory413of the controller device104may be configured to store controller application software and other data associated with the MPS100and/or a user of the system100. The memory413may be loaded with instructions in software414that are executable by the processor412to achieve certain functions, such as facilitating user access, control, and/or configuration of the MPS100. The controller device104is configured to communicate with other network devices via the network interface424, which may take the form of a wireless interface, as described above. In one example, system information (e.g., such as a state variable) may be communicated between the controller device104and other devices via the network interface424. For instance, the controller device104may receive playback zone and zone group configurations in the MPS100from a playback device, an NMD, or another network device. Likewise, the controller device104may transmit such system information to a playback device or another network device via the network interface424. In some cases, the other network device may be another controller device. The controller device104may also communicate playback device control commands, such as volume control and audio playback control, to a playback device via the network interface424. As suggested above, changes to configurations of the MPS100may also be performed by a user using the controller device104. The configuration changes may include adding/removing one or more playback devices to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or merged player, separating one or more playback devices from a bonded or merged player, among others. As shown inFIG.4, the controller device104also includes a user interface440that is generally configured to facilitate user access and control of the MPS100. 
The user interface440may include a touch-screen display or other physical interface configured to provide various graphical controller interfaces, such as the controller interfaces540aand540bshown inFIGS.5A and5B. Referring toFIGS.5A and5Btogether, the controller interfaces540aand540bincludes a playback control region542, a playback zone region543, a playback status region544, a playback queue region546, and a sources region548. The user interface as shown is just one example of an interface that may be provided on a network device, such as the controller device shown inFIG.4, and accessed by users to control a media playback system, such as the MPS100. Other user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system. The playback control region542(FIG.5A) may include selectable icons (e.g., by way of touch or by using a cursor) that, when selected, cause playback devices in a selected playback zone or zone group to play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross fade mode, etc. The playback control region542may also include selectable icons that, when selected, modify equalization settings and/or playback volume, among other possibilities. The playback zone region543(FIG.5B) may include representations of playback zones within the MPS100. The playback zones regions543may also include a representation of zone groups, such as the Dining Room+Kitchen zone group, as shown. In some embodiments, the graphical representations of playback zones may be selectable to bring up additional selectable icons to manage or configure the playback zones in the MPS100, such as a creation of bonded zones, creation of zone groups, separation of zone groups, and renaming of zone groups, among other possibilities. For example, as shown, a “group” icon may be provided within each of the graphical representations of playback zones. The “group” icon provided within a graphical representation of a particular zone may be selectable to bring up options to select one or more other zones in the MPS100to be grouped with the particular zone. Once grouped, playback devices in the zones that have been grouped with the particular zone will be configured to play audio content in synchrony with the playback device(s) in the particular zone. Analogously, a “group” icon may be provided within a graphical representation of a zone group. In this case, the “group” icon may be selectable to bring up options to deselect one or more zones in the zone group to be removed from the zone group. Other interactions and implementations for grouping and ungrouping zones via a user interface are also possible. The representations of playback zones in the playback zone region543(FIG.5B) may be dynamically updated as playback zone or zone group configurations are modified. The playback status region544(FIG.5A) may include graphical representations of audio content that is presently being played, previously played, or scheduled to play next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on a controller interface, such as within the playback zone region543and/or the playback status region544. 
The graphical representations may include track title, artist name, album name, album year, track length, and/or other relevant information that may be useful for the user to know when controlling the MPS100via a controller interface. The playback queue region546may include graphical representations of audio content in a playback queue associated with the selected playback zone or zone group. In some embodiments, each playback zone or zone group may be associated with a playback queue comprising information corresponding to zero or more audio items for playback by the playback zone or zone group. For instance, each audio item in the playback queue may comprise a uniform resource identifier (URI), a uniform resource locator (URL), or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source, which may then be played back by the playback device. In one example, a playlist may be added to a playback queue, in which case information corresponding to each audio item in the playlist may be added to the playback queue. In another example, audio items in a playback queue may be saved as a playlist. In a further example, a playback queue may be empty, or populated but “not in use” when the playback zone or zone group is playing continuously streamed audio content, such as Internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations. In an alternative embodiment, a playback queue can include Internet radio and/or other streaming audio content items and be “in use” when the playback zone or zone group is playing those items. Other examples are also possible. When playback zones or zone groups are “grouped” or “ungrouped,” playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or a combination of audio items from both the first and second playback queues. Subsequently, if the established zone group is ungrouped, the resulting first playback zone may be re-associated with the previous first playback queue or may be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped. Similarly, the resulting second playback zone may be re-associated with the previous second playback queue or may be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped. Other examples are also possible. With reference still toFIGS.5A and5B, the graphical representations of audio content in the playback queue region646(FIG.5A) may include track titles, artist names, track lengths, and/or other relevant information associated with the audio content in the playback queue. 
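As an illustration of the playback queue behavior described above, the following Python sketch models a per-zone queue of resolvable identifiers that can absorb a playlist and be left populated but "not in use" while continuously streamed content plays; the class names and example URLs are assumptions, not the MPS100implementation.

from dataclasses import dataclass, field

@dataclass
class QueueItem:
    uri: str            # identifier the playback device can resolve and fetch
    title: str = ""

@dataclass
class PlaybackQueue:
    items: list[QueueItem] = field(default_factory=list)
    in_use: bool = True

    def add_playlist(self, uris: list[str]) -> None:
        """Adding a playlist adds one entry per audio item in the playlist."""
        self.items.extend(QueueItem(uri) for uri in uris)

    def start_internet_radio(self) -> None:
        """Continuously streamed content leaves the queue populated but not in use."""
        self.in_use = False

queue = PlaybackQueue()
queue.add_playlist(["http://nas.local/music/track1.flac",
                    "http://streaming.example/track2.mp3"])
queue.start_internet_radio()   # queue retained, but not in use during Internet radio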
In one example, graphical representations of audio content may be selectable to bring up additional selectable icons to manage and/or manipulate the playback queue and/or audio content represented in the playback queue. For instance, a represented audio content may be removed from the playback queue, moved to a different position within the playback queue, or selected to be played immediately, or after any currently playing audio content, among other possibilities. A playback queue associated with a playback zone or zone group may be stored in a memory on one or more playback devices in the playback zone or zone group, on a playback device that is not in the playback zone or zone group, and/or some other designated device. Playback of such a playback queue may involve one or more playback devices playing back media items of the queue, perhaps in sequential or random order. The sources region548may include graphical representations of selectable audio content sources and/or selectable voice assistants associated with a corresponding VAS. The VASes may be selectively assigned. In some examples, multiple VASes, such as AMAZON's Alexa, MICROSOFT's Cortana, etc., may be invokable by the same NMD. In some embodiments, a user may assign a VAS exclusively to one or more NMDs. For example, a user may assign a first VAS to one or both of the NMDs102aand102bin the Living Room shown inFIG.1A, and a second VAS to the NMD103fin the Kitchen. Other examples are possible. d. Example Audio Content Sources The audio sources in the sources region548may be audio content sources from which audio content may be retrieved and played by the selected playback zone or zone group. One or more playback devices in a zone or zone group may be configured to retrieve for playback audio content (e.g., according to a corresponding URI or URL for the audio content) from a variety of available audio content sources. In one example, audio content may be retrieved by a playback device directly from a corresponding audio content source (e.g., via a line-in connection). In another example, audio content may be provided to a playback device over a network via one or more other playback devices or network devices. As described in greater detail below, in some embodiments audio content may be provided by one or more media content services. Example audio content sources may include a memory of one or more playback devices in a media playback system such as the MPS100ofFIG.1, local music libraries on one or more network devices (e.g., a controller device, a network-enabled personal computer, or a networked-attached storage (“NAS”)), streaming audio services providing audio content via the Internet (e.g., cloud-based music services), or audio sources connected to the media playback system via a line-in input connection on a playback device or network device, among other possibilities. In some embodiments, audio content sources may be added or removed from a media playback system such as the MPS100ofFIG.1A. In one example, an indexing of audio items may be performed whenever one or more audio content sources are added, removed, or updated. 
Indexing of audio items may involve scanning for identifiable audio items in all folders/directories shared over a network accessible by playback devices in the media playback system and generating or updating an audio content database comprising metadata (e.g., title, artist, album, track length, among others) and other associated information, such as a URI or URL for each identifiable audio item found. Other examples for managing and maintaining audio content sources may also be possible. FIG.6is a message flow diagram illustrating data exchanges between devices of the MPS100. At step650a, the MPS100receives an indication of selected media content (e.g., one or more songs, albums, playlists, podcasts, videos, stations) via the control device104. The selected media content can comprise, for example, media items stored locally on or more devices connected to the media playback system and/or media items stored on one or more media service servers (one or more of the remote computing devices106ofFIG.1B). In response to receiving the indication of the selected media content, the control device104transmits a message651ato the playback device102(FIGS.1A-1B) to add the selected media content to a playback queue on the playback device102. At step650b, the playback device102receives the message651aand adds the selected media content to the playback queue for play back. At step650c, the control device104receives input corresponding to a command to play back the selected media content. In response to receiving the input corresponding to the command to play back the selected media content, the control device104transmits a message651bto the playback device102causing the playback device102to play back the selected media content. In response to receiving the message651b, the playback device102transmits a message651cto the computing device106requesting the selected media content. The computing device106, in response to receiving the message651c, transmits a message651dcomprising data (e.g., audio data, video data, a URL, a URI) corresponding to the requested media content. At step650d, the playback device102receives the message651dwith the data corresponding to the requested media content and plays back the associated media content. At step650e, the playback device102optionally causes one or more other devices to play back the selected media content. In one example, the playback device102is one of a bonded zone of two or more players. The playback device102can receive the selected media content and transmit all or a portion of the media content to other devices in the bonded zone. In another example, the playback device102is a coordinator of a group and is configured to transmit and receive timing information from one or more other devices in the group. The other one or more devices in the group can receive the selected media content from the computing device106, and begin playback of the selected media content in response to a message from the playback device102such that all of the devices in the group play back the selected media content in synchrony. III. Example Network Microphone Device FIG.7Ais a functional block diagram illustrating certain aspects of an example network microphone device (NMD)703. Generally, the NMD703may be similar to the network microphone device(s)103illustrated inFIGS.1A and1B. As shown, the NMD703includes various components, each of which is discussed in further detail below. Many of these components are similar to the playback device102ofFIG.2A. 
In contrast to the NMD-equipped playback device ofFIG.2A, however, the NMD703is not designed for audio content playback and therefore may exclude audio processing components216, amplifiers217, and/or speakers218or may include relatively less capable versions of these components. The various components of the NMD703may be operably coupled to one another via a system bus, communication network, or some other connection mechanism. As shown, the NMD703includes at least one processor712, which may be a clock-driven computing component configured to process input data according to instructions stored in memory713. The memory713may be a tangible, non-transitory, computer-readable medium configured to store instructions that are executable by the processor712. For example, the memory713may be data storage that can be loaded with software code714that is executable by the processor712to achieve certain functions. The at least one network interface724may take the form of one or more wireless interfaces725and/or one or more wired interfaces726. The wireless interface725may provide network interface functions for the NMD703to wirelessly communicate with other devices (e.g., playback device(s)102, other NMD(s)103, and/or controller device(s)104) in accordance with a communication protocol (e.g., any wireless standard including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standard, and so on). The wired interface726may provide network interface functions for the NMD703to communicate over a wired connection with other devices in accordance with a communication protocol (e.g., IEEE 802.3). While the network interface724shown inFIG.7Aincludes both wired and wireless interfaces, the NMD703may in various implementations include only wireless interface(s) or only wired interface(s). As shown inFIG.7A, the NMD703also includes voice processing components720that are operably coupled to microphones722. The microphones722are configured to detect sound (i.e., acoustic waves) in the environment of the NMD703, which is then provided to the voice processing components720. More specifically, the microphones722are configured to detect sound and convert the sound into a digital or analog signal representative of the detected sound, which can then cause the voice processing component720to perform various functions based on the detected sound, as described in greater detail below. In one implementation, the microphones722are arranged as one or more arrays of microphones (e.g., an array of six microphones). In some implementations, the NMD703includes more than six microphones (e.g., eight microphones or twelve microphones) or fewer than six microphones (e.g., four microphones, two microphones, or a single microphone). In operation, similar to the voice-processing components220of the NMD-equipped playback device102, the voice-processing components720are generally configured to detect and process sound received via the microphones722, identify potential voice input in the detected sound, and extract detected-sound data to enable processing of the voice input by a cloud-based VAS, such as the VAS190(FIG.1B), or a local NLU.
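To make the wake-word gating described above concrete, the following Python sketch buffers detected-sound frames and, once a wake word is spotted, hands the buffered pre-roll and subsequent frames either to a cloud VAS or to a local NLU; the helper callables (spot_wake_word, send_to_vas, local_nlu) and the pre-roll length are hypothetical, and the sketch is not the claimed voice pipeline.

from collections import deque

PRE_ROLL_FRAMES = 25  # frames of sound retained from just before the wake word (assumed)

def run_capture(frames, spot_wake_word, send_to_vas, local_nlu, use_cloud=True):
    """Consume an iterable of sound frames and route voice input once a wake word is spotted."""
    pre_roll = deque(maxlen=PRE_ROLL_FRAMES)
    target = send_to_vas if use_cloud else local_nlu
    streaming = False
    for frame in frames:
        if not streaming:
            pre_roll.append(frame)
            if spot_wake_word(frame):
                streaming = True
                for buffered in pre_roll:   # include sound captured just before the wake word
                    target(buffered)
        else:
            target(frame)                   # keep extracting detected-sound data downstream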
The voice processing components720may include one or more analog-to-digital converters, an acoustic echo canceller (“AEC”), a spatial processor, one or more buffers (e.g., one or more circular buffers), one or more wake-word engines, one or more voice extractors, and/or one or more speech processing components (e.g., components configured to recognize a voice of a particular user or a particular set of users associated with a household), among other example voice processing components. In example implementations, the voice processing components720may include or otherwise take the form of one or more DSPs or one or more modules of a DSP. In some implementations, one or more of the voice processing components720may be a subcomponent of the processor712. The NMD703also includes one or more orientation sensors723configured to detect an orientation of the NMD703. The orientation sensor(s)723may include one or more accelerometers, one or more gyroscopes, and/or a magnetometer to facilitate detecting an orientation of the NMD703. Various implementations may implement any suitable orientation detection techniques. As further shown inFIG.7A, the NMD703also includes power components727. The power components727include at least an external power source interface728, which may be coupled to a power source (not shown) via a power cable or the like that physically connects the NMD703to an electrical outlet or some other external power source. Other power components may include, for example, transformers, converters, and like components configured to format electrical power. In some implementations, the power components727of the NMD703may additionally include an internal power source729(e.g., one or more batteries) configured to power the NMD703without a physical connection to an external power source. When equipped with the internal power source729, the NMD703may operate independent of an external power source. In some such implementations, the external power source interface728may be configured to facilitate charging the internal power source729. As discussed before, an NMD comprising an internal power source may be referred to herein as a “portable NMD.” On the other hand, an NMD that operates using an external power source may be referred to herein as a “stationary NMD,” although such a device may in fact be moved around a home or other environment (e.g., to be connected to different power outlets of a home or other building). The NMD703further includes a user interface740that may facilitate user interactions independent of or in conjunction with user interactions facilitated by one or more of the controller devices104. In various embodiments, the user interface740includes one or more physical buttons and/or supports graphical interfaces provided on touch sensitive screen(s) and/or surface(s), among other possibilities, for a user to directly provide input. The user interface740may further include one or more of lights (e.g., LEDs) and the speakers to provide visual and/or audio feedback to a user. As an illustrative example,FIG.7Bshows an example housing730of the NMD703in a first orientation with a top portion734aof the housing730oriented upwards. The top portion734aof the housing730includes a user interface740acarried on the top portion734aof the housing730. The user interface740aincludes buttons736a-736cfor controlling audio playback, volume level, and other functions. The user interface740aalso includes a button736dfor toggling the microphones722ato either an on state or an off state.
As further shown inFIG.7B, apertures are formed in the top portion734aof the housing730through which one or more first microphones722areceive sound in the environment of the NMD703. The microphones722amay be arranged in various positions along and/or within the top portion734aor other areas of the housing730so as to detect sound from one or more directions relative to the NMD703. FIG.7Cshows the example housing730of the NMD703in a second orientation with a bottom portion734bof the housing730oriented upwards. Similar to the top portion734a, the bottom portion734bof the housing730includes a user interface740bcarried on the bottom portion734bof the housing730. The user interface740bincludes buttons736a′-736c′ for controlling audio playback, volume level, and other functions. The user interface740balso includes a button736d′ for toggling the microphones722bto either an on state or an off state. Similar to the top portion734a, apertures are formed in the bottom portion734bof the housing730through which one or more second microphones722breceive sound in the environment of the NMD703. The microphones722bmay be arranged in various positions along and/or within the bottom portion734bor other areas of the housing730so as to detect sound from one or more directions relative to the NMD703. FIG.7Dillustrates the NMD703being re-oriented from the first orientation to the second orientation by flipping over the housing730. In operation, the orientation sensor(s)723detect that the housing730is in the first orientation or the second orientation. When the orientation sensor(s)723detect that the housing730is in the first orientation, the NMD703enables a first mode associated with local processing of voice inputs detected via the microphone(s)722a. Conversely, when the orientation sensor(s)723detect that the housing730is in the second orientation, the NMD703enables a second mode associated with cloud processing of voice inputs detected via the microphone(s)722b. More particularly, in the first mode, voice input processing via cloud-based voice assistant services is disabled. Instead, voice inputs are processed locally via a local natural language unit (NLU). Since voice inputs are not sent to any cloud-based VAS in the first mode, operation in the first mode may enhance user privacy. In contrast, in the second mode, voice input processing via cloud-based voice assistant services is enabled. In this mode, voice inputs directed to a cloud-based VAS (e.g., via a VAS wake word) are sent to the cloud-based VAS for processing. This second mode allows the user to take advantage of the relatively greater capabilities of cloud-based voice assistant services relative to processing via a local NLU. At the same time, in some implementations, the local NLU remains enabled in the second mode, which allows users to direct certain voice inputs for local processing (e.g., via a local wake word). In various examples, the top portion734aand the bottom portion734bmay be implemented using different colors, patterns, textures, or other visual differences. Visual differences between the top portion734aand the bottom portion734bof the housing730may assist a user in determining whether the NMD703is operating in the first mode (with the top portion734afacing upwards) or operating in the second mode (with the bottom portion734bfacing upwards), especially from across a room. Within example implementations, enabling the first mode or the second mode may involve enabling or disabling the microphones722.
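A minimal Python sketch of the orientation-driven mode switch described above follows; the orientation value, microphone toggles, and processing hooks are hypothetical stand-ins for the NMD703's actual components, and the sketch is illustrative rather than the disclosed control logic.

LOCAL_MODE, CLOUD_MODE = "local", "cloud"

def select_mode(orientation: str) -> str:
    """First orientation (top portion facing up) -> local-only mode; second -> cloud mode."""
    return LOCAL_MODE if orientation == "first" else CLOUD_MODE

def apply_mode(mode: str, mics_top, mics_bottom) -> None:
    """Enable the upward-facing microphones and disable the downward-facing ones."""
    if mode == LOCAL_MODE:            # first orientation: top-portion microphones face up
        mics_top.enable()
        mics_bottom.disable()
    else:                             # second orientation: bottom-portion microphones face up
        mics_top.disable()
        mics_bottom.enable()

def route_voice_input(mode: str, voice_input: dict, local_nlu, cloud_vas) -> None:
    """In local mode nothing leaves the device; in cloud mode VAS wake words are forwarded."""
    if mode == LOCAL_MODE or not voice_input.get("vas_wake_word"):
        local_nlu(voice_input)        # local processing preserves privacy
    else:
        cloud_vas(voice_input)        # forward to the cloud-based voice assistant service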
In particular, while the NMD703is in the first orientation, the microphones722aare enabled and the microphones722bare disabled. Conversely, while the NMD703is operating in the second mode, the microphones722aare disabled and the microphones722bare enabled. This may prevent the microphones722on the bottom of the housing730(i.e., either the microphones722aor722b, depending on the orientation) from receiving muffled or otherwise distorted audio. Further, in example implementations, the NMD may include a control to toggle between the first mode and the second mode. For instance, the housing730of the NMD703may include a physical switch or other hardware control to toggle between the first mode and the second mode. Alternatively, a control on a graphical user interface on a control device (e.g., the controller interfaces540of the control device104) or voice inputs to a voice user interface may be used to toggle between the first mode and the second mode. Such a control may be implemented in addition to a toggle based on device orientation or as an alternative to the toggle based on device orientation. FIG.7Eis a functional block diagram showing aspects of an NMD703configured in accordance with embodiments of the disclosure. As described in more detail below, the NMD703is configured to handle certain voice inputs locally while in a first mode (and possibly also in the second mode), without necessarily transmitting data representing the voice input to a VAS. The NMD703is also configured to process other voice inputs using a voice assistant service while the NMD703is in a second mode. Referring toFIG.7E, the NMD703includes voice capture components (“VCC”)760, a VAS wake-word engine770a, and a voice extractor773. The VAS wake-word engine770aand the voice extractor773are operably coupled to the VCC760. The NMD703afurther includes a local voice input engine771aoperably coupled to the VCC760. The NMD703further includes microphones722aand722b(referred to collectively as the microphones722). The microphones722of the NMD703aare configured to provide detected sound, SD, from the environment of the NMD703to the VCC760. The detected sound SDmay take the form of one or more analog or digital signals. In example implementations, the detected sound SDmay be composed of a plurality of signals associated with respective channels762aor762b(referred to collectively as channels762) that are fed to the VCC760. Each channel762may correspond to a particular microphone722. For example, an NMD having six microphones may have six corresponding channels. Each channel of the detected sound SDmay bear certain similarities to the other channels but may differ in certain regards, which may be due to the position of the given channel's corresponding microphone relative to the microphones of other channels. For example, one or more of the channels of the detected sound SDmay have a greater signal-to-noise ratio (“SNR”) of speech to background noise than other channels. As further shown inFIG.7E, the VCC760includes an AEC763, a spatial processor764, and one or more buffers768. In operation, the AEC763receives the detected sound SDand filters or otherwise processes the sound to suppress echoes and/or to otherwise improve the quality of the detected sound SD. That processed sound may then be passed to the spatial processor764. The spatial processor764is typically configured to analyze the detected sound SDand identify certain characteristics, such as a sound's amplitude (e.g., decibel level), frequency spectrum, directionality, etc.
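To make the kind of per-frame analysis performed by a spatial processor more concrete, the following Python sketch computes two simple frame-level characteristics, a speech-band energy ratio and an in-band spectral entropy, which can help distinguish speech from background noise. This is an illustrative sketch only; the function name, the band limits, the sample rate, and the use of NumPy are assumptions and are not drawn from the disclosure.

import numpy as np

def frame_characteristics(frame, sample_rate=16000, speech_band=(300.0, 3400.0)):
    """Return (speech-band energy ratio, in-band spectral entropy) for one audio frame.

    Speech tends to concentrate energy in the speech band and to have more spectral
    structure (lower entropy) than broadband background noise.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    in_band = (freqs >= speech_band[0]) & (freqs <= speech_band[1])

    band_energy = spectrum[in_band].sum()
    total_energy = spectrum.sum() + 1e-12
    band_ratio = float(band_energy / total_energy)

    p = spectrum[in_band] / (band_energy + 1e-12)       # normalized in-band spectrum
    entropy = float(-(p * np.log2(p + 1e-12)).sum())    # spectral entropy in bits
    return band_ratio, entropy

# Example: a synthetic 20 ms frame containing a 440 Hz tone plus low-level noise.
t = np.arange(int(0.02 * 16000)) / 16000.0
frame = np.sin(2 * np.pi * 440.0 * t) + 0.1 * np.random.randn(t.size)
print(frame_characteristics(frame))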
In one respect, the spatial processor764may help filter or suppress ambient noise in the detected sound SDfrom potential user speech based on similarities and differences in the constituent channels762of the detected sound SD, as discussed above. As one possibility, the spatial processor764may monitor metrics that distinguish speech from other sounds. Such metrics can include, for example, energy within the speech band relative to background noise and entropy within the speech band—a measure of spectral structure—which is typically lower in speech than in most common background noise. In some implementations, the spatial processor764may be configured to determine a speech presence probability; examples of such functionality are disclosed in U.S. patent application Ser. No. 15/984,073, filed May 18, 2018, titled “Linear Filtering for Noise-Suppressed Speech Detection,” which is incorporated herein by reference in its entirety. In operation, the one or more buffers768—one or more of which may be part of or separate from the memory713(FIG.7A)—capture data corresponding to the detected sound SD. More specifically, the one or more buffers768capture detected-sound data that was processed by the upstream AEC763and spatial processor764. The network interface724may then provide this information to a remote server that may be associated with the MPS100. In one aspect, the information stored in the additional buffer769(discussed below) does not reveal the content of any speech but instead is indicative of certain unique features of the detected sound itself. In a related aspect, the information may be communicated between computing devices, such as the various computing devices of the MPS100, without necessarily implicating privacy concerns. In practice, the MPS100can use this information to adapt and fine-tune voice processing algorithms, including sensitivity tuning as discussed below. In some implementations, the additional buffer may comprise or include functionality similar to lookback buffers disclosed, for example, in U.S. patent application Ser. No. 15/989,715, filed May 25, 2018, titled “Determining and Adapting to Changes in Microphone Performance of Playback Devices”; U.S. patent application Ser. No. 16/141,875, filed Sep. 25, 2018, titled “Voice Detection Optimization Based on Selected Voice Assistant Service”; and U.S. patent application Ser. No. 16/138,111, filed Sep. 21, 2018, titled “Voice Detection Optimization Using Sound Metadata,” which are incorporated herein by reference in their entireties. In any event, the detected-sound data forms a digital representation (i.e., sound-data stream), SDS, of the sound detected by the microphones722. In practice, the sound-data stream SDSmay take a variety of forms. As one possibility, the sound-data stream SDSmay be composed of frames, each of which may include one or more sound samples. The frames may be streamed (i.e., read out) from the one or more buffers768for further processing by downstream components, such as the VAS wake-word engines770and the voice extractor773of the NMD703. In some implementations, at least one buffer768captures detected-sound data utilizing a sliding window approach in which a given amount (i.e., a given window) of the most recently captured detected-sound data is retained in the at least one buffer768while older detected-sound data is overwritten when it falls outside of the window.
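A minimal Python sketch of such sliding-window buffering is shown below; the fixed window size, class name, and use of a bounded deque are illustrative assumptions rather than features of any particular implementation of the buffers768.

from collections import deque

class SlidingWindowBuffer:
    """Retains only the most recent `window` frames; older frames fall outside the window."""
    def __init__(self, window=20):
        self._frames = deque(maxlen=window)

    def push(self, frame):
        # When the deque is full, appending discards the oldest frame, mirroring the
        # overwrite-on-overflow behavior of a sliding-window capture buffer.
        self._frames.append(frame)

    def stream(self):
        # Read frames out for downstream components (e.g., wake-word engines, extractor).
        return list(self._frames)

buf = SlidingWindowBuffer(window=20)
for i in range(25):
    buf.push(bytes([i]))
assert len(buf.stream()) == 20  # only the 20 most recent frames are retained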
For example, at least one buffer768may temporarily retain 20 frames of a sound specimen at given time, discard the oldest frame after an expiration time, and then capture a new frame, which is added to the 19 prior frames of the sound specimen. In practice, when the sound-data stream SDSis composed of frames, the frames may take a variety of forms having a variety of characteristics. As one possibility, the frames may take the form of audio frames that have a certain resolution (e.g., 16 bits of resolution), which may be based on a sampling rate (e.g., 44,100 Hz). Additionally, or alternatively, the frames may include information corresponding to a given sound specimen that the frames define, such as metadata that indicates frequency response, power input level, SNR, microphone channel identification, and/or other information of the given sound specimen, among other examples. Thus, in some embodiments, a frame may include a portion of sound (e.g., one or more samples of a given sound specimen) and metadata regarding the portion of sound. In other embodiments, a frame may only include a portion of sound (e.g., one or more samples of a given sound specimen) or metadata regarding a portion of sound. In any case, downstream components of the NMD703may process the sound-data stream SDS. For instance, the VAS wake-word engines770are configured to apply one or more identification algorithms to the sound-data stream SDS(e.g., streamed sound frames) to spot potential wake words in the detected-sound SD. This process may be referred to as automatic speech recognition. The VAS wake-word engine770aand command keyword engine771aapply different identification algorithms corresponding to their respective wake words, and further generate different events based on detecting a wake word in the detected-sound SD. Example wake word detection algorithms accept audio as input and provide an indication of whether a wake word is present in the audio. Many first- and third-party wake word detection algorithms are known and commercially available. For instance, operators of a voice service may make their algorithm available for use in third-party devices. Alternatively, an algorithm may be trained to detect certain wake-words. For instance, when the VAS wake-word engine770adetects a potential VAS wake word, the VAS work-word engine770aprovides an indication of a “VAS wake-word event” (also referred to as a “VAS wake-word trigger”). In the illustrated example ofFIG.7A, the VAS wake-word engine770aoutputs a signal SVWthat indicates the occurrence of a VAS wake-word event to the voice extractor773. In multi-VAS implementations, the NMD703may include a VAS selector774(shown in dashed lines) that is generally configured to direct extraction by the voice extractor773and transmission of the sound-data stream SDSto the appropriate VAS when a given wake-word is identified by a particular wake-word engine (and a corresponding wake-word trigger), such as the VAS wake-word engine770aand at least one additional VAS wake-word engine770b(shown in dashed lines). In such implementations, the NMD703may include multiple, different VAS wake-word engines and/or voice extractors, each supported by a respective VAS. Similar to the discussion above, each VAS wake-word engine770may be configured to receive as input the sound-data stream SDSfrom the one or more buffers768and apply identification algorithms to cause a wake-word trigger for the appropriate VAS. 
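The routing role of the VAS selector774described above can be pictured with the following Python sketch, in which each wake-word engine is reduced to a keyword-spotting callable over transcribed text and the selector simply maps the engine that fired to its associated voice assistant service. The engine behavior, the names, and the use of plain string matching in place of audio-based identification algorithms are simplifying assumptions for illustration only.

def make_engine(wake_word, vas_name):
    """Return a toy wake-word engine: a callable that spots its wake word in text."""
    def engine(transcribed_sound):
        return vas_name if wake_word in transcribed_sound.lower() else None
    return engine

class VasSelector:
    """Directs extraction and transmission to the VAS whose engine detected its wake word."""
    def __init__(self, engines):
        self.engines = engines

    def route(self, transcribed_sound):
        for engine in self.engines:
            vas = engine(transcribed_sound)
            if vas is not None:
                return vas  # wake-word trigger: stream sound data to this VAS
        return None  # no VAS wake-word event

selector = VasSelector([
    make_engine("alexa", "AMAZON VAS"),
    make_engine("ok, google", "GOOGLE VAS"),
])
assert selector.route("Alexa, play some jazz") == "AMAZON VAS"
assert selector.route("turn on the lights") is None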
Thus, as one example, the VAS wake-word engine770amay be configured to identify the wake word “Alexa” and cause the NMD703ato invoke the AMAZON VAS when “Alexa” is spotted. As another example, the wake-word engine770bmay be configured to identify the wake word “Ok, Google” and cause the NMD703to invoke the GOOGLE VAS when “Ok, Google” is spotted. In single-VAS implementations, the VAS selector774may be omitted. In response to the VAS wake-word event (e.g., in response to the signal SVWindicating the wake-word event), the voice extractor773is configured to receive and format (e.g., packetize) the sound-data stream SDS. For instance, the voice extractor773packetizes the frames of the sound-data stream SDSinto messages. The voice extractor773transmits or streams these messages, MV, that may contain voice input in real time or near real time to a remote VAS via the network interface724. As noted above, in the first mode, voice input processing via cloud-based voice assistant services is disabled. In some examples, to disable the voice input processing via cloud-based voice assistant services, the NMD703physically or logically disables the VAS wake-word engine(s)770. For instance, the NMD703may physically or logically prevent the sound-data stream SDSfrom the microphones722afrom reaching the VAS wake-word engine(s)770and/or voice extractor773. Alternatively, suppressing generation of VAS wake-word events may involve the NMD703ceasing to feed the sound-data stream SDSto the VAS wake-word engine(s)770. Suppressing generation may also involve gating, blocking or otherwise preventing output from the VAS wake-word engine(s)770from generating a VAS wake-word event. In the second mode, voice input processing via a cloud-based voice assistant service is enabled. The VAS is configured to process the sound-data stream SDScontained in the messages MVsent from the NMD703. More specifically, in the first mode, the NMD703is configured to identify a voice input780acaptured by the microphones722abased on the sound-data stream SDS1. In the second mode, the NMD703is configured to identify a voice input780bcaptured by the microphones722bbased on the sound-data stream SDS2. The voice inputs780aand780bare referred to collectively as a voice input780and the sound-data streams SDS1and SDS2are referred to collectively as the sound-data stream SDS. As described in connection withFIG.2C, the voice input780may include a keyword portion and an utterance portion. The keyword portion may correspond to detected sound that causes a VAS wake-word event (i.e., a VAS wake word). Alternatively, the keyword portion may correspond to a local wake word or a command keyword, which may generate a local wake-word event. For instance, when the voice input780bincludes a VAS wake word, the keyword portion corresponds to detected sound that causes the wake-word engine770ato output the wake-word event signal SVWto the voice extractor773. The utterance portion in this case corresponds to detected sound that potentially comprises a user request following the keyword portion. When a VAS wake-word event occurs, the VAS may first process the keyword portion within the sound-data stream SDSto verify the presence of a VAS wake word. In some instances, the VAS may determine that the keyword portion comprises a false wake word (e.g., the word “Election” when the word “Alexa” is the target VAS wake word).
In such an occurrence, the VAS may send a response to the NMD703with an instruction for the NMD703to cease extraction of sound data, which causes the voice extractor773to cease further streaming of the detected-sound data to the VAS. The VAS wake-word engine770amay resume or continue monitoring sound specimens until it spots another potential VAS wake word, leading to another VAS wake-word event. In some implementations, the VAS does not process or receive the keyword portion but instead processes only the utterance portion. In any case, the VAS processes the utterance portion to identify the presence of any words in the detected-sound data and to determine an underlying intent from these words. The words may correspond to one or more commands, as well as certain keywords. The keyword may be, for example, a word in the voice input identifying a particular device or group in the MPS100. For instance, in the illustrated example, the keyword may be one or more words identifying one or more zones in which the music is to be played, such as the Living Room and the Dining Room (FIG.1A). To determine the intent of the words, the VAS is typically in communication with one or more databases associated with the VAS (not shown) and/or one or more databases (not shown) of the MPS100. Such databases may store various user data, analytics, catalogs, and other information for natural language processing and/or other processing. In some implementations, such databases may be updated for adaptive learning and feedback for a neural network based on voice-input processing. In some cases, the utterance portion may include additional information such as detected pauses (e.g., periods of non-speech) between words spoken by a user, as shown inFIG.2C. The pauses may demarcate the locations of separate commands, keywords, or other information spoke by the user within the utterance portion. After processing the voice input, the VAS may send a response to the MPS100with an instruction to perform one or more actions based on an intent it determined from the voice input. For example, based on the voice input, the VAS may direct the MPS100to initiate playback on one or more of the playback devices102, control one or more of these playback devices102(e.g., raise/lower volume, group/ungroup devices, etc.), or turn on/off certain smart devices, among other actions. After receiving the response from the VAS, the wake-word engine770aof the NMD703may resume or continue to monitor the sound-data stream SDS1until it spots another potential wake-word, as discussed above. In general, the one or more identification algorithms that a particular VAS wake-word engine, such as the VAS wake-word engine770a, applies are configured to analyze certain characteristics of the detected sound stream SDSand compare those characteristics to corresponding characteristics of the particular VAS wake-word engine's one or more particular VAS wake words. For example, the wake-word engine770amay apply one or more identification algorithms to spot spectral characteristics in the detected sound stream SDSthat match the spectral characteristics of the engine's one or more wake words, and thereby determine that the detected sound SDcomprises a voice input including a particular VAS wake word. In some implementations, the one or more identification algorithms may be third-party identification algorithms (i.e., developed by a company other than the company that provides the NMD703a). 
For instance, operators of a voice service (e.g., AMAZON) may make their respective algorithms (e.g., identification algorithms corresponding to AMAZON's ALEXA) available for use in third-party devices (e.g., the NMDs103), which are then trained to identify one or more wake words for the particular voice assistant service. Additionally, or alternatively, the one or more identification algorithms may be first-party identification algorithms that are developed and trained to identify certain wake words that are not necessarily particular to a given voice service. Other possibilities also exist. As noted above, the NMD703aalso includes a local voice input engine771ain parallel with the VAS wake-word engine770a. Like the VAS wake-word engine770a, the local voice input keyword engine771amay apply one or more identification algorithms corresponding to one or more wake words. A “local keyword event” is generated when a particular local keyword is identified in the detected-sound SD. Local keywords may take the form of a nonce wake word corresponding to local processing (e.g., “Hey Sonos”), which is different from the VAS wake words corresponding to respective voice assistant services. Local keywords may also take the form of command keywords. In contrast to the nonce words typically utilized as VAS wake words, command keywords function as both the activation word and the command itself. For instance, example command keywords may correspond to playback commands (e.g., “play,” “pause,” “skip,” etc.) as well as control commands (“turn on”), among other examples. Under appropriate conditions, based on detecting one of these command keywords, the NMD703aperforms the corresponding command. The local voice input engine771acan employ an automatic speech recognizer772. The ASR772is configured to output phonetic or phonemic representations, such as text corresponding to words, based on sound in the sound-data stream SDS. For instance, the ASR772may transcribe spoken words represented in the sound-data stream SDSto one or more strings representing the voice input780as text. The local voice input engine771acan feed ASR output (labeled as SASR) to a local natural language unit (NLU)779that identifies particular keywords as being local keywords for invoking local-keyword events, as described below. As noted above, in some example implementations, the NMD703is configured to perform natural language processing, which may be carried out using an onboard natural language processor, referred to herein as a natural language unit (NLU)779. The local NLU779is configured to analyze text output of the ASR772of the local voice input keyword engine771ato spot (i.e., detect or identify) keywords in the voice input780. InFIG.7A, this output is illustrated as the signal SASR. The local NLU779includes a library of keywords (i.e., words and phrases) corresponding to respective commands and/or parameters. In one aspect, the library of the local NLU779includes local keywords, which, as noted above, may take the form of nonce keywords or command keywords. When the local NLU779identifies a local keyword in the signal SASR, the local voice input engine771agenerates a local keyword event. If the identified local keyword is a command keyword, the NMD703performs a command corresponding to the command keyword in the signal SASR, assuming that one or more conditions corresponding to that command keyword are satisfied.
If the identified local keyword is a nonce keyword, the local NLU779attempts to identify a keyword or keywords corresponding to a command in the signal SASR. Further, the library of the local NLU779may also include keywords corresponding to parameters. The local NLU779may then determine an underlying intent from the matched keywords in the voice input780. For instance, if the local NLU matches the keywords “David Bowie” and “kitchen” in combination with a play command, the local NLU779may determine an intent of playing David Bowie in the Kitchen101hon the playback device102i. In contrast to a processing of the voice input780by a cloud-based VAS, local processing of the voice input780by the local NLU779may be relatively less sophisticated, as the NLU779does not have access to the relatively greater processing capabilities and larger voice databases that a VAS generally has access to. In some examples, the local NLU779may determine an intent with one or more slots, which correspond to respective keywords. For instance, referring back to the play David Bowie in the Kitchen example, when processing the voice input, the local NLU779may determine that an intent is to play music (e.g., intent=playMusic), while a first slot includes David Bowie as target content (e.g., slot1=DavidBowie) and a second slot includes the Kitchen101has the target playback device (e.g., slot2=kitchen). Here, the intent (to “playMusic”) is based on the command keyword and the slots are parameters modifying the intent to a particular target content and playback device. Within examples, the local voice input engine771aoutputs a signal, SLW, that indicates the occurrence of a local keyword event to the local NLU779. In response to the local keyword event (e.g., in response to the signal SLWindicating the command keyword event), the local NLU779is configured to receive and process the signal SASR. In particular, the local NLU779looks at the words within the signal SASRto find keywords that match keywords in the library of the local NLU779. Some error in performing local automatic speech recognition is expected. Within examples, the ASR772may generate a confidence score when transcribing spoken words to text, which indicates how closely the spoken words in the voice input780matches the sound patterns for that word. In some implementations, generating a local keyword event is based on the confidence score for a given local keyword. For instance, the local voice input engine771amay generate a command keyword event when the confidence score for a given sound exceeds a given threshold value (e.g., 0.5 on a scale of 0-1, indicating that the given sound is more likely than not the local keyword). Conversely, when the confidence score for a given sound is at or below the given threshold value, the command keyword engine771adoes not generate the local keyword event. Similarly, some error in performing keyword matching is expected. Within examples, the local NLU may generate a confidence score when determining an intent, which indicates how closely the transcribed words in the signal SASRmatch the corresponding keywords in the library of the local NLU. In some implementations, performing an operation according to a determined intent is based on the confidence score for keywords matched in the signal SASR. 
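As an illustrative Python sketch of the slot filling and confidence thresholding described above, the fragment below matches transcribed words against a small keyword library and returns an intent with slots only when the match confidence clears a threshold. The library contents, the scoring function, and the specific data layout are assumptions used for illustration; only the 0.5 "more likely than not" threshold is taken from the surrounding description.

from difflib import SequenceMatcher

# Hypothetical keyword library: surface keyword -> (slot name, slot value)
LIBRARY = {
    "play": ("intent", "playMusic"),
    "david bowie": ("slot_content", "DavidBowie"),
    "kitchen": ("slot_target", "Kitchen 101h"),
}
THRESHOLD = 0.5  # e.g., more likely than not

def match_confidence(word, keyword):
    """Crude stand-in for an ASR/NLU confidence score in [0, 1]."""
    return SequenceMatcher(None, word, keyword).ratio()

def parse_local_voice_input(transcript):
    """Return an intent with slots for keywords whose confidence clears the threshold."""
    text = transcript.lower()
    result = {}
    for keyword, (slot, value) in LIBRARY.items():
        # Exact containment scores 1.0; otherwise take the best-matching token.
        score = 1.0 if keyword in text else max(
            match_confidence(token, keyword) for token in text.split())
        if score > THRESHOLD:
            result[slot] = value
    return result if "intent" in result else None  # no keyword event below threshold

print(parse_local_voice_input("Hey Sonos, play David Bowie in the kitchen"))
# -> {'intent': 'playMusic', 'slot_content': 'DavidBowie', 'slot_target': 'Kitchen 101h'}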
For instance, the NMD703may perform an operation according to a determined intent when the confidence score for a given sound exceeds a given threshold value (e.g., 0.5 on a scale of 0-1, indicating that the given sound is more likely than not the command keyword). Conversely, when the confidence score for a given intent is at or below the given threshold value, the NMD703does not perform the operation according to the determined intent. As noted above, in some implementations, a phrase may be used as a local keyword, which provides additional syllables to match (or not match). For instance, the phrase “Hey, Sonos” has more syllables than “Sonos,” which provides additional sound patterns to match to words. As another example, the phrase “play me some music” has more syllables than “play,” which provides additional sound patterns to match to words. Accordingly, local keywords that are phrases may generally be less prone to false wake words. In example implementations, the NMD703generates a local keyword event based on a local keyword taking the form of a command keyword (and performs a command corresponding to the detected command keyword) only when certain conditions corresponding to a detected command keyword are met. These conditions are intended to lower the prevalence of false positive command keyword events. For instance, after detecting the command keyword “skip,” the NMD703agenerates a command keyword event (and skips to the next track) only when certain playback conditions indicating that a skip should be performed are met. These playback conditions may include, for example, (i) a first condition that a media item is being played back, (ii) a second condition that a queue is active, and (iii) a third condition that the queue includes a media item subsequent to the media item being played back. If any of these conditions are not satisfied, the command keyword event is not generated (and no skip is performed). The NMD703may include one or more state machine(s)775to facilitate determining whether the appropriate conditions are met. An example state machine775atransitions between a first state and a second state based on whether one or more conditions corresponding to the detected command keyword are met. In particular, for a given command keyword corresponding to a particular command requiring one or more particular conditions, the state machine775atransitions into a first state when one or more particular conditions are satisfied and transitions into a second state when at least one condition of the one or more particular conditions is not satisfied. Within example implementations, the command conditions are based on states indicated in state variables. As noted above, the devices of the MPS100may store state variables describing the state of the respective device. For instance, the playback devices102may store state variables indicating the state of the playback devices102, such as the audio content currently playing (or paused), the volume levels, network connection status, and the like). These state variables are updated (e.g., periodically, or based on an event (i.e., when a state in a state variable changes)) and the state variables further can be shared among the devices of the MPS100, including the NMD703. Similarly, the NMD703may maintain these state variables (either by virtue of being implemented in a playback device or as a stand-alone NMD). 
The state machine(s)775monitor the states indicated in these state variables, and determine whether the states indicated in the appropriate state variables indicate that the command condition(s) are satisfied. Based on these determinations, the state machines775transition between the first state and the second state, as described above. In some implementations, the local voice input engine771is disabled unless certain conditions have been met via the state machines775. For example, the first state and the second state of the state machine775amay operate as enable/disable toggles to the local voice input engine771a. In particular, while a state machine775acorresponding to a particular command keyword is in the first state, the state machine775aenables the local voice input engine771aof the particular command keyword. Conversely, while the state machine775acorresponding to the particular command keyword is in the second state, the state machine775adisables the local voice input engine771aof the particular command keyword. Accordingly, the disabled local voice input engine771aceases analyzing the sound-data stream SDS. In such cases when at least one command condition is not satisfied, the NMD703may suppress generation of local keyword events when the local voice input engine771adetects a local keyword. Suppressing generation may involve gating, blocking or otherwise preventing output from the local voice input engine771afrom generating a local keyword event. Alternatively, suppressing generation may involve the NMD703ceasing to feed the sound-data stream SDSto the ASR772. Such suppression prevents a command corresponding to the detected local keyword from being performed when at least one command condition is not satisfied. In such embodiments, the local voice input engine771amay continue analyzing the sound-data stream SDSwhile the state machine775ais in the second state, but command keyword events are disabled. Other example conditions may be based on the output of a voice activity detector (“VAD”)765. The VAD765is configured to detect the presence (or lack thereof) of voice activity in the sound-data stream SDS. In particular, the VAD765may analyze frames corresponding to the pre-roll portion of the voice input780(FIG.2D) with one or more voice detection algorithms to determine whether voice activity was present in the environment in certain time windows prior to a keyword portion of the voice input780. The VAD765may utilize any suitable voice activity detection algorithms. Example voice detection algorithms involve determining whether a given frame includes one or more features or qualities that correspond to voice activity, and further determining whether those features or qualities diverge from noise to a given extent (e.g., if a value exceeds a threshold for a given frame). Some example voice detection algorithms involve filtering or otherwise reducing noise in the frames prior to identifying the features or qualities. In some examples, the VAD765may determine whether voice activity is present in the environment based on one or more metrics. For example, the VAD765can be configured to distinguish between frames that include voice activity and frames that do not include voice activity. The frames that the VAD determines have voice activity may be caused by speech regardless of whether it is near- or far-field. In this example and others, the VAD765may determine a count of frames in the pre-roll portion of the voice input780that indicate voice activity.
If this count exceeds a threshold percentage or number of frames, the VAD765may be configured to output a signal or set a state variable indicating that voice activity is present in the environment. Other metrics may be used as well in addition to, or as an alternative to, such a count. The presence of voice activity in an environment may indicate that a voice input is being directed to the NMD703. Accordingly, when the VAD765indicates that voice activity is not present in the environment (perhaps as indicated by a state variable set by the VAD765) this may be configured as one of the command conditions for the local keywords. When this condition is met (i.e., the VAD765indicates that voice activity is present in the environment), the state machine775awill transition to the first state to enable performing commands based on local keywords, so long as any other conditions for a particular local keyword are satisfied. Further, in some implementations, the NMD703may include a noise classifier766. The noise classifier766is configured to determine sound metadata (frequency response, signal levels, etc.) and identify signatures in the sound metadata corresponding to various noise sources. The noise classifier766may include a neural network or other mathematical model configured to identify different types of noise in detected sound data or metadata. One classification of noise may be speech (e.g., far-field speech). Another classification, may be a specific type of speech, such as background speech, and example of which is described in greater detail with reference toFIG.8. Background speech may be differentiated from other types of voice-like activity, such as more general voice activity (e.g., cadence, pauses, or other characteristics) of voice-like activity detected by the VAD765. For example, analyzing the sound metadata can include comparing one or more features of the sound metadata with known noise reference values or a sample population data with known noise. For example, any features of the sound metadata such as signal levels, frequency response spectra, etc. can be compared with noise reference values or values collected and averaged over a sample population. In some examples, analyzing the sound metadata includes projecting the frequency response spectrum onto an eigenspace corresponding to aggregated frequency response spectra from a population of NMDs. Further, projecting the frequency response spectrum onto an eigenspace can be performed as a pre-processing step to facilitate downstream classification. In various embodiments, any number of different techniques for classification of noise using the sound metadata can be used, for example machine learning using decision trees, or Bayesian classifiers, neural networks, or any other classification techniques. Alternatively or additionally, various clustering techniques may be used, for example K-Means clustering, mean-shift clustering, expectation-maximization clustering, or any other suitable clustering technique. Techniques to classify noise may include one or more techniques disclosed in U.S. application Ser. No. 16/227,308 filed Dec. 20, 2018, and titled “Optimization of Network Microphone Devices Using Noise Classification,” which is herein incorporated by reference in its entirety. In some implementations, the additional buffer769(shown in dashed lines) may store information (e.g., metadata or the like) regarding the detected sound SDthat was processed by the upstream AEC763and spatial processor764. 
This additional buffer769may be referred to as a “sound metadata buffer.” Examples of such sound metadata include: (1) frequency response data; (2) echo return loss enhancement measures; (3) voice direction measures; (4) arbitration statistics; and/or (5) speech spectral data. In example implementations, the noise classifier766may analyze the sound metadata in the buffer769to classify noise in the detected sound SD. As noted above, one classification of sound may be background speech, such as speech indicative of far-field speech and/or speech indicative of a conversation not involving the NMD703. The noise classifier766may output a signal and/or set a state variable indicating that background speech is present in the environment. The presence of background speech in the pre-roll portion of the voice input780indicates that the voice input780might not be directed to the NMD703, but instead be conversational speech within the environment. For instance, a household member might speak something like “our kids should have a play date soon” without intending to direct the command keyword “play” to the NMD703. Further, when the noise classifier indicates that background speech is present in the environment, this condition may disable the voice input engine771a. In some implementations, the condition of background speech being absent in the environment (perhaps as indicated by a state variable set by the noise classifier766) is configured as one of the command conditions for the command keywords. Accordingly, the state machine775awill not transition to the first state when the noise classifier766indicates that background speech is present in the environment. Further, the noise classifier766may determine whether background speech is present in the environment based on one or more metrics. For example, the noise classifier766may determine a count of frames in the pre-roll portion of the voice input780that indicate background speech. If this count exceeds a threshold percentage or number of frames, the noise classifier766may be configured to output the signal or set the state variable indicating that background speech is present in the environment. Other metrics may be used as well in addition to, or as an alternative to, such a count. Within example implementations, the NMD703amay support a plurality of local keywords. To facilitate such support, the local voice input engine771amay implement multiple identification algorithms corresponding to respective local keywords. Alternatively, the NMD703amay implement additional local voice input engines771bconfigured to identify respective local keywords. Yet further, the library of the local NLU779may include a plurality of local keywords and be configured to search for text patterns corresponding to these command keywords in the signal SASR. Further, local keywords may require different conditions. For instance, the conditions for “skip” may be different than the conditions for “play,” as “skip” may require the condition that a media item is being played back, while “play” may require the opposite condition that a media item is not being played back. To facilitate these respective conditions, the NMD703amay implement respective state machines775corresponding to each local keyword. Alternatively, the NMD703may implement a state machine775having respective states for each command keyword. Other examples are possible as well.
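The frame-count thresholding used by both the VAD765and the noise classifier766, as described above, can be sketched in Python as follows. The per-frame predicates are stubbed out, and the threshold values and data shapes are illustrative assumptions only.

def count_based_flag(frames, frame_predicate, threshold_fraction=0.5):
    """Return True when the fraction of frames satisfying frame_predicate exceeds the threshold.

    The same pattern can back a VAD state variable ("voice activity present") or a noise
    classifier state variable ("background speech present") over the pre-roll portion.
    """
    if not frames:
        return False
    matching = sum(1 for frame in frames if frame_predicate(frame))
    return matching / len(frames) > threshold_fraction

# Toy per-frame predicates standing in for the VAD 765 and the noise classifier 766.
def frame_has_voice(frame):
    return frame.get("voice_probability", 0.0) > 0.6

def frame_has_background_speech(frame):
    return frame.get("noise_class") == "background_speech"

pre_roll = [{"voice_probability": 0.8}, {"voice_probability": 0.7},
            {"voice_probability": 0.2, "noise_class": "background_speech"}]
voice_present = count_based_flag(pre_roll, frame_has_voice)                    # True (2 of 3)
background_present = count_based_flag(pre_roll, frame_has_background_speech)  # False (1 of 3)
# Example command conditions: voice activity present AND background speech absent.
conditions_met = voice_present and not background_present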
Further techniques related to conditioning of local keyword events and VAS wake word events are described in U.S. application Ser. No. 16/439,009 filed Jun. 12, 2019, and titled “Network Microphone Device With Command Keyword Conditioning,” which is herein incorporated by reference in its entirety. Referring still toFIG.7E, in example embodiments, the VAS wake-word engine770aand the local voice input engine771amay take a variety of forms. For example, the VAS wake-word engine770aand the local voice input engine771amay take the form of one or more modules that are stored in memory of the NMD703(e.g., the memory713ofFIG.7A). As another example, the VAS wake-word engine770aand the local voice input engine771amay take the form of a general-purpose or special-purpose processor, or modules thereof. In this respect, multiple engines770and771may be part of the same component of the NMD703or each engine770and771may take the form of a component that is dedicated for the particular wake-word engine. Other possibilities also exist. In some implementations, in the second mode, voice input processing via a cloud-based VAS and local voice input processing are concurrently enabled. A user may speak a local keyword to invoke local processing of a voice input780bvia the local voice input engine771a. Notably, even in the second mode, the NMD703may forego sending any data representing the detected sound SD(e.g., the messages MV) to a VAS when processing a voice input780bincluding a local keyword. Rather, the voice input780bis processed locally using the local voice input engine771a. Accordingly, speaking a voice input780b(with a local keyword) to the NMD703may provide increased privacy relative to other NMDs that process all voice inputs using a VAS. As indicated above, some keywords in the library of the local NLU779correspond to parameters. These parameters may define how to perform the command corresponding to a detected command keyword. When keywords are recognized in the voice input780, the command corresponding to the detected command keyword is performed according to parameters corresponding to the detected keywords. For instance, an example voice input780may be “play music at low volume” with “play” being the command keyword portion (corresponding to a playback command) and “music at low volume” being the voice utterance portion. When analyzing this voice input780, the NLU779may recognize that “low volume” is a keyword in its library corresponding to a parameter representing a certain (low) volume level. Accordingly, the NLU779may determine an intent to play at this lower volume level. Then, when performing the playback command corresponding to “play,” this command is performed according to the parameter representing a certain volume level. In a second example, another example voice input780may be “play my favorites in the Kitchen” with “play” again being the command keyword portion (corresponding to a playback command) and “my favorites in the Kitchen” as the voice utterance portion. When analyzing this voice input780, the NLU779may recognize that “favorites” and “Kitchen” match keywords in its library. In particular, “favorites” corresponds to a first parameter representing particular audio content (i.e., a particular playlist that includes a user's favorite audio tracks) while “Kitchen” corresponds to a second parameter representing a target for the playback command (i.e., the kitchen101hzone). Accordingly, the NLU779may determine an intent to play this particular playlist in the kitchen101hzone.
In a third example, a further example voice input780may be “volume up” with “volume” being the command keyword portion (corresponding to a volume adjustment command) and “up” being the voice utterance portion. When analyzing this voice input780, the NLU779may recognize that “up” is a keyword in its library corresponding to a parameter representing a certain volume increase (e.g., a 10 point increase on a 100 point volume scale). Accordingly, the NLU779may determine an intent to increase volume. Then, when performing the volume adjustment command corresponding to “volume,” this command is performed according to the parameter representing the certain volume increase. Other example voice inputs may relate to smart device commands. For instance, an example voice input780may be “turn on patio lights” with “turn on” being the command keyword portion (corresponding to a power on command) and “patio lights” being the voice utterance portion. When analyzing this voice input780, the NLU779may recognize that “patio” is a keyword in its library corresponding to a first parameter representing a target for the smart device command (i.e., the patio101izone) and “lights” is a keyword in its library corresponding to a second parameter representing certain class of smart device (i.e., smart illumination devices, or “smart lights”) in the patio101izone. Accordingly, the NLU779may determine an intent to turn on smart lights associated with the patio101izone. As another example, another example voice input780may be “set temperature to 75” with “set temperature” being the command keyword portion (corresponding to a thermostat adjustment command) and “to 75” being the voice utterance portion. When analyzing this voice input780, the NLU779may recognize that “to 75” is a keyword in its library corresponding to a parameter representing a setting for the thermostat adjustment command. Accordingly, the NLU779may determine an intent to set a smart thermostat to 75 degrees. Within examples, certain command keywords are functionally linked to a subset of the keywords within the library of the local NLU779, which may hasten analysis. For instance, the command keyword “skip” may be functionality linked to the keywords “forward” and “backward” and their cognates. Accordingly, when the command keyword “skip” is detected in a given voice input780, analyzing the voice utterance portion of that voice input780with the local NLU779may involve determining whether the voice input780includes any keywords that match these functionally linked keywords (rather than determining whether the voice input780includes any keywords that match any keyword in the library of the local NLU779). Since vastly fewer keywords are checked, this analysis is relatively quicker than a full search of the library. By contrast, a nonce VAS wake word such as “Alexa” provides no indication as to the scope of the accompanying voice input. Some commands may require one or more parameters, as such the command keyword alone does not provide enough information to perform the corresponding command. For example, the command keyword “volume” might require a parameter to specify a volume increase or decrease, as the intent of “volume” of volume alone is unclear. As another example, the command keyword “group” may require two or more parameters identifying the target devices to group. 
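The narrowing of keyword matching to functionally linked keywords, together with the notion that some command keywords require parameters, might be organized as in the following Python sketch. The specific command-to-keyword links, the required-parameter counts, and the data structures are illustrative assumptions only.

# Hypothetical table: command keyword -> keywords functionally linked to it.
LINKED_KEYWORDS = {
    "skip": {"forward", "backward"},
    "volume": {"up", "down"},
    "group": set(),  # target devices are supplied separately, e.g., as zone names
}

# Commands whose keyword alone is ambiguous and therefore requires parameters.
REQUIRED_PARAMETER_COUNT = {"volume": 1, "group": 2}

def resolve_parameters(command_keyword, transcript_tokens):
    """Search only the keywords functionally linked to the detected command keyword and
    report whether the required number of parameters was found."""
    linked = LINKED_KEYWORDS.get(command_keyword, set())
    found = [token for token in transcript_tokens if token in linked]
    required = REQUIRED_PARAMETER_COUNT.get(command_keyword, 0)
    return found, len(found) >= required

print(resolve_parameters("volume", ["volume", "up"]))      # (['up'], True)
print(resolve_parameters("volume", ["volume", "please"]))  # ([], False)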
Accordingly, in some example implementations, when a given command keyword is detected in the voice input780by the command keyword engine771a, the local NLU779may determine whether the voice input780includes keywords matching keywords in the library corresponding to the required parameters. If the voice input780does include keywords matching the required parameters, the NMD703aproceeds to perform the command (corresponding to the given command keyword) according to the parameters specified by the keywords. However, if the voice input780does not include keywords matching the required parameters for the command, the NMD703amay prompt the user to provide the parameters. For instance, in a first example, the NMD703amay play an audible prompt such as “I've heard a command, but I need more information” or “Can I help you with something?” Alternatively, the NMD703amay send a prompt to a user's personal device via a control application (e.g., the software components132cof the control device(s)104). In further examples, the NMD703amay play an audible prompt customized to the detected command keyword. For instance, after detecting a command keyword corresponding to a volume adjustment command (e.g., “volume”), the audible prompt may include a more specific request such as “Do you want to adjust the volume up or down?” As another example, for a grouping command corresponding to the command keyword “group,” the audible prompt may be “Which devices do you want to group?” Supporting such specific audible prompts may be made practicable by supporting a relatively limited number of command keywords (e.g., less than 100), but other implementations may support more command keywords with the trade-off of requiring additional memory and processing capability. Within additional examples, when a voice utterance portion does not include keywords corresponding to one or more required parameters, the NMD703amay perform the corresponding command according to one or more default parameters. For instance, if a playback command does not include keywords indicating target playback devices102for playback, the NMD703amay default to playback on the NMD703aitself (e.g., if the NMD703ais implemented within a playback device102) or to playback on one or more associated playback devices102(e.g., playback devices102in the same room or zone as the NMD703a). Further, in some examples, the user may configure default parameters using a graphical user interface (e.g., user interface430) or voice user interface. For example, if a grouping command does not specify the playback devices102to group, the NMD703amay default to instructing two or more pre-configured default playback devices102to form a synchrony group. Default parameters may be stored in data storage and accessed when the NMD703adetermines that keywords exclude certain parameters. Other examples are possible as well. In some implementations, while in the second mode, the NMD703asends the voice input780to a VAS when the local NLU779is unable to process the voice input780(e.g., when the local NLU is unable to find matches to keywords in the library, or when the local NLU779has a low confidence score as to intent). In an example, to trigger sending the voice input780, the NMD703amay generate a bridging event, which causes the voice extractor773to process the sound-data stream SDS, as discussed above.
That is, the NMD703agenerates a bridging event to trigger the voice extractor773without a VAS wake-word being detected by the VAS wake-word engine770a(instead based on a command keyword in the voice input780, as well as the NLU779being unable to process the voice input780). Before sending the voice input780to the VAS (e.g., via the messages MV), the NMD703amay obtain confirmation from the user that the user acquiesces to the voice input780being sent to the VAS. For instance, the NMD703amay play an audible prompt to send the voice input to a default or otherwise configured VAS, such as “I'm sorry, I didn't understand that. May I ask Alexa?” In another example, the NMD703amay play an audible prompt using a VAS voice (i.e., a voice that is known to most users as being associated with a particular VAS), such as “Can I help you with something?” In such examples, generation of the bridging event (and trigging of the voice extractor773) is contingent on a second affirmative voice input780from the user. Within certain example implementations, while in the first mode, the local NLU779may process the signal SASRwithout necessarily a local keyword event being generated by the command keyword engine771a(i.e., directly). That is, the automatic speech recognition772may be configured to perform automatic speech recognition on the sound-data stream SD, which the local NLU779processes for matching keywords without requiring a local keyword event. If keywords in the voice input780are found to match keywords corresponding to a command (possibly with one or more keywords corresponding to one or more parameters), the NMD703aperforms the command according to the one or more parameters. Further, in such examples, the local NLU779may process the signal SASRdirectly only when certain conditions are met. In particular, in some embodiments, the local NLU779processes the signal SASRonly when the state machine775ais in the first state. The certain conditions may include a condition corresponding to no background speech in the environment. An indication of whether background speech is present in the environment may come from the noise classifier766. As noted above, the noise classifier766may be configured to output a signal or set a state variable indicating that far-field speech is present in the environment. Further, another condition may correspond to voice activity in the environment. The VAD765may be configured to output a signal or set a state variable indicating that voice activity is present in the environment. The prevalence of false positive detection of commands with a direct processing approach may be mitigated using the conditions determined by the state machine775a. In example implementations, the NMD703is paired with one or more smart devices.FIG.8Aillustrates an example pairing arrangement between the NMD703and a smart device802, which includes an integrated playback device and smart illumination device. By pairing the NMD703with the smart device(s), voice commands to control the smart device(s) may be directed to the NMD703to control the smart device(s) without necessarily including a keyword identifying the smart device(s) in the voice command. For instance, commands such as “play back Better Oblivion Community Center” and “turn on lights” are received by the NMD703, but carried out on the smart device802without necessarily identifying the smart device802by name, room, zone, or the like. 
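As a Python sketch of how a stored pairing arrangement might be consulted when a voice command does not name a device, consider the following. The state-variable layout and function name are hypothetical; they merely illustrate directing an un-targeted command to the paired smart device.

# Hypothetical state variable capturing the pairing arrangement described above.
PAIRING_STATE = {
    "paired_device": {"name": "smart device 802", "address": "192.168.1.50"},
}

def resolve_target(parsed_command, pairing_state=PAIRING_STATE):
    """Return the device a command should be sent to: an explicitly named target if the
    voice input included one, otherwise the paired smart device."""
    explicit_target = parsed_command.get("target")  # e.g., a zone or room name
    if explicit_target:
        return explicit_target
    return pairing_state["paired_device"]["name"]

# "Turn on lights" with no named room goes to the paired device...
assert resolve_target({"intent": "powerOn"}) == "smart device 802"
# ...while naming a zone still directs the command to that zone.
assert resolve_target({"intent": "powerOn", "target": "Patio"}) == "Patio"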
On the other hand, a user may still direct inputs to other smart devices in the MPS100by referencing the name, room, zone, group, area, etc. that the smart device is associated with. Within examples, a user may configure the pairing arrangement using a graphical user interface or voice user interface. For instance, the user may use a GUI on a application of a control device104to configure the pairing arrangement. Alternatively, the user may speak a voice command such as “Please pair with the Ikea® lamp” or “Please pair with the Sonos® Play:1” to configure the pairing relationship. The NMD703may store data representing the pairing arrangement in one or more state variables, which may be referenced when identifying a device to carry out a voice command. In the illustrative example ofFIG.8A, the NMD703is operating in the second mode. That is, the NMD703is in the second orientation and has enabled the second mode. Voice processing via the cloud-based voice assistant service(s) is enabled. The NMD703has established a local network connection via the LAN111to the smart device802, as well as an Internet-based connection to the VAS190via the network107(FIG.1B). Likewise, the smart device802has established a local network connection via the LAN111to the NMD703, as well as an Internet-based connection to the VAS190via the network107(FIG.1B). Further, in the exemplary pairing relationship ofFIG.8A, the smart device802may play back audio responses to voice inputs. As noted above, the NMD703may, in some examples, exclude audio playback components typically present in a playback device (e.g., audio processing components216, amplifiers217, and/or speakers218) or may include relatively less capable versions of these components. By pairing the NMD703to a playback device, the playback device may provide playback functions to complement the NMD, including playback of audio responses to voice inputs captured by the NMD703and playback of audio content initiated via voice command to the NMD703. For instance, while in the second mode, the user may speak the voice input “Alexa, what is the weather,” which is captured by the microphones722b(FIG.7C) of the NMD703. The NMD703transmits data representing this voice input to the servers106aof the VAS190. The servers106aprocess this voice input and provide data representing a spoken response. In some implementations, the smart device802receives this data directly from the computing devices106aof the VAS190via the networks107and the LAN111. Alternatively, the NMD703may receive the data from the VAS190, but send the data to the smart device802. In either case, the playback device802plays back the spoken response. As noted above, in the second mode, voice input processing via the VAS190and voice input processing via the local voice input engine771amay be concurrently enabled. In an example, a user may speak the voice input “Alexa, play ‘Hey Jude’ by the Beatles and turn on the Ikea lamps.” Here, “Alexa” is an example of a VAS wake word and “Ikea” is an example of a local keyword. Accordingly, such an input may generate both a VAS wake work event and a local keyword event on the NMD703. FIG.8Bagain shows the exemplary pairing relationship between the NMD703and the smart device802. In thisFIG.8Bexample, the NMD703is operating in the first mode, so voice input processing via the VAS190is disabled. This state is represented by the broken lines between the LAN111to the networks107. 
While in the first mode, the NMD703may receive voice inputs including commands to control the smart device802. The NMD703may process such voice inputs via the local voice input engine771aand transmit instructions to carry out the commands to the smart device802via the LAN111. In some examples, the library of the local NLU779is partially customized to the individual user(s). In a first aspect, the library may be customized to the devices that are within the household of the NMD (e.g., the household within the environment101(FIG.1A)). For instance, the library of the local NLU may include keywords corresponding to the names of the devices within the household, such as the zone names of the playback devices102in the MPS100. In a second aspect, the library may be customized to the users of the devices within the household. For example, the library of the local NLU779may include keywords corresponding to names or other identifiers of a user's preferred playlists, artists, albums, and the like. Then, the user may refer to these names or identifiers when directing voice inputs to the command keyword engine771aand the local NLU779. Within example implementations, the NMD703amay populate the library of the local NLU779locally within the network111(FIG.1B). As noted above, the NMD703amay maintain or have access to state variables indicating the respective states of devices connected to the network111(e.g., the playback devices104). These state variables may include names of the various devices. For instance, the kitchen101hmay include the playback device101b, which are assigned the zone name “Kitchen.” The NMD703amay read these names from the state variables and include them in the library of the local NLU779by training the local NLU779to recognize them as keywords. The keyword entry for a given name may then be associated with the corresponding device in an associated parameter (e.g., by an identifier of the device, such as a MAC address or IP address). The NMD703acan then use the parameters to customize control commands and direct the commands to a particular device. In further examples, the NMD703amay populate the library by discovering devices connected to the network111. For instance, the NMD703amay transmit discovery requests via the network111according to a protocol configured for device discovery, such as universal plug-and-play (UPnP) or zero-configuration networking. Devices on the network111may then respond to the discovery requests and exchange data representing the device names, identifiers, addresses and the like to facilitate communication and control via the network111. The NMD703may read these names from the exchanged messages and include them in the library of the local NLU779by training the local NLU779to recognize them as keywords. In further examples, the NMD703amay populate the library using the cloud. To illustrate,FIG.9is a schematic diagram of the MPS100and a cloud network902. The cloud network902includes cloud servers906, identified separately as media playback system control servers906a, streaming audio service servers906b, and IOT cloud servers906c. The streaming audio service servers906bmay represent cloud servers of different streaming audio services. Similarly, the IOT cloud servers906cmay represent cloud servers corresponding to different cloud services supporting smart devices990in the MPS100. Smart devices990include smart illumination devices, smart thermostats, smart plugs, security cameras, doorbells, and the like. 
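Before turning to cloud-based population, the local population of the NLU library from zone names in state variables and from discovery responses, described above, could be sketched as follows in Python. The discovery response format and the keyword-entry structure are assumptions for illustration; they are not tied to any particular UPnP or zero-configuration implementation.

def populate_keyword_library(state_variables, discovery_responses):
    """Build keyword entries mapping a spoken name to the device it identifies."""
    library = {}
    # Names read from state variables, e.g., zone names of playback devices.
    for device in state_variables.get("devices", []):
        library[device["zone_name"].lower()] = {"device_id": device["id"]}
    # Names exchanged during device discovery on the local network.
    for response in discovery_responses:
        library[response["friendly_name"].lower()] = {"device_id": response["address"]}
    return library

state_variables = {"devices": [{"zone_name": "Kitchen", "id": "AA:BB:CC:DD:EE:FF"}]}
discovery_responses = [{"friendly_name": "Patio Lights", "address": "192.168.1.77"}]
library = populate_keyword_library(state_variables, discovery_responses)
assert library["kitchen"]["device_id"] == "AA:BB:CC:DD:EE:FF"
assert "patio lights" in library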
Within examples, a user may link an account of the MPS100to an account of a IOT service. For instance, an IOT manufacturer (such as IKEA®) may operate a cloud-based IOT service to facilitate cloud-based control of their IOT products using smartphone app, website portal, and the like. In connection with such linking, keywords associated with the cloud-based service and the IOT devices may be populated in the library of the local NLU779. For instance, the library may be populated with a nonce keyword (e.g., “Hey Ikea”). Further, the library may be populated with names of various IOT devices, keyword commands for controlling the IOT devices, and keywords corresponding to parameters for the commands. One or more communication links903a,903b, and903c(referred to hereinafter as “the links903”) communicatively couple the MPS100and the cloud servers906. The links903can include one or more wired networks and one or more wireless networks (e.g., the Internet). Further, similar to the network111(FIG.1B), a network911communicatively couples the links903and at least a portion of the devices (e.g., one or more of the playback devices102, NMDs103and703a, control devices104, and/or smart devices990) of the MPS100. In some implementations, the media playback system control servers906afacilitate populating the library of local NLU779with the NMD(s)703a(representing one or more of the NMD703a(FIG.7A) within the MPS100). In an example, the media playback system control servers906amay receive data representing a request to populate the library of a local NLU779from the NMD703. Based on this request, the media playback system control servers906amay communicate with the streaming audio service servers906band/or IOT cloud servers906cto obtain keywords specific to the user. In some examples, the media playback system control servers906amay utilize user accounts and/or user profiles in obtaining keywords specific to the user. As noted above, a user of the MPS100may set-up a user profile to define settings and other information within the MPS100. The user profile may then in turn be registered with user accounts of one or more streaming audio services to facilitate streaming audio from such services to the playback devices102of the MPS100. Through use of these registered streaming audio services, the streaming audio service servers906bmay collect data indicating a user's saved or preferred playlists, artists, albums, tracks, and the like, either via usage history or via user input (e.g., via a user input designating a media item as saved or a favorite). This data may be stored in a database on the streaming audio service servers906bto facilitate providing certain features of the streaming audio service to the user, such as custom playlists, recommendations, and similar features. Under appropriate conditions (e.g., after receiving user permission), the streaming audio service servers906bmay share this data with the media playback system control servers906aover the links903b. Accordingly, within examples, the media playback system control servers906amay maintain or have access to data indicating a user's saved or preferred playlists, artists, albums, tracks, genres, and the like. If a user has registered their user profile with multiple streaming audio services, the saved data may include saved playlists, artists, albums, tracks, and the like from two or more streaming audio services. 
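One simple way to represent the merged saved items from two or more registered streaming audio services, before they are turned into NLU keywords, is sketched below; the service data shapes are assumptions for illustration, and the benefit of this aggregation is discussed in the paragraphs that follow.

```python
# Illustrative sketch only (the field layout is an assumption): merging a user's
# saved or preferred items from two registered streaming audio services into one
# set of candidate keyword names per category.

def merge_saved_items(*service_libraries):
    """Each argument is a dict like {"artists": [...], "playlists": [...], ...}."""
    merged = {}
    for library in service_libraries:
        for category, names in library.items():
            merged.setdefault(category, set()).update(name.lower() for name in names)
    return merged


service_a = {"artists": ["The Beatles"], "playlists": ["My Favorites"]}
service_b = {"artists": ["The Beatles", "Bob Dylan"], "playlists": ["Road Trip"]}
merged = merge_saved_items(service_a, service_b)
# merged["artists"] == {"the beatles", "bob dylan"}
```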
Further, the media playback system control servers906amay develop a more complete understanding of the user's preferred playlists, artists, albums, tracks, and the like by aggregating data from the two or more streaming audio services, as compared with a streaming audio service that only has access to data generated through use of its own service. Moreover, in some implementations, in addition to the data shared from the streaming audio service servers906b, the media playback system control servers906amay collect usage data from the MPS100over the links903a, after receiving user permission. This may include data indicating a user's saved or preferred media items on a zone basis. Different types of music may be preferred in different rooms. For instance, a user may prefer upbeat music in the Kitchen101hand more mellow music to assist with focus in the Office101e. Using the data indicating a user's saved or preferred playlists, artists, albums, tracks, and the like, the media playback system control servers906amay identify names of playlists, artists, albums, tracks, and the like that the user is likely to refer to when providing playback commands to the NMDs703via voice input. Data representing these names can then be transmitted via the links903aand the network904to the NMDs703aand then added to the library of the local NLU779as keywords. For instance, the media playback system control servers906amay send instructions to the NMD703to include certain names as keywords in the library of the local NLU779. Alternatively, the NMD703(or another device of the MPS100) may identify names of playlists, artists, albums, tracks, and the like that the user is likely to refer to when providing playback commands to the NMD703via voice input and then include these names in the library of the local NLU779. Due to such customization, similar voice inputs may result in different operations being performed when the voice input is processed by the local NLU779as compared with processing by a VAS. For instance, a first voice input of “Alexa, play me my favorites in the Office” may trigger a VAS wake-word event, as it includes a VAS wake word (“Alexa”). A second voice input of “Play me my favorites in the Office” may trigger a command keyword, as it includes a command keyword (“play”). Accordingly, the first voice input is sent by the NMD703ato the VAS, while the second voice input is processed by the local NLU779. While these voice inputs are nearly identical, they may cause different operations. In particular, the VAS may, to the best of its ability, determine a first playlist of audio tracks to add to a queue of the playback device102fin the office101e. Similarly, the local NLU779may recognize keywords “favorites” and “office” in the second voice input. Accordingly, the NMD703aperforms the voice command of “play” with parameters of <favorites playlist> and <office101ezone>, which causes a second playlist of audio tracks to be added to the queue of the playback device102fin the office101e. However, the second playlist of audio tracks may include a more complete and/or more accurate collection of the user's favorite audio tracks, as the second playlist of audio tracks may draw on data indicating a user's saved or preferred playlists, artists, albums, and tracks from multiple streaming audio services, and/or the usage data collected by the media playback system control servers906a. 
In contrast, the VAS may draw on its relatively limited conception of the user's saved or preferred playlists, artists, albums, and tracks when determining the first playlist. To illustrate,FIG.11shows a table1100illustrating the respective contents of a first and second playlist determined based on similar voice inputs, but processed differently. In particular, the first playlist is determined by a VAS while the second playlist is determined by the NMD703a(perhaps in conjunction with the media playback system control servers906a). As shown, while both playlists purport to include a user's favorites, the two playlists include audio content from dissimilar artists and genres. In particular, the second playlist is configured according to usage of the playback device102fin the Office101eand also the user's interactions with multiple streaming audio services, while the first playlist is based on the multiple user's interactions with the VAS. As a result, the second playlist is more attuned to the types of music that the user prefers to listen to in the office101e(e.g., indie rock and folk) while the first playlist is more representative of the interactions with the VAS as a whole. A household may include multiple users. Two or more users may configure their own respective user profiles with the MPS100. Each user profile may have its own user accounts of one or more streaming audio services associated with the respective user profile. Further, the media playback system control servers906amay maintain or have access to data indicating each user's saved or preferred playlists, artists, albums, tracks, genres, and the like, which may be associated with the user profile of that user. In various examples, names corresponding to user profiles may be populated in the library of the local NLU779. This may facilitate referring to a particular user's saved or preferred playlists, artists, albums, tracks, or genres. For instance, when a voice input of “Play Anne's favorites on the patio” is processed by the local NLU779, the local NLU779may determine that “Anne” matches a stored keyword corresponding to a particular user. Then, when performing the playback command corresponding to that voice input, the NMD703aadds a playlist of that particular user's favorite audio tracks to the queue of the playback device102cin the patio101i. In some cases, a voice input might not include a keyword corresponding to a particular user, but multiple user profiles are configured with the MPS100. In some cases, the NMD703amay determine the user profile to use in performing a command using voice recognition. Alternatively, the NMD703amay default to a certain user profile. Further, the NMD703amay use preferences from the multiple user profiles when performing a command corresponding to a voice input that did not identify a particular user profile. For instance, the NMD703amay determine a favorites playlist including preferred or saved audio tracks from each user profile registered with the MPS100. The IOT cloud servers906cmay be configured to provide supporting cloud services to the smart devices990. The smart devices990may include various “smart” internet-connected devices, such as lights, thermostats, cameras, security systems, appliances, and the like. For instance, an IOT cloud server906cmay provide a cloud service supporting a smart thermostat, which allows a user to control the smart thermostat over the internet via a smartphone app or website. 
Accordingly, within examples, the IOT cloud servers906cmay maintain or have access to data associated with a user's smart devices990, such as device names, settings, and configuration. Under appropriate conditions (e.g., after receiving user permission), the IOT cloud servers906cmay share this data with the media playback system control servers906aand/or the NMD703avia the links903c. For instance, the IOT cloud servers906cthat provide the smart thermostat cloud service may provide data representing such keywords to the NMD703a, which facilitates populating the library of the local NLU779with keywords corresponding to the temperature. Yet further, in some cases, the IOT cloud servers906cmay also provide keywords specific to control of their corresponding smart devices990. For instance, the IOT cloud server906cthat provides the cloud service supporting the smart thermostat may provide a set of keywords corresponding to voice control of a thermostat, such as “temperature,” “warmer,” or “cooler,” among other examples. Data representing such keywords may be sent to the NMDs703aover the links903and the network904from the IOT cloud servers906c. As noted above, some households may include more than NMD703. In example implementations, two or more NMDs703may synchronize or otherwise update the libraries of their respective local NLU779. For instance, a first NMD703aand a second NMD703bmay share data representing the libraries of their respective local NLU779, possibly using a network (e.g., the network904). Such sharing may facilitate the NMDs703abeing able to respond to voice input similarly, among other possible benefits. In some embodiments, one or more of the components described above can operate in conjunction with the microphones720to detect and store a user's voice profile, which may be associated with a user account of the MPS100. In some embodiments, voice profiles may be stored as and/or compared to variables stored in a set of command information or data table. The voice profile may include aspects of the tone or frequency of a user's voice and/or other unique aspects of the user, such as those described in previously-referenced U.S. patent application Ser. No. 15/438,749. In some embodiments, one or more of the components described above can operate in conjunction with the microphones720to determine the location of a user in the home environment and/or relative to a location of one or more of the NMDs103. Techniques for determining the location or proximity of a user may include one or more techniques disclosed in previously-referenced U.S. patent application Ser. No. 15/438,749, U.S. Pat. No. 9,084,058 filed Dec. 29, 2011, and titled “Sound Field Calibration Using Listener Localization,” and U.S. Pat. No. 8,965,033 filed Aug. 31, 2012, and titled “Acoustic Optimization.” Each of these applications is herein incorporated by reference in its entirety. FIGS.10A,10B,10C, and10Dshow exemplary input and output from the NMD703configured in accordance with aspects of the disclosure. FIG.10Aillustrates a first scenario in which a wake-word engine of the NMD703is configured to detect four local keywords (“play”, “stop”, “resume”, “turn on”). The local NLU779(FIG.7E) is disabled. In this scenario, the user has spoken the voice input “turn on” to the NMD703, which triggers a new recognition of one of the local keywords (e.g., a command keyword event corresponding to turn on). Yet further, the VAD765and noise classifier766(FIG.7E) have analyzed 150 frames of a pre-roll portion of the voice input. 
As shown, the VAD765has detected voice in 140 frames of the150pre-roll frames, which indicates that a voice input may be present in the detected sound. Further, the noise classifier766has detected ambient noise in 11 frames, background speech in 127 frames, and fan noise in 12 frames. In this example, the noise classifier766is classifying the predominant noise source in each frame. This indicates the presence of background speech. As a result, the NMD has determined not to trigger on the detected local keyword “turn on.” FIG.10Billustrates a second scenario in which the local voice input engine771aof the NMD703is configured to detect a local keyword (“play”) as well as two cognates of that command keyword (“play something” and “play me a song”). The local NLU779is disabled. In this second scenario, the user has spoken the voice input “play something” to the NMD703, which triggers a new recognition of one of the local keywords (e.g., a command keyword event). Yet further, the VAD765and noise classifier766have analyzed 150 frames of a pre-roll portion of the voice input. As shown, the VAD765has detected voice in 87 frames of the150pre-roll frames, which indicates that a voice input may be present in the detected sound. Further, the noise classifier766has detected ambient noise in 18 frames, background speech in 8 frames, and fan noise in 124 frames. This indicates that background speech is not present. Given the foregoing, the NMD703has determined to trigger on the detected local keyword “play.” FIG.10Cillustrates a third scenario in which the local voice input engine771aof the NMD703is configured to detect three local keywords (“play”, “stop”, and “resume”). The local NLU779is enabled. In this third scenario, the user has spoken the voice input “play Beatles in the Kitchen” to the NMD703, which triggers a new recognition of one of the local keywords (e.g., a command keyword event corresponding to play). As shown, the ASR772has transcribed the voice input as “play beet les in the kitchen.” Some error in performing ASR is expected (e.g., “beet les”). Here, the local NLU779has matched the keyword “beet les” to “The Beatles” in the local NLU library, which sets up this artist as a content parameter to the play command. Further, the local NLU779has also matched the keyword “kitchen” to “kitchen” in the local NLU library, which sets up the kitchen zone as a target parameter to the play command. The local NLU produced a confidence score of 0.63428231948273443 associated with the intent determination. Here as well, the VAD765and noise classifier766have analyzed 150 frames of a pre-roll portion of the voice input. As shown, the noise classifier766has detected ambient noise in 142 frames, background speech in 8 frames, and fan noise in 0 frames. This indicates that background speech is not present. The VAD765has detected voice in 112 frames of the150pre-roll frames, which indicates that a voice input may be present in the detected sound. Here, the NMD703has determined to trigger on the detected command keyword “play.” FIG.10Dillustrates a fourth scenario in which the local voice input engine771aof the NMD is not configured to spot any local keywords. Rather, the local voice input engine771awill perform ASR and pass the output of the ASR to the local NLU779. The local NLU779is enabled and configured to detect keywords corresponding to both commands and parameters. In the fourth scenario, the user has spoken the voice input “play some music in the Office” to the NMD703. 
As shown, the ASR772has transcribed the voice input as “lay some music in the office.” Here, the local NLU779has matched the keyword “lay” to “play” in the local NLU library, which corresponds to a playback command. Further, the local NLU779has also matched the keyword “office” to “office” in the local NLU library, which sets up the office101ezone as a target parameter to the play command. The local NLU779produced a confidence score of 0.14620494842529297 associated with the keyword matching. In some examples, this low confidence score may cause the NMD to not accept the voice input (e.g., if this confidence score is below a threshold, such as 0.5). IV. Example VAS Toggle Techniques FIG.11is a flow diagram showing an example method1100to toggle voice input processing based on device orientation. The method1100may be performed by a networked microphone device, such as the NMD703(FIG.7A). Alternatively, the method1100may be performed by any suitable device or by a system of devices, such as the playback devices102, NMDs103, control devices104, computing devices105, computing devices106, and/or NMD703. At block1102, the method1100involves detecting that the housing is in a first orientation. For instance, one or more orientation sensors (e.g., the orientation sensor(s)723(FIG.7A)) may generate data indicative of the orientation of the NMD703. The NMD703may detect that the housing730is in a first orientation (FIG.7B). In some implementations, the NMD703is configured to generate events when the orientation of the NMD703changes. Such events may trigger mode changes in the NMD703. For instance, when the housing730is switched from a second orientation (FIG.7C) to the first orientation (FIG.7B), the orientation sensors723may generate data indicative of acceleration of the NMD703. The NMD703may determine that this data indicates that the housing730is in the first orientation and generate an event indicating this orientation. In some examples, the orientation state of the NMD703is stored in one or more state variables, which can be referenced to determine the current orientation of the NMD703. At block1104, the method1100involves enabling a first mode. Enabling the first mode involves disabling voice input processing via a cloud-based voice assistant service and enabling voice input processing via a local natural language unit, such as the NLU779(FIG.7E). Enabling the first mode may further involve enabling one or more first microphones (e.g., the microphones722a(FIG.7B)) and/or disabling one or more second microphones (e.g., the microphones722b(FIG.7C)). In some examples, the NMD703enables the first mode after detecting that the housing is in a first orientation. As noted above, detecting that the housing is in a first orientation may involve detecting an event. For example, the NMD703may enable the first mode based on a particular event being generated where the particular event corresponds to a change in orientation from the second orientation to the first orientation. The first mode may remain enabled while the housing is in the first orientation. In some examples, while in the first mode, the NMD703may directly (e.g., via orientation sensor(s)723) or indirectly (e.g., via the one or more state variables) determine whether the NMD703is still in the first orientation. If the NMD703determines that the NMD703is no longer in the first orientation, the NMD703may switch modes. At block1106, the method1100involves receiving a voice input. 
Receiving a voice input may involve capturing sound data associated with a first voice input780avia the one or more first microphones722a(FIG.7E). Receiving the voice input may further involve detecting, via a local natural language unit779, that the first voice input comprises sound data matching one or more keywords from a local natural language unit library of the local natural language unit779. For instance, local natural language unit779may determine that the voice input includes one or more local keywords that generate a local keyword event, such as a nonce local keyword and/or a command keyword, as well as one or more additional keywords that correspond to parameters of the voice command. At block1108, the method1100involves determining, via the local natural language unit, an intent of the first voice input based on at least one of the one or more keywords. For instance, the NLU779may determine that the voice input includes a particular command keyword (e.g., turn on) and one or more keywords corresponding to parameters (e.g., the lights) and determine an intent of turning on the lights on a paired smart device802(FIG.8B). At block1110, the method1100involves performing a first command according to the determined intent of the first voice input. Performing the first command may involve sending instructions to one or more network devices over a network to perform one or more operations according to the first command, similar to the message exchange illustrated inFIG.6. For instance, the NMD703may transmit an instruction over the LAN111to the smart device802to toggle the lights or to play back audio content. Within examples, the target network devices to perform the first command may be explicitly or implicitly defined. For example, the target smart devices may be explicitly defined by reference in the voice input780to the name(s) of one or more smart devices (e.g., by reference to a room, zone or zone group name). Alternatively, the voice input might not include any reference to the name(s) of one or more smart devices and instead may implicitly refer to smart device(s) paired with the NMD703. Playback devices102associated with the NMD703amay include a playback device implementing the NMD703a, as illustrated by the playback device102dimplementing the NMD103d(FIG.1B) or playback devices configured to be associated (e.g., where the playback devices102are in the same room or area as the NMD703a). Further, performing the first command may involve sending instructions to one or more remote computing devices. For example, the NMD703may transmit requests to the computing devices106of the MCS192to stream one or more audio tracks to the smart device802(FIG.8B). Alternatively, the instructions may be provided internally (e.g., over a local bus or other interconnection system) to one or more software or hardware components (e.g., the electronics112of the playback device102). Yet further, transmitting instructions may involve both local and cloud based operations. For instance, the NMD703may transmit instructions locally over the LAN111to the smart device802to add one or more audio tracks to the playback queue over the LAN111. Then, the smart device802may transmit a request to the computing devices106of the MCS192to stream one or more audio tracks to the smart device802for playback over the networks107. Other examples are possible as well. At block1112, the method1100involves detecting that the housing is in a second orientation different than the first orientation. 
For instance, one or more orientation sensors (e.g., the orientation sensor(s)723(FIG.7A)) may generate data indicative of the orientation of the NMD703. The NMD703may detect that the housing730is in the second orientation (FIG.7D). As noted above, in some implementations, the NMD703is configured to generate events when the orientation of the NMD703changes. Such events may trigger mode changes in the NMD703. For instance, when the housing730is switched from the first orientation to the second orientation (FIG.7D), the orientation sensors723may generate data indicative of acceleration of the NMD703. The NMD703may determine that this data indicates that the housing730is in the second orientation and generate an event indicating this orientation. At block1114, the method1100involves enabling a second mode. Enabling the second mode involves enabling voice input processing via a cloud-based voice assistant service. In some implementations, enabling the second mode also includes disabling voice input processing via a local natural language unit, such as the NLU779(FIG.7E). Alternatively, voice input processing via the local natural language unit may remain enabled in the second mode. Enabling the second mode may further involve enabling one or more second microphones (e.g., the microphones722b(FIG.7C)) and/or disabling one or more first microphones (e.g., the microphones722a(FIG.7B)). In some examples, the NMD703enables the second mode after detecting that the housing is in a second orientation. As noted above, detecting that the housing is in the second orientation may involve detecting an event. For example, the NMD703may enable the second mode based on a particular event being generated where the particular event corresponds to a change in orientation from the first orientation to the second orientation. Other examples are possible as well. CONCLUSION The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way(s) to implement such systems, methods, apparatus, and/or articles of manufacture. The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain embodiments of the present disclosure can be practiced without certain, specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. 
Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments. When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware. The present technology is illustrated, for example, according to various aspects described below. Various examples of aspects of the present technology are described as numbered examples (1, 2, 3, etc.) for convenience. These are provided as examples and do not limit the present technology. It is noted that any of the dependent examples may be combined in any combination, and placed into a respective independent example. The other examples can be presented in a similar manner. Example 1: A method to be performed by a network microphone device including one or more first microphones, one or more second microphones, a network interface, one or more processors, and a housing carrying the one or more first microphones, the one or more second microphones, the network interface, the one or more processors, and data storage having stored therein instructions executable by the one or more processors. The network microphone device detects that the housing is in a first orientation. After detecting that the housing is in the first orientation, the device enables a first mode. Enabling the first mode includes (i) disabling voice input processing via a cloud-based voice assistant service and (ii) enabling voice input processing via a local natural language unit. While the first mode is enabled, the network microphone device (i) captures sound data associated with a first voice input via the one or more first microphones and (ii) detects, via a local natural language unit, that the first voice input comprises sound data matching one or more keywords from a local natural language unit library of the local natural language unit. The network microphone device determines, via the local natural language unit, an intent of the first voice input based on at least one of the one or more keywords and performs a first command according to the determined intent of the first voice input. The network microphone device detects that the housing is in a second orientation different than the first orientation. After detecting that the housing is in the second orientation, the network microphone device enables a second mode. Enabling the second mode includes enabling voice input processing via the cloud-based voice assistant service. Example 2: The method of Example 1, wherein enabling the first mode further comprises disabling the one or more second microphones. Example 3: The method of any of Examples 1 and 2, wherein enabling the second mode further comprises at least one of: (a) disabling the one or more first microphones or (b) disabling voice input processing via the local natural language unit. Example 4: The method of any of Examples 1-3, further comprising pairing the NMD to a network device and wherein performing the first command comprises transmitting an instruction over a local area network to the network device. Example 5: The method of Example 4, wherein the network device comprises a smart illumination device, and wherein the first command is a command to toggle the smart illumination device on or off. 
Example 6: The method of Example 4, wherein the functions further comprise pairing the NMD to a playback device separate from the network device, wherein the playback device is configured to process playback commands transmitted to the playback device from one or more remote computing devices of the cloud-based voice-assistant service. Example 7: The method of any of Examples 1-6, further comprising while the second mode is enabled, (i) detecting a second sound data stream associated with a second voice input; (ii) detecting a wake-word in the second sound data stream; and (iii) after detecting the wake-word, transmitting the second sound data stream to one or more remote computing devices of the cloud-based voice-assistant service. Example 8: The method of any of Examples 1-7, wherein the network microphone device further comprises one or more sensors carried in the housing, wherein detecting that the housing is in a second orientation different than the first orientation comprises detecting, via the one or more sensors, sensor data indicating that the housing has been re-oriented from the first orientation to the second orientation. Example 9: A tangible, non-transitory, computer-readable medium having instructions stored thereon that are executable by one or more processors to cause a playback device to perform the method of any one of Examples 1-8. Example 10: A playback device comprising a speaker, a network interface, one or more microphones configured to detect sound, one or more processors, and a tangible, non-transitory computer-readable medium having instructions stored thereon that are executable by the one or more processors to cause the playback device to perform the method of any of Examples 1-8.
11862162
DETAILED DESCRIPTION In the following disclosure, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media. Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media. 
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter is described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described herein. Rather, the described features and acts are disclosed as example forms of implementing the claims. Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including an in-dash vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function. It should be noted that the sensor embodiments discussed herein may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s). At least some embodiments of the disclosure are directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein. Some embodiments begin parsing speech in response to a wake-up event such as a user saying a key phrase such as “hey Alexa”, a user tapping a microphone button, or a user gazing at a camera in a device. Such embodiments eventually cut off after a NVAD cut-off period. 
Some embodiments parse speech continuously, but cut off the parsing of a sentence, treating it as complete, after a NVAD cut-off period. To be responsive to fast speech without cutting off slow speech, it is ideal to adapt the EOS NVAD period to the maximum pause length between words within an incomplete sentence. Some embodiments do so by having a set of cutoff periods and using a shorter one when the words captured so far constitute a complete parse according to a natural language grammar and a longer cutoff period when the words captured so far do not constitute a complete parse. Some such embodiments have a problem of cutting off users when the words so far are a complete parse but are a prefix to a longer sentence. For example, “what's the weather” is a parsable prefix of the sentence, “what's the weather in Timbuctoo”, which is a prefix of the sentence, “what's the weather in Timbuctoo going to be tomorrow”. Some embodiments have a problem with users not recognizing that the system detected a wake-up event and is attempting to parse speech. In such events, there can be long periods of silence before the user provides any voice activity. Some embodiments address this by having a long NVAD cut-off period for the time after a wake-up event occurs and before the system detects any voice activity. Some embodiments use a long NVAD period of 5 seconds. Some embodiments use a long NVAD period of 3.14159 seconds. FIG.1shows an embodiment of a human-machine interface. A human user12speaks to a machine14, saying, “hey robot, what's the weather in Timbuctoo”, as depicted by a speech bubble16. Training a Model Some words spoken so far, having a complete parse, are very likely the user's entire sentence, for example, “how high is Mount Everest”. It is possible, but infrequent, that a user would continue the sentence such as by saying, “how high is Mount Everest in Nepal”. In fact, it is rare that any sentence beginning with “how high is <thing>” is going to be continued. However, some words spoken so far, having a complete parse, are frequently followed by more information that creates another longer complete parse. For example, “what's the weather” (which implies a query about the present time and current location) is a complete parse that is often continued such as by saying, “what's the weather going to be tomorrow” or “what's the weather in <place>”. Some embodiments use a trained model of whether a complete parse is a user's intended complete sentence. The model in some embodiments is a neural network. Various other types of models are appropriate for various embodiments. Some embodiments use a statistical language model (SLM). They train the SLM using n-grams that include an end of sentence token. Some embodiments train a model from a corpus of captured spoken sentences. Some embodiments that use data from systems that cut off speech after EOSs, to avoid biasing the model with data from prematurely cut-off sentences, continue capturing sentences for a period of time after EOSs and discard sentences with speech after the EOS from the training corpus. Some embodiments train a model from sources of natural language expressions other than captured speech, such as The New York Times, Wikipedia, or Twitter. Some embodiments train models from sources of speech not subject to EOSs, such as movies and videos. 
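As one concrete illustration of the SLM approach mentioned above, a toy bigram model with an explicit end-of-sentence token can estimate how often a given word ends a sentence in the training corpus. This is a minimal sketch under that description, not a production model; the corpus shown is invented, and any upstream filtering (such as discarding sentences with speech after an EOS) is assumed to have already happened.

```python
# Minimal bigram sketch of a statistical language model with an explicit
# end-of-sentence token, in the spirit of the SLM approach described above.
from collections import defaultdict

EOS = "<eos>"
BOS = "<bos>"


def train_bigram_eos_model(sentences):
    """sentences: iterable of token lists, e.g. [["what's", "the", "weather"], ...]"""
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        prev = BOS
        for tok in tokens + [EOS]:
            counts[prev][tok] += 1
            prev = tok
    return counts


def prob_end_of_sentence(counts, last_token):
    """Estimate P(EOS | last word) -- a crude 'is the sentence complete?' signal."""
    following = counts.get(last_token, {})
    total = sum(following.values())
    return following.get(EOS, 0) / total if total else 0.0


corpus = [
    ["what's", "the", "weather"],
    ["what's", "the", "weather", "in", "timbuctoo"],
    ["how", "high", "is", "mount", "everest"],
]
model = train_bigram_eos_model(corpus)
print(prob_end_of_sentence(model, "weather"))  # 0.5 in this toy corpus
print(prob_end_of_sentence(model, "everest"))  # 1.0 in this toy corpus
```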
Some embodiments train a model by analyzing natural language grammar rules to determine all possible complete parses in order to determine which complete parses are prefixes to other complete parses. Some such embodiments apply weights based on likelihoods of particular forms of parsable sentences. Some embodiments aggregate multiple grammar rules to detect complete parses that are prefixes of other complete parses. This is useful because some sets of words so far are parsable according to multiple grammar rules. Some embodiments replace specific entity words with generic tags in the training corpus. For example, a generic person tag replaces all people's names and a generic city tag replaces all city names. Applying such a model requires that word recognition or parsing apply a corresponding replacement of entity words with generic tags. Applying a Model Some embodiments have multiple NVAD cut-off periods, a long one when there is no complete parse (Incomplete) and a short one when there is a complete parse (Complete). Some such embodiments have another NVAD cut-off period longer than the short one for when there is a complete parse that can be a prefix to another complete parse (Prefix). Some embodiments have another NVAD cut-off period longer than the long one for the time after the system wakes up but before it detects any voice activity (Initial). FIG.2shows processing a spoken sentence that comprises a first complete parse (“what's the weather”) that is a prefix to a second complete parse (“what's the weather in Timbuctoo”). The speech begins with a wake-up key phrase “hey robot”, followed by a period of no voice activity detection (VAD)22. The system chooses a NVAD cut-off period of 5 seconds. Next, the system detects voice activity and proceeds to receive words, “what's the weather”, during which time there is no complete parse and so the system chooses a NVAD cut-off period of 2 seconds. Next, there is a pause in the speech24, during which time there is no VAD, but a complete parse. Since there is a complete parse, the system chooses a shorter NVAD period of 1 second. Next, the speech continues, so there is VAD but again no complete parse, so the system returns to a NVAD cut-off period of 2 seconds. Finally, there is another period of silence26, during which there is no VAD, but a complete parse, so the system chooses a NVAD period of 1 second. Some embodiments apply the model for detecting whether a complete parse is a prefix to another longer complete parse in response to detecting the first complete parse. Some embodiments apply the model continuously, regardless of whether the words received so far constitute a complete parse. Such embodiments effectively have a continuous hypothesis as to whether the sentence is complete; the hypothesis has maxima whenever a set of words comprises a complete parse, with the maxima being larger for complete parses that are less likely to be prefixes of other complete parses. In some embodiments, the model produces not a Boolean value, but a numerical score of a likelihood of a complete parse being a complete sentence. Some such embodiments, rather than having a specific Prefix cut-off period, scale the Prefix cut-off period according to the score. A higher score would cause a shorter NVAD cut-off period. Some embodiments use a continuously adaptive algorithm to continuously adapt the NVAD cut-off period. 
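Before turning to continuous adaptation, the selection among the Initial, Incomplete, Complete, and Prefix cut-off periods described above, including scaling by a numerical completeness score, might look like the following sketch. The Initial, Incomplete, and Complete constants reuse the example values from the FIG.2 walkthrough (5, 2, and 1 seconds); the 1.5-second Prefix value and the linear scaling are assumptions, not values taken from the disclosure.

```python
# Illustrative NVAD cut-off selection from the current state. Function and
# constant names are hypothetical.
INITIAL_CUTOFF = 5.0     # after wake-up, before any voice activity is detected
INCOMPLETE_CUTOFF = 2.0  # words so far do not constitute a complete parse
COMPLETE_CUTOFF = 1.0    # words so far constitute a complete parse
PREFIX_CUTOFF = 1.5      # complete parse that is likely a prefix of a longer parse (assumed value)


def nvad_cutoff(voice_detected, complete_parse, completeness_score=None):
    """Return the cut-off period in seconds for the current state."""
    if not voice_detected:
        return INITIAL_CUTOFF
    if not complete_parse:
        return INCOMPLETE_CUTOFF
    if completeness_score is None:
        return COMPLETE_CUTOFF
    # A higher score (more likely to be the whole sentence) gives a shorter cut-off.
    return PREFIX_CUTOFF - (PREFIX_CUTOFF - COMPLETE_CUTOFF) * completeness_score


print(nvad_cutoff(False, False))     # 5.0
print(nvad_cutoff(True, False))      # 2.0
print(nvad_cutoff(True, True))       # 1.0
print(nvad_cutoff(True, True, 0.9))  # 1.05
```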
Some such continuously adaptive embodiments gradually decrease one or more NVAD cut-off periods, such as by 1% of the NVAD cut-off period each time there is a cut-off, and, if the speaker continues a sentence after a partial period threshold, such as 80% of the NVAD cut-off period, increase the NVAD cut-off period, such as by 5% for each such occurrence of a user continuing a sentence. Some embodiments increase the NVAD cut-off period in proportion to the amount of time beyond a partial-period threshold (such as 80%) after which the user continued the sentence. Some embodiments display information visually after detecting a complete parse but before a NVAD cut-off. Some such embodiments change the visual display as soon as they detect further voice activity before the NVAD cut-off. For example, for the sentence “what's the weather going to be tomorrow in Timbuctoo” such an embodiment would: as soon as the user finishes saying “what's the weather,” display the current weather in the present location; as soon as the user says “going,” clear the display; as soon as the user finishes saying “to be tomorrow,” display the weather forecast for tomorrow in the present location; as soon as the user says “in,” clear the display; and as soon as the user says “Timbuctoo,” display the weather forecast for tomorrow in Timbuctoo. Some embodiments do not cut off user speech when detecting an EOS, but instead, use the NVAD cut-off period to determine when to perform an action in response to the sentence. This supports an always-listening experience that doesn't require a wake-up event. Even for always-listening embodiments, knowing when to respond is important to avoid the response interrupting the user or the response performing a destructive activity that wasn't the user's intent. Profiling Users Some embodiments profile users as to their typical speech speed, store the user's typical speech speed in a user profile, later acquire the user's typical speech speed from the user profile, and scale one or more of the NVAD cut-off periods according to the user's typical speech speed. Some embodiments compute a user's typical speech speed by detecting their phoneme rate. That is, computing their number of phonemes per unit time. Some embodiments store a long-term average phoneme rate in the user's profile. Some embodiments compute a short-term average phoneme rate, which is useful since user phoneme rates tend to vary based on environment and mood. Some embodiments compute a user's typical speech speed by detecting their inter-word pause lengths. That is, using the time between the last phoneme of each word and the first phoneme of its immediately following word. Long-term and short-term inter-word pause length calculations are both independently useful to scale the NVAD cut-off period. FIG.3shows processing a spoken sentence that comprises a first complete parse (“what's the weather”) that is a prefix to a second complete parse (“what's the weather in Timbuctoo”). However, in comparison to the scenario ofFIG.2, based on the user profile and short-term speech speed, the system expects the user to speak 25% faster (therefore using 80% as much time for the same sentence). The speech begins with a wake-up key phrase “hey robot”, followed by a period of no voice activity detection (VAD)32. The system chooses a NVAD cut-off period of just 4 seconds. 
Next, the system detects voice activity and proceeds to receive words, “what's the weather”, during which time there is no complete parse and so the system chooses a NVAD cut-off period of just 1.6 seconds. Next, there is a pause in the speech34, during which time there is no VAD, but a complete parse. Since there is a complete parse, the system chooses a shorter NVAD period of just 0.8 seconds. Next, the speech continues, so there is VAD but again no complete parse, so the system returns to a NVAD cut-off period of just 1.6 seconds. Finally, there is another period of silence36, during which there is no VAD, but a complete parse, so the system chooses a NVAD period of just 0.8 seconds. FIG.4shows processing speech that never achieves a complete parse. The speech begins with a wake-up key phrase “hey robot”, followed by a period of no voice activity detection (VAD)42. The system chooses a NVAD cut-off period of 5 seconds. Next, the system detects voice activity and proceeds to receive words, “what's the”, during which time there is no complete parse and so the system chooses a NVAD cut-off period of 2 seconds. No more speech is received for the following period44, so after the system detects NVAD, it cuts off after 2 more seconds. EOS Cues Some embodiments choose a short EOS when detecting certain cues such as a period of NVAD followed by “woops” or a period of NVAD followed by “cancel”. Some embodiments delay an EOS when detecting certain cues, such as “ummm” or “ahhh” or other filler words. The words “and”, “but”, and “with”, or phrases such as “as well as”, are also high-probability indicators of a likely continuation of a sentence. Some such embodiments, when detecting such filler words or conjunctions, reset the EOS NVAD cut-off timer. Client-Server Considerations Some embodiments perform NVAD on a client and some embodiments perform word recognition and grammar parsing on a server connected to the client through a network such as the Internet. Such embodiments send and receive messages from time to time from the server to the client indicating whether an end of sentence token is likely or a parse is complete or a prefix parse is complete. Such embodiments of clients assume an incomplete parse, and therefore a long NVAD cut-off period, from whenever the client detects NVAD until reaching a cut-off unless the client receives a message indicating a complete parse in between. Some client-server embodiments send either a voice activity indication, a NVAD indication, or both from the client to the server. This is useful for the server to determine NVAD cut-off periods. However, network latency introduces inaccuracy into the NVAD cut-off period calculation. Implementations FIG.5Ashows a rotating disk non-transitory computer readable medium according to an embodiment. It stores code that, if executed by a computer, would cause the computer to perform any of the methods described herein.FIG.5Bshows a non-volatile memory chip non-transitory computer readable medium according to an embodiment. It stores code that, if executed by a computer, would cause the computer to perform any of the methods described herein. FIG.5Cshows a computer processor chip for executing code according to an embodiment. By executing appropriate code, it can control a system to perform any of the methods described herein. FIG.6shows a human interface device62coupled to a server64in a virtual cloud66according to an embodiment. The human interface device62receives user speech and sends it to the server64. 
In some embodiments, the server64performs VAD. In some embodiments the device62performs VAD and the server performs parsing of the speech. In some embodiments, the device62works independently of a server and performs parsing and VAD. FIG.7shows a block diagram of a computer system70according to an embodiment. It comprises a central processing unit (CPU)71and a graphics processing unit (GPU)72that are each optimized for processing that parses speech. They communicate through an interconnect73with a dynamic random access memory (DRAM)74. The DRAM74stores program code and data used for processing. The CPU71and GPU72also communicate through interconnect73with a network interface (NI)75. The NI provides access to code and data needed for processing as well as communication between devices and servers such as for sending audio information or messages about voice activity or parse completion. FIG.8is a flow diagram depicting an embodiment of a method80to assign an NVAD cut-off period. At81, a processing system associated with a human-machine interface receives a spoken sentence. In some embodiments, the processing system may be realized by a system based on processor chip70or any similar processing-enabled architecture. Next, at82, the processing system identifies a beginning of the sentence. In some embodiments, the processing system identifies the beginning of the sentence by identifying a phrase such as, “hey robot,” a user tapping a microphone button, or a user gazing at a camera associated with the processing system, as described herein. Next, at83, the processing system parses speech from the beginning of the sentence according to a natural language grammar to determine whether the speech received so far constitutes a complete parse. At84, the processing system applies a model to produce a hypothesis to determine whether the speech received so far is a prefix to another complete parse. Finally, at85, the processing system assigns an NVAD cut-off period that is shorter than an NVAD cut-off period for an incomplete parse, depending on the hypothesis. FIG.9is a flow diagram depicting an embodiment of a method90to scale an NVAD cut-off period. At91, a processing system associated with a human-machine interface receives a spoken sentence. In some embodiments, the processing system may be realized by a system based on processor chip70or any similar processing-enabled architecture. In particular embodiments, the spoken sentence may be a recorded sentence, a text-to-speech input, an audio recording, or some other speech input. At92, the processing system identifies a beginning of a sentence, as discussed in the description of method80. Next, at93, the processing system parses speech from the beginning of the sentence according to a natural language grammar to determine whether the speech received so far constitutes a complete parse. At94, the processing system applies a model to produce a hypothesis as to whether the speech received so far is a prefix to another complete parse. At95, the processing system assigns an NVAD cut-off period that is shorter than an NVAD cut-off period for an incomplete parse. Next, at96, the processing system acquires a user's typical speech speed value from a user profile that may be stored on a memory unit such as DRAM74. At97, the processing system computes a short-term user speech speed. Finally, at98, the processing system scales the NVAD cut-off period based on a combination of the user's typical speech speed and the short-term user speech speed. 
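A minimal sketch of the scaling performed at blocks96-98of method90is shown below. The description above does not specify how the profile speed and short-term speed are combined or what reference rate is used, so the equal-weight blend and the nominal phoneme rate here are assumptions; the numbers are chosen so the result matches the FIG.3example, in which a user speaking 25% faster gets 80% of the period.

```python
# Sketch of speech-speed scaling for an NVAD cut-off period (method 90, blocks
# 96-98). The blend and reference rate are assumptions for illustration.
REFERENCE_PHONEMES_PER_SECOND = 12.0  # assumed nominal speaking rate


def scale_cutoff(base_cutoff, profile_rate, short_term_rate):
    """Faster speech (higher phoneme rate) yields a proportionally shorter cut-off."""
    blended_rate = 0.5 * profile_rate + 0.5 * short_term_rate
    return base_cutoff * (REFERENCE_PHONEMES_PER_SECOND / blended_rate)


# A user speaking 25% faster than the reference rate gets 80% of the period,
# matching the FIG. 3 example (2.0 s -> 1.6 s).
print(round(scale_cutoff(2.0, 15.0, 15.0), 2))  # 1.6
```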
FIG. 10 is a flow diagram depicting an embodiment of a method 100 to increase an NVAD cut-off period. At 101, a processing system receives audio of at least one spoken sentence. Next, at 102, the processing system detects periods of voice activity and no voice activity in the audio associated with the spoken sentence. At 103, the processing system maintains an NVAD cut-off based on the detection. At 104, the processing system decreases the NVAD cut-off period responsive to detecting a complete sentence. Finally, at 105, the processing system increases the NVAD cut-off period responsive to detecting a period of voice activity within a partial period threshold of detecting a period of no voice activity, where the partial period threshold is less than the NVAD cut-off period. FIG. 11 is a flow diagram depicting an embodiment of a method 110 for changing a duration of an NVAD cut-off period. At 111, a processing system detects a wake-up event as discussed herein. At 112, the processing system waits for a relatively long initial NVAD cut-off period. Finally, at 113, the processing system selects a shorter NVAD cut-off period based on detecting voice activity; this shorter NVAD cut-off period is relative to the relatively long initial NVAD cut-off period. In some embodiments, the relatively long initial NVAD cut-off period is 5 seconds. In other embodiments, the relatively long initial NVAD cut-off period is 3.14159 seconds. While various embodiments of the present disclosure are described herein, it should be understood that they are presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The description herein is presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the disclosed teaching. Further, it should be noted that any or all of the alternate implementations discussed herein may be used in any combination desired to form additional hybrid implementations of the disclosure.
26,699
11862163
DETAILED DESCRIPTION It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant feature being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure. In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language. The software instructions in the modules can be embedded in firmware, such as in an erasable programmable read-only memory (EPROM) device. The modules described herein can be implemented as either software and/or hardware modules and can be stored in any type of computer-readable medium or other storage device. The present disclosure, referencing the accompanying drawings, is illustrated by way of examples and not by way of limitation. It should be noted that references to "an" or "one" embodiment in this disclosure are not necessarily to the same embodiment, and such references mean "at least one." FIG. 1 illustrates a block diagram of an embodiment of a remote controller 1. A remote controller controlling system 10 is applied on the remote controller 1. The remote controller 1 can include, but is not limited to, a storage device 11, at least one processor 12, a battery 13, and a microphone 14. The aforementioned components of the remote controller 1 are interconnected via a bus, or are directly interconnected. The remote controller 1 can control various electrical devices, for example, an air conditioner, a television, a set-top box, a DVD player, and so on. The remote controller 1 is connected to the electrical devices via a communication unit. The communication unit can be a BLUETOOTH unit, or the like. In the embodiment, the remote controller 1 can include one or more other communication units, for example, a WI-FI unit, and the like. FIG. 1 illustrates only an example; in other embodiments, the remote controller 1 can include more or fewer components, or different types of devices. In the embodiment, the battery 13 can be a rechargeable battery or a non-rechargeable battery. The battery 13 is configured to provide power for the remote controller. For example, the battery 13 can be a lithium battery. In the embodiment, the microphone 14 is configured to receive vocal commands and recognize the same. Referring to FIG. 2, a method for controlling a battery-powered remote controller is shown. The illustrated order of blocks is illustrative only and the order of the blocks can be changed. Additional blocks can be added or fewer blocks can be utilized without departing from this disclosure. The example method can begin at block S20. At block S20, detecting a drop in a voltage of a battery of a remote controller in standby mode.
In the embodiment, the remote controller is a BLUETOOTH remote controller with a voice function. The voltage of the battery of the remote controller drops at regular intervals when the remote controller is in standby mode. The remote controller can include a number of keys, for example, an on-off key, a voice key, a function key, a channel switching key, a number key, a program schedule key, and so on. The remote controller receives a command to activate the voice function when the voice key of the remote controller is operated by the user in standby mode. Most electrical current is used by the remote controller when starting the voice function. At the moment of starting the voice function, a rush of instant current arises, which may cause the drop in the voltage to become greater, for example from 0.2V to 0.8V. A greater drop in the voltage measured by a VBAT pin of a chip of the remote controller may cause serious problems. In the embodiment, the chip can be an RTL8762AR chip. When the drop in the voltage of the battery causes the voltage of the battery to reach a level lower than a minimum work voltage of the chip, an operating system of the remote controller will reset. In the embodiment, there are two relevant variables: an internal resistance of the battery of the remote controller and a slew rate of a switch starting the voice function. A drop in the voltage is likely to become greater at the moment that the voice function is started. The internal resistance of the battery of the remote controller is proportional to the drop in the voltage of the battery of the remote controller which occurs when starting the voice function. The slew rate of the switch starting the voice function is proportional to the drop in the voltage of the battery of the remote controller which occurs when starting the voice function. For example, when the slew rate of the switch starting the voice function increases, the drop in the voltage of the battery of the remote controller will appear greater, such as from 0.2V to 0.8V. To avoid the consequence of the voice function failing, the internal resistance of the battery of the remote controller must first be determined. If the internal resistance of the battery of the remote controller remains small, the drop in the voltage when the voice function is started will be small. However, if the internal resistance of the battery of the remote controller is large, the slew rate of the switch starting the voice function should be lowered to decrease the drop in the voltage when starting the voice function. Thus, the possibility of the remote controller resetting itself can be reduced. In the embodiment, the internal resistance of the battery of the remote controller influences the drop in the voltage which occurs when starting the voice function. The internal resistance of the battery of the remote controller is further proportional to the drop in the voltage regularly appearing in standby mode. Thus, the internal resistance of the battery of the remote controller can be determined by examining the drops in the voltage regularly appearing in standby mode. In the embodiment, the block S20 comprises in detail: (a): sampling a voltage of the battery of the remote controller within a timing period from a sampling start time t1 at a first preset sampling interval t2 in standby mode. In the embodiment, the drop in the voltage of the battery appears regularly at fixed intervals. In the embodiment, the fixed interval is one second. Thus, the timing period T of the voltage of the battery is one second.
In the embodiment, the first preset sampling interval t2 is one millisecond. In the embodiment, the sampling start time t1 can be 0 seconds. It can be understood that the sampling start time t1 is not limited to 0 seconds; for example, it can be one second, or the like. For example, when the method starts as the remote controller enters standby mode, the sampling start time t1 is 0 seconds; when the method starts 10 seconds after the remote controller enters standby mode, or in response to the user pressing a button after 10 seconds, the sampling start time t1 is 10 seconds, or the like. For example, when the sampling start time t1 is 0 seconds, the timing period is the first timing period, which is from 0 to one second. The next timing period is thus from one to two seconds, and so on. As FIG. 3A shows, for example, when sampling the voltage of the battery of the remote controller in standby mode for the period of 0 to one second, 1000 samples can be taken. In the embodiment, the regular appearance of the drop in the voltage can be, for example, a drop in the voltage appearing in the first timing period at 0.5 seconds and another drop in the voltage appearing in the second timing period at 1.5 seconds. The appearing time duration of each drop in the voltage is 100 microseconds. The drops in the voltage within the 100-microsecond appearing time duration may be different; as shown in FIG. 3A, the drop in the voltage in the appearing time duration of 100 microseconds at point A is different from the drop in the voltage in the appearing time duration of 100 microseconds at point B. (b): determining whether a drop in the voltage has occurred within the sampled voltages. In the embodiment, determining whether a drop in the voltage has occurred within the sampled voltages can include: determining whether one voltage of the sampled voltages is different from the remainder of the sampled voltages by a predetermined value; determining that a drop in the voltage has occurred within the sampled voltages if there is one voltage different from the remainder of the sampled voltages by the predetermined value; and determining that no drop has occurred within the sampled voltages when no voltage different from the remainder of the sampled voltages is found. For example, when the sampled voltages are each 3V, it can be determined that no drop in the voltage has occurred within the sampled voltages. When one voltage of the sampled voltages is 2.3V and the remainder of the sampled voltages are each 3V, it can be determined that a drop in the voltage has occurred within the sampled voltages. In the embodiment, the appearing time duration of the drop in the voltage may be very small relative to the overall timing period of the voltage. It may therefore be difficult to locate such a drop in the voltage within the timing period; namely, there is a low probability that a drop in the voltage will be captured within a given timing period. (c): if no drop in the voltage has occurred in the sampled voltages, sampling the voltage of the battery of the remote controller within a next timing period from a next sampling start time t3 at the first preset sampling interval t2 in standby mode until a drop in the voltage has occurred, wherein the next sampling start time t3 is (t1+T+Δt1), where T is a timing period of the voltage of the battery of the remote controller, and Δt1 is a first time duration.
For example, when no drop in the voltage has occurred in the sampled voltages within the first timing period, the voltage of the battery is sampled within the second timing period from a next sampling start time to form another 1000 samples. Then, the method determines whether a drop in the voltage exists in the samples within the second timing period in the manner aforesaid. When no drop in the voltage exists within the second timing period, the method samples the voltage within a third timing period from a third sampling start time to obtain another 1000 samples. Step (c) stops when a drop in the voltage occurs in the 1000 samples within the third timing period. In the embodiment, the first time duration Δt1 can be one half of the appearing time duration of the drop in the voltage, namely, 50 microseconds. This can be varied to other values, for example, 25 microseconds, 12.5 microseconds, or the like. For example, when the timing period is one second and the sampling start time t1 is 0 seconds, the next sampling start time t3 can be equal to a total of 0 seconds, one second, and 50 microseconds, namely, 1.00005 seconds. Since the ratio of the first preset sampling interval t2 to the first time duration Δt1 is equal to 20, when no drop in the voltage exists in the new samples within the second timing period, the next sampling start time will be a total of 1.00005 seconds, one second, and 50 microseconds, namely, 2.0001 seconds, and so on. Thus, the drop in the voltage may not be sampled within the third timing period as in the aforesaid example; it may instead be found within the twentieth timing period. (d): determining an appearing time t4 of the drop in the voltage in the sampled voltages in the timing period in which the drop in the voltage has occurred. For example, when a drop in the voltage has occurred within the first sample of the samples taken within the fifth second, the method determines that the appearing time t4 of the drop in the voltage in the sampled voltages in that timing period is 4.0002 seconds. (e): sampling a subsequent voltage of the battery of the remote controller from a start time t5 at a second preset sampling interval t6 in standby mode to form a number of sampled voltages until the number of the samples is N, wherein the start time t5 is equal to a total of the appearing time t4 of the drop in the sampled voltages and the second preset sampling interval t6. The second preset sampling interval t6 is equal to a total of the timing period T and a second time duration Δt2, N is a positive integer, and a product of (N−1) and the second time duration Δt2 is greater than or equal to the appearing time duration of each of the drops in the voltage in the samples. In the embodiment, the second time duration Δt2 is less than the first time duration Δt1. Since drops in the voltage appear regularly in the voltage of the battery, after the drop in the voltage is positioned in the measured voltage of the battery, the subsequent drops in the voltage appear at the same time in each timing period. In the embodiment, the appearing time duration of each drop in the voltage is 100 microseconds, and the drop in the voltage at each point in the appearing time duration may be different; thus, determining the drop in the voltage of the battery first requires that the number N be determined.
For example, when the appearing time duration of each drop in the voltage is 100 microseconds and the second time duration Δt2 is 3 microseconds, the ratio of the appearing time duration of each drop in the voltage to the second time duration Δt2 is about 33.33; thus N is equal to a total of thirty-four and one, namely thirty-five. When the appearing time duration of each drop in the voltage is 99 microseconds and the second time duration Δt2 is 3 microseconds, the ratio of the appearing time duration of each drop in the voltage to the second time duration Δt2 is about 33; thus N is equal to a total of thirty-three and one, namely thirty-four. In the embodiment, step (e) can be performed, for example, as follows: when the second time duration Δt2 is 3 microseconds, the appearing time duration of each drop in the voltage is 100 microseconds, the timing period in which the drop was found is the twentieth, and the timing period of the voltage of the battery is one second, the method samples the voltage from 21 seconds, every 1.000003 seconds, to form 35 sampled voltages. In the embodiment, step (e) includes: sampling the subsequent voltage of the battery of the remote controller from the start time t5 at the second preset sampling interval t6 in standby mode to form a number of samples, wherein the start time t5 is equal to a total of the appearing time t4 of the drop in the sampled voltages and the second preset sampling interval t6, and the second preset sampling interval t6 is equal to a total of the timing period T and the second time duration Δt2; determining whether a voltage of a next sample is equal to a voltage of the previous sample; continuously sampling the voltage of the battery of the remote controller at the second preset sampling interval t6 in standby mode until the number of samples is N and a voltage of a next sample is not equal to a voltage of the previous sample; determining that a time lapse is (T−(N−1)×Δt2) when a voltage of a next sample is equal to a voltage of the previous sample; updating a sampling time of the next sample to a total of the time lapse and the time previously used to sample the next sample; and continuously sampling the subsequent voltage of the battery of the remote controller from the updated sampling time at the second preset sampling interval t6 in standby mode until the number of the samples is N. For example, as shown in FIG. 4A, when a drop in the voltage has occurred within the twentieth second, the method samples the voltage e1 within the twenty-first second at a fixed interval, for example 1.000003 seconds. When a voltage of the next sample is equal to a voltage of the previous sample, the method samples the voltage from 21 seconds to 55 seconds every 1.000003 seconds. Referring further to FIG. 4B, all the samples are shown within the appearing time duration of one drop in the voltage. In the fifty-first second, the voltage of the next sample e21 is equal to the voltage of the previous sample e20 taken within the fiftieth second. The method determines that the time lapse for e21 is (1−34×0.000003) seconds, namely, 999898 microseconds, and updates the sampling time of e21 by moving it forward by the time lapse to sample e21. Next, the method continuously samples the voltage every 1.000003 seconds. Thus, the samples can represent the drop in the voltage across its appearing time duration. (f): determining a drop in the voltage for each of the samples. In the embodiment, step (f) includes: determining a standard voltage; and determining that the drop in the voltage of each sample is equal to the value obtained by subtracting the voltage of the sample from the standard voltage.
In the embodiment, the method determines that the standard voltage is equal to the largest voltage of the samples. For example, when the standard voltage is 3V and the voltage of one sample is 2.8V, the method determines that the drop in the voltage is equal to (3V−2.8V), namely 0.2V. (g): determining that the drop in the voltage of the battery of the remote controller in standby mode is equal to the largest drop in the voltage among the samples. For example, when the samples are e1, e2, . . . , e34, and e35, and the largest drop in the voltage among the samples e1, e2, . . . , e34, and e35 is 0.8V, the method determines that the drop in the voltage of the battery of the remote controller is the largest drop in the voltage, namely, 0.8V. In the embodiment, the largest drop in the voltage among the samples corresponds to the smallest voltage among the samples. In the embodiment, although the probability that a drop in the voltage occurs within the timing period is low, the drop in the voltage may nonetheless occur within the timing period; thus, in step (b), when a drop in the voltage has occurred in the samples within the timing period, the procedure goes to step (d). At block S21, determining a voltage of the battery of the remote controller in standby mode. In the embodiment, the block S21 includes: activating an ADC to sample the voltage of the battery of the remote controller at preset intervals in standby mode until a preset number of voltages has been sampled, for example, activating the ADC to sample the voltage of the battery of the remote controller every five seconds in standby mode to sample ten voltages; determining a largest voltage among the ten voltages; determining a smallest voltage among the ten voltages; determining an average value of the voltages excluding the largest voltage and the smallest voltage; and determining that the voltage of the battery of the remote controller is equal to the average value. At block S22, receiving the voice function command. In the embodiment, the remote controller receives such a command when the voice key of the remote controller is operated by the user in standby mode. At block S23, determining whether the drop in the voltage of the battery of the remote controller in standby mode is greater than or equal to a preset value. If the drop in the voltage of the battery of the remote controller in standby mode is less than the preset value, the procedure goes to step S24. If the drop in the voltage of the battery of the remote controller in standby mode is greater than or equal to the preset value, the procedure goes to step S27. In the embodiment, the preset value is 100 mV. The preset value can be any other suitable value. The drop in the voltage is proportional to the internal resistance of the battery, and the internal resistance of the battery is inversely proportional to the quality of the battery. For example, when the drop in the voltage is lower, the internal resistance of the battery is lower and the quality of the battery is better, and thus the probability of resetting the operating system of the remote controller when starting the voice function is lower. When the drop in the voltage is higher, the internal resistance of the battery is higher and the quality of the battery is worse, and thus the probability of resetting the operating system of the remote controller when starting the voice function is higher.
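As an illustration of block S21 and of steps (f) and (g), the following Python sketch averages the sampled voltages after discarding the largest and smallest readings, and takes the largest difference from the standard (largest) voltage as the standby drop. The sample values are hypothetical.

```python
# Illustrative sketch, not the patent's firmware: determining the standby
# battery voltage (block S21) by averaging sampled readings with the largest
# and smallest values discarded, and determining the standby voltage drop
# (steps (f) and (g)) as the largest difference between the highest sample
# and any sample. Sample counts and values below are hypothetical.

def battery_voltage(samples_v):
    """Average of the sampled voltages, excluding the max and min readings."""
    if len(samples_v) < 3:
        raise ValueError("need at least three samples")
    trimmed = sorted(samples_v)[1:-1]   # drop the smallest and the largest
    return sum(trimmed) / len(trimmed)

def standby_voltage_drop(drop_samples_v):
    """Largest drop among samples taken across one drop's appearing duration."""
    standard = max(drop_samples_v)              # step (f): standard voltage
    drops = [standard - v for v in drop_samples_v]
    return max(drops)                           # step (g): largest drop

# Example with ten periodic readings and 35 samples across a voltage drop.
readings = [3.0, 3.0, 2.9, 3.0, 3.1, 3.0, 2.95, 3.0, 3.05, 3.0]
drop_window = [3.0 - 0.8 * i / 34 for i in range(35)]    # dips to 2.2 V
print(round(battery_voltage(readings), 3))               # ~3.0 V
print(round(standby_voltage_drop(drop_window), 3))       # 0.8 V
```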
In the embodiment, when the drop in the voltage is less than the preset value, the quality of the battery is better, and thus the signal of the remote controller does not need to be modulated. When the drop in the voltage is greater than the preset value, the quality of the battery is worse, and the probability of resetting the operating system of the remote controller when starting the voice function is higher. Thus, the signal of the remote controller needs to be modulated. Then, the probability of resetting the operating system of the remote controller will be lower. At block S24, determining whether the voltage is less than or equal to a smallest value in a preset range. If the voltage is less than or equal to the smallest value in the preset range, the procedure goes to step S25. If the voltage is greater than the smallest value in the preset range, the procedure goes to step S26. In the embodiment, the preset range is 2V-2.4V. The preset range can be varied according to the type of the chip of the remote controller. Since the quality of the battery is better in this branch, if the voltage is less than or equal to the smallest value in the preset range, it represents that the battery should be charged or replaced. If the voltage is greater than the smallest value in the preset range, the battery voltage is high, and the probability of resetting the operating system of the remote controller when starting the voice function is low. At this moment, therefore, the voice function can be activated. At block S25, generating a prompt indicating that the battery is low, to prompt the user to charge or replace the battery. At block S26, activating the voice function of the remote controller. At block S27, determining whether the voltage is within a preset range. If the voltage is within the preset range, the procedure goes to step S28. If the voltage is greater than a largest value in the preset range, the procedure goes to step S26. If the voltage is less than a smallest value in the preset range, the procedure goes to step S25. In the embodiment, the preset range is 2V-2.4V. The preset range can be varied according to the type of the chip of the remote controller. Since the quality of the battery is worse in this branch, if the voltage is within the preset range, the signal of the remote controller needs to be modulated; for example, a duty cycle of a pulse signal which is configured to activate the switch needs to be modulated to decrease the slew rate of the switch starting the voice function. Thus, the drop in the voltage when starting the voice function is decreased. If the voltage is greater than the largest value in the preset range, namely, the voltage is greater than 2.4V, it represents that the voltage of the battery is high enough for activation of the voice function of the remote controller. If the voltage is less than the smallest value in the preset range, namely, the voltage is less than 2V, it represents that the voltage of the battery is low, and the battery should be charged or replaced. At block S28, regulating a duty cycle of the pulse signal which is configured to activate the voice function of the remote controller according to the voltage. In the embodiment, an enable pin of the chip of the remote controller receives the pulse signal. The pulse signal is configured to control the slew rate of the switch starting the voice function.
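The branching of blocks S23 through S28 can be sketched roughly as follows. The 100 mV threshold and the 2V-2.4V range come from the description above; the linear interpolation between the two example points of the preset relationship, and the action names, are assumptions made for illustration only.

```python
# Rough sketch of the decision flow in blocks S23-S28, under assumptions
# stated in the lead-in: preset drop threshold of 100 mV, preset voltage
# range of 2.0-2.4 V, and a hypothetical linear mapping from voltage to
# pulse-signal duty cycle. Action names are illustrative placeholders.

DROP_THRESHOLD_V = 0.1          # 100 mV
RANGE_LOW_V, RANGE_HIGH_V = 2.0, 2.4

def duty_cycle_for(voltage_v):
    """Assumed preset relationship between voltage and duty cycle."""
    # Linear interpolation between (2.0 V, 0.25) and (2.4 V, 0.5),
    # consistent with the two example points given in the text.
    frac = (voltage_v - RANGE_LOW_V) / (RANGE_HIGH_V - RANGE_LOW_V)
    return 0.25 + frac * (0.5 - 0.25)

def on_voice_command(standby_drop_v, battery_v):
    """Return the action the remote controller takes for a voice command."""
    if standby_drop_v < DROP_THRESHOLD_V:           # S23: good battery
        if battery_v <= RANGE_LOW_V:                # S24 -> S25
            return "prompt_low_battery"
        return "activate_voice_function"            # S26
    # S23: large drop, battery quality is worse -> S27
    if battery_v < RANGE_LOW_V:
        return "prompt_low_battery"                 # S25
    if battery_v > RANGE_HIGH_V:
        return "activate_voice_function"            # S26
    duty = duty_cycle_for(battery_v)                # S28: slow the switch
    return f"regulate_duty_cycle_to_{duty:.2f}_then_activate"

print(on_voice_command(standby_drop_v=0.05, battery_v=3.0))   # activate
print(on_voice_command(standby_drop_v=0.8, battery_v=2.4))    # duty 0.50
```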
In the embodiment, the block S28 includes: regulating a duty cycle of the pulse signal which is configured to activate the voice function of the remote controller according to the voltage and a preset relationship between the voltage and the duty cycle of the pulse signal. For example, when the preset relationship includes the voltage being 2.4V and the duty cycle of the pulse signal being 0.5, and also includes the voltage being 2V and the duty cycle of the pulse signal being 0.25, the method regulates the duty cycle of the pulse signal to 0.5 when the voltage is 2.4V. FIG. 5 illustrates a block diagram of an embodiment of a remote controller controlling system 10. In the embodiment, the remote controller controlling system 10 is applied in the remote controller 1. The remote controller 1 includes a communication unit. The communication unit can be a BLUETOOTH unit, or the like. The remote controller controlling system 10 can be one or more programs. The one or more programs are stored in the storage device, and executed by the at least one processor to accomplish the required function. In the embodiment, the one or more programs can be divided into one or more modules/units, for example, a voltage drop determining module 101, a voltage determining module 102, a receiving module 103, a voltage drop comparing module 104, a voltage comparing module 105, a prompting module 106, a processing module 107, and a regulating module 108. The voltage drop determining module 101 is configured to detect a drop in a voltage of a battery of the remote controller in standby mode. The voltage determining module 102 is configured to determine a voltage of the battery of the remote controller. The receiving module 103 is configured to receive a voice function command. The voltage drop comparing module 104 is configured to determine whether the drop in the voltage of the battery of the remote controller in standby mode is greater than or equal to a preset value. The voltage comparing module 105 is configured to determine whether the voltage is less than or equal to a smallest value in a preset range when the drop in the voltage of the battery of the remote controller in standby mode is less than the preset value, and to determine whether the voltage is within the preset range when the drop in the voltage of the battery of the remote controller in standby mode is greater than or equal to the preset value. The prompting module 106 is configured to generate a prompt indicating that the battery is low, to prompt the user to charge or replace the battery, when the drop in the voltage of the battery of the remote controller in standby mode is less than the preset value and the voltage is less than or equal to the smallest value in the preset range, or when the drop in the voltage of the battery of the remote controller in standby mode is greater than or equal to the preset value and the voltage is less than the smallest value in the preset range. The processing module 107 is configured to activate the voice function of the remote controller when the drop in the voltage of the battery of the remote controller in standby mode is less than the preset value and the voltage is greater than the smallest value in the preset range, or after the duty cycle of the pulse signal which is configured to activate the voice function of the remote controller has been regulated.
The regulating module 108 is configured to regulate a duty cycle of the pulse signal which is configured to activate the voice function of the remote controller according to the voltage, when the drop in the voltage of the battery of the remote controller in standby mode is greater than or equal to the preset value and the voltage is within the preset range. The at least one processor can be one or more central processing units, or it can be one or more other universal processors, digital signal processors, application specific integrated circuits, field-programmable gate arrays or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, and so on. The at least one processor can be a microprocessor, any conventional processor, or the like. The at least one processor can be a control center of the remote controller, using a variety of interfaces and lines to connect various parts of the entire remote controller. The storage device stores the one or more programs and/or modules/units. The at least one processor can run or execute the one or more programs and/or modules/units stored in the storage device, call out the data stored in the storage device, and accomplish the various functions of the remote controller, for example, apply the methods hereinbefore described. The storage device may include a program area and a data area. The program area can store an operating system, and applications that are required for at least one function, such as sound playback features, image playback functions, and so on. The data area can store data created according to the use of the remote controller, such as video data, audio data, photobook data, and so on. In addition, the storage device can include random access memory and non-transitory storage, such as a hard disk, memory, a plug-in hard disk, a smart media card, a secure digital card, a flash card, at least one disk storage device, flash memory, or other non-transitory storage medium. If the integrated module/unit of the remote controller is implemented in the form of or by means of a software functional unit and is an independent product sold or used, all parts of the integrated module/unit of the remote controller may be stored in a computer-readable storage medium. The remote controller can use one or more programs to control the related hardware to accomplish all parts of the methods of this disclosure. The one or more programs can be stored in a computer-readable storage medium. The one or more programs can accomplish the blocks of the exemplary method when executed by the at least one processor. The one or more stored programs can include program code. The program code can be in the form of source code, object code, an executable code file, or some intermediate form. The computer-readable storage medium may include any entity or device capable of recording and carrying the program code, recording media, a USB flash disk, a mobile hard disk, a disk, a computer-readable storage medium, read-only memory, random access memory, electrical carrier signals, telecommunications signals, and a software distribution package. The content stored in the computer-readable storage medium can be increased or decreased in accordance with legislative requirements and regulations of patent practice jurisdictions; for example, in some jurisdictions, legislation and patent practice stipulate that a computer-readable storage medium does not include electrical carrier signals or telecommunications signals.
In the present disclosure, it should be understood that the disclosed methods and electronic devices can be employed or achieved in other ways. The electronic device exemplified is only illustrative. In addition, function units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units can be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of hardware in addition to a software function unit. It should be emphasized that the above-described embodiments of the present disclosure, including any particular embodiments, are merely possible examples of implementations, set forth for a clear understanding of the principles of the disclosure. Many variations and modifications can be made to the above-described embodiment(s) of the disclosure without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
32,230
11862164
DETAILED DESCRIPTION The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. As one skilled in the art will appreciate, embodiments of the invention may be embodied as, among other things: a method, system, or set of instructions embodied on one or more computer readable media. Accordingly, the embodiments may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. In one embodiment, the invention takes the form of a computer-program product that includes computer-usable instructions embodied on one or more computer readable media, as discussed further with respect to FIGS. 1A-1B. Accordingly, at a high level, natural language processing/understanding (NLP/NLU) may be used to identify and extract, from a voice conversation, one or more clinical concepts. Once the clinical concepts are identified and extracted, one or more clinical ontologies may be used to identify one or more clinical concepts related to the clinical conditions identified from the voice conversation. The ontologies may be used to intelligently classify the one or more clinical conditions/concepts from the voice conversation into one or more classification groups. A scribe output may also be generated by the system/scribe. The scribe output may include, but is not limited to, a transcription of the voice conversation, documents, documentation items (to be documented into a patient's record), orders/action items, and the like. In embodiments, the scribe output is validated with corroborating evidence from the patient's EHR, for example. For instance, a patient may say they are currently on no medications. The clinical concept "medications" is identified and the negative response is noted by the computerized scribe. Upon validation, however, the scribe notes that there are medications listed in the patient's EHR (or any other validating source). The system would identify the error and notify the user. The notification can be a real-time notification/alert (e.g., a pop-up alert) including a visual indicator that would indicate an error (e.g., an exclamation mark, changing the font color or type, highlighting, etc.), an audio notification, and the like. Further, in addition to the notification, the contradicting information (e.g., the output of the scribe and the non-corroborating information from the validation source) may be provided in association with the notification.
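The medication example above can be sketched as a simple consistency check between a scribe assertion and the patient's record. The data shapes and field names below are illustrative assumptions, not an actual EHR schema or the disclosed system's interfaces.

```python
# Hedged sketch of the validation idea described above: compare an assertion
# extracted by the scribe (e.g., the patient reports no current medications)
# against a corroborating source such as the patient's EHR, and raise a
# notification when they contradict. Field names are assumed for illustration.

from dataclasses import dataclass

@dataclass
class ScribeAssertion:
    concept: str        # e.g., "medications"
    negated: bool       # True if the patient denied the concept

def validate_against_ehr(assertion, ehr_record):
    """Return a notification dict if the EHR contradicts the assertion."""
    ehr_items = ehr_record.get(assertion.concept, [])
    if assertion.negated and ehr_items:
        return {
            "type": "possible_error",
            "concept": assertion.concept,
            "scribe_output": f"patient denies {assertion.concept}",
            "ehr_evidence": ehr_items,   # surface the contradicting information
        }
    return None

# Example: the scribe heard "no medications" but the EHR lists two.
ehr = {"medications": ["lisinopril", "metformin"], "allergies": []}
note = validate_against_ehr(ScribeAssertion("medications", negated=True), ehr)
print(note)
```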
In this way, the voice conversation and the patient's EHR are utilized to identify and incorporate information about the clinical conditions or concepts into the patient's EHR, identify potential errors generated by the scribe during the encounter, identify potential errors collected from the voice conversation, generate documents and/or documentation items from the voice conversation, and the like. As used herein, the term "EHR" or "longitudinal EHR" refers to an electronic health record for an individual with documentation spanning across multiple encounters for the individual or at least one encounter prior to the current one for which the current electronic document is created. Accordingly, the documentation within the longitudinal EHR may be recorded at different times. The longitudinal EHR may also comprise at least some structured data. The data therein may be time and date stamped such that, in addition to providing the substance of those previous encounters, the longitudinal EHR provides a time line of the patient's care and, in some instances, one or more time series of physiological variables and clinical concepts related to the patient. Accordingly, one aim of embodiments of this disclosure relates to applying NLP/NLU systems and clinical ontologies to voice conversations to provide validated clinical outputs. Current technologies fail to capture, recognize, or incorporate into structured, usable data, valuable longitudinal patient information from a voice conversation. The present disclosure seeks to extract information from a voice conversation, using NLP/NLU and a clinical ontology, and utilize information from the patient's electronic health record to validate the output. Embodiments perform NLP/NLU on unstructured voice data to parse and extract discrete clinical elements, including a clinical condition associated with the patient. Additional information may be parsed from the voice conversation such as the number of speakers, the role of the speakers, who is speaking at what time, a specialty of the speaker, and the like. Additionally, the system can apply a time function such that concepts identified are classified as a past issue or a present issue (e.g., "I had some stomach pain but it seems better now. Headaches are still a concern" would result in a past stomach pain problem and a present headache problem). A clinical ontology associated with the clinical condition that is extracted from the voice conversation is retrieved, and one or more related clinical concepts (i.e., related to the clinical conditions), such as clinical findings, symptoms, problems, observations, medications, and procedures, are identified using the clinical ontology. The information extracted from the voice conversation is then classified into one or more classification groups. Today, well-formatted documents are the sources from which clinical concepts are extracted. This makes it very easy to identify a problem, a reason for a visit, etc., because they are organized in a document based on those classifications. This cannot be said for voice conversations. The voice data is unstructured and subject to additional difficulties associated with conversations that do not apply to documents, such as slang terminology, interruptions, unfinished sentences, dialects, speaking preferences or differences, etc. Existing technology is unable to capture context from clinical voice conversations for at least these reasons.
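As a toy illustration of the past-versus-present time function, the following sketch tags mentioned concepts using a handful of tense cues. A real embodiment would rely on NLP/NLU models; the concept list and cue patterns here are hypothetical.

```python
# Very simplified sketch of the past-versus-present "time function" described
# above. A production system would rely on NLP/NLU models; this toy version
# uses tense cues and a small concept list, both chosen purely for illustration.

import re

CONCEPTS = ["stomach pain", "headaches", "cough"]
PAST_CUES = re.compile(r"\b(had|was|seems better now|resolved|no longer)\b", re.I)

def classify_time(sentence, concepts=CONCEPTS):
    """Map each concept mentioned in the sentence to 'past' or 'present'."""
    results = {}
    for concept in concepts:
        if concept in sentence.lower():
            results[concept] = "past" if PAST_CUES.search(sentence) else "present"
    return results

transcript = [
    "I had some stomach pain but it seems better now.",
    "Headaches are still a concern.",
]
for sentence in transcript:
    print(classify_time(sentence))
# {'stomach pain': 'past'}
# {'headaches': 'present'}
```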
Furthermore, clinical context is vastly different from the typical "utterances" that are captured by today's voice assistants. For instance, there are only a few ways to ask "what is the weather for today" and the response is predetermined, but there are numerous ways to ask "how are you feeling today" and even more ways to respond to that question. Furthermore, many terms used in clinical conversations may refer to many different things. For instance, "cold" may refer to a chill (i.e., temperature) or an upper respiratory infection, which also goes by many different names. Even once a correct term is identified in a clinical conversation, it can then be associated with many different options. For example, "pneumonia" may trigger numerous coding options in ICD-10, as shown in the table below.

J18      Pneumonia, unspecified organism              Non-Billable
J18.0    Bronchopneumonia, unspecified organism       Billable
J18.1    Lobar pneumonia, unspecified organism        Billable
J18.2    Hypostatic pneumonia, unspecified organism   Billable
J18.8    Other pneumonia, unspecified organism        Billable
J18.9    Pneumonia, unspecified organism              Billable

In addition to the many different types of pneumonia triggered by the use of the word "pneumonia", there are several exceptions as well. For instance, there are special codes for aspiration pneumonia due to anesthesia during pregnancy (use Code 029), aspiration pneumonia due to solids and liquids (use Code J69), congenital pneumonia (use Code P23.0), and the like. The list goes on with various coding options for pneumonia. While coding is not the only application for the present invention (far from it), it is indicative of the vast vocabulary associated with clinical settings and clinical concepts. Besides the expansive clinical vocabulary generally, many situations call for specific terms and will result in different concepts. For instance, a conversation in an oncology setting is going to be different than a conversation in a pathology setting. This is yet another example of the expansive clinical vocabulary that must be processed correctly to obtain accurate outputs. Thus, conventional speech-to-text technologies are not capable of extracting context from clinical voice conversations, at least because they fail to integrate voice conversations or commands with a patient's electronic health record (EHR). Additionally, current speech-to-text technologies fail to capture, recognize, and transcribe voice conversations into structured, usable data that may be incorporated into the patient's EHR. Referring now to the drawings in general and, more specifically, referring to FIG. 1A, an aspect of an operating environment 100 is provided suitable for practicing an embodiment of this disclosure. Certain items in block-diagram form are shown more for being able to reference something consistent with the nature of a patent than to imply that a certain component is or is not part of a certain device. Similarly, although some items are depicted in the singular form, plural items are contemplated as well (e.g., what is shown as one data store might really be multiple data-stores distributed across multiple locations). But showing every variation of each item might obscure aspects of the invention. Thus, for readability, items are shown and referenced in the singular (while fully contemplating, where applicable, the plural).
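To make the fan-out concrete, the sketch below maps a mentioned term to candidate billable codes using only the J18 rows reproduced above. The qualifier-matching heuristic is a deliberate simplification; actual code selection depends on the full clinical context.

```python
# Sketch of why a single spoken term fans out into many coding options: a
# lookup from a mentioned concept to candidate ICD-10 codes, populated only
# with the J18 rows from the table above. The matching rule is a toy
# simplification, not a coding algorithm.

PNEUMONIA_CANDIDATES = {
    "J18":   ("Pneumonia, unspecified organism", False),   # non-billable header
    "J18.0": ("Bronchopneumonia, unspecified organism", True),
    "J18.1": ("Lobar pneumonia, unspecified organism", True),
    "J18.2": ("Hypostatic pneumonia, unspecified organism", True),
    "J18.8": ("Other pneumonia, unspecified organism", True),
    "J18.9": ("Pneumonia, unspecified organism", True),
}

def candidate_codes(utterance):
    """Return billable candidate codes whose description matches the utterance."""
    text = utterance.lower()
    hits = []
    for code, (description, billable) in PNEUMONIA_CANDIDATES.items():
        if not billable:
            continue
        qualifier = description.split(",")[0].lower()   # e.g., "lobar pneumonia"
        if qualifier.split()[0] in text or qualifier in text:
            hits.append((code, description))
    return hits

print(candidate_codes("sounds like lobar pneumonia"))
# [('J18.1', 'Lobar pneumonia, unspecified organism'),
#  ('J18.9', 'Pneumonia, unspecified organism')]
```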
Further, as with operating environment 100, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. As described above, some embodiments may be implemented as a system, comprising one or more computers and associated network and equipment, upon which a method or computer software application is executed. Accordingly, aspects of the present disclosure may take the form of an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module" or "system." Further, the methods of the present disclosure may take the form of a computer application embodied in computer readable media having machine-readable application software embodied thereon. In this regard, a machine-readable storage medium may be any tangible medium that can contain, or store, a software application for use by the computing apparatus. As shown in FIG. 1A, example operating environment 100 provides an aspect of a computerized system for compiling and/or running an embodiment for providing natural language processing or understanding of voice conversations. Computer application software for carrying out operations for system components or steps of the methods of the present disclosure may be authored in any combination of one or more programming languages, including an object-oriented programming language such as Java, Python, R, or C++ or the like. Alternatively, the application software may be authored in any or a combination of traditional non-object-oriented languages, such as C or Fortran. The application may execute entirely on the user's computer as an independent software package, or partly on the user's computer in concert with other connected co-located computers or servers, or partly on the user's computer and partly on one or more remote computers, or entirely on a remote computer or collection of computers. In the latter cases, the remote computers may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, via the internet using an Internet Service Provider or ISP) or an arbitrary, geographically-distributed, federated system of computers, such as a cloud-based system. Moreover, the components of operating environment 100, the functions performed by these components, or the services carried out by these components may be implemented at appropriate abstraction layer(s), such as the operating system layer, application layer, hardware layer, etc., of the computing system(s). Alternatively, or in addition, the functionality of these components and/or the embodiments described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. Additionally, although functionality is described herein with regards to specific components shown in example operating environment 100, it is contemplated that, in some embodiments, functionality of these components can be shared or distributed across other components.
Environment 100 includes one or more electronic health record (EHR) systems, such as EHR system(s) 160 communicatively coupled to network 175, which is communicatively coupled to computer system 120. In some embodiments, components of environment 100 that are shown as distinct components may be embodied as part of or within other components of environment 100. For example, EHR system(s) 160 may comprise one or a plurality of EHR systems such as hospital EHR systems, health information exchange EHR systems, clinical genetics/genomics systems, ambulatory clinic EHR systems, psychiatry/neurology EHR systems, insurance, collections or claims records systems, and may be implemented in computer system 120. Similarly, EHR system 160 may perform functions for two or more of the EHR systems (not shown). Continuing with FIG. 1A, network 175 may comprise the Internet, and/or one or more public networks, private networks, or other communications networks, such as a cellular network or similar network(s), for facilitating communication among devices connected through the network. In some embodiments, network 175 may be determined based on factors such as the source and destination of the information communicated over network 175, the path between the source and destination, or the nature of the information. For example, intra-organization or internal communication may use a private network or virtual private network (VPN). Moreover, in some embodiments, items shown communicatively coupled to network 175 may be directly communicatively coupled to other items shown communicatively coupled to network 175. In some embodiments, operating environment 100 may include a firewall (not shown) between a first component and network 175. In such embodiments, the firewall may reside on a second component located between the first component and network 175, such as on a server (not shown), or reside on another component within network 175, or may reside on or as part of the first component. Embodiments of EHR system 160 include one or more data stores of health-related records, which may be stored on storage 121, and may further include one or more computers or servers that facilitate the storing and retrieval of the health records. In some embodiments, EHR system 160 and/or other records systems may be implemented as a cloud-based platform or may be distributed across multiple physical locations. EHR system 160 may further include record systems that store real-time or near real-time patient (or user) information, such as wearable sensors or monitors, bedside, or in-home patient monitors or sensors, for example. Although FIG. 1A depicts an example EHR system 160, it is contemplated that an embodiment relies on natural language processing (NLP) application 140 for storing and retrieving patient record information. Example operating environment 100 further includes a user/clinician interface 142 and NLP application 140, each communicatively coupled through network 175 to an EHR system 160. Although environment 100 depicts an indirect communicative coupling of interface 142 and application 140 with EHR system 160 through network 175, it is contemplated that an embodiment of interface 142 or application 140 may be communicatively coupled to EHR system 160 directly.
An embodiment of NLP application 140 comprises a software application or set of applications (which may include programs, routines, functions, or computer-performed services) residing on a client computing device, such as a personal computer, laptop, smartphone, tablet, or mobile computing device, or application 140 may reside on a remote server communicatively coupled to a client computing device. In an embodiment, application 140 is a Web-based application or applet and may be used to provide or manage user services provided by an embodiment of the technologies described herein, which may be used to provide, for example, semantic analysis on voice conversations. In some embodiments, application 140 includes or is incorporated into a computerized decision support tool. Further, some embodiments of application 140 utilize user/clinician interface 142. In some embodiments, application 140 and/or interface 142 facilitate accessing and receiving information from a user or healthcare provider about a specific patient or set of patients, according to the embodiments presented herein. Embodiments of application 140 also may facilitate accessing and receiving information from a user or healthcare provider about a specific patient, caregiver, or population, including historical data; healthcare resource data; variable measurements; time series information; reference information, including clinical ontologies; and relational databases, as described herein; or other health-related information, and facilitate the display of results of the enhanced language processing as described herein. In some embodiments, user/clinician interface 142 may be used with application 140, such as described above. One embodiment of user/clinician interface 142 comprises a user interface that may be used to facilitate access by a user (including a healthcare provider or patient) to an assigned clinician, patient, or patient population. One embodiment of interface 142 takes the form of a graphical user interface and application, which may be embodied as a software application (e.g., NLP application 140) operating on one or more mobile computing devices, tablets, smartphones, front-end terminals in communication with back-end computing systems, laptops, or other computing devices. In an embodiment, the application includes the PowerChart® software manufactured by Cerner Corporation. In an embodiment, interface 142 includes a Web-based application, which may take the form of an applet or app, or a set of applications usable to manage user services provided by an embodiment of the technologies described herein. In some embodiments, interface 142 may facilitate providing output of the scribe; providing instructions or outputs of other actions described herein; providing notifications; and logging and/or receiving other feedback from the user/caregiver. Example operating environment 100 further includes computer system 120, which may take the form of one or more servers and which is communicatively coupled through network 175 to EHR system 160 and storage 121. Computer system 120 comprises one or more processors operable to receive instructions and process them accordingly and may be embodied as a single computing device or multiple computing devices communicatively coupled to each other. In one embodiment, processing actions performed by computer system 120 are distributed among multiple locations, such as one or more local clients and one or more remote servers, and may be distributed across the other components of example operating environment 100.
For example, aspects of NLP application 140 or user/clinician interface 142 may operate on or utilize computer system 120. Similarly, a portion of computing system 120 may be embodied on user/clinician interface 142, application 140, and/or EHR system 160. In one embodiment, computer system 120 comprises one or more computing devices, such as a server, desktop computer, laptop, or tablet, a cloud-computing device or distributed computing architecture, or a portable computing device such as a laptop, tablet, ultra-mobile P.C., or a mobile phone. Embodiments of computer system 120 include computer software stack 125, which, in some embodiments, operates in the cloud, as a distributed system on a virtualization layer within computer system 120, and includes operating system 129. Operating system 129 may be implemented as a platform in the cloud and is capable of hosting a number of services, such as computational services 122. Some embodiments of operating system 129 comprise a distributed adaptive agent operating system. Embodiments of services may run as local services or may be distributed across one or more components of operating environment 100, in the cloud, on one or more personal computers or servers such as computer system 120, and/or a computing device running interface 142 or application 140. In some embodiments, interface 142 and/or application 140 operate in conjunction with software stack 125. Computational services 122 may perform statistical or computing operations such as computing functions or routines for processing of extracted information, as further described herein. Computational services 122 also may include natural language processing services (not shown) such as Discern nCode™ developed by Cerner Corporation, or similar services. In an embodiment, computational services 122 include the services or routines that may be embodied as one or more software agents or computer software routines. Computational services 122 also may include services or routines for utilizing one or more models, including logistic models. Some embodiments of the invention also may be used in conjunction with Cerner Millennium®, Cerner CareAware® (including CareAware iBus®), Cerner CareCompass®, or similar products and services. Example operating environment 100 also includes storage 121 (or data store 121), which in some embodiments includes patient data for a patient (or information for multiple patients), including raw and processed patient data; variables associated with patient diagnoses; and information pertaining to clinicians and staff, including user preferences. It is contemplated that the term "data" includes any information that can be stored in a computer-storage device or system, such as user-derived data, computer usable instructions, software applications, or other information. In some embodiments, data store 121 comprises the data store(s) associated with EHR system 160. Further, although depicted as a single storage data store, data store 121 may comprise one or more data stores, or may be in the cloud. Turning briefly to FIG. 1B, there is shown one example embodiment of computing system 180 representative of a system architecture that is suitable for computer systems such as computer system 120. Computing device 180 includes a bus 196 that directly or indirectly couples the following devices: memory 182, one or more processors 184, one or more presentation components 186, input/output (I/O) ports 188, input/output components 190, radio 194, and an illustrative power supply 192.
Bus196represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks ofFIG.1Bare shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component, such as a display device, to be an I/O component. Also, processors have memory. As such, the diagram ofFIG.1Bis merely illustrative of an exemplary computing system that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “hand-held device,” etc., as all are contemplated within the scope ofFIG.1Band reference to “computing system.” Computing system180typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing system180and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing system180. Computer storage media does not comprise signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. Memory182includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing system180includes one or more processors that read data from various entities such as memory182or I/O components190. Presentation component(s)186present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. In some embodiments, computing system180comprises radio(s)194that facilitate communication with a wireless-telecommunications network. Illustrative wireless telecommunications technologies include CDMA, GPRS, TDMA, GSM, and the like.
Radio194may additionally or alternatively facilitate other types of wireless communications including Wi-Fi, WiMAX, LTE, or other VoIP communications. As can be appreciated, in various embodiments, radio194can be configured to support multiple technologies and/or multiple radios can be utilized to support multiple technologies. I/O ports188allow computing system180to be logically coupled to other devices, including I/O components190, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The I/O components190may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition (as described in more detail below) associated with a display of the computing system180. The computing system180may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, touchscreen technology, and combinations of these, for gesture detection and recognition. Additionally, the computing system180may be equipped with accelerometers or gyroscopes that enable detection of motion. The architecture depicted inFIG.1Bis provided as one example of any number of suitable computer architectures, such as computing architectures that support local, distributed, or cloud-based software platforms, and that are suitable for supporting computer system120. Returning toFIG.1A, in some embodiments, computer system120is a computing system made up of one or more computing devices. In some embodiments, computer system120includes one or more software agents and, in an embodiment, includes an adaptive multi-agent operating system, but it will be appreciated that computer system120may also take the form of an adaptive single agent system or a non-agent system. Computer system120may be a distributed computing system, a data processing system, a centralized computing system, a single computer such as a desktop or laptop computer, or a networked computing system. In application, the systems described herein apply NLP and clinical ontologies to voice conversational sources to provide structured, usable output. Initially, a voice conversation is captured. A voice conversation can include one or more voice inputs. The voice inputs can be separated based on, for example, speaker, role of speaker, location of speaker, specialty of speaker, and the like. The voice input(s) can be captured automatically by a system that is, for instance, continuously listening. The voice input(s) can also be captured upon receiving an initiation cue to begin listening to the environment. The voice conversation (and inputs therein) may be transformed to text (e.g., transcript) using speech recognition software currently available. The transcript may be searchable. The transcript may be dynamically generated in real-time or near real-time. The conversation is collected and stitched together. Memory for speech-to-text technology is only capable of holding a predetermined amount of data, so the recordings are typically captured in smaller chunks.
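As a rough illustration of this chunking constraint, and of the segmenting-and-stitching approach that the next paragraph describes in more detail, the following sketch splits audio into short segments and carries a small overlap from the end of each segment into the next. The 15-second interval, the 1-second overlap, and the transcribe() stub are illustrative assumptions rather than details taken from the description.

```python
# Minimal sketch of segmenting audio into short, overlapping chunks so that a
# word spoken across a segment boundary is not lost. Interval and overlap
# values are assumptions for illustration only.
SEGMENT_SECONDS = 15   # configurable interval, well below the memory threshold
OVERLAP_SECONDS = 1    # tail carried into the next segment ("stitching")

def segment_audio(samples, sample_rate):
    """Split raw audio samples into short segments with a small overlap."""
    seg_len = SEGMENT_SECONDS * sample_rate
    overlap = OVERLAP_SECONDS * sample_rate
    segments, start = [], 0
    while start < len(samples):
        end = min(start + seg_len, len(samples))
        segments.append(samples[start:end])
        # the next segment begins slightly before this one ended,
        # unless the end of the recording has been reached
        start = end if end == len(samples) else end - overlap
    return segments

def transcribe_conversation(samples, sample_rate, transcribe):
    """Transcribe each short segment as it becomes available so output
    appears in near real-time rather than after a full-length chunk."""
    return " ".join(transcribe(seg) for seg in segment_audio(samples, sample_rate))
```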
There is a technological cap on the memory space, resulting in the conversation being chopped at predetermined intervals (e.g., segment thresholds). The present technology, however, segments the conversation into much smaller intervals than the predetermined segment threshold. For instance, if the memory threshold is 1 minute, the present technology segments the conversation into smaller pieces, such as 15-20 second intervals. This is a configurable period of time. By segmenting the conversation to be transcribed into much smaller parts, the output is provided much quicker. It is not ideal to wait an entire minute for output. The present technology also accounts for potential loss of conversation at segment thresholds. For example, if a recording stops at 1 min and then restarts, there is inevitably data lost in the time it takes to start and restart. The present invention stitches together various segments to avoid data loss. A period of time prior to the segment ending may be identified and added to a next segment (e.g., one second prior to the segment end time may be added to the next segment). Alternatively, the last audio spoken may be identified and the segment remaining after the last audio spoken may be added to the next segment. This information is identified to stitch to the next segment to avoid loss of audio. In other words, if a segment ends at 17 seconds, the last piece of audio where a word ended is stitched onto or added to the next segment and then transcribed. Alternatively, if a segment ends at 17 seconds, the audio after 16 seconds may be stitched onto the next segment. Once transcribed, the unstructured transcript of the voice conversation is then processed by NLP/NLU to identify/extract one or more clinical concepts from the voice conversation. Clinical concepts, as used herein, generally refers to any clinical issue associated with a clinical encounter including, but not limited to, a diagnosis, a problem, a symptom, etc. For instance, a patient stating that they have Alzheimer's disease would trigger identification of Alzheimer's as a problem or diagnosis. The one or more clinical concepts is parsed and extracted from the transcript of the voice conversation including unstructured clinical data in conversational format. Put simply, the transcript is a transcription of the spoken words of the voice conversation. There are no headings, no mention of appropriate clinical concepts or classifications to use for documentation, etc. It is a transcript of a conversation that is occurring during a patient encounter between, for instance, a patient and a provider. The transcript is not a structured document and is not provided in a structured, useable format for, for instance, documentation. In addition to extraction of clinical conditions/concepts, NLP/NLU may also be utilized to identify context within the voice conversation. Context may be identified using a role of a speaker, a number of speakers, the specialty of the speaker, etc. For example, if a speaker is identified as an oncology clinician, a different context would apply than if the speaker were identified as, for example, a dermatologist. Additionally, voice inputs made by a surgeon would apply a different context than if identified from, for instance, a primary care provider. The extracted concepts can be positive or negative.
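A minimal sketch of this kind of polarity-aware, time-aware concept extraction is shown below; the tiny concept lexicon and the negation and past-tense cue lists are illustrative assumptions standing in for the NLP/NLU models the description refers to. The worked examples in the following paragraph trace the same behavior in prose.

```python
# Toy extraction of clinical concepts with a POSITIVE/NEGATIVE note and a
# present/past tag. The lexicon and cue lists are assumptions for illustration.
CONCEPT_LEXICON = {
    "alzheimer": "Alzheimer's disease",
    "infected": "infectious disease",
    "chest x-ray": "chest X-ray",
}
NEGATION_CUES = ("doesn't look", "does not look", "no sign of")
PAST_CUES = ("sent me for", "last year", "previously had")

def extract_concepts(sentence):
    text = sentence.lower()
    findings = []
    for cue, concept in CONCEPT_LEXICON.items():
        if cue in text:
            polarity = "NEGATIVE" if any(n in text for n in NEGATION_CUES) else "POSITIVE"
            timing = "past" if any(p in text for p in PAST_CUES) else "present"
            findings.append({"concept": concept, "polarity": polarity, "time": timing})
    return findings

print(extract_concepts("I was recently diagnosed with Alzheimer's"))
print(extract_concepts("It doesn't look infected"))
print(extract_concepts("My other doctor sent me for a chest X-ray"))
```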
For instance, a conversation including the phrase “I was recently diagnosed with Alzheimer's and require a lot of assistance at home” would trigger identification of the concept “Alzheimer's” with a POSITIVE note, as the patient has stated that they have been diagnosed with Alzheimer's. Conversely, a conversation regarding a wound, for example, where a clinician notes that “it doesn't look infected” may trigger identification of an “infectious disease” concept and a NEGATIVE note, as the clinician verbally stated it did not look like an infection. Additionally, as previously mentioned, NLP/NLU can apply a time function to the concepts to identify if the concept is a present issue or a past issue. For instance, a statement that “my other doctor sent me for a chest X-ray” may be identified as a past test. This temporal analysis can be performed on the extracted concepts of the voice inputs. In some embodiments, the natural language processing is automatically performed while a user, such as a clinician, is having the voice conversation. In other embodiments, an indication to start natural language processing is received from an activate indication (e.g., “Hello, Scribe”), also referred to herein as an initiation cue. In either situation, a relevant patient/individual should be identified to associate with the captured voice inputs. An exemplary user interface200for selecting a relevant patient is provided inFIG.2. As is shown, a patient201is selected from a list of patients. Upon selection, the electronic health record of the patient may be provided, as shown in interface300ofFIG.3. In the event the virtual scribe is not already listening to any voice inputs within an environment, an initiation cue can be provided, as illustrated inFIG.4. A user can select a voice indicator401to provide an option to capture voice. An initiation cue can be provided by selection of an activate voice indicator402. Once activated and transcribed (speech to text functions), NLP is utilized to identify the clinical conditions within the voice input(s) and the system utilizes one or more clinical ontologies for the clinical conditions to identify one or more clinical concepts related to the clinical conditions. The clinical concepts are then classified into one or more classification groups. Classification groups, as used herein, refers generally to groupings of clinical concepts as defined in standardized formats. Standardized forms (including standard formats) are utilized today with standard locations including problems, diagnoses, medications, symptoms, procedures, etc. The standardized form locations may be used as a guide for the system to use to generate classification groups. As used herein, a clinical ontology provides contextual relationships between a particular clinical condition and clinical concepts, such as evidence or symptoms of a clinical condition, treatment for the clinical condition (including procedures and medications), commonly co-existing conditions, risk factors for the clinical condition, and/or disqualifying evidence. The term “clinical ontology” as used herein is not intended to merely define a semantic hierarchy between concepts. Rather, a clinical ontology may provide one or more classifications comprising a set of clinical concepts that occur together within a patient's EHR as determined through one or more machine learning processes. The classifications may be the presence of clinical concepts that appear in association with a classification.
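A condensed sketch of how a clinical ontology might expand an extracted condition into related concepts and "bucketize" them into classification groups follows; the tiny ontology dictionary and the group names are assumptions chosen to mirror the standardized form locations mentioned above, not an actual ontology. The example in the next paragraph illustrates the same idea in prose.

```python
# Toy ontology: each condition maps to related concepts, already organized by
# the classification group (standardized form location) they belong to.
ONTOLOGY = {
    "Alzheimer's disease": {
        "Problems": ["Alzheimer's disease"],
        "Current Medications": ["donepezil"],
        "Family History": ["dementia in a first-degree relative"],
    },
    "sore throat": {
        "Review of Symptoms": ["sore throat", "fever"],
        "Assessment and Plan": ["rapid strep test"],
    },
}

def bucketize(conditions):
    """Collect ontology concepts for each extracted condition under the
    classification groups used by the structured document."""
    buckets = {}
    for condition in conditions:
        for group, concepts in ONTOLOGY.get(condition, {}).items():
            buckets.setdefault(group, []).extend(concepts)
    return buckets

print(bucketize(["Alzheimer's disease", "sore throat"]))
```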
For example, when a patient presents with a sore throat, the patient's record may reflect that the sore throat was the chief complaint or reason for the visit. The classifications are identified based on context in the voice conversation. In some embodiments, multiple clinical conditions may be extracted from the voice conversation. A separate ontology may be used for each condition to identify additional concepts related to one particular concept. Accordingly, when multiple conditions are extracted from a voice conversation using NLP, multiple ontologies may be retrieved to identify concepts and classifications relevant to each condition. In an embodiment, once the clinical conditions are extracted and ontologies utilized to identify concepts, the one or more clinical concepts are “bucketized” into their respective classification groups and provided to a user. Additionally, the clinical concepts may be provided in an area of a user interface that illustrates a location within a structured document where the clinical concept may be documented (e.g., History of Present Illness (HPI), Exams, Review of Symptoms, Current Medications, Labs, Vital Signs, Past Medical History, Family History, Assessment and Plan, etc.). The items in the structured document area of the interface may be documented directly into the portion of the EHR that is designated within the structured document either automatically or manually (e.g., upon approval of a clinician). This is illustrated inFIG.5, where an exemplary interface500is provided. The interface includes selection of a scribe at a scribe indicator501. Once in the scribe interface, a transcript area502is provided that provides the transcript of the voice inputs. As previously mentioned, the transcript can be populated in real-time. Any clinical concepts identified within the transcript can be identified by, for instance, highlighting the concept, underlining or bolding the concept, adding an indicator next to the concept, or any other means that could visually mark the concept. Concept502ahas been highlighted to illustrate the meaning, but highlighting is not meant to be construed as the only way to depict a concept. As previously described, the classification groups can be provided and are shown in classification area512as classifiers503-509. Classifier504(problems) has been populated with a concept510identified from the transcript. Finally, a location area511is provided that indicates a location within a structured document where the clinical concept may be documented. A location can be provided for each clinical concept identified within the transcript. Here, Alzheimer's was identified as a problem and location area511provides the location where the identified concept can be documented within a patient's record. FIG.6provides an additional interface600illustrating that the scribe continues to add information to the transcript as additional voice inputs are received. As shown, an additional concept601has been identified in subsequent voice inputs and populated as items602in the problems area. Additionally, a location603is provided for the newly identified concept601. Alternative views are provided inFIGS.7and8. InFIG.7, an exemplary interface700is provided. This interface700provides for integration of the scribe interface in the workflow. A scribe indicator701can be selected to identify information identified from the scribe.
The transcript indicator702can be selected to view a transcript of the voice inputs and a documentation indicator703can be selected to view one or more items to be documented. InFIG.7, the scribe indicator701is currently selected. As is shown, items704-708are provided and were extracted from the voice inputs. Items704-708can include one or more clinical concepts and may further include a designation of one or more classification groups to which the clinical concepts belong. For example, item704is noted to potentially add to the “problems” list for the current visit. Each item includes a transcript expander such as expander709. Selection of the expander results in navigation to the full transcript or at least a portion of the transcript related to the item with which the expander is associated. A user has an option to submit selected items for consideration of documentation by selecting submit indicator712. Selection of the submit indicator712will result in the system identifying one or more clinical concepts associated with the item (items704-708). A user can also save items for later with selection of indicator713. FIG.8depicts an exemplary interface800illustrating selection of items for addition to one or more of a patient's record, a workflow, and the like. As withFIG.7, documentation indicator801, scribe indicator802, and transcript indicator803are all available for selection to toggle between the three views. A documentation view is currently selected illustrating classification group area804where one or more items would be classified. Items805have been added to the problems note area for consideration for documentation. Items805, as is shown, now include structured information along with the clinical concept identified. These can be selected to be added to the “problems” area in the documentation801screen for documentation in a patient's record. Continuing on, validation sources, such as a patient's EHR, are used to verify that the conversation captured and output generated are complete and accurate. The one or more clinical concepts may be utilized with the patient's EHR to identify whether the scribe output is valid. By way of example, when a patient is asked whether they are taking any medications and replies with “Yes, I'm taking Tylenol once daily”, the medication section of the patient's EHR is analyzed to identify whether Tylenol is listed as a medication. If no, a notification that Tylenol is not currently listed may be provided. An indicator to add Tylenol to the patient's EHR may be provided in the notification. If yes, nothing may be provided or a notification that Tylenol is already listed and no changes are needed at this time may be provided. In embodiments, when a discrepancy is identified between the scribe output and the patient's EHR data, actual values or data from the EHR may be provided so a clinician can easily review the discrepancy (rather than simply being notified that something is wrong). For example, for an encounter in which a patient reports taking a medication once daily that is noted in the chart as twice daily, a notification may be provided that the scribe data is not validated and that the reason is the frequency, while the portion of the record indicating a twice-daily dosage may be provided for immediate viewing without navigating to any separate portions of the patient's record. The EHR may be used to confirm or validate the scribe output of the voice conversation with data found in the current electronic documentation.
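The Tylenol example above can be sketched as a simple comparison between the scribe's extracted claim and the patient's EHR; the EHR structure and field names below are assumptions for illustration, not the system's actual data model.

```python
# Toy validation of a scribe-extracted medication claim against the EHR.
ehr = {"medications": [{"name": "Tylenol", "frequency": "twice daily"}]}

def validate_medication(claim, ehr_record):
    """Return a notification describing the validation result, including the
    actual EHR value when a discrepancy is found."""
    for med in ehr_record["medications"]:
        if med["name"].lower() == claim["name"].lower():
            if med["frequency"] != claim["frequency"]:
                return (f"Not validated: chart lists {med['name']} {med['frequency']}, "
                        f"patient reported {claim['frequency']}.")
            return f"{med['name']} is already listed; no change needed."
    return f"{claim['name']} is not currently listed; add it to the record?"

print(validate_medication({"name": "Tylenol", "frequency": "once daily"}, ehr))
```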
The system may search for indicators of the one or more clinical concepts within the voice conversation and the EHR to determine whether the clinical concept within the voice conversation can be verified. In exemplary aspects, searching for indicators of the one or more clinical concepts comprises searching for structured data for the clinical concepts, such as measurements for physiological values or presence of a specific medication, laboratory, or procedure within the EHR. In additional embodiments, various document formats may be generated from the voice conversation. One example document is structured and usable by the clinician with an aim to persist as part of the patient's record (e.g., doctor's notes). A second example document is transformed to a format consumable by the patient. The language and content may be tailored to the needs of the patient. A third example document may be tailored to the needs of referrals. For instance, if, during the voice conversation, a clinician recommends the patient meet with additional providers, a referral document may be generated. In addition to documents, documentation items or action items may also be generated by the system. A documentation item or action item, as used herein, refers generally to data that would typically need to be documented in the patient's record either during or after the encounter. For example, a patient's vital signs or other clinical findings need to be documented in the patient's record during a visit. Additionally, any orders or prescriptions a clinician provides need to be documented in the patient's record. The present system automatically generates these documentation items. For instance, if a clinician says “I'm putting you on a Z-pack,” the system intelligently knows that the clinician is placing an order (“putting you on” may be a cue that an order is to follow) for a medication. The prescription may be automatically generated by the scribe/system and populated on the user interface. From there, it may be automatically documented in the patient's record or it may be pending until signed or manually approved by the clinician. In additional embodiments, the system is linked to other programs such that the documentation item may be communicated automatically or post-approval to an appropriate destination. For example, a medication prescription may be sent to the pharmacy or an order for a chest x-ray may be sent to radiology. In embodiments, the system identifies relevant information from an external source and provides that information upon identifying, within the voice conversation, a reference to the information. For instance, if a clinician states “I reviewed your vitals, they look good,” then the system may identify the clinical concept “vitals” and provide the collected vital signs on a user interface, within a document, and the like. This information may be extracted from the patient's EHR. The information may also be identified from other devices, such as a heart monitor, etc., that may be associated with a patient. Direct voice commands may also be utilized with the present system. For instance, a clinician may state “show me their vaccinations” to view a patient's vaccinations. Again, the system may extract this information from a patient's EHR or any other records associated with the patient. The system may be integrated with the patient's EHR such that the portion of the record is directly shown or a link thereto may be provided.
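The “I'm putting you on a Z-pack” example above suggests a simple cue-to-order mapping; the sketch below shows one way such documentation items could be generated and left pending for signature. The cue table, item types, and destinations are illustrative assumptions.

```python
# Toy generation of pending documentation items from verbal cues.
ORDER_CUES = {
    "putting you on": ("medication order", "pharmacy"),
    "sending you for a chest x-ray": ("imaging order", "radiology"),
}

def generate_documentation_items(utterance):
    """Create items a clinician can review and sign before they are documented
    in the record or routed to a destination system."""
    items = []
    text = utterance.lower()
    for cue, (item_type, destination) in ORDER_CUES.items():
        if cue in text:
            items.append({
                "type": item_type,
                "source_text": utterance,
                "destination": destination,
                "status": "pending signature",
            })
    return items

print(generate_documentation_items("I'm putting you on a Z-pack"))
```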
In additional embodiments, data other than voice may be captured during the encounter, such as movement, images, sensor data, videos, etc. This data may be captured and incorporated directly into the EHR and, thus, can be referenced during subsequent visits. For example, movement data (e.g., via sensors or a video) may be captured and used at a follow-up visit in three months to compare a gait. Various in-room sensors may be used to capture data and include, but are not limited to, cameras, speakers, microphones, 3D cameras, wearable sensors, connected devices, and the like. Turning now toFIG.9, an exemplary flow diagram of a method900for performing natural language understanding on voice conversations is provided. Initially, at block910, a voice conversation associated with an individual is received. The voice conversation can include a plurality of voice inputs. At block920, at least one clinical condition within the voice conversation is parsed and extracted using one or more natural language processing techniques. One or more clinical concepts related to the clinical condition is identified at block930using one or more clinical ontologies for the at least one clinical condition. Each clinical ontology can provide contextual relationships between the clinical condition and the one or more clinical concepts. At block940, the one or more clinical concepts within the voice conversation is verified utilizing data from one or more validation sources. A validated output is generated based on the one or more validation sources and the one or more clinical concepts at block950. Turning now toFIG.10, an exemplary flow diagram of a method1000for performing natural language understanding on voice conversations is provided. Initially, at block1010, one or more voice inputs is received. A transcript with the one or more voice inputs in an unstructured format is populated at block1020. At block1030, at least one clinical condition is extracted from the one or more voice inputs. At block1040, one or more clinical concepts related to the clinical condition is identified using one or more clinical ontologies for the at least one clinical condition, each clinical ontology providing contextual relationships between the clinical condition and the one or more clinical concepts. At block1050, utilizing the one or more clinical concepts, a graphical user interface is populated with the one or more clinical concepts into one or more classification groups, the one or more classification groups corresponding to standard classifications. At block1060, the graphical user interface is provided comprising the one or more clinical concepts in the one or more classification groups and a recommended location within an electronic health record where each of the one or more clinical concepts is to be documented. Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the spirit and scope of the present invention. Embodiments of the present invention have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to those skilled in the art that do not depart from its scope. A skilled artisan may develop alternative means of implementing the aforementioned improvements without departing from the scope of the present invention. 
It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. Not all steps listed in the various figures need be carried out in the specific order described. Accordingly, the scope of the invention is intended to be limited only by the following claims.
51,794
11862165
DETAILED DESCRIPTION Certain aspects and examples of the present disclosure relate to optimizing a virtual assistant for connecting a user to a live agent. The virtual assistant can be a software or computer program that can simulate a human conversation. In some examples, the virtual assistant can interact with the user via spoken or written communication and the interaction can be displayed in a chat window on a multi-modal user interface. The multi-modal user interface can be accessed by the user on a user device. The user device may be a mobile phone, a smart phone, a tablet, a personal computer, etc. The multi-modal user interface can be a user interface that enables the user to interact with the virtual assistant using two or more different modes of communication. The multi-modal user interface can further process two or more user inputs provided by the two or more different modes of communication. Examples of different modes of communication for providing input can include the user providing the user input via text, touch, speech, manual gestures, or other suitable modes that can be processed by the multi-modal user interface. Optimizing the virtual assistant can include performing natural language understanding, natural language processing, and the like on the user input. Natural language processing can be algorithms or other suitable tools or techniques for enabling the virtual assistant to recognize and understand the user input. Similarly, natural language understanding can be algorithms or other suitable tools and techniques for enabling the virtual assistant to understand the meaning of the user input. In some examples, utterance learning can be a tool for processing the user input. The utterance learning can include intents, which can be the various, broad categories into which the inquiries can fall. Additionally, an utterance can be used in the utterance learning to learn, predict, or both, the various words, phrases, sentences, etc. that the user may provide in relation to the intents. Furthermore, entities can be the most relevant words, phrases, sentences, etc. in the utterance for determining the intent. The utterance learning can improve the virtual assistant's ability to understand the user input, process the user input, respond to the user input, or a combination thereof. The utterance learning can further improve the efficiency of connecting the user to the live agent. Current systems can require excess processing time to determine the information desired by the user and thus require additional processing time to connect the user and the live agent. Additionally, current systems may exhibit memory management issues, in which the system cannot save chat history, user activity, etc. Therefore, the user may not be able to leave a chat window, application, website, phone call, or the like until the user is connected to the live agent. The use of a virtual assistant that can receive inputs from the user by various modes of communication and can process the various modes of communication can improve the efficiency of connecting the user and the live agent by decreasing the processing time for determining the information required by the user. Furthermore, performing utterance learning on the inputs from the user can decrease processing time by enabling the system to quickly comprehend and determine the information required by the user.
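As one hedged sketch of the utterance learning just described, the snippet below trains a tiny intent classifier on labeled example utterances, assuming scikit-learn is available; the training phrases, intent names, and choice of model are illustrative assumptions rather than the system's actual implementation.

```python
# Toy utterance learning: example utterances labeled with intents train a
# classifier that predicts the intent of a new user input.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TRAINING = [
    ("I want to borrow money to buy a house", "mortgage"),
    ("what is my checking account balance", "accounts"),
    ("I think there is a fraudulent charge on my card", "fraud"),
    ("help me order new checks", "order_checks"),
]

texts, intents = zip(*TRAINING)
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
intent_model.fit(texts, intents)

# Predict the intent of an unseen utterance.
print(intent_model.predict(["can I get a loan to buy a home"])[0])
```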
Additionally, by detecting user activity and storing the interaction between the user and the virtual assistant, memory management can be improved. Illustrative examples are given to introduce the reader to the general subject matter discussed herein and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative aspects, but, like the illustrative aspects, should not be used to limit the present disclosure. FIG.1is an example of a multi-modal user interface100that can display output from a virtual assistant110for connecting a user and a live agent according to one example of the present disclosure. As illustrated, the user can provide an input106to the virtual assistant110via the multi-modal user interface100. In some examples, the user can provide the input106via a chat box118in a chat window102. The virtual assistant110can provide a response114to the input106. The response114can be a response114requesting additional information from the user for accurately connecting the user and the live agent. The user can provide a second input116with the additional information. Additionally or alternatively, the user can provide the input106or the second input116by additional modes of communication such as speech or the user can press or otherwise select options104a-f. The options104a-fcan help the virtual assistant110determine the live agent best suited to assist the user, connect the user to resources, automatically connect the user and the live agent, or provide other suitable functions related to assisting the user. For example, the options104a-fcan be related to banking operations. Examples of banking operations can include issuing loans, client service, investment analysis, risk analysis and mitigation, technical operations, or any other suitable operation related to a banking environment. As illustrated, the options104a-fcan include options related to subscriptions104a, spending104d, frequently asked questions104c, loans104, or other suitable banking operations, or the options can be actions such as sending money104e, ordering checks104f, etc. The chat window102can further include at least one visual indicator108. As illustrated, the at least one visual indicator108can show the user that the virtual assistant110is responding. The chat window102may include additional visual indicators, for example, to show that the virtual assistant110is processing a query or determining the relevant live agent. Additionally, the user may interact with the at least one visual indicator108. For example, the user may be able to cancel an input or an interaction with the virtual assistant110or the live agent, or the user may be able to undo an input106to the virtual assistant110via the at least one visual indicator108. Undoing or canceling input106to the virtual assistant110can decrease processing time and decrease wasted resources. For example, an input to the virtual assistant can be misinterpreted by the system. Therefore, rather than requiring additional inputs from the user to fix the misinterpreted input, the user can start over or go back and try again with a new input for the virtual assistant110. Thus, the at least one visual indicator108can improve the efficiency of the interaction between the virtual assistant110and the user for connecting the user and the live agent.
FIG.2is an example of a multi-modal user interface200that can display output from a virtual assistant210for connecting a user and a live agent222according to one example of the present disclosure. As illustrated, the virtual assistant210can provide a response214in chat window202, which can include a statement with information for the user. The response214can notify the user that the user can leave the chat, can notify the user that the user is being connected to the live agent222, and can provide an amount of time for connecting the user to the live agent222. The response214may further include additional information such as information related to an input, the live agent222, or additional information related to client services. The virtual assistant210can connect the user to the live agent222via the chat window202. The live agent222can provide a response220to the user related to the input. Additionally or alternatively, the live agent222may connect with the user via email, phone, or other suitable communication method. The user can communicate the preferred communication method to the virtual assistant210, and the virtual assistant210can automatically connect the user and live agent222via the preferred communication method or provide information about the preferred communication method to the live agent222. In some examples, the live agent222is related to banking operations. For example, the live agent222can be a banker, bank teller, loan processor, mortgage consultant, loan officer, internal auditor, or other suitable live agent related to banking operations. FIG.3is a flowchart of a process300for connecting a user to a live agent222via a virtual assistant110according to one example of the present disclosure. The process300can connect the user to the live agent222efficiently by quickly recognizing, processing, and understanding an input106from the user. The process300can further include quickly determining the live agent222that can satisfy inputs106from the user. At block302, the process300can involve providing a virtual assistant110that can receive inputs from a user and provide responses to the user. The virtual assistant110can be a software or computer program integrated with the multi-modal user interface100for simulating human interaction. The virtual assistant110can simulate human interaction by communicating to the user via text, speech, or a combination thereof. The interaction between the virtual assistant110and the user can be displayed in a chat window102on a multi-modal user interface100. The multi-modal user interface100can be accessed and used by the user via a user device. In some examples, the user device can be a tablet, smart phone, laptop, etc. The multi-modal user interface can allow a user to provide the inputs by various modes of communication. For example, the modes of communication can include text in a chat window102, tapping a button or other suitable display displayed on the multi-modal user interface100, speech, other suitable modes of communication, or a combination thereof. The modes of communication can be processed by the multi-modal user interface100, further processed by the system performing natural language processing, and received by the virtual assistant110. In some examples, the user can provide the inputs in more than one mode of communication substantially simultaneously. Additionally or alternatively, process300can include providing the user at least one option104a-fvia the multi-modal user interface100.
The at least one option104a-fcan be provided to the user via the chat window102prior to receiving the inputs, while the virtual assistant110is interacting with the user, while the virtual assistant110is connecting the user and the live agent222, while the user is interacting with the live agent222, or a combination thereof. The at least one option104a-fcan be provided on the multi-modal user interface100a-bas a display for the user to tap or otherwise select. The virtual assistant110can receive an input from the user corresponding to the option the user selects. In some examples, the at least one option104a-fcan automatically connect the user and the live agent222. For example, a security or fraud option can connect the user to a live agent222that can handle suspicious transactions or other suspicious activities. Additional examples of the at least one option104a-fcan include options regarding loans, subscriptions, credit card information, transactions, frequently asked questions, etc. Additionally, the at least one option may include an option to send money, order checks, or other suitable actions. At block304, the process300can involve performing natural language processing on the inputs to process the inputs into a result comprehendible by the virtual assistant110. Natural language processing can be a machine learning model or other suitable tool or technique for transforming the inputs106into inputs that the virtual assistant110can understand. The natural language processing can further include processing inputs received by at least two different modes of communication. For example, natural language processing can be performed on speech from the user and text from the user. The natural language processing can be performed on different modes of communication in the order received or substantially simultaneously. In additional examples, a mode of communication, such as text, can be prioritized in the natural language processing. Then, the result of the natural language processing on the prioritized mode of communication can be used to improve the natural language processing of subsequent inputs in the same or alternative modes of communication. Additionally or alternatively, the process300can further include performing natural language understanding on the inputs. Natural language understanding can be a machine learning model or other suitable tool or technique for enabling the virtual assistant110to understand the meaning of the input106. Natural language understanding can further assist with generating an input comprehendible by the virtual assistant110and can improve the efficiency of connecting the user and the live agent222. For example, utterance learning can be a natural language understanding technique. The utterance learning can involve training a machine learning model with various utterances. The various utterances can be words, phrases, sentences, etc. that can be part of the inputs from the user. The various utterances can be classified into intents. In some examples, the inputs can include utterances related to banking operations. Therefore, the intents may include loans, accounts, investments, or other suitable intents related to banking operations. The utterance learning can further include entities, slots (e.g., keywords that are used to trigger a person best suited to assist the user), or the like, which can be learned from the utterances.
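One simple way to realize the mode prioritization mentioned above is to order roughly simultaneous inputs by a configurable priority before they reach the natural language processing step; the priority values below are assumptions for illustration only.

```python
# Toy ordering of multi-modal inputs before natural language processing.
MODE_PRIORITY = {"text": 0, "option_tap": 1, "speech": 2}

def order_inputs(inputs):
    """inputs: list of dicts like {"mode": "speech", "content": "..."}.
    Returns the inputs in the order the NLP step should process them."""
    return sorted(inputs, key=lambda item: MODE_PRIORITY.get(item["mode"], len(MODE_PRIORITY)))

print(order_inputs([
    {"mode": "speech", "content": "I need help with a loan"},
    {"mode": "text", "content": "mortgage question"},
]))
```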
The entities, slots, or the like can be words, phrases, sentences, and the like that are derived from the utterances and that are the most important for determining the intent. In some examples, utterance learning can be performed on more than one mode of communication substantially simultaneously to improve processing of the inputs. At block306, the process300can involve predicting, based on the inputs, at least one objective of the user. The at least one objective can include a first objective, which can indicate to the virtual assistant110that the user requires communication with a live agent222. The at least one objective can also include one or more additional objectives for the purpose of the communication with the live agent222. The one or more additional objectives can be the intents, additional classifications, or other suitable categories of issues, questions, tasks, etc. or the one or more additional objectives can be other suitable client service matters that the user may be contacting the virtual assistant110about. Thus, the one or more additional objectives can further be used to determine the live agent222. At block308, the process can involve determining the live agent222that is best suited to assist the user. The live agent222best suited to assist the user can be determined based on the one or more additional objectives, which can be related to the purpose or intent of the user contacting the virtual assistant110. In some examples, slotting is used as a technique for determining the live agent222. The technique can include triggering, alerting, or otherwise communicating with the live agent222best suited to assist the user based on keywords, slots, entities, or other suitable portions of the inputs from the user. The virtual assistant110can seamlessly determine and connect the user to the live agent using slotting or other suitable techniques. The live agent222can be an employee or other suitable live agent222that can engage with the user, answer questions, provide information, resolve issues, or otherwise assist the user. In some examples, a company or other suitable entity may include various live agents, and therefore it can be necessary to determine the live agent222best suited to assist the user. For example, in a banking operation the various live agents may include bank tellers, bankers, loan processors, mortgage consultants, investment representatives, credit analysts, etc. The various live agents can have specific skills, knowledge, or the like that can enable a live agent of the various live agents to help the user with specific questions, tasks, etc. Human skill IDs can be used to associate the various live agents and the types of questions, tasks, etc. that the various live agents can assist users with. Thus, in some examples, human skill IDs can be used to identify the live agent best suited to assist the user. For example, the intents identified in the utterance learning can be further tagged, sorted, or otherwise classified based on the human skill IDs. The entities, slots, keywords, etc. that can be used to determine intents can also be used to determine human skill IDs related to the inputs from the user. Therefore, in some examples, the input from the user can be classified by the intent, which can be tagged or sorted by human skill IDs for automatically or otherwise seamlessly identifying the live agent best suited to assist the user.
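A minimal sketch of this slotting-style routing is shown below: a predicted intent maps to a human skill ID, and the skill ID selects a live agent. All of the tables, skill IDs, and names are illustrative assumptions, not values from the description.

```python
# Toy routing from a predicted intent to the live agent best suited to assist.
INTENT_TO_SKILL = {
    "mortgage": "SKILL_MORTGAGE",
    "fraud": "SKILL_FRAUD",
    "accounts": "SKILL_TELLER",
}
AGENTS = [
    {"name": "A. Lopez", "skills": {"SKILL_MORTGAGE"}},
    {"name": "B. Chen", "skills": {"SKILL_FRAUD", "SKILL_TELLER"}},
]

def route_to_agent(intent):
    """Return the first agent whose human skill ID covers the predicted intent."""
    skill = INTENT_TO_SKILL.get(intent)
    for agent in AGENTS:
        if skill in agent["skills"]:
            return agent["name"]
    return None  # no matching agent; fall back to a general queue

print(route_to_agent("mortgage"))
```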
The inputs comprehendible by the virtual assistant110can enable the virtual assistant110to connect the user to the live agent222most closely related to the issue or other suitable client matter for which the user contacted the virtual assistant110. Additionally or alternatively, the process300can involve executing a machine learning model to determine the live agent222. In some examples, the machine learning model is used to determine the live agent222by extracting, from the input106, the entities. The machine learning model can further predict, based on the entities, the intent related to the input106and the machine learning model can determine, based on the intent, the live agent222. For example, the input106from the user can be processed into an input in which the virtual assistant110recognizes the entities “lend”, “borrow”, and “house”. The entities can indicate to the virtual assistant110that the intent is mortgage related. Thus, the virtual assistant110can connect the user to a live agent that can be a mortgage consultant. The virtual assistant110can further determine an amount of time for connecting the live agent222and the user and can provide the amount of time to the user via the multi-modal user interface100. Additionally or alternatively, the process300can involve determining an amount of time for connecting the live agent222and the user. The amount of time can be determined by accessing a schedule or by accessing additional resources or data related to the availability of the live agent222. In some examples, a machine learning system can be implemented to predict the amount of time before the live agent222will be available. The amount of time can be estimated based on the schedule or additional resources or data. The amount of time can be compared to a threshold time. For example, the threshold time can be one hour. If the amount of time for connecting the live agent222and the user is longer than the threshold time, the virtual assistant110can provide various courses of action for the user. For example, the courses of action can include providing access to the multi-modal user interface on an additional user device. For example, the user can switch from accessing the multi-modal user interface on a laptop to accessing the multi-modal user interface on a phone or a tablet to improve the convenience and accessibility of connecting the user and the live agent222. Another example of a course of action can include providing a notification to the user device or the additional user device. The notification can cause the device to make noise, vibrate, or otherwise alert the user that the live agent222is ready to connect. Additionally, the courses of action can include the user providing an alternative communication method such as a phone number, email address, or the like. The virtual assistant110can provide the alternative communication method to the live agent222for the live agent222and the user to connect via the alternative communication method. Additionally, a timer can be displayed on the multi-modal user interface with the time such that the user can visualize the amount of time before the user will be connected to the live agent222. The timer can be displayed in a chat window or separate from the chat window. Additionally or alternatively, the process300can involve providing, via the virtual assistant110, a response to the input106. The response can include a response114requesting additional information.
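The threshold comparison described above can be sketched as follows; the one-hour threshold comes from the example in the text, while the action names and option wording are assumptions for illustration.

```python
# Toy decision between showing a wait timer and offering alternative courses
# of action when the estimated wait exceeds the threshold.
THRESHOLD_MINUTES = 60  # example threshold from the description

def choose_course_of_action(estimated_wait_minutes):
    if estimated_wait_minutes <= THRESHOLD_MINUTES:
        return {"action": "show_timer", "wait_minutes": estimated_wait_minutes}
    # the wait is too long, so offer alternatives instead of keeping the user waiting
    return {
        "action": "offer_alternatives",
        "options": [
            "notify this device when the live agent is ready",
            "continue on another device",
            "leave a phone number or email for the live agent",
        ],
    }

print(choose_course_of_action(90))
```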
For example, the virtual assistant110may require additional details or information to connect the user to an applicable live agent222. In some examples, the response114can include a statement to provide information to the user. The statement can include the amount of time before the user will be connected to the live agent222, the name or job title of the live agent222, or other pertinent information related to the input106, live agent222, etc. The statement can further notify the user that the user can leave the application or website without losing the chat window or chat history. At block310, the process300can involve connecting, via the virtual assistant110, the user and the live agent222. The user and live agent222can be connected via the chat window102, phone call, email, video call, or other suitable communication methods. In some examples, the multi-modal user interface100can include an option for the user to choose a preferred communication method. The virtual assistant110can facilitate the connection and improve the efficiency of communication between the live agent222and the user by providing both the user and the live agent222information. For example, the virtual assistant can provide the user information about the live agent222, the amount of time before the user will connect with the live agent222, etc. Additionally, the virtual assistant can provide the live agent222with information received from the inputs from the user and any additional information on the user that may be stored and accessible by the virtual assistant. Additionally or alternatively, process300can involve storing the interaction between the user and the virtual assistant110and can involve detecting the user interacting with the multi-modal user interface or detecting the user is not interacting with the multi-modal user interface. The user interacting with the multi-modal user interface can be determined by tracking user activity on the multi-modal user interface. In an example, a lack of user activity for a certain time period can indicate that the user is not interacting with the system. Additionally, a notification can be provided to the user as a display on the multi-modal user interface or otherwise communicated to the user by the virtual assistant110. If the user activity does not increase from the notification, it can be determined that the user is not viewing the multi-modal user interface. Therefore, a second notification can be provided to the user device. Additionally, the user can be provided access to the stored interaction between the user and the virtual assistant110. Therefore, the user and the live agent can be connected when a user closes or otherwise leaves the multi-modal user interface without losing the history and data from the interaction between the virtual assistant110and the user. Thus, the process300can improve the efficiency of connecting the user to the live agent222by performing natural language processing, natural language understanding, or a combination thereof to decrease the number of interactions between the virtual assistant110and the user prior to connecting the user and live agent222. The process300further improves the efficiency of determining which live agent222is best suited to assist the user with the input106based on the result of the natural language processing, natural language understanding, or a combination thereof.
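A small sketch of the inactivity check just described appears below: a first on-screen notification is followed, if activity still does not resume, by a notification to the user device. The timing value and the message text are assumptions, and the callbacks stand in for whatever notification mechanism the system actually uses.

```python
# Toy inactivity handling: nudge on screen first, then escalate to the device.
import time

INACTIVITY_SECONDS = 120  # assumed period of no activity before the first nudge

def notification_step(last_activity_ts, already_nudged, notify_ui, notify_device):
    """Return (state, nudged) after one periodic check of user activity."""
    if time.time() - last_activity_ts < INACTIVITY_SECONDS:
        return "active", False
    if not already_nudged:
        notify_ui("The live agent is ready to connect.")
        return "nudged", True
    # the on-screen nudge did not bring the user back, so notify the device;
    # the stored chat history lets the user resume where they left off
    notify_device("Your live agent is ready. Your chat history has been saved.")
    return "notified_device", True
```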
Moreover, the process300can improve the user's experience by not requiring the user to spend additional time communicating with the virtual assistant110, and the user may not have to wait in a chat window102, phone call, or other communication method prior to connecting with the live agent222. The multi-modal user interface100further allows the user to interact with the virtual assistant110by various modes of communication, which can improve the accuracy and efficiency of communication between the user and the virtual assistant110. The multi-modal user interface100can further enable the user to interact with the live agent222in a variety of formats. FIG.4is a block diagram of an example of a computing device402for connecting a user to a live agent via a virtual assistant412according to one example of the present disclosure. The components shown inFIG.4, such as a processor404, a memory407, a power source420, an input/output408, and the like may be integrated into a single structure such as within a single housing of the computing device402. Alternatively, the components shown inFIG.4can be distributed from one another and in electrical communication with each other. The computing device402can include the processor404, the memory407, and a bus406. The processor404can execute one or more operations for connecting the user to the live agent via the virtual assistant412. The processor404can execute instructions410stored in the memory407to perform the operations. The processor404can include one processing device or multiple processing devices or cores. Non-limiting examples of the processor404include a Field-Programmable Gate Array (“FPGA”), an application-specific integrated circuit (“ASIC”), a microprocessor, etc. The processor404can be communicatively coupled to the memory407via the bus406. The memory407can include volatile and non-volatile memory. Non-volatile memory may include any type of memory device that retains stored information when powered off. Non-limiting examples of the memory407may include EEPROM, flash memory, or any other type of non-volatile memory. In some examples, at least part of the memory407can include a medium from which the processor404can read instructions410. A computer-readable medium can include electronic, optical, magnetic, or other storage devices capable of providing the processor404with computer-readable instructions or other program code. Nonlimiting examples of a computer-readable medium include (but are not limited to) magnetic disk(s), memory chip(s), ROM, RAM, an ASIC, a configured processor, optical storage, or any other medium from which a computer processor can read instructions410. The instructions410can include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Perl, Java, Python, etc. In some examples, the memory407can be a non-transitory computer readable medium and can include computer program instructions410. For example, the computer program instructions410can be executed by the processor404for causing the processor404to perform various operations. For example, the processor404can provide a virtual assistant412that can receive inputs414from a user and provide responses418to the user. The processor404can further perform natural language processing, natural language understanding, or a combination thereof on the inputs414to generate inputs that can be understood by the virtual assistant412.
Additionally, the processor404can determine the live agent based on objectives416that can be predicted based on the inputs414. The processor404can also connect the user and the live agent device424via the multi-modal user interface422or other suitable communication method. The computing device402can additionally include an input/output408. The input/output408can connect to a keyboard, a pointing device, a display, other computer input/output devices or any combination thereof. A user may provide input using a multi-modal user interface422that can be part of or communicatively coupled to input/output408. The virtual assistant412, a chat window, the inputs414, the response418, or a combination thereof can be displayed to the user, the live agent, or other suitable user on a display, such as the multi-modal user interface422, that is connected to or is part of the input/output408. The input/output can further connect to a live agent device424to connect the user and the live agent via the input/output408or the multi-modal user interface422. Alternatively, the computing device402can, instead of displaying the interaction between the virtual assistant412and the user, automatically connect the live agent device424and the user via a phone call or other suitable communication method. The foregoing description of certain examples, including illustrated examples, has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications, adaptations, and uses thereof will be apparent to those skilled in the art without departing from the scope of the disclosure.
29,492
11862166
DETAILED DESCRIPTION Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below to explain the present disclosure by referring to the figures. The exemplary embodiments of the present disclosure may be diversely modified. Accordingly, specific exemplary embodiments are illustrated in the drawings and are described in detail in the detailed description. However, it is to be understood that the present disclosure is not limited to a specific exemplary embodiment, but includes all modifications, equivalents, and substitutions without departing from the scope and spirit of the present disclosure. Also, well-known functions or constructions are not described in detail since they would obscure the disclosure with unnecessary detail. The terms “first”, “second”, etc. may be used to describe diverse components, but the components are not limited by the terms. The terms are only used to distinguish one component from the others. The terms used in the present application are only used to describe the exemplary embodiments, but are not intended to limit the scope of the disclosure. A singular expression also includes the plural meaning unless the context clearly indicates otherwise. In the present application, the terms “include” and “consist of” designate the presence of features, numbers, steps, operations, components, elements, or a combination thereof that are written in the specification, but do not exclude the presence or possibility of addition of one or more other features, numbers, steps, operations, components, elements, or a combination thereof. In the exemplary embodiment of the present disclosure, a “module” or a “unit” performs at least one function or operation, and may be implemented with hardware, software, or a combination of hardware and software. In addition, a plurality of “modules” or a plurality of “units” may be integrated into at least one module except for a “module” or a “unit” which has to be implemented with specific hardware, and may be implemented with at least one processor (not shown). Hereinafter, the present disclosure will be described in detail with reference to the accompanying drawings. FIG.1is a block diagram of a voice recognition system according to an exemplary embodiment of the present disclosure. As illustrated inFIG.1, the voice recognition system includes a display apparatus100, an input apparatus200, and a voice recognition apparatus300. The display apparatus100, which is an apparatus recognizing a spoken voice of a user to perform an operation intended by the user, may be implemented with various electronic apparatuses such as a smart TV, a smart phone, a tablet PC, and the like. The input apparatus200, which is an apparatus performing data communication with the display apparatus100to control an operation of the display apparatus100, may be, for example, a remote controller, a keyboard, or the like. Specifically, a first user may speak to operate the display apparatus100in a voice recognition mode. In response to a spoken voice of the user described above being input to the display apparatus100, the display apparatus100analyzes a voice signal for the input spoken voice to determine whether or not the corresponding voice signal is a trigger command for entering the display apparatus100into the voice recognition mode.
As the determination result, in response to the corresponding voice signal being the command for operating the display apparatus100in the voice recognition mode, the display apparatus100enters the voice recognition mode. As such, in a state in which the display apparatus100enters the voice recognition mode, in response to an additional spoken voice of the user being input to the display apparatus100, the display apparatus100internally converts the additionally spoken voice into a text. However, the present disclosure is not limited thereto. For example, in the state in which the display apparatus100enters the voice recognition mode, in response to the spoken voice of the user being input through the input apparatus200, or in a case in which the voice recognition for the spoken voice of the user cannot be performed internally, the display apparatus100may receive a text for the spoken voice of the user through a voice recognition apparatus300. Here, the voice recognition apparatus300may be an apparatus performing the data communication with the display apparatus100to perform the voice recognition for the spoken voice of the user from the display apparatus100and transmitting a recognized voice recognition result to the display apparatus100. Thereafter, the display apparatus100may control the operation of the display apparatus100based on the text for the spoken voice of the user or receive and display response information corresponding to the spoken voice of the user from the web server (not illustrated). Here, the web server (not illustrated) is a server providing content related information. For example, if the speaking "please retrieve ∘∘∘" is input from the user, a communication unit160may receive retrieved results associated with "∘∘∘" from the web server (not illustrated). Meanwhile, an execution command controlling the operation of the display apparatus100with regard to the spoken voice of the user may be registered and set by the user. Hereinafter, an execution command intended to be registered and set by the user is referred to as a user command. Specifically, the user may input the user command intended to be registered and set by himself or herself through the input apparatus200. If the user command described above is input to the input apparatus200, the input apparatus200transmits user command registration request information including a user command of a text type to the display apparatus100. However, the present disclosure is not limited thereto. For example, in a state in which the display apparatus100is set in a user command registration mode, the display apparatus100may receive the spoken voice for the user command through a microphone. In response to the spoken voice for the user command as described above being input to the display apparatus100, the display apparatus100may transmit the input spoken voice to the voice recognition apparatus300and may receive a user command converted into the text type from the voice recognition apparatus300. In response to the user command of the text type as described above being received from the input apparatus200or the voice recognition apparatus300, the display apparatus100generates phonetic symbols for the user command of the text type. Thereafter, the display apparatus100analyzes the phonetic symbols for the user command by a predetermined suitability determination condition to determine registration suitability of the user command requested by the user.
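As a rough, non-limiting illustration of the registration-request handling just described, the sketch below assumes hypothetical helper objects for speech-to-text conversion, phonetic-symbol generation, and suitability analysis; none of these names come from the disclosure.

# Hypothetical outline of handling a user command registration request.
def handle_registration_request(command, is_spoken, stt_service, g2p, suitability_checker):
    # A spoken command is first converted into a text type, either internally
    # or by the voice recognition apparatus (300).
    text = stt_service.transcribe(command) if is_spoken else command
    # Generate phonetic symbols for the text-type user command.
    symbols = g2p.generate(text)
    # Analyze the phonetic symbols by the predetermined suitability
    # determination condition(s).
    is_suitable = suitability_checker.determine(symbols)
    # The determination result is then output through the UI and/or audio.
    return {"text": text, "symbols": symbols, "suitable": is_suitable}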
Here, the suitability determination condition may be at least one of a total number of phonetic symbols, whether or not vowels and consonants of the phonetic symbols are successive, a configuration form of the phonetic symbols, the number of phonetic symbols for each word, and whether or not predefined weak phonetic symbols are included. Therefore, the display apparatus100analyzes the phonetic symbols for the user command by the suitability determination condition as described above to determine registration suitability of the user command and outputs the determination result to at least one of a user interface (UI) and an audio device. In a case in which it is determined that the registration of the user command is unsuitable, the user may re-input a registrable user command, and the display apparatus100may re-perform the above-mentioned operations to re-perform a registration suitability determination for the re-input user command. Meanwhile, in a case in which it is determined that the registration of the user command is suitable, the display apparatus100registers the user command according to a registration request for the corresponding user command. Therefore, the user may control the operation of the display apparatus100using the user command set by himself or herself. Hereinabove, the respective configurations of the voice recognition system according to the present disclosure have been schematically described. Hereinafter, the respective configurations of the display apparatus100described above will be described in detail. FIG.2is a block diagram of the display apparatus according to an exemplary embodiment of the present disclosure andFIG.3is a detailed block diagram of the display apparatus according to the exemplary embodiment of the present disclosure. As illustrated inFIG.2, the display apparatus100includes an input unit110, an output unit120, and a processor140. Additionally, the display apparatus100may further include a voice processing unit150, a communication unit160, and a storing unit170as illustrated inFIG.3, in addition to the configuration of the input unit110, the output unit120, and the processor140. The input unit110, which is an input for receiving various user manipulations and transferring them to the processor140, may be implemented as an input panel. Here, the input panel may be formed as a touch pad, or as a key pad or touch screen type including a variety of function keys, number keys, special keys, letter keys, and the like. As well, the input unit110may receive a control command transmitted from a remote control apparatus200such as a remote controller or a keyboard for controlling the operation of the display apparatus100. As well, the input unit110may receive the spoken voice of the user through a microphone (not illustrated). The input unit110as described above may receive a user command of a text type from the remote control apparatus200or may receive the spoken voice for the user command through the microphone (not illustrated). Here, the user command, which is an execution command defined by the user to control the operation of the display apparatus100, may be at least one of a trigger command for entering the display apparatus100into the voice recognition mode and a control command for controlling the operation of the display apparatus100. The output unit120outputs the registration suitability determination result for the user command input through the input unit110.
The output unit120as described above may include a display unit121and an audio output unit123as illustrated inFIG.3. Therefore, the output unit120may output the registration suitability determination result for the user command through at least one of the display unit121and the audio output unit123. Meanwhile, the processor140, which is a configuration generally taking charge of the control of the apparatus, may be used interchangeably with a central processing unit, a microprocessor, a controlling unit, and the like. In addition, the processor140, which is to control a general operation of the apparatus, may be implemented as system-on-a-chip (SOC) or system on chip (SoC) with other function units. Such processor140generally controls operations of all of configurations constituting the display apparatus100. Particularly, the processor140may copy a phonetic symbol generation related program pre-stored in the storing unit170in a random access memory (RAM) according to the user command for a user command registration and may generate phonetic symbols for the user command of the text type using the phonetic symbol generation related program copied in the RAM. More specifically, the processor140may generate the phonetic symbols for the user command of the text type based on a predefined phonetic symbol set. Here, the predefined phonetic symbol set may include at least one of vowels, diphthongs, consonants, affricates, accents, and symbols. If such phonetic symbols for the user command are generated, the processor140analyzes a pre-generated phonetic symbol based on a predetermined suitability determination condition to determine registration suitability for the user command. Thereafter, the processor140controls the output unit120to output the registration suitability determination result for the user command. Specifically, if the registration request information for the user command defined by the user is input through the input unit110, the processor140enters a registration performing mode for the user command. Here, the registration request information may be request information for registering the user command associated with the trigger command for entering the voice recognition mode or request information for registering the user command associated with the control command for controlling the operation of the display apparatus100. In response to the user command corresponding to the registration request of the user being input through the input unit110after such registration request information is input, the processor140generates the input user command in a phonetic symbol form. According to an exemplary embodiment, in response to the spoken voice associated with the user command output from a microphone (not illustrated) being input through the input unit110, the processor140performs a control so that the voice processing unit150performs a voice recognition for the spoken voice of the user. According to such control command, the voice processing unit150may convert the spoken voice of the user into the text using a speech to text (STT) algorithm. According to an exemplary embodiment, in response to the spoken voice associated with the user command output from a microphone (not illustrated) being input through the input unit110, the processor140transmits the spoken voice associated with the user command to the voice recognition apparatus300. 
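The two conversion paths described in the preceding two embodiments (local conversion by the voice processing unit150using a speech-to-text algorithm, or transmission to the voice recognition apparatus300) can be pictured with the following minimal sketch; the interfaces local_stt.speech_to_text and remote_recognizer.recognize are assumptions made only for illustration.

# Hypothetical sketch of converting the spoken user command into a text type.
def spoken_command_to_text(audio, local_stt=None, remote_recognizer=None):
    if local_stt is not None:
        # First embodiment: the voice processing unit (150) converts the spoken
        # voice into text with a speech-to-text (STT) algorithm.
        return local_stt.speech_to_text(audio)
    # Second embodiment: the spoken voice is transmitted to the voice
    # recognition apparatus (300), which returns a text-type recognition result.
    return remote_recognizer.recognize(audio)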
Thus, the voice recognition apparatus300performs the voice recognition for the received spoken voice and transmits the voice recognition result of the text type to the display apparatus100. In this case, the voice recognition apparatus300may transmit at least one voice recognition result of the text type with regard to the spoken voice of the user to the display apparatus100. Therefore, in a case in which a plurality of texts for the spoken voice of the user are received from the voice recognition apparatus300, the processor140controls the output unit120to display a list for the plurality of texts. Thus, the output unit120displays the list for the plurality of texts through the display unit121. In a state in which such list is displayed, in response to a selection command for one text being input, the processor140may determine a text corresponding to the input selection command as a text for the spoken voice of the user. According to an exemplary embodiment, the processor140may determine the subject to perform the voice recognition for the spoken voice of the user depending on whether data communication with the voice recognition apparatus300may be performed through the communication unit160. That is, if the data communication with the voice recognition apparatus300may be performed, the processor140may receive the voice recognition result for the spoken voice of the user from the voice recognition apparatus300, and if the data communication with the voice recognition apparatus300may not be performed, the processor140may perform the voice recognition for the spoken voice of the user by the voice processing unit150. Here, the communication unit160performs the data communication with the voice recognition apparatus300and receives the voice recognition result for the spoken voice of the user from the voice recognition apparatus300. As well, the communication unit160may perform the data communication with the input apparatus200and may receive at least one of the user command for controlling the operation of the display apparatus100and the spoken voice of the user. Additionally, the communication unit160may perform the data communication with a web server (not illustrated) and may receive the response information corresponding to the spoken voice of the user. Such communication unit160may include various communication modules such as a local area wireless communication module (not illustrated), a wireless communication module (not illustrated), and the like. Here, the local area wireless communication module (not illustrated), which is a communication module performing wireless communication with at least one of the input apparatus200and the web server (not illustrated) located at a local area, may be, for example, Bluetooth, Zigbee, or the like. The wireless communication module (not illustrated) is a module connected to an external network according to a wireless communication protocol such as WiFi, IEEE, or the like, to perform communication. The wireless communication module may further include a mobile communication module connected to a mobile communication network according to various mobile communication standards such as 3rd generation (3G), 3rd generation partnership project (3GPP), Long Term Evolution (LTE), and the like to perform communication. 
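The selection among plural recognition results and the choice of recognition subject described above can be sketched as follows, under the assumption of hypothetical helper callables (can_reach_server, remote_recognizer, local_stt, ask_user_to_pick); this is an illustration, not the claimed implementation.

# Hypothetical sketch: choosing the recognizer and resolving plural text results.
def resolve_recognition(audio, can_reach_server, remote_recognizer, local_stt, ask_user_to_pick):
    if can_reach_server:
        # Data communication with the voice recognition apparatus (300) is
        # possible, so the server performs recognition and may return several texts.
        candidates = remote_recognizer.recognize(audio)
    else:
        # Otherwise the voice processing unit performs recognition locally.
        candidates = [local_stt.speech_to_text(audio)]
    if len(candidates) > 1:
        # Display the list of texts and let the user's selection command decide.
        return ask_user_to_pick(candidates)
    return candidates[0]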
Meanwhile, if the spoken voice associated with the user command is converted into the text type or is received from the voice recognition apparatus300according to various exemplary embodiments described above, the processor140generates the phonetic symbols for the user command of the text type based on the predefined phonetic symbol set. For example, in response to a user command of a text type called “kangazi” being input, the processor140may generate phonetic symbols [k:ang_a:_zi] from the user command of the text type called “kangazi”. If such phonetic symbols are generated, the processor140analyzes the generated phonetic symbols based on the predetermined suitability determination condition to determine registration suitability for the user command. As the determination result, if it is determined that the registration of the user command is suitable, the processor140registers and stores the user command defined by the user in the storing unit170. Thereafter, in response to the speaking for the user command registered and stored in the storing unit170being input, the processor140may control the operation of the display apparatus100based on the user command associated with the input speaking.FIG.4is a view illustrating a module determining suitability according to an exemplary embodiment of the present disclosure. As illustrated inFIG.4, the module410determining registration suitability may include at least one of a module411analyzing a total number of phonetic symbols, a module413analyzing a configuration of vowels and consonants configuring the phonetic symbols, a module415analyzing a configuration form of the phonetic symbols, a module417analyzing the phonetic symbols for each word configuring the user command, and a module419detecting weak phonetic symbols. Here, the module analyzing the total number of phonetic symbols (hereinafter, referred to as a first condition) is a module determining whether or not the total number of phonetic symbols for the user command includes a predetermined number or more. In addition, the module analyzing the configuration of the vowels and consonants configuring the phonetic symbols (hereinafter, referred to as a second condition) is a module determining whether or not the vowels or the consonants are successively overlapped on the phonetic symbols for the user command. In addition, the module analyzing the configuration form of the phonetic symbols (hereinafter, referred to as a third condition) is a module detecting whether the configuration of the phonetic symbols for the user command is listed in which form based on the predefined phonetic symbol set. In addition, the module analyzing the phonetic symbols for each word (hereinafter, referred to as a fourth condition) is a module determining whether or not the number of respective words configuring the user command and the number of phonetic symbols corresponding to each word are the predetermined number or more, or are less than the predetermined number. In addition, the module detecting the weak phonetic symbols (hereinafter, referred to as a fifth condition) is a module determining whether or not phonetic symbols of a beginning and an end among the phonetic symbols configuring the user command are predefined weak phonetic symbols. 
Here, the predefined weak phonetic symbols may be phonetic symbols for a specific pronunciation of which a frequency band or energy magnitude is decreased or lost by a surrounding environment such as living noise, or the like, such that a recognition rate thereof is degraded. Therefore, the processor140may analyze the phonetic symbols for the user command using at least one of the first to fifth conditions included in the module determining registration suitability to determine registration suitability for the user command. According to an exemplary embodiment, the processor140may analyze the phonetic symbols generated from the user command using the modules corresponding to the first and second conditions among the modules included in the module determining registration suitability to determine registration suitability for the user command. For example, if the user command of the text type called “kangazi” is input, the processor140may generate phonetic symbols [k:ang_a:_zi] from the user command of the text type called “kangazi”. If such phonetic symbols are generated, the processor140determines whether or not the total number of phonetic symbols [k:ang_a:_zi] is the predetermined number or more using a module corresponding to the first condition among the modules included in the module determining registration suitability. For example, if the predetermined number matching the first condition is five and the total number of phonetic symbols [k:ang_a:_zi] is seven, the processor140determines that the total number of phonetic symbols is the predetermined number or more and determines that the user command is matched to the first condition. If the user command is matched to such first condition, the processor140determines whether or not at least one of the vowels and the consonants on the phonetic symbols [k:ang_a:_zi] is configured in a successive form using a module corresponding to the second condition among the modules included in the module determining registration suitability. As the determination result, if at least one of the vowels and consonants is not configured in the successive form, the processor140determines that the user command is matched to the second condition. As such, if the user command is matched to the first and second conditions, the processor140may determine that the registration for the user command “kangazi” is suitable. As another example, if a user command of a text type called “a a a a a” is input, the processor140may generate phonetic symbols [a_a_a_a_a] from the user command of the text type called “a a a a a”. In this case, the processor140determines that vowels of the phonetic symbols [a_a_a_a_a] are successive. As such, if the user command is not matched to at least one of the first and second conditions, the processor140may determine that the registration for the user command “a a a a a” is not suitable. That is, the user command having successive vowels has a problem that the spoken voice of the user spoken with regard to the registered user command may be recognized to be different from the corresponding user command. Therefore, as in the example described above, the processor140may determine that the user command having the successive vowels is not suitable as the user command. 
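The first and second conditions, as exemplified above with "kangazi" and "a a a a a", can be sketched as follows. The threshold of five is taken from the example; the segmentation of a command into phonetic symbols and the reading of "successively overlapped" as an immediately repeated symbol are assumptions made only for this illustration.

# Illustrative check of the first condition (total number of phonetic symbols)
# and the second condition (no successively repeated vowels or consonants).
def matches_first_condition(symbols, min_count=5):
    # First condition: the total number of phonetic symbols must be at least
    # the predetermined number (five in the example above).
    return len(symbols) >= min_count

def matches_second_condition(symbols):
    # Second condition, interpreted here as: no phonetic symbol (vowel or
    # consonant) is immediately repeated, so "a a a a a" -> [a, a, a, a, a] fails.
    return all(prev != curr for prev, curr in zip(symbols, symbols[1:]))

# Usage with an assumed segmentation:
assert matches_first_condition(["s", "k", "a", "i", "p", "T", "V"])   # seven symbols, five or more
assert matches_second_condition(["a", "a", "a", "a", "a"]) is False   # successive vowels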
According to an exemplary embodiment, the processor140may determine the registration suitability for the user command using the modules corresponding to the first and second conditions and the modules corresponding to at least one of the third to fifth conditions among the modules included in the module determining registration suitability. For example, if phonetic symbols [skaip_TV] are generated from a user command of a text type "skype TV", the processor140analyzes the phonetic symbols [skaip_TV] using the modules corresponding to the first and second conditions among the modules included in the module determining registration suitability to determine registration suitability for the corresponding user command. As the determination result, if the total number of phonetic symbols [skaip_TV] is the predetermined number or more and at least one of the vowels and consonants is not successive, the processor140determines that the user command "skype TV" is matched to the first and second conditions. As such, if the user command is matched to the first and second conditions, the processor140analyzes the phonetic symbols [skaip_TV] using the module corresponding to at least one of the third to fifth conditions among the modules included in the module determining registration suitability to determine registration suitability for the corresponding user command. Specifically, the processor140analyzes a configuration form of the phonetic symbols [skaip_TV] by the module corresponding to the third condition to determine whether or not components configuring the corresponding phonetic symbols are distributed in the order corresponding to a predefined pattern. For example, a first pattern which is predefined may be defined in the order of a consonant, a consonant, a vowel, a vowel, a consonant, a consonant, and the like; a second pattern may be defined in the order of a vowel, a consonant, a symbol, an affricate, a vowel, a consonant, and the like; and a third pattern may be defined in the order of a consonant, a vowel, a consonant, a vowel, a consonant, a vowel, a consonant, and the like. In this case, the processor140may determine that the components configuring the phonetic symbols [skaip_TV] are listed based on the first pattern among the first to third patterns. Meanwhile, as in the example described above, the phonetic symbols [k:ang_a:_zi] may be generated from the user command of the text type "kangazi". In this case, the processor140may determine that the components configuring the phonetic symbols [k:ang_a:_zi] are listed based on the third pattern among the first to third patterns. As such, if it is determined that the components configuring the phonetic symbols generated from the user command of the text type are listed based on the predefined pattern, the processor140determines that the user command is matched to the third condition. If the user command is matched to the third condition, the processor140determines whether or not the number of words configuring the user command and the number of phonetic symbols for each word are the predetermined number or more, or are less than the predetermined number by the module corresponding to the fourth condition. As in the example described above, the phonetic symbols [skaip_TV] generated with regard to the user command "skype TV" may be matched to the third condition.
In this case, the processor140determines whether or not the number of words configuring the user command and the number of phonetic symbols for each word among the phonetic symbols [skaip_TV] are the predetermined number or more, or are less than the predetermined number by the module corresponding to the fourth condition. For example, the user command which is suitable for registration may be a combination of two or more words, and the phonetic symbols for each word may be predetermined to be two or more. Meanwhile, the user command "skype TV" may be configured of two words "skype" and "TV", and the phonetic symbols for each of "skype" and "TV" may be [skaip] and [TV]. In this case, the user command "skype TV" may be configured of the two words and the number of phonetic symbols of each word may be two or more. As such, if the number of words configuring the user command "skype TV" and the number of phonetic symbols for each word are the predetermined number or more or are less than the predetermined number, the processor140may determine that the user command "skype TV" is matched to the fourth condition. If the user command is matched to the fourth condition, the processor140determines whether or not phonetic symbols of a beginning and an end of the phonetic symbols for each word configuring the user command include the predefined weak phonetic symbols by the module corresponding to the fifth condition. Here, the predefined weak phonetic symbol may be a phonetic symbol for a specific pronunciation of which a frequency band or energy magnitude is decreased or lost by a surrounding environment such as living noise, or the like, such that a recognition rate thereof is degraded. In general, in a case in which a pronunciation begins or ends with phonetic symbols such as [s], [p], [f], and [k], the pronunciation associated with the corresponding phonetic symbols has a frequency band or energy magnitude which is decreased or lost by a surrounding environment, such that a recognition rate thereof may be degraded. Therefore, the processor140analyzes the phonetic symbols for each of the words "skype" and "TV" configuring the user command "skype TV" to determine whether or not the beginning or the end of the phonetic symbols includes the predefined weak phonetic symbol. As described above, the phonetic symbols of a word "skype" may be "[skaip]", and the beginning and the end of the above-mentioned phonetic symbols may include [s] and [p]. Therefore, the processor140may determine that the user command "skype TV" is not matched to the fifth condition. As such, if a user command whose registration is determined to be suitable by the modules corresponding to the first and second conditions is determined to be unsuitable by the module corresponding to at least one of the third to fifth conditions, the processor140may finally determine that the registration of the corresponding user command is not suitable. According to an exemplary embodiment, the processor140may determine registration suitability for the user command for the respective modules corresponding to the first to fifth conditions included in the module determining registration suitability and may finally determine registration suitability for the user command based on a result value according to the determination result.
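The third to fifth conditions illustrated with "skype TV" can be sketched in a similar non-limiting way. The vowel/consonant classification, the treatment of a pattern as a repeating template, and the particular symbol segmentation below are assumptions made for illustration; only the weak symbols [s], [p], [f], [k] and the "two words, two symbols each" thresholds are taken from the example above.

# Illustrative sketch of the third, fourth, and fifth conditions.
from itertools import cycle, islice

WEAK_SYMBOLS = {"s", "p", "f", "k"}  # weak phonetic symbols named in the example

def classify(symbol):
    # Assumption: classify a symbol as a vowel or consonant by its first letter.
    return "vowel" if symbol[:1].lower() in "aeiou" else "consonant"

def matches_third_condition(symbols, patterns):
    # Third condition: the consonant/vowel sequence must follow one of the
    # predefined patterns; here a pattern is treated as a repeating template.
    classes = [classify(s) for s in symbols]
    return any(classes == list(islice(cycle(p), len(classes))) for p in patterns)

def matches_fourth_condition(words_to_symbols, min_words=2, min_symbols=2):
    # Fourth condition: at least the predetermined number of words, and at least
    # the predetermined number of symbols per word (two and two in the example).
    return (len(words_to_symbols) >= min_words and
            all(len(syms) >= min_symbols for syms in words_to_symbols.values()))

def matches_fifth_condition(words_to_symbols):
    # Fifth condition: no word's phonetic symbols may begin or end with a
    # predefined weak phonetic symbol.
    return all(syms[0] not in WEAK_SYMBOLS and syms[-1] not in WEAK_SYMBOLS
               for syms in words_to_symbols.values())

# Usage with an assumed segmentation of "skype TV":
first_pattern = ["consonant", "consonant", "vowel", "vowel", "consonant", "consonant"]
assert matches_third_condition(["s", "k", "a", "i", "p"], [first_pattern])
skype_tv = {"skype": ["s", "k", "a", "i", "p"], "TV": ["T", "V"]}
assert matches_fourth_condition(skype_tv)        # two words, two or more symbols each
assert not matches_fifth_condition(skype_tv)     # "skype" begins with [s] and ends with [p]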
As described above, the processor140determines registration suitability for the user command for the respective modules corresponding to the first to fifth conditions included in the module determining registration suitability. Thereafter, the processor140may calculate a result value for the user command based on the registration suitability determination result for each module and may finally determine registration suitability for the user command based on the calculated result value. According to the exemplary embodiment, the processor140determines registration suitability for the user command for the respective modules corresponding to the first to fifth conditions included in the module determining registration suitability. If it is determined that the registration with regard to at least one condition of the first to fifth conditions is not suitable, the processor140may sum predetermined reference values for the respective modules corresponding to other conditions except for the condition in which the registration is not suitable, among the first to fifth conditions to calculate the result value for the user command. Here, the reference values set for the respective modules corresponding to the first to fifth conditions may be set to be equal to each other or to be different from each other. In a case in which different reference values are set for the respective modules corresponding to the first to fifth conditions, a reference value of a module corresponding to the highest priority in a registration suitability determination reference among the modules corresponding to the first to fifth conditions may be set to be highest and a reference value of a module corresponding to the lowest priority may be set to be lowest. If the result value corresponding to an analysis result of the user command is calculated through the exemplary embodiment described above, the processor140may finally determine registration suitability for the user command based on the calculated result value. FIG.5is an illustrative view determining registration suitability for the user command based on the result value calculated by the module determining registration suitability in the display apparatus according to the exemplary embodiment of the present disclosure. The processor140may determine registration suitability for the user command for the respective modules corresponding to the first to fifth conditions included in the module determining registration suitability and may calculate the result value for the user command based on the registration suitability determination result for each module. If the result value for the user command is calculated, the processor140may determine registration suitability for the user command depending on sections to which the calculated result value belongs, with reference to a registration determination reference model500illustrated inFIG.5. Specifically, if the result value corresponding to the analysis result of the user command belongs to a first threshold section510, the processor140determines that the registration for the user command is not suitable. Meanwhile, if the result value corresponding to the analysis result of the user command belongs to a second threshold section530, the processor140determines that the registration for the user command is suitable. 
Meanwhile, if the result value corresponding to the analysis result of the user command belongs to a third threshold section520between the first and second threshold sections, the processor140may determine that the registration for the user command is suitable according to a selection command of the user for the user command. Meanwhile, if the result value corresponding to the analysis result of the user command belongs to the second threshold section530, the processor140may determine whether or not the registration for the user command is suitable as the control command or is suitable as the trigger command according to the registration request information of the user. Specifically, in a state in which the registration request information for controlling the operation of the display apparatus100is input, the result value corresponding to the analysis result of the user command may belong to a 2-1-th threshold section531of the second threshold section530. In this case, the processor140may determine that the registration for the user command is suitable as the control command for controlling the operation of the display apparatus100. Meanwhile, in a state in which the registration request information for operating the display apparatus100in the voice recognition mode is input, the result value corresponding to the analysis result of the user command may belong to a 2-2-th threshold section533of the second threshold section530. In this case, the processor140may determine that the registration for the user command is suitable as the trigger command for operating the display apparatus100in the voice recognition mode. Meanwhile, in the state in which the registration request information for operating the display apparatus100in the voice recognition mode is input, if the result value corresponding to the analysis result of the user command belongs to a 2-1-th threshold section531of the second threshold section530, the processor140may determine that the registration for the user command is suitable as the trigger command for operating the display apparatus100in the voice recognition mode according to the selection command of the user for the user command. Meanwhile, according to an aspect of the present disclosure, after the processor140determines similarity between the spoken voice of the user and a plurality of commands which are pre-registered or whether or not the spoken voice of the user corresponds to a prohibited command, the processor140may determine registration suitability for the user command according to various exemplary embodiments described above. According to an exemplary embodiment, the processor140measures similarity between the phonetic symbols generated from the user command and pre-stored phonetic symbols for a plurality of commands using a similarity algorithm such as a confusion matrix to calculate reliability values accordingly. Thereafter, the processor140compares the respective calculated reliability values with a predetermined threshold value to determine whether or not the respective reliability values are less than the predetermined threshold value. As the determination result, if at least one reliability value is the predetermined threshold value or more, the processor140determines that the user command and at least one pre-registered command are similar to each other and determines that the registration for the user command is not suitable. 
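The result-value computation and the threshold-section decision described above and inFIG.5can be sketched as follows. The per-condition reference values and the section boundaries are illustrative assumptions only, since the disclosure does not fix numeric values, and the sub-sections 2-1 and 2-2 (control command versus trigger command) are collapsed into a single "suitable" outcome here for brevity.

# Hypothetical scoring of a user command: sum the reference values of the
# satisfied conditions, then classify the total by threshold section.
REFERENCE_VALUES = {1: 10, 2: 10, 3: 20, 4: 25, 5: 35}  # assumed per-condition weights

def result_value(condition_results):
    # condition_results: dict mapping condition number (1..5) -> bool
    return sum(REFERENCE_VALUES[c] for c, ok in condition_results.items() if ok)

def classify_result(value, first_max=40, third_max=70):
    # Assumed section boundaries: first threshold section -> not suitable,
    # third (middle) section -> suitable only upon the user's selection command,
    # second section -> suitable (as control or trigger command per the request).
    if value <= first_max:
        return "not suitable"
    if value <= third_max:
        return "suitable upon user confirmation"
    return "suitable"

# Usage: a command failing only the fifth condition.
score = result_value({1: True, 2: True, 3: True, 4: True, 5: False})
print(score, classify_result(score))  # 65 -> "suitable upon user confirmation"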
Meanwhile, if all of the reliability values are less than the predetermined threshold value, the processor140determines that the registration for the user command is suitable. According to an exemplary embodiment, the processor140determines whether or not the user command is an unregistrable command with reference to the prohibited commands which are registered and stored in the storing unit170. As the determination result, if the user command is associated with at least one prohibited command, the processor140determines that the registration for the user command is not suitable. Meanwhile, if the user command is not associated with at least one prohibited command, the processor140determines that the registration for the user command is suitable. In this case, the processor140may perform at least one of a first determination operation of determining whether or not the user command is similar to the pre-registered command and a second determination operation of determining whether or not the corresponding user command is the prohibited command, as described above. If the registration suitability for the user command is primarily determined by at least one of the first determination operation and the second determination operation described above, the processor140determines registration suitability for the user command according to various exemplary embodiments described above. If it is determined that the registration of the user command is suitable, the processor140may provide the registration suitability determination result of the user command through the output unit120. Specifically, if it is determined that the registration for the user command is suitable, the audio output unit123outputs an audio for the user command according to a control command of the processor140. In a state in which the above-mentioned audio is output, if the spoken voice of the user is input within the predetermined threshold time, the processor140registers and stores the user command in the storing unit170according to a degree of similarity between the text for the user command and the text for the spoken voice of the user. Specifically, if the spoken voice is input after the audio for the user command is output, the processor140may convert the input spoken voice into the text type or receive the voice recognition result converted into the text type from the voice recognition apparatus300. Thereafter, the processor140measures similarity between the phonetic symbols for the user command and the phonetic symbols for the spoken voice using the similarity algorithm such as the confusion matrix, and registers and stores the user command in the storing unit170if the similarity value according to the measurement is the predetermined threshold value or more. Meanwhile, if it is determined that the registration for the user command is not suitable, the display unit121displays an analysis result analyzed according to the predetermined suitability determination conditions and a guide UI guiding a registrable user command, according to the control command of the processor140. Accordingly, the user may re-input a user command matched to the registration determination condition with reference to the guide UI displayed on a screen of the display apparatus100.
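The two preliminary checks described above (similarity to pre-registered commands and membership in the prohibited commands) can be sketched as follows. The confusion-matrix-based similarity measure of the disclosure is replaced here by difflib's sequence ratio purely as a stand-in, and the 0.8 threshold is an assumption.

# Sketch of the first and second determination operations.
from difflib import SequenceMatcher

def too_similar_to_registered(symbols, registered_symbol_lists, threshold=0.8):
    # First determination operation: if any reliability value reaches the
    # predetermined threshold, the command is deemed similar and not suitable.
    for registered in registered_symbol_lists:
        reliability = SequenceMatcher(None, symbols, registered).ratio()
        if reliability >= threshold:
            return True
    return False

def is_prohibited(command_text, prohibited_commands):
    # Second determination operation: commands associated with a stored
    # prohibited command are not registrable.
    return command_text.lower() in {p.lower() for p in prohibited_commands}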
Hereinafter, in a case in which the registration for the user command is not suitable in the display apparatus100, an operation of providing a determination result according to the above-mentioned unsuitable registration will be described in detail with reference toFIGS.6to8. FIG.6is a first illustrative view providing a registration unsuitability determination result for the user command in the display apparatus according to the exemplary embodiment of the present disclosure. As illustrated inFIG.6, a first user command610defined by the user may be determined that a registration thereof is not suitable, by the module corresponding to the fourth condition among the modules included in the module determining registration suitability described above. As described above, the module corresponding to the fourth condition is the module determining whether or not the number of respective words configuring the user command and the number of phonetic symbols corresponding to each word are the predetermined number or more, or are less than the predetermined number. Therefore, if the number of respective words configuring the first user command610exceeds the predetermined number, the processor140may determine that the registration for the first user command610is not suitable. As such, if it is determined that the registration for the first user command610is not suitable, the display apparatus100may display a guide UI620“this is an overlong command” on the screen thereof through the display unit121. Therefore, the user may re-input a user command consisting of words smaller than the first user command610with reference to the guide UI620displayed on the screen. FIG.7is a second illustrative view providing the registration unsuitability determination result for the user command in the display apparatus according to an exemplary embodiment of the present disclosure. As illustrated inFIG.7, a second user command710defined by the user may be determined that a registration thereof is not suitable, by the module corresponding to the fourth condition among the modules included in the module determining registration suitability described above. As described above, the module corresponding to the fourth condition is the module determining whether or not the number of respective words configuring the user command and the number of phonetic symbols corresponding to each word are the predetermined number or more, or are less than the predetermined number. Therefore, if the number of respective words configuring the second user command710is less than the predetermined number, the processor140may determine that the registration for the second user command710is not suitable. As such, if it is determined that the registration for the second user command710is not suitable, the display apparatus100may display a guide UI720including determination result information “an input command is not suitable for registration” and recommend information for the user command such as “Recommend: Run Skype, Skype TV” on the screen thereof through the display unit121. Therefore, the user may re-input a user command that he or she desires with reference to the user command recommended with regard to the second user command710through the guide UI720displayed on the screen. FIG.8is a third illustrative view providing the registration unsuitability determination result for the user command in the display apparatus according to an exemplary embodiment of the present disclosure. 
As illustrated inFIG.8, a third user command810defined by the user may be determined that a registration thereof is not suitable, by the module corresponding to the fifth condition among the modules included in the module determining registration suitability described above. As described above, the module corresponding to the fifth condition is the module determining whether or not the phonetic symbols of the beginning and the end among the phonetic symbols configuring the user command are the predefined weak phonetic symbols. Therefore, if the phonetic symbol of at least one of the beginning and the end on the phonetic symbols for the respective words configuring a third user command810is the weak phonetic symbol, the processor140may determine that the registration for the third user command810is not suitable. As such, if it is determined that the registration for the third user command810is not suitable, the display apparatus100may display a guide UI820including determination result information “this includes an unsuitable pronunciation” and weak pronunciation information guiding the unsuitable pronunciation such as “Skype [S,Pe]” on the screen thereof through the display unit121. Therefore, the user may re-input a user command excluding the unsuitable weak pronunciation with reference to the guide UI820displayed on the screen. Hereinabove, the operations of registering the user commands defined by the user in the display apparatus100according to the present disclosure have been described in detail. Hereinafter, a method for registration of a user command defined by the user in the display apparatus100according to the present disclosure will be described in detail. FIG.9is a flow chart of a method for determining registration suitability for the user command in the display apparatus according to an exemplary embodiment of the present disclosure. As illustrated inFIG.9, if the user command defined by the user is input, the display apparatus100determines whether the input user command is a command of a text type or a spoken voice (operation S910and operation S920). Specifically, if the registration request information for the user command defined by the user is input, the display apparatus100enters a registration performing mode for the user command. Here, the registration request information may be request information for registering the user command associated with the trigger command for entering the voice recognition mode or request information for registering the user command associated with the control command for controlling the operation of the display apparatus100. In a state in which such registration request information is input, the display apparatus100determines whether or not a user command corresponding to the registration request of the user is input from the input apparatus200. As the determination result, if the spoken voice for the user command is input through the input apparatus200such as a microphone (not illustrated) or a remote controller, the display apparatus100receives the voice recognition result for the spoken voice converted into the text from the voice recognition apparatus300(operation S930). However, the present disclosure is not limited thereto. If the data communication with the voice recognition apparatus300is not performed or the spoken voice for the user command is input through the microphone, the display apparatus100may convert the spoken voice of the user into the text using the speech to text (STT) algorithm. 
Meanwhile, the voice recognition apparatus300transmitting the voice recognition result for the spoken voice associated with the user command to the display apparatus100may transmit at least one voice recognition result of the text type with regard to the spoken voice of the user to the display apparatus100. Therefore, in a case in which a plurality of texts for the spoken voice of the user are received from the voice recognition apparatus300, the display apparatus100displays a list for the plurality of texts. Thereafter, if a selection command for one text is input, the display apparatus100may determine a text corresponding to the input selection command as a text for the spoken voice of the user. If the user command of the text type is input according to various exemplary embodiments described above, the display apparatus100generates phonetic symbols for the user command of the text type based on the predefined phonetic symbol set (operation S940). Thereafter, the display apparatus100analyzes the generated phonetic symbols based on the predetermined suitability determination condition to determine registration suitability for the user command (operation S950). Thereafter, the display apparatus100provides the registration suitability determination result for the user command (operation S960). Specifically, the display apparatus100analyzes the pre-generated phonetic symbols with regard to the user command according to the predetermined registration suitability determination module with regard to the suitability determination condition to determine registration suitability for the user command. Here, the module determining registration suitability may include at least one of the module analyzing a total number of phonetic symbols (first condition), the module analyzing a configuration of vowels and consonants configuring the phonetic symbols (second condition), the module analyzing a configuration form of the phonetic symbols (third condition), the module analyzing the phonetic symbols for each word configuring the user command (fourth condition), and the module detecting weak phonetic symbols (fifth condition), as described inFIG.4. Since the respective modules have been described in detail with reference toFIG.4, a detail description thereof will be omitted. According to an exemplary embodiment, the display apparatus100may analyze the phonetic symbols generated from the user command using the modules corresponding to the first and second conditions among the modules included in the module determining registration suitability to determine registration suitability for the user command. According to an exemplary embodiment, the display apparatus100may determine registration suitability for the user command using the modules corresponding to the first and second conditions and the module corresponding to at least one of the third to fifth conditions among the modules included in the module determining registration suitability. According to an exemplary embodiment, the display apparatus100may determine registration suitability for the user command for the respective modules corresponding to the first to fifth conditions included in the module determining registration suitability and may finally determine registration suitability for the user command based on a result value according to the determination result. 
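Operations S910 to S960 can be summarized, purely as a non-limiting sketch, by the orchestration below; the injected callables (stt, g2p, the per-condition checks, score_and_classify, present) are hypothetical and correspond only loosely to the modules described above.

# High-level sketch of the registration-suitability flow of FIG. 9.
def determine_registration(command, is_spoken, stt, g2p, condition_checks, score_and_classify, present):
    # S910/S920/S930: obtain the user command as a text type (spoken commands
    # are converted internally or by the voice recognition apparatus 300).
    text = stt(command) if is_spoken else command
    # S940: generate phonetic symbols based on the predefined phonetic symbol set.
    symbols = g2p(text)
    # S950: analyze the symbols by the predetermined suitability determination conditions.
    results = {num: check(symbols) for num, check in condition_checks.items()}
    verdict = score_and_classify(results)
    # S960: provide the determination result (guide UI and/or audio output).
    present(text, verdict)
    return verdict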
Specifically, the display apparatus100determines registration suitability for the user command for the respective modules corresponding to the first to fifth conditions included in the module determining registration suitability. If it is determined that the registration with regard to at least one condition of the first to fifth conditions is not suitable, the display apparatus100may sum predetermined reference values for respective modules corresponding to other conditions except for the condition in which the registration is not suitable, among the first to fifth conditions to calculate the result value for the user command. Here, the reference values set for the respective modules corresponding to the first to fifth conditions may be set to be equal to each other or to be different from each other. In a case in which different reference values are set for the respective modules corresponding to the first to fifth conditions, a reference value of a module corresponding to the highest priority in a registration suitability determination reference among the modules corresponding to the first to fifth conditions may be set to be highest and a reference value of a module corresponding to the lowest priority may be set to be lowest. Therefore, if the result value for the user command is calculated by the module determining suitability described above, the display apparatus100may determine registration suitability for the user command depending on sections to which the calculated result value belongs, with reference to a registration determination reference model. Specifically, as described inFIG.5, if the result value corresponding to the analysis result of the user command belongs to the first threshold section510, the display apparatus100determines that the registration for the user command is not suitable. Meanwhile, if the result value corresponding to the analysis result of the user command belongs to the second threshold section530, the display apparatus100determines that the registration for the user command is suitable. Meanwhile, if the result value corresponding to the analysis result of the user command belongs to the third threshold section520between the first and second threshold sections, the display apparatus100may determine that the registration for the user command is suitable according to the selection command of the user for the user command. Meanwhile, the display apparatus100may determine whether the registration for the user command belonging to the second threshold section is suitable as the control command or is suitable as the trigger command according to the registration request information of the user. Meanwhile, according to an aspect of the present disclosure, after the display apparatus100determines similarity between the spoken voice of the user and a plurality of commands which are pre-registered or whether or not the spoken voice of the user corresponds to a prohibited command, the display apparatus100may determine registration suitability for the user command according to various exemplary embodiments described above. According to an exemplary embodiment, the display apparatus100determines registration suitability for the user command according to a degree of similarity between a plurality of pre-registered commands and the user command (first determination operation). 
As the determination result, if it is determined that the user command is similar to at least one of the plurality of commands, the display apparatus100determines that the registration for the user command is not suitable. Meanwhile, if it is determined that the user command is not similar to the plurality of commands, the display apparatus100may perform an operation of determining registration suitability for the user command according to various exemplary embodiments described above. The display apparatus100according to an exemplary embodiment determines whether the user command is the unregistrable command with reference to the pre-registered prohibited commands (second determination operation). As the determination result, if the user command is associated with at least one prohibited command, the display apparatus100determines that the registration for the user command is not suitable. Meanwhile, if the user command is not associated with at least one prohibited command, the display apparatus100may perform the operation of determining registration suitability for the user command according to various exemplary embodiments described above. In this case, the display apparatus100may perform at least one of the first determination operation of determining whether or not the user command is similar to the pre-registered command and the second determination operation of determining whether or not the corresponding user command is the prohibited command. If registration suitability for the user command is primarily determined by at least one of the first determination operation and the second determination operation, the display apparatus100provides the registration suitability determination result for the user command. Specifically, if it is determined that the registration for the user command is not suitable, the display apparatus100displays the analysis result information analyzed according to the module determining registration suitability predetermined with regard to the suitability determination conditions and the guide UI guiding a registrable user command, on the screen thereof. Accordingly, the user may re-input or speak a registrable user command with reference to the guide UI displayed on the screen of the display apparatus100. Meanwhile, if it is determined that the registration for the user command is suitable, the display apparatus100outputs an audio for the user command. After the audio for the user command described above is output, the display apparatus100may perform the registration for the corresponding user command according to the following operations. FIG.10is a flow chart of a method for registration of a user command in a display apparatus according to an exemplary embodiment of the present disclosure. As illustrated inFIG.10, if it is determined that the registration for the user command is suitable, the display apparatus100outputs the audio for the user command (operation S1010). Thereafter, the display apparatus100determines whether or not the spoken voice of the user is input within a predetermined threshold time (operation S1020). As the determination result, if the spoken voice is input within the predetermined threshold time, the display apparatus100registers the user command according to a degree of similarity between the text for the user command and the text for the input spoken voice (operation S1030and operation S1040).
Specifically, if the spoken voice is input after the audio for the user command is output, the display apparatus100may convert the input spoken voice into text or receive the voice recognition result converted into text from the voice recognition apparatus300. Thereafter, the display apparatus100measures similarity between the phonetic symbols for the user command and the phonetic symbols for the spoken voice using a similarity algorithm such as a confusion matrix, and requests the user to speak again if the measured similarity value is less than the predetermined threshold value. Thereafter, if the spoken voice of the user is re-input, the display apparatus100re-performs the above-mentioned operations (operation S1030and operation S1040). After this re-performing, the display apparatus100ends the operation of performing the registration for the user command, regardless of whether the re-measured similarity value is equal to or greater than the predetermined threshold value or remains less than it. Meanwhile, if the similarity value measured between the user command and the spoken voice by the operation (operation S1040) is the predetermined threshold value or more, the display apparatus100registers and stores the user command (operation S1050). After the user command defined by the user is registered by the above-mentioned operations, the user may control the operation of the display apparatus100by the spoken voice associated with the pre-registered user command. In addition, the method for registration of a user command as described above may be implemented in at least one execution program for executing the method, in which the execution program may be stored in a non-transitory computer readable medium. The method for registration of a user command of the display apparatus according to various exemplary embodiments described above may be implemented in a program so as to be provided to the display apparatus. Particularly, the program including the method for registration of a user command of the display apparatus may be stored and provided in a non-transitory computer readable medium. The non-transitory computer readable medium does not refer to a medium storing data for a short period such as a register, a cache, a memory, or the like, but refers to a machine-readable medium semi-permanently storing the data. Specifically, the programs described above may be stored and provided in the non-transitory computer readable medium such as a compact disc (CD), a digital versatile disk (DVD), a hard disk, a Blu-ray disk, a universal serial bus (USB), a memory card, a read-only memory (ROM), or the like. According to various exemplary embodiments of the present disclosure as described above, the display apparatus may register the user command, which is resistant to misrecognition and guarantees a high recognition rate, among the user commands defined by the user. Hereinabove, the present disclosure has been described with reference to the exemplary embodiments thereof. Although the exemplary embodiments of the present disclosure have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the disclosure as disclosed in the accompanying claims.
Accordingly, such modifications, additions and substitutions should also be understood to fall within the scope of the present disclosure.
11862167
DESCRIPTION OF EMBODIMENTS Embodiments of a spoken dialogue system according to the present disclosure will be described with reference to the drawings. If possible, the same reference numerals are given to the same portions and repeated description will be omitted. FIG.1is a diagram illustrating a functional configuration of a spoken dialogue system1according to an embodiment. The spoken dialogue system1is a system that performs a dialogue with a user by outputting system speech formed by a voice. As illustrated inFIG.1, the spoken dialogue system1includes a model generation device10and a spoken dialogue device20. The spoken dialogue system1can include storage units such as a dialogue scenario storage unit30, a learning data storage unit40, and a model storage unit50. The spoken dialogue system1may be configured as a single device, or one or more of the model generation device10, the spoken dialogue device20, the dialogue scenario storage unit30, the learning data storage unit40, and the model storage unit50may be configured as a single device. The model generation device10is a device that generates a barge-in speech determination model that determines whether to engage barge-in speech in spoken dialogue control. As illustrated inFIG.1, the model generation device10includes a learning speech acquisition unit11, a user speech feature extraction unit12, a system speech feature extraction unit13, an identification information granting unit14, a label acquisition unit15, a model generation unit16, and a model output unit17as functional units. The spoken dialogue device20is a device that performs dialogue with a user by outputting system speech. The spoken dialogue device20includes an acquisition unit21, a recognition unit22, a user speech feature acquisition unit23, a system speech feature acquisition unit24, a barge-in speech control unit25, a dialogue control unit26, a response generation unit27, and an output unit28as functional units. The functional units will be described in detail later. The block diagram illustrated inFIG.1shows blocks in units of functions. The functional blocks (constituent units) are realized by at least one of hardware and software, or a combination thereof. A method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one physically or logically combined device or may be realized by connecting two or more physically or logically separate devices directly or indirectly (for example, in a wired or wireless manner) and using the plurality of devices. The functional blocks may be realized by combining software with the one device or the plurality of devices. The functions include determining, deciding, judging, calculating, computing, processing, deriving, investigating, looking up, ascertaining, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, considering, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assigning, but the present disclosure is not limited thereto. For example, a functional block (constituent unit) that realizes a transmitting function is called a transmitting unit or a transmitter. As described above, a realization method is not particularly limited.
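A skeletal sketch of how the functional units of FIG.1might be arranged in code is shown below. The class and method names simply mirror the units named above and are assumptions for illustration; every body is left as a stub.

```python
# Skeleton only: each method corresponds to one functional unit of FIG. 1.


class ModelGenerationDevice:                    # device 10
    def acquire_learning_speech(self): ...      # learning speech acquisition unit 11
    def extract_user_features(self): ...        # user speech feature extraction unit 12
    def extract_system_features(self): ...      # system speech feature extraction unit 13
    def grant_identification_info(self): ...    # identification information granting unit 14
    def acquire_labels(self): ...               # label acquisition unit 15
    def generate_model(self): ...               # model generation unit 16
    def output_model(self): ...                 # model output unit 17


class SpokenDialogueDevice:                     # device 20
    def acquire(self): ...                      # acquisition unit 21
    def recognize(self): ...                    # recognition unit 22
    def acquire_user_features(self): ...        # user speech feature acquisition unit 23
    def acquire_system_features(self): ...      # system speech feature acquisition unit 24
    def control_barge_in(self): ...             # barge-in speech control unit 25
    def control_dialogue(self): ...             # dialogue control unit 26
    def generate_response(self): ...            # response generation unit 27
    def output(self): ...                       # output unit 28
```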
For example, the model generation device10and a spoken dialogue device20according to an embodiment of the present disclosure may function as a computer.FIG.2is a diagram illustrating an example of a hardware configuration of the model generation device10and the spoken dialogue device20according to the embodiment of the present disclosure. The model generation device10and the spoken dialogue device20may be physically configured as a computer device that includes a processor1001, a memory1002, a storage1003, a communication device1004, an input device1005, an output device1006, and a bus1007. In the following description, the word “device” can be replaced with “circuit,” “device,” “unit,” or the like. The hardware configuration of the model generation device10and a spoken dialogue device20may include one device or a plurality of the devices illustrated in the drawing or may be configured not to include some of the devices. Each function in the model generation device10and the spoken dialogue device20is realized by reading predetermined software (a program) on hardware such as the processor1001and the memory1002so that the processor1001performs calculation and controls the communication device1004performing communication or reading and/or writing of data on the memory1002and the storage1003. The processor1001controls the entire computer, for example, by operating an operating system. The processor1001may be configured as a central processing unit (CPU) including an interface with a peripheral device, a control device, a calculation device, and a register. For example, the functional units11to17,21to28, and the like illustrated inFIG.1may be realized by the processor1001. The processor1001reads a program (a program code), a software module, data, and the like from the storage1003and/or the communication device1004to the memory1002to perform various processes. As the program, a program causing a computer to perform at least some of the operations described in the above-described embodiment is used. For example, the functional units11to17and21to28in the model generation device10and the spoken dialogue device20may be realized by a control program that is stored in the memory1002and operates in the processor1001. It is described above that the various processes described above are performed by one processor1001, but they may be performed simultaneously or sequentially by two or more processors1001. The processor1001may be mounted on one or more chips. The program may be transmitted from a network via an electric communication line. The memory1002is a computer-readable recording medium and may be configured by at least one of, for example, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a random access memory (RAM), and the like. The memory1002may be called a register, a cache, a main memory (a main storage device), or the like. The memory1002can store a program (a program code), a software module, and the like that can be executed to implement a model generation method and a spoken dialogue method according to an embodiment of the present disclosure. 
The storage1003is a computer-readable recording medium and may be configured by at least one of, for example, an optical disc such as a compact disc ROM (CD-ROM), a hard disk drive, a flexible disk, a magneto-optic disc (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, a magnetic strip, and the like. The storage1003may also be called an auxiliary storage device. The above-described storage medium may be, for example, a database, a server, or another appropriate medium including the memory1002and/or the storage1003. The communication device1004is hardware (a transceiver device) that performs communication between computers via a wired and/or wireless network and is also, for example, a network device, a network controller, a network card, a communication module, or the like. The input device1005is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor) that receives an input from the outside. The output device1006is an output device (for example, a display, a speaker, or an LED lamp) that performs an output to the outside. The input device1005and the output device1006may be configured to be integrated (for example, a touch panel). The devices such as the processor1001and the memory1002are each connected by the bus1007to communicate information. The bus1007may be configured using a single bus or may be configured using different buses between respective devices. The model generation device10and the spoken dialogue device20may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), and a field programmable gate array (FPGA), and some or all functional blocks may be realized by the hardware. For example, the processor1001may be mounted using at least one type of the hardware. Referring back toFIG.1, each storage unit included in the spoken dialogue system1will be simply described. The dialogue scenario storage unit30is a storage unit storing a dialogue scenario that has mutual response rules between user speech and system speech. The dialogue scenario storage unit30can include response candidates which are candidates for responses assumed for system speech from users. The learning data storage unit40is a storage unit that stores learning data provided to machine learning to generate a barge-in speech determination model to be described in detail later. The learning data includes user speech and immediately previous system speech which is system speech output immediately before the user speech. The model storage unit50is a storage unit that stores a barge-in speech determination model generated by the model generation device10. The spoken dialogue device20determines whether to engage barge-in speech in spoken dialogue control using the barge-in speech determination model stored in the model storage unit50. Next, each functional unit of the model generation device10will be described. The learning speech acquisition unit11acquires user speech formed by a voice produced by a user and immediately previous system speech which is system speech output immediately before the user speech in a spoken dialogue. FIG.3is a diagram illustrating an example of user speech and immediately previous system speech acquired by the learning speech acquisition unit11. 
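Before turning to the example of FIG.3, the kind of records the three storage units described above might hold can be sketched as follows. The field names and types are assumptions for illustration; the document only states that a dialogue scenario pairs system speech with assumed response candidates and that learning data pairs user speech with the immediately previous system speech.

```python
from dataclasses import dataclass, field


@dataclass
class ScenarioEntry:                      # record in the dialogue scenario storage unit 30
    system_speech: str
    response_candidates: list = field(default_factory=list)


@dataclass
class LearningSample:                     # record in the learning data storage unit 40
    user_speech_wav: str                  # path to the user's recorded voice (assumed)
    prev_system_speech_wav: str           # immediately previous system speech (assumed)
    prev_system_speech_text: str


scenario = ScenarioEntry(
    system_speech="Do you take an express train?",
    response_candidates=["Yes", "No", "I take an express train",
                         "I do not take an express train"],
)
```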
As illustrated inFIG.3, the learning speech acquisition unit11acquires user speech su. The user speech su is, for example, data of a voice “tokkyu ressha wo riyo shimasu” produced by the user. The user speech su may be speech of one predetermined section in a series of utterances produced by the user. The speech of the one section is detected by, for example, a known technology for voice section detection (voice activity detection). One section of the speech can be set as, for example, a series of sounded portions partitioned by silent portions (pauses) for a predetermined time or more in a series of speech. Specifically, for example, two sections “SOUDESUNE” and “IITOOMOIMASU” are extracted from the speech “SOUDESUNE . . . (pause) . . . IITOOMOIMASU.” The learning speech acquisition unit11acquires immediately previous system speech ss in association with the user speech su. The immediately previous system speech ss is, for example, data of a voice “tokkyu ressha wo riyo shimasuka” produced by the system. FIG.4is a diagram illustrating a second example of user speech and immediately previous system speech acquired by the learning speech acquisition unit11. As illustrated inFIG.4, the learning speech acquisition unit11acquires user speech su-2(su). The user speech su2is, for example, data of a voice “I take an express train” produced by the user. The learning speech acquisition unit11acquires immediately previous system speech ss-2(ss) in association with the user speech su-2. The immediately previous system speech ss-2is, for example, data of a voice “Do you take an express train?” produced by the system. Based on the user speech, the user speech feature extraction unit12extracts a user speech feature series obtained by dividing the user speech su into user speech elements of a time with a predetermined length and chronologically disposing acoustic features of the user speech elements. Based on the immediately previous system speech, the system speech feature extraction unit13extracts a system speech feature series obtained by dividing the immediately previous system speech ss into system speech elements of a time with a predetermined length and chronologically disposing acoustic features of the system speech elements. FIG.5is a diagram schematically illustrating examples of a user speech feature series and a system speech feature series. In the embodiment, the user speech feature extraction unit12divides the user speech su into a plurality of user speech frames fu. The user speech frame fu constitutes an example of a user speech element. The length of one frame can be a time of any predetermined length and may be set to, for example, 10 ms. Each user speech frame fu includes an acoustic feature. The acoustic feature can include one or more of a sound pitch, a sound strength, a tone, and the like. The acoustic feature may be acquired by, for example, a known technology such as Mel-frequency cepstrum coefficient (MFCC) technology. As illustrated inFIG.5, based on the user speech su, the user speech feature extraction unit12extracts a user speech feature series FU in which the acoustic features of the user speech frames fu are chronologically disposed. The system speech feature extraction unit13divides the immediately previous system speech ss into the plurality of system speech frames fs. The system speech frame fs constitutes an example of a system speech element. The length of one frame can be a time of any predetermined length and may be set to, for example, 10 ms. 
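The frame division and acoustic features described here can be sketched as follows, assuming 10 ms frames as in the example above and MFCCs as the acoustic feature. The use of librosa, the 16 kHz sampling rate, and the 13 coefficients are illustrative assumptions rather than requirements of the disclosure.

```python
import librosa
import numpy as np


def extract_feature_series(wav_path: str, frame_ms: int = 10) -> np.ndarray:
    """Return a chronological series of per-frame MFCC vectors."""
    y, sr = librosa.load(wav_path, sr=16000)
    hop = int(sr * frame_ms / 1000)                    # 10 ms -> 160 samples at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
    return mfcc.T                                      # shape: (num_frames, 13)


# The same routine can be applied to the user speech su and to the immediately
# previous system speech ss to obtain the series FU and FS, respectively.
```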
Each system speech frame fs includes an acoustic feature as in the user speech frame fu. The acoustic feature can include one or more of a sound pitch, a sound strength, a tone, and the like. As illustrated inFIG.5, based on the immediately previous system speech ss, the system speech feature extraction unit13extracts a system speech feature series FS in which the acoustic features of the system speech frames fs are chronologically disposed. The identification information granting unit14grants identification information to the system speech element included in a morpheme which, among morphemes included in the immediately previous system speech, corresponds to a predetermined part of speech and does not correspond to an assumed response candidate to the immediately previous system speech by the user among the plurality of system speech elements included in the system speech feature series. In the embodiment, the identification information granting unit14grants a repetitive back-channel code rc to the system speech frame fs. The repetitive back-channel code rc constitutes an example of the identification information. For example, the response candidate is acquired from a dialogue scenario. When the user speech includes the morpheme which corresponds to the predetermined part of speech (for example, a verb, a noun, or an adjective) among the morphemes included in the immediately previous system speech, the morpheme corresponds to a repetition of the system speech by the user. A morpheme which does not correspond to the response candidate among the morphemes corresponding the repetitions corresponds to a back-channel by the user. In the embodiment, the identification information granting unit14grants the repetitive back-channel code rc to the system speech frame fs included in the morpheme which corresponds to the repetition and the back-channel at the time of production by the user among the morphemes included in the immediately previous system speech. On the other hand, since the morpheme corresponding to the response candidate corresponds to a response to be engaged in the dialogue control despite the morpheme corresponding to the repetition of the system speech at the time of production by the user among the morphemes included in the system speech, the repetitive back-channel code rc is not granted to this morpheme. The granting of the repetitive back-channel code rc to the system speech frame fs will be described with reference toFIGS.6to10.FIG.6is a flowchart illustrating content of a process of granting the repetitive back-channel code rc to the system speech frame fs. A timing at which the process of granting the repetitive back-channel code rc illustrated inFIG.6is performed is not limited as long as system speech (text) is confirmed or later in the spoken dialogue system1, and the process of granting the repetitive back-channel code rc is performed before the system speech is output, for example. That is, the process of granting the repetitive back-channel code rc may be performed on the system speech stored in the dialogue scenario storage unit30or may be performed on the system speech stored as the learning data in the learning data storage unit40. In step S1, the identification information granting unit14acquires the system speech (text) and performs morphemic analysis on the acquired system speech.FIG.7is a diagram illustrating a process of granting a repetitive back-channel code to a morpheme included in system speech. 
As illustrated inFIG.7, the identification information granting unit14performs the morphemic analysis on the system speech “tokkyu ressha wo riyo shimasuka” to obtain the morphemes “tokkyu ressha,” “wo,” “riyo,” “shimasu,” and “Ka” (see the column of morphemes inFIG.7). In step S2, the identification information granting unit14grants time information to each morpheme to associate each morpheme with the system speech frame. That is, the identification information granting unit14performs forced alignment of the text and a voice of the system speech, acquires a start time and an end time of each morpheme in data of the voice, and associates the start time and the end time with each morpheme. In the example illustrated inFIG.7, a start time “0.12” and an end time “0.29” are associated with the morpheme of “tokkyu ressha.” In step S3, the identification information granting unit14extracts a morpheme of a predetermined part of speech from the morphemes acquired in step S1. Specifically, the identification information granting unit14extracts morphemes of a verb, a noun, and an adjective and temporarily grants a repetitive back-channel code “1” for the morphemes to the extracted morphemes. In the example illustrated inFIG.7, the identification information granting unit14grants the repetitive back-channel code “1” to “tokkyu ressha” and “riyo.” In step S4, the identification information granting unit14excludes the morpheme included in the response candidate to the system speech by the user among the morphemes to which the repetitive back-channel code “1” is granted. The response candidates of the user are acquired from the dialogue scenario. In the example illustrated inFIG.7, the identification information granting unit14acquires speech content “hai,” “iie,” “riyo shimasu,” and “riyo shimasen” as response candidates of the user to the system speech “tokkyu ressha wo riyo shimasuka.” Since the morpheme “riyo” to which the repetitive back-channel code is granted in step S3is included in the response candidate of the user, the identification information granting unit14grants a repetitive back-channel code “0” instead of the repetitive back-channel code “1” temporarily granted to the morpheme “riyo” (see the column of the repetitive back-channel code inFIG.7). In step S5, the identification information granting unit14grants a repetitive back-channel code rc(1) which is identification information to the system speech frame fs corresponding to the morpheme to which the repetitive back-channel code for the morpheme is granted.FIG.8is a diagram schematically illustrating an example of a system speech frame to which a repetitive back-channel code is attached. As illustrated inFIG.8, the identification information granting unit14grants the repetitive back-channel code rc(1) to the system speech frame fs corresponding to a morpheme ms1“tokkyu ressha” among morphemes ms1to ms5included in system speech (text) ts. The repetitive back-channel code rc granted in this way is supplied as learning data for learning of a barge-in speech determination model along with the system speech feature series FS. FIG.9is a diagram illustrating a second example of a process of granting a repetitive back-channel code to a morpheme included in system speech. 
In the example illustrated inFIG.9, in step S1, the identification information granting unit14performs morphemic analysis on the system speech “Do you take an express train” to obtain the morphemes “Do,” “you,” “take,” “an,” “express,” and “train” (the column of morphemes inFIG.9). In step S2, the identification information granting unit14grants the time information (the start time and the end time) to each morpheme to associate each morpheme with the system speech frame. In the example illustrated inFIG.9, the start time “0.29” and the end time “0.32” are associated with the morpheme “you.” In the example illustrated inFIG.9, in step S3, the identification information granting unit14grants the respective back-channel code “1” to “take,” “express,” and “train” which are the morphemes of a verb, a noun, and an adjective from the morphemes acquired in step S1. In step S4, the identification information granting unit14excludes the morpheme included in the response candidate to the system speech by the user among the morphemes to which the repetitive back-channel code “1” is granted. The response candidate of the user is acquired from the dialogue scenario. In the example illustrated inFIG.9, the identification information granting unit14acquires the speech content “Yes,” “No,” “I take an express train,” and “I do not take an express train” as the response candidates of the user to the system speech “Do you take an express train.” Since the morpheme “take” to which the repetitive back-channel code is granted in step S3is included in the response candidate of the user, the identification information granting unit14grants a repetitive back-channel code “0” instead of the repetitive back-channel code “1” temporarily granted to the morpheme “take” (see the column of the repetitive back-channel code inFIG.9). In step S5, the identification information granting unit14grants the repetitive back-channel code rc(1) which is identification information to the system speech frame fs corresponding to the morpheme to which the repetitive back-channel code for the morpheme is granted.FIG.10is a diagram schematically illustrating a second example of a system speech frame to which a repetitive back-channel code is attached. As illustrated inFIG.10, the identification information granting unit14grants the repetitive back-channel code rc1-2(1) to the system speech frame fs corresponding to morphemes ms5-2to ms6-2“express train?” among morphemes ms1-2to ms6-2included in system speech (text) ts-2(ts). The repetitive back-channel code rc granted in this way is supplied as learning data for learning of a barge-in speech determination model along with the system speech feature series FS-2(FS). Of the morphemes included in the system speech, the repetitive back-channel code “1” may be granted to the system speech frame included in the morpheme which corresponds to the repetition and the back-channel at the time of production by the user and the repetitive back-channel code “0” may be granted to the system speech frame included in the morphemes other than the morpheme which corresponds to the repetition and the back-channel at the time of production by the user. Predetermined identification information may be granted to the system speech frame included in the morpheme which corresponds to the repetition and the back-channel at the time of production by the user and the identification information may not be granted to other system speech frames. 
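Steps S1 to S5 described above can be condensed into the following sketch. Morphemic analysis and forced alignment are assumed to have been done already, so each morpheme arrives with its part of speech and its start and end times; the substring matching against response candidates and the timing assumed for "riyo" are simplifications made only for illustration.

```python
from dataclasses import dataclass

CONTENT_POS = {"noun", "verb", "adjective"}   # predetermined parts of speech (step S3)
FRAME_MS = 10                                 # frame length assumed above


@dataclass
class Morpheme:
    surface: str
    pos: str
    start_s: float
    end_s: float


def grant_repetitive_backchannel_codes(morphemes, response_candidates, num_frames):
    codes = [0] * num_frames                           # one code per system speech frame
    for m in morphemes:
        if m.pos not in CONTENT_POS:                   # step S3: keep content words only
            continue
        if any(m.surface in cand for cand in response_candidates):
            continue                                   # step S4: exclude response candidates
        first = int(m.start_s * 1000 // FRAME_MS)      # steps S2/S5: mark the covered frames
        last = int(m.end_s * 1000 // FRAME_MS)
        for i in range(first, min(last + 1, num_frames)):
            codes[i] = 1
    return codes


# Example following FIG. 7: "tokkyu ressha" keeps the code, "riyo" is excluded.
# The start/end time of "riyo" is an assumed value for illustration.
morphemes = [Morpheme("tokkyu ressha", "noun", 0.12, 0.29),
             Morpheme("riyo", "noun", 0.35, 0.47)]
candidates = ["hai", "iie", "riyo shimasu", "riyo shimasen"]
print(grant_repetitive_backchannel_codes(morphemes, candidates, num_frames=60))
```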
Referring back toFIG.1, the label acquisition unit15acquires the correct label associated with the user speech frame fu included in the morpheme that should not be engaged in the dialogue control in the spoken dialogue system among the morphemes included in the user speech su among the plurality of user speech frames fu included in the user speech feature series FU. Specifically, the label acquisition unit15acquires the correct label associated with the user speech frame fu included in the morpheme which corresponds to the repetition and the back-channel to the system speech among the morphemes included in the user speech. The association of the correct label with the user speech frame fu may be performed in advance by manpower. The label acquisition unit15may perform the association with the user speech frame fu included in the morpheme which corresponds to the repetition and the back-channel to the system speech through the following process without depending on manpower. Specifically, the label acquisition unit15performs morphemic analysis on the user speech su obtained as text information, the immediately previous system speech ss, and each response candidate assumed as a response to the immediately previous system speech ss by the user. Subsequently, the label acquisition unit15extracts the morphemes which, among the morphemes included in the user speech su, correspond to predetermined parts of speech (a noun, a verb, or an adjective) included in the immediately previous system speech ss and are not included in the response candidates as morphemes that are not engaged. For example, when the user speech su “tokkyu ressha wo riyo shimasu,” the immediately previous system speech ss “tokkyu ressha wo riyo shimasuka” and the response candidates (“hai,” “iie,” “riyo shimasu,” and “riyo shimasen”) are acquired as learning data, the label acquisition unit15extracts the morphemes (“tokkyu ressha,” “riyo,” and “shimasu”) as the morphemes of the predetermined parts of speech included in the immediately previous system speech ss from the user speech su. Further, the label acquisition unit15extracts “tokkyu ressha,” which is a morpheme not included in the response candidate, as the morpheme that is not engaged among the extracted morphemes. Then, the label acquisition unit15associates the correct label with the user speech frame included in the morpheme that is not engaged.FIG.11is a diagram schematically illustrating an example of a user speech frame to which a correct label in learning data is attached. As illustrated inFIG.11, the label acquisition unit15associates a label L with the user speech frame fu. That is, the label acquisition unit15grants time information to each morpheme to associate the morphemes extracted from the user speech su with the user speech frames. Specifically, the label acquisition unit15performs forced alignment of the text and a voice of the user speech, acquires a start time and an end time of each morpheme in data of the voice, and associates the start time and the end time with each morpheme. The label acquisition unit15extracts the corresponding user speech frame fu based on the start time and the end time of the morpheme “tokkyu ressha” and associates a correct label11(1) which is a correct label L indicating that the user speech frame should not be engaged. 
On the other hand, the label acquisition unit15associates a correct label10(0) indicating that the user speech frame is not the user speech frame that should not be engaged with the user speech frame corresponding to the morphemes other than the morpheme “tokkyu ressha.” FIG.12is a diagram schematically illustrating a second example of a user speech frame to which a correct label in learning data is attached. As illustrated inFIG.12, the label acquisition unit15associates a label L-2(L) with a user speech frame fu-2(fu). That is, the label acquisition unit15grants time information to each morpheme to associate the morpheme extracted from user speech su-2(su) with the user speech frame. Specifically, the label acquisition unit15performs forced alignment of the text and a voice of the user speech, acquires a start time and an end time of each morpheme in data of the voice, and associates the start time and the end time with the morpheme. The label acquisition unit15extracts the corresponding user speech frame fu-2based on the start times and the end times of the morphemes “express” and “train” and associates a correct label11-2(1) indicating that the user speech frame should not be engaged. On the other hand, the label acquisition unit15associates a correct label10-2(0) indicating that the user speech frame is not a user speech frame that should not be engaged with the user speech frame corresponding to the morphemes other than the morphemes “express” and “train.” Of the morphemes included in the user speech, the correct label “1” may be associated with the user speech frame included in the morpheme that should not be engaged and the correct label “0” may be associated with the user speech frame included in the morphemes other than the morpheme that should not be engaged. Predetermined identification information serving as a correct label may be associated with the user speech frame included in the morpheme that should not be engaged and the predetermined identification information may not be associated with the morpheme included in the morphemes other than the morpheme that should not be engaged. The model generation unit16performs machine learning based on learning data including the user speech feature series FU, the system speech feature series FS including the repetitive back-channel code rc, and the correct label L associated with the user speech frame fu included in the user speech feature series FU to generate a barge-in speech determination model. The barge-in speech determination model is a model which includes a neural network and is a model that outputs a likelihood that each user speech frame fu should not be engaged in the dialogue control of the spoken dialogue system by setting the user speech feature series based on the user speech and the system speech feature series including the repetitive back-channel code rc based on the immediately previous system speech as inputs, each user speech frame fu being included in the user speech. FIG.13is a flowchart illustrating content of a process of learning and generating a barge-in speech determination model in the model generation device10. In step S11, the learning speech acquisition unit11acquires the user speech su for learning and the immediately previous system speech ss which is the system speech output immediately before the user speech su. In step S12, the user speech feature extraction unit12extracts the user speech feature series FU based on the user speech su. 
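The per-frame labelling rule just described, which is applied in step S14 below, can be sketched as follows. Each morpheme of the user speech is given as a (surface, part of speech, start, end) tuple; the substring matching and the example timings are simplifying assumptions for illustration.

```python
CONTENT_POS = {"noun", "verb", "adjective"}


def label_user_frames(user_morphemes, system_speech_text, response_candidates,
                      num_frames, frame_ms=10):
    labels = [0] * num_frames
    for surface, pos, start_s, end_s in user_morphemes:
        if pos not in CONTENT_POS:
            continue                                   # not a content word
        if surface not in system_speech_text:
            continue                                   # not a repetition of the system speech
        if any(surface in cand for cand in response_candidates):
            continue                                   # a valid response: keep label 0
        first, last = int(start_s * 1000 // frame_ms), int(end_s * 1000 // frame_ms)
        for i in range(first, min(last + 1, num_frames)):
            labels[i] = 1                              # frame should not be engaged
    return labels


# Only "tokkyu ressha" is labelled 1; "riyo" and "shimasu" appear in a response
# candidate and therefore keep the label 0 (timings are assumed values).
labels = label_user_frames(
    [("tokkyu ressha", "noun", 0.10, 0.28), ("riyo", "noun", 0.35, 0.47),
     ("shimasu", "verb", 0.47, 0.65)],
    system_speech_text="tokkyu ressha wo riyo shimasuka",
    response_candidates=["hai", "iie", "riyo shimasu", "riyo shimasen"],
    num_frames=80)
```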
In step S13, the system speech feature extraction unit13extracts the system speech feature series FS based on the immediately previous system speech ss. The repetitive back-channel code rc for identifying the system speech frame fs included in the morpheme which corresponds the repetition and the back-channel at the time of production by the user is associated with the system speech frame fs included in the system speech feature series FS. In step S14, the label acquisition unit15associates the correct label L with the user speech frame fu included in the morpheme that should not be engaged in the dialogue control in the spoken dialogue system among the morphemes included in the user speech su. The process of steps S15to S17is a process for machine learning of a model. In step S15, the model generation unit16inputs the feature amount of the learning data formed by the user speech feature series FU, the system speech feature series FS including the repetitive back-channel code rc, and the correct label L to the barge-in speech determination model which is a learning and generating target model. In step S16, the model generation unit16calculates a loss based on the correct label L and an output value from the model. In step S17, the model generation unit16reversely propagates the loss calculated in step S16to the neural network and updates a parameter (weight) of the model (neural network). In step S18, the model generation unit16determines whether a predetermined learning end condition is satisfied. Then, the model generation unit16repeats the learning process of steps S15to S17using the learning data until the learning end condition is satisfied. When the learning ending condition is satisfied, the model generation unit16ends the process of learning the barge-in speech determination model. The model output unit17outputs the barge-in speech determination model generated by the model generation unit16. Specifically, the model output unit17stores the generated barge-in speech determination model in, for example, the model storage unit50. Next, each functional unit of the spoken dialogue device20will be described. The acquisition unit21acquires user speech formed by a voice produced by the user. The user speech is, for example, a voice produced by the user in response to system speech produced by the spoken dialogue device20. The recognition unit22outputs a recognition result obtained by recognizing the user speech acquired by the acquisition unit21as text information. The recognition result is supplied for dialogue control in which the dialogue scenario is referred to in the dialogue control unit26. The user speech feature acquisition unit23acquires the user speech feature series obtained by dividing the user speech acquired by the acquisition unit21into user speech frames of a time with a predetermined length and chronologically disposing acoustic features of the user speech elements. The length of the user speech frame is set to the same length as that of the user speech frame extracted by the user speech feature extraction unit12of the model generation device10. The system speech feature acquisition unit24acquires the system speech feature series obtained by dividing the system speech output by the spoken dialogue device20into system speech frames of a time with a predetermined length and chronologically disposing acoustic features of the system speech elements. 
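Returning to the learning steps S15 to S17 of FIG.13described above, a minimal training-loop sketch might look as follows. PyTorch, the GRU architecture, and the binary cross-entropy loss are illustrative assumptions; the document only specifies a neural network whose loss against the per-frame correct labels is back-propagated until a learning end condition is met. For simplicity, the user speech features, the aligned system speech features, and the repetitive back-channel code are assumed to be concatenated into one feature vector per frame.

```python
import torch
from torch import nn


class BargeInModel(nn.Module):
    """Outputs a per-frame likelihood that the frame should not be engaged."""

    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                       # frames: (batch, num_frames, feat_dim)
        out, _ = self.rnn(frames)
        return torch.sigmoid(self.head(out)).squeeze(-1)   # (batch, num_frames)


def train(model, loader, epochs=10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):                          # assumed learning end condition
        for features, labels in loader:              # S15: feature amounts and correct labels
            likelihood = model(features)
            loss = loss_fn(likelihood, labels.float())   # S16: loss against correct labels
            opt.zero_grad()
            loss.backward()                          # S17: back-propagate and update weights
            opt.step()
```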
To determine whether to engage the user speech which is the barge-in speech, the system speech feature acquisition unit24acquires the system speech feature series of the immediately previous system speech which is the system speech output by the spoken dialogue device20immediately before the user speech acquired by the acquisition unit21is produced. In the system speech feature series, the repetitive back-channel code described with reference toFIGS.6to10is granted to the system speech frame. The repetitive back-channel code serving as the identification information is used to identify a system speech frame included in the morpheme which, among the morphemes included in the immediately previous system speech, corresponds to a morpheme corresponding to a predetermined part of speech (a noun, a verb, or an adjective) and does not correspond to the assumed response candidate to the immediately previous system speech from the user. The barge-in speech control unit25determines whether to engage the barge-in speech which is user speech produced to cut off the system speech being produced. Specifically, when each user speech frame included in the user speech which is the barge-in speech corresponds to the predetermined morpheme (a noun, a verb, or an adjective) included in the immediately previous system speech which is the system speech output by the output unit28immediately before the user speech is produced and does not correspond to the morpheme included in the response candidate to the immediately previous system speech in the dialogue scenario, the barge-in speech control unit25determines not to engage the user speech frame or the user speech including the user speech frame. In other words, in the user speech including the user speech frame determined not to be engaged, the barge-in speech control unit25does not engage at least a portion corresponding to the user speech frame. That is, the barge-in speech control unit25may determine that some or all of the user speech included in the user speech frame are not engaged. The barge-in speech control unit25according to the embodiment determines whether to engage the user speech frame included in the barge-in speech using the barge-in speech determination model generated by the model generation device10. That is, the barge-in speech control unit25inputs the user speech feature series acquired by the user speech feature acquisition unit23and the system speech feature series (including the repetitive back-channel code) of the immediately previous system speech acquired by the system speech feature acquisition unit24to the barge-in speech determination model. Then, the barge-in speech control unit25acquires a likelihood of each system speech frame output from the barge-in speech determination model. The likelihood indicates the degree to which engagement should not be performed in the dialogue control. The barge-in speech determination model which is a model including a learned neural network can be ascertained as a program which is read or referred to by a computer, causes the computer to perform a predetermined process, and causes the computer to realize a predetermined function. That is, the learned barge-in speech determination model according to the embodiment is used in a computer that includes a CPU and a memory. 
Specifically, the CPU of the computer operates to perform calculation based on a learned weighted coefficient, a response function, and the like corresponding to each layer of input data (for example, the user speech feature series and the system speech feature series to which the repetitive back-channel code rc is granted) input to an input layer of a neural network in response to an instruction from the learned barge-in speech determination model stored in the memory and output a result (likelihood) from an output layer. FIG.14is a diagram schematically illustrating a likelihood of each user speech frame and an engagement or non-engagement determination result output from the barge-in speech determination model. As illustrated inFIG.14, the barge-in speech control unit25inputs the user speech feature series FUx or the like of the user speech which is the barge-in speech to the barge-in speech determination model and acquires the likelihood of each user speech frame fux from an output of the barge-in speech determination model. Then, the barge-in speech control unit25determines that a user speech frame fux1with the likelihood equal to or greater than a predetermined threshold is not engaged in the dialogue control and determines that a user speech frame fux0with a likelihood less than the predetermined threshold is engaged in the dialogue control. FIG.15is a diagram schematically illustrating an example of engagement or non-engagement determination of barge-in speech. When the acquisition unit21acquires user speech sux1“riyo shimasu” which is barge-in speech produced by the user with respect to system speech ss1“tokkyu ressha wo riyo shimasuka,” the morpheme included in the user speech sux1corresponds to a morpheme of a response candidate to the system speech ssx1. Therefore, the barge-in speech control unit25does not determine that any user speech frame included in the user speech sux1is not engaged. On the other hand, when the acquisition unit21acquires user speech sux2“tokkyu ressha ka” with respect to “tokkyu ressha wo riyo shimasuka,” the morpheme “tokkyu ressha” included in the user speech sux2corresponds to a predetermined morpheme included in the system speech ssx1and does not correspond to a response candidate to the system speech ssx1. Therefore, the likelihood output from the barge-in speech determination model is equal to or greater than the predetermined threshold with regard to each user speech frame included in the morpheme “tokkyu ressha” and the barge-in speech control unit25determines that the user speech frame included in the morpheme “tokkyu ressha” of the user speech sux2is not engaged. That is, the barge-in speech determination model determines that the user speech sux2“tokkyu ressha ka” is a repetition and is a back-channel with respect to the system speech ssx1“tokkyu ressha wo riyo shimasuka.” FIG.16is a diagram schematically illustrating a second example of engagement or non-engagement determination of barge-in speech. When the acquisition unit21acquires the user speech sux1-2“I take an express train,” which is the barge-in speech produced by the user with respect to system speech ssx1-2“Do you take an express train?,” the morpheme included in the user speech sux1-2corresponds to a morpheme of a response candidate to the system speech ssx1-2. Therefore, the barge-in speech control unit25does not determine that any user speech frame included in the user speech sux1-2is not engaged. 
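The per-frame threshold decision of FIG.14described above can be sketched as follows; the contrasting example continues in the next paragraph. The 0.5 threshold and the reuse of the BargeInModel sketch from the learning example are assumptions for illustration.

```python
import torch

LIKELIHOOD_THRESHOLD = 0.5      # assumed boundary between engage / do not engage


def frames_not_to_engage(model, features):
    """features: (num_frames, feat_dim) tensor for one barge-in utterance."""
    with torch.no_grad():
        likelihood = model(features.unsqueeze(0)).squeeze(0)   # per-frame likelihoods
    return (likelihood >= LIKELIHOOD_THRESHOLD).tolist()        # True -> do not engage
```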
On the other hand, when the acquisition unit21acquires user speech sux2-2“Express train,” with respect to the system speech ssx1-2“Do you take an express train?,” the morphemes “express” and “train” included in the user speech sux2-2correspond to predetermined morphemes included in the system speech ssx1-2and do not correspond to the morpheme of a response candidate to the system speech ssx1-2. Therefore, the likelihood output from the barge-in speech determination model is equal to or greater than the predetermined threshold with regard to each user speech frame included in the morphemes “express” and “train” and the barge-in speech control unit25determines that the user speech frame included in the user speech sux2-2is not engaged. That is, the barge-in speech determination model determines that the user speech sux2-2“Express train,” is a repetition and is a back-channel with respect to the system speech ssx1-2“Do you take an express train?” When each user speech element included in the user speech corresponds to an element of predetermined speech set in advance, the barge-in speech control unit25may determine that the user speech element is not engaged in addition to the determination performed using the barge-in speech determination model. Specifically, when user speech corresponding to a simple back-channel that has no special meaning as a response such as “Yeah” or “hai” is set in advance as predetermined speech and an acoustic feature of the user speech frame included in the user speech acquired by the acquisition unit21corresponds to the acoustic feature of speech corresponding to the simple back-channel set as the predetermined speech, the barge-in speech control unit25determines that the user speech frame is not engaged in the dialogue control. Thus, it is possible to perform the dialogue control such that the simple back-channel is not engaged. Referring back toFIG.1, the dialogue control unit26outputs a system response indicating response content with which to respond to the user based on the recognition result corresponding to the user speech other than the barge-in speech determined not to be engaged by the barge-in speech control unit25with reference to a dialogue scenario that has a mutual response rule between the user speech and the system speech. Specifically, the dialogue control unit26acquires and outputs a system response formed by text to respond to user speech other than the user speech determined not to be engaged with reference to the dialogue scenario stored in the dialogue scenario storage unit30. The response generation unit27generates system speech formed by voice information based on the system response output by the dialogue control unit26. The output unit28outputs the system speech generated by the response generation unit27as a voice. Next, a spoken dialogue method in the spoken dialogue device20will be described with reference toFIG.17.FIG.17is a flowchart illustrating content of a process in a spoken dialogue method according to the embodiment. In step S21, the system speech feature acquisition unit24acquires a system speech feature series of system speech output by the output unit28. When the system speech is a dialogue triggered by speech from the spoken dialogue system1, the system speech may be initial system speech triggered by that speech or may be system speech which is a response to previous user speech while the dialogue continues. In step S22, the acquisition unit21determines whether a voice produced by the user is detected. 
When the voice of the user is detected, the voice is acquired as user speech. When the user speech is acquired, the process proceeds to step S24. When the user speech is not acquired, the process proceeds to step S23. In step S23, the acquisition unit21determines whether a state in which the user speech is not acquired reaches a timeout of a predetermined time. The acquisition unit21attempts to acquire the user speech until the state reaches the timeout. Conversely, when the state reaches the timeout, the process proceeds to step S28. In step S24, the dialogue control unit26determines whether the user speech is detected and acquired in step S22during output of the system speech. That is, it is detected whether the acquired user speech is the barge-in speech. When it is determined that the user speech is acquired during output of the system speech, the process proceeds to step S25. Conversely, when it is determined that the user speech is not acquired during output of the system speech, the process proceeds to step S27. In step S25, the user speech feature acquisition unit23acquires the user speech feature series of the user speech acquired in step S22. In step S26, the barge-in speech control unit25determines whether to engage the user speech acquired in step S22and determined to be the barge-in speech in step S24based on the user speech feature series acquired in step S25. Specifically, the barge-in speech control unit25inputs the user speech feature series and the system speech feature series based on the immediately previous system speech to the barge-in speech determination model, acquires a likelihood of each user speech frame, and determines whether to engage each user speech frame based on the acquired likelihood. When the user speech is determined not to be engaged, the process returns to step S22. In step S27, the recognition unit22outputs a recognition result obtained by recognizing the user speech not determined not to be engaged as text information. In step S28, the dialogue control unit26acquires and outputs a system response formed by text to respond to the user speech other than the user speech determined not to be engaged with reference to the dialogue scenario. Then, the response generation unit27generates the system speech formed by voice information based on the system response output by the dialogue control unit26. In step S29, the system speech feature acquisition unit24acquires the system speech feature series of the system speech generated in step S28and holds the system speech feature series as information regarding immediately previous system speech of subsequent user speech. In step S30, the output unit28outputs the system speech generated by the response generation unit27as a voice. In step S31, the dialogue control unit26determines whether a predetermined dialogue end condition of a spoken dialogue with the user is satisfied. When it is determined that the dialogue end condition is not satisfied, the process returns to step S22. Next, a model generation program causing a computer to function as the model generation device10according to the embodiment will be described.FIG.18is a diagram illustrating a configuration of a model generation program P1. 
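Before turning to the program configuration of FIG.18, the overall flow of FIG.17described above can be condensed into the following sketch. Every helper below is a stub assumed for illustration; only the control flow (output speech, wait with a timeout, filter barge-in, recognize, respond, check the end condition) mirrors steps S21 to S31.

```python
import random

TIMEOUT_S = 10.0                                 # assumed timeout for step S23


def capture_user_speech(timeout):                # stand-in for microphone capture (S22)
    return random.choice([None, "yes", "express train"])


def system_is_speaking():                        # stand-in: speech captured mid-output? (S24)
    return random.random() < 0.3


def engage_barge_in(user_speech):                # stand-in for the model-based decision (S25/S26)
    return user_speech != "express train"        # repetition/back-channel is ignored


def respond(user_text):                          # stand-in for recognition + dialogue control (S27/S28)
    return "goodbye" if user_text is None else f"You said: {user_text}"


def dialogue_loop(system_speech="Do you take an express train?", max_turns=5):
    for _ in range(max_turns):                   # stand-in for the end condition of S31
        print("[system]", system_speech)         # S30: output the system speech
        user_speech = capture_user_speech(timeout=TIMEOUT_S)
        if user_speech is not None and system_is_speaking():
            if not engage_barge_in(user_speech):
                continue                         # ignored barge-in: keep listening
        system_speech = respond(user_speech)     # S27/S28/S29
        if system_speech == "goodbye":
            break


dialogue_loop()
```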
The model generation program P1includes a main module m10that performs general control of the model generation process in the model generation device10, a learning speech acquisition module m11, a user speech feature extraction module m12, a system speech feature extraction module m13, an identification information granting module m14, a label acquisition module m15, a model generation module m16, and a model output module m17. The modules m11to m17realize functions of the learning speech acquisition unit11, the user speech feature extraction unit12, the system speech feature extraction unit13, the identification information granting unit14, the label acquisition unit15, the model generation unit16, and the model output unit17of the model generation device10. The model generation program P1may be configured to be transmitted via a transmission medium such as a communication line or may be configured to be stored in a recording medium M1, as illustrated inFIG.18. FIG.19is a diagram illustrating a configuration of a spoken dialogue program causing a computer to function as the spoken dialogue device20according to the embodiment. The spoken dialogue program P2includes a main module m20that generally controls the spoken dialogue process in the spoken dialogue device20, an acquisition module m21, a recognition module m22, a user speech feature acquisition module m23, a system speech feature acquisition module m24, a barge-in speech control module m25, a dialogue control module m26, a response generation module m27, and an output module m28. The modules m21to m28realize functions of the acquisition unit21, the recognition unit22, the user speech feature acquisition unit23, the system speech feature acquisition unit24, the barge-in speech control unit25, the dialogue control unit26, the response generation unit27, and the output unit28of the spoken dialogue device20. The spoken dialogue program P2may be configured to be transmitted via a transmission medium such as a communication line or may be configured to be stored in a recording medium M2, as illustrated inFIG.19. In the spoken dialogue device20, the spoken dialogue method, and the spoken dialogue program P2according to the above-described embodiment, when a user speech element included in a user speech corresponds to a predetermined morpheme included in an immediately previous system speech, there is a high possibility that the user speech element is a repetition of part of the system speech. When the user speech elements are repetitions of part of the immediately previous system speech but correspond to elements of a response candidate to the immediately previous system speech, the user speech corresponds to elements to be engaged in dialogue control. In consideration of this, when the user speech elements correspond to predetermined morphemes included in the immediately previous system speech and do not correspond to elements of a response candidate to the immediately previous system speech, it is determined that the user speech elements are not engaged in the dialogue control. Accordingly, an erroneous operation in the spoken dialogue system is prevented and convenience for a user is improved. In a spoken dialogue system according to another embodiment, the user speech element may be an element obtained by chronologically dividing a user speech into times of a predetermined length and each user speech element may include an acoustic feature.
According to the above embodiment, since the user speech includes the chronologically user speech elements which each include the acoustic feature and whether to engage each user speech element is determined, it is not necessary to recognize the user speech as text information to determine engagement or non-engagement. Accordingly, since it can be determined whether to engage the barge-in speech without waiting for the end of one determination target section of the user speech, the dialogue control process is performed quickly. A spoken dialogue system according to still another embodiment may further include a user speech feature acquisition unit configured to acquire a user speech feature series obtained by dividing the user speech into user speech elements of a time with a predetermined length and chronologically disposing acoustic features of the user speech elements based on the user speech; and a system speech feature acquisition unit configured to acquire a system speech feature series in which acoustic features of the system speech elements obtained by dividing the immediately previous system speech into times with a predetermined length are chronologically disposed, the system speech feature series including identification information attached to a system speech element included in a morpheme which, among morphemes included in the immediately previous system speech, corresponds to a predetermined part of speech and does not correspond to a response candidate acquired from the dialogue scenario and assumed to the immediately previous system speech by the user among the plurality of system speech elements. The barge-in speech control unit may determine whether to engage each user speech element using a barge-in speech determination model in which the user speech feature series, the system speech feature series, and the identification information are set as inputs and a likelihood of each speech element not engaged in dialogue control of the spoken dialogue system is set as an output, each speech element being included in the user speech. According to the above embodiment, since the barge-in speech determination model in which the user speech feature series and the system speech feature series including the identification information are set as inputs and the likelihood of each speech element not to be engaged is output for each user speech element is used, whether to engage each user speech element included in the user speech can be determined with high precision. In the spoken dialogue system according to still another embodiment, the barge-in speech determination model may be configured by machine learning based on learning data, the learning data may include feature information including the user speech feature series based on the user speech, the system speech feature series based on the immediately previous system speech output immediately before the user speech, and the identification information granted to a plurality of system speech elements included in the system speech feature series as input values and include, as an output value, a correct label associated with the user speech element included in a morpheme not to be engaged in the dialog control of the spoken dialogue system among morphemes included in the user speech. 
According to the above embodiment, the barge-in speech determination model, which is generated by machine learning based on learning data that includes the user speech feature series, the system speech feature series, and the identification information granted to a plurality of system speech elements as input values and includes, as an output value, the correct label associated with the user speech element not to be engaged, is used to determine whether to engage the user speech element. Thus, it is possible to determine whether to engage each user speech element included in the user speech with high precision. In the spoken dialogue system according to still another embodiment, the barge-in speech control unit may determine that a user speech element is not engaged when the user speech element corresponds to an element of a predetermined speech set in advance. According to the above embodiment, by setting a speech corresponding to a simple back-channel that has no special meaning in a dialogue as a predetermined speech in advance, it is possible to perform control such that the simple back-channel included in the barge-in speech is not engaged. According to one embodiment of the present invention, a model generation device generates a barge-in speech determination model for determining whether to engage a barge-in speech which is a user speech produced to cut off ongoing output of a system speech in a spoken dialogue system that performs a dialogue with a user by outputting the system speech formed by a voice in response to the user speech formed by a voice produced by the user. The model generation device includes: a learning speech acquisition unit configured to acquire the user speech and an immediately previous system speech which is a system speech output immediately before the user speech; a user speech feature extraction unit configured to extract a user speech feature series obtained by dividing the user speech into user speech elements of a time with a predetermined length and chronologically disposing acoustic features of the user speech elements based on the user speech; a system speech feature extraction unit configured to extract a system speech feature series obtained by dividing the immediately previous system speech into system speech elements of a time with a predetermined length and chronologically disposing acoustic features of the system speech elements based on the immediately previous system speech; an identification information granting unit configured to grant identification information to the system speech element included in a morpheme which, among morphemes included in the immediately previous system speech, corresponds to a predetermined part of speech and does not correspond to a response candidate acquired from a dialogue scenario that has a mutual response rule between the user speech and the system speech and assumed by the user as a response to the immediately previous system speech, among the plurality of system speech elements included in the system speech feature series; a label acquisition unit configured to acquire a correct label associated with the user speech element included in a morpheme not to be engaged in the dialogue control of the spoken dialogue system among morphemes included in the user speech; a model generation unit configured to perform machine learning based on learning data including the user speech feature series, the system speech feature series including the identification information, and the correct label and generate a
barge-in speech determination model in which the user speech feature series based on the user speech and the system speech feature series including the identification information based on the immediately previous system speech are set as inputs and a likelihood of each speech element not to be engaged in the dialogue control of the spoken dialogue system is set as an output, each speech element being included in the user speech; and a model output unit configured to output the barge-in speech determination model generated by the model generation unit. According to the above embodiment, the barge-in speech determination model is generated by machine learning based on learning data in which the user speech feature series, the system speech feature series, and the identification information granted to system speech elements are included as input values and the correct label associated with the user speech element not to be engaged is included as an output value. Thus, it is possible to obtain a model appropriate for determining whether to engage the user speech element. In the model generation device according to one embodiment, the label acquisition unit may perform morphemic analysis on the user speech, the immediately previous system speech, and each response candidate assumed as a response to the immediately previous system speech by the user, extract an unengaged morpheme which is a morpheme included in the immediately previous system speech and not included in the response candidates among morphemes included in the user speech, and associate the correct label with the user speech element included in the unengaged morpheme. According to the above embodiment, it is possible to easily generate the correct label associated with the user speech element included in the morpheme not to be engaged in the dialogue control among the morphemes included in the user speech. Thus, the load of generating the learning data used to learn the barge-in speech determination model is reduced. According to one embodiment of the present invention, a barge-in speech determination model is a model learned to cause a computer to determine, in a spoken dialogue system, whether to engage a barge-in speech, which is a user speech produced to cut off ongoing output of a system speech in the spoken dialogue system that performs a dialogue with a user by outputting the system speech formed by a voice in response to the user speech formed by a voice produced by the user. The barge-in speech determination model is configured by machine learning based on learning data.
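By way of a non-limiting illustration of the label acquisition described above, the following sketch marks as unengaged every user morpheme that appears in the immediately previous system speech but in none of the response candidates. Morphemic analysis is assumed to have been performed upstream, and the example data are hypothetical.

def correct_labels(user_morphemes, system_morphemes, response_candidates):
    """Return one label per user morpheme: 1 = not to be engaged (the morpheme
    appears in the previous system speech but in no response candidate),
    0 = engaged."""
    candidate_set = set()
    for cand in response_candidates:
        candidate_set.update(cand)
    system_set = set(system_morphemes)
    labels = []
    for m in user_morphemes:
        unengaged = (m in system_set) and (m not in candidate_set)
        labels.append(1 if unengaged else 0)
    return labels

# "or" is echoed from the prompt and is not a candidate, so it is labeled 1;
# "udon" is a valid response candidate, so it is labeled 0.
print(correct_labels(["or", "udon"],
                     ["would", "you", "like", "udon", "or", "soba"],
                     [["udon"], ["soba"]]))   # [1, 0]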
The learning data includes, as input values, feature information including: a user speech feature series in which acoustic features of user speech elements obtained by dividing the user speech into times with a predetermined length are chronologically disposed; a system speech feature series in which acoustic features of system speech elements obtained by dividing an immediately previous system speech which is a system speech output immediately before the user speech into times with a predetermined length are chronologically disposed; and identification information granted to a system speech element included in a morpheme which, among morphemes included in the immediately previous system speech, corresponds to a predetermined part of speech and does not correspond to a response candidate acquired from a dialogue scenario that has a mutual response rule between the user speech and the system speech and assumed by the user as a response to the immediately previous system speech, among a plurality of system speech elements included in the system speech feature series. The learning data includes, as an output value, a correct label associated with the user speech element included in a morpheme not to be engaged in dialogue control of the spoken dialogue system among morphemes included in the user speech. The user speech feature series based on the user speech and the system speech feature series including the identification information based on the immediately previous system speech are set as inputs for the barge-in speech determination model, and a likelihood of each user speech element not to be engaged in the dialogue control of the spoken dialogue system is set as an output for the barge-in speech determination model, each speech element being included in the user speech. According to the above embodiment, since the barge-in speech determination model, in which the user speech feature series and the system speech feature series including the identification information are set as inputs and the likelihood of not being engaged is output for each user speech element, is configured by machine learning, it is possible to obtain a model which can determine whether to engage each user speech element included in the user speech with high precision. According to one embodiment, a spoken dialogue program causes a computer to function as a spoken dialogue system that performs a dialogue with a user by outputting a system speech formed by a voice and to realize: an acquisition function of acquiring a user speech formed by a voice produced by the user; a recognition function of outputting a recognition result obtained by recognizing the user speech acquired by the acquisition function as text information; a barge-in speech control function of determining whether to engage a barge-in speech which is the user speech produced to cut off ongoing output of the system speech; a dialogue control function of outputting a system response representing response content to be provided to the user based on the recognition result corresponding to the user speech other than the barge-in speech determined not to be engaged by the barge-in speech control function, with reference to a dialogue scenario that has a mutual response rule between the user speech and the system speech; a response generation function of generating the system speech based on the system response output by the dialogue control function; and an output function configured to output the system speech.
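Many architectures could realize the barge-in speech determination model summarized above. Purely as a non-limiting sketch, the following PyTorch module accepts a user speech feature series, a system speech feature series, and per-element identification flags, and outputs one non-engagement likelihood per user speech element. PyTorch is an assumed dependency, and the feature dimensions, the recurrent layers, and the fusion of the system context are illustrative assumptions rather than the disclosed design.

import torch
import torch.nn as nn

class BargeInModel(nn.Module):
    """Illustrative sequence labeller: emits, for every user speech element,
    the likelihood that the element is not to be engaged."""
    def __init__(self, user_dim, system_dim, hidden=64):
        super().__init__()
        # The system feature series (with its identification flag appended)
        # is summarized into a single context vector.
        self.system_encoder = nn.LSTM(system_dim + 1, hidden, batch_first=True)
        self.user_encoder = nn.LSTM(user_dim + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, user_feats, system_feats, system_id_flags):
        # user_feats:   (batch, T_user, user_dim)
        # system_feats: (batch, T_sys, system_dim); system_id_flags: (batch, T_sys)
        sys_in = torch.cat([system_feats, system_id_flags.unsqueeze(-1)], dim=-1)
        _, (sys_ctx, _) = self.system_encoder(sys_in)            # (1, batch, hidden)
        ctx = sys_ctx[-1].unsqueeze(1).expand(-1, user_feats.size(1), -1)
        out, _ = self.user_encoder(torch.cat([user_feats, ctx], dim=-1))
        return torch.sigmoid(self.head(out)).squeeze(-1)          # (batch, T_user)

model = BargeInModel(user_dim=40, system_dim=40)
likelihoods = model(torch.randn(1, 12, 40), torch.randn(1, 20, 40), torch.zeros(1, 20))
print(likelihoods.shape)   # torch.Size([1, 12]); one likelihood per user element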
The user speech is formed by one or more chronological user speech elements. The dialogue scenario includes a response candidate which is a response assumed to the system speech from the user. When each user speech element corresponds to a predetermined morpheme included in an immediately previous system speech which is the system speech output by the output function immediately before the user speech is produced by the user and does not correspond to an element of the response candidate to the immediately previous system speech in the dialogue scenario, the barge-in speech control function determines not to engage the user speech element or the user speech including the user speech element. In the program according to the above embodiment, when the user speech element corresponds to the predetermined morpheme included in the immediately previous system speech and does not correspond to the element of the response candidate of the immediately previous system speech, the user speech element is determined not to be engaged in the dialogue control. Accordingly, an erroneous operation in the spoken dialogue system is prevented and convenience for the user is improved. While the embodiments of the invention have been described above in detail, it is apparent to those skilled in the art that the invention is not limited to the embodiments described in this specification. The embodiment can be modified and altered in various forms without departing from the gist and scope of the invention defined by description in the appended claims. Accordingly, description in this specification is for exemplary explanation, and does not have any restrictive meaning for the embodiment. The aspects/embodiments described in this specification may be applied to systems employing Long Term Evolution (LTE), LTE-Advanced (LTE-A), SUPER 3G, IMT-Advanced, 4G, 5G, future radio access (FRA), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, ultra mobile broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, ultra-wideband (UWB), Bluetooth (registered trademark), or other appropriate systems and/or next-generation systems to which these systems are extended on the basis thereof. The order of the processing sequences, the sequences, the flowcharts, and the like of the aspects/embodiments described above in this specification may be changed as long as it does not cause any inconsistencies. For example, in the methods described in this specification, various steps are presented as elements in an exemplary order but the methods are not limited to the presented order. The input or output information or the like may be stored in a specific place (for example, a memory) or may be managed in a management table. The input or output information or the like may be overwritten, updated, or added. The output information or the like may be deleted. The input information or the like may be transmitted to another device. Determination may be performed using a value (0 or 1) which is expressed in one bit, may be performed using a Boolean value (true or false), or may be performed by comparison of numerical values (for example, comparison with a predetermined value). The aspects/embodiments described in this specification may be used alone, may be used in combination, or may be switched during implementation thereof. 
Transmission of predetermined information (for example, transmission of “X”) is not limited to explicit transmission, and may be performed by implicit transmission (for example, the predetermined information is not transmitted). While the present disclosure has been described in detail, it is apparent to those skilled in the art that the present disclosure is not limited to the embodiments described in the present disclosure. The present disclosure can be modified and altered in various forms without departing from the gist and scope of the invention defined by description in the appended claims. Accordingly, description in the present disclosure is for exemplary explanation, and does not have any restrictive meaning for the present disclosure. Regardless of whether it is called software, firmware, middleware, microcode, hardware description language, or another name, software can be widely interpreted to refer to commands, a command set, codes, code segments, program codes, a program, a sub program, a software module, an application, a software application, a software package, a routine, a sub routine, an object, an executable file, an execution thread, an order, a function, or the like. Software, commands, and the like may be transmitted and received via a transmission medium. For example, when software is transmitted from a web site, a server, or another remote source using wired technology such as a coaxial cable, an optical fiber cable, a twisted-pair wire, or a digital subscriber line (DSL) and/or wireless technology such as infrared rays, radio waves, or microwaves, the wired technology and/or the wireless technology are included in the definition of the transmission medium. Information, signals, and the like described in the present disclosure may be expressed using any of a variety of different techniques. For example, data, an instruction, a command, information, a signal, a bit, a symbol, and a chip which can be mentioned in the overall description may be expressed by a voltage, a current, an electromagnetic wave, a magnetic field or magnetic particles, a photo field or photons, or an arbitrary combination thereof. The terms described in the present disclosure and/or the terms required for understanding this specification may be substituted by terms having the same or similar meanings. The terms “system” and “network” are used synonymously in this specification. Information, parameters, and the like described in this specification may be expressed by absolute values, may be expressed by values relative to a predetermined value, or may be expressed by other corresponding information. Terms such as “determining” used in the present disclosure may include various operations of various types.
The “determining,” for example, may include a case in which judging, calculating, computing, processing, deriving, investigating, looking up, searching, inquiring (for example, looking up a table, a database, or any other data structure), or ascertaining is regarded as “determining.” In addition, “determining” may include a case in which receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, or accessing (for example, accessing data in a memory) is regarded as “determining.” Furthermore, “determining” may include a case in which resolving, selecting, choosing, establishing, comparing, or the like is regarded as “determining.” In other words, “determining” includes a case in which a certain operation is regarded as “determining.” Further, “determining” may be replaced with readings such as “assuming,” “expecting,” or “considering.” Description of “on the basis of” used in the present disclosure does not mean “only on the basis of” unless otherwise mentioned. In other words, description of “on the basis of” means both “only on the basis of” and “at least on the basis of.” Any reference to elements having names such as “first” and “second” which are used in this specification does not generally limit the amount or order of those elements. The terms can be conveniently used as methods for distinguishing two or more elements in this specification. Accordingly, reference to first and second elements does not mean that only two elements are employed or that the first element has to precede the second element in any form. When the terms “include,” “including,” and modifications thereof are used in this specification or the appended claims, the terms are intended to have a comprehensive meaning similar to the term “comprising.” The term “or” which is used in this specification or the claims is not intended to mean an exclusive logical sum. In this specification, a singular term includes plural forms unless it is apparent from the context or technically clear that only one is meant. Throughout the present disclosure, plural forms are assumed to be included unless a singular form is clearly indicated from the context.
REFERENCE SIGNS LIST
1: Spoken dialogue system
10: Model generation device
11: Learning speech acquisition unit
12: User speech feature extraction unit
13: System speech feature extraction unit
14: Identification information granting unit
15: Label acquisition unit
16: Model generation unit
17: Model output unit
20: Spoken dialogue device
21: Acquisition unit
22: Recognition unit
23: User speech feature acquisition unit
24: System speech feature acquisition unit
25: Barge-in speech control unit
26: Dialogue control unit
27: Response generation unit
28: Output unit
30: Dialogue scenario storage unit
40: Learning data storage unit
50: Model storage unit
M1, M2: Recording medium
m11: Learning speech acquisition module
m12: User speech feature extraction module
m13: System speech feature extraction module
m14: Identification information granting module
m15: Label acquisition module
m16: Model generation module
m17: Model output module
m21: Acquisition module
m22: Recognition module
m23: User speech feature acquisition module
m24: System speech feature acquisition module
m25: Barge-in speech control module
m26: Dialogue control module
m27: Response generation module
m28: Output module
P1: Model generation program
P2: Spoken dialogue program
73,789
11862168
DETAILED DESCRIPTION As introduced above, in certain situations, it is difficult to disambiguate between simultaneously speaking participants. For example, within a conference room or during a meeting, participants may simultaneously speak into a conferencing device. In such instances, speech from the participants is mixed together, making it difficult to identify a number of participants, which participants are speaking, and/or what each participant said. Voice overlap is therefore a continuing concern for generating transcripts. That is, when participants speak at the same time it is difficult to identify a number of distinct participants within the voice overlap, which participants are associated with which speech, which participants were speaking, and/or an identity of the participants speaking. Further technological improvements may improve the user experience and provide accurate transcriptions. Described herein are, among other things, systems and methods for processing audio signals, identifying participants within a meeting, and generating a transcript of the meeting (e.g., conference, meeting, virtual session, etc.). Participants may use one or more devices for interacting within the meeting, such as phones, conferencing devices, and/or computers. The devices include microphones that capture speech for determining the presence of distinct participants that are spread across one or more environments. For example, within a conference room, multiple participants may speak simultaneously. In some instances, the multiple participants may be communicatively coupled to a remote device utilized by additional participant(s). To detect which participants are speaking and what each participant says (e.g., associating speech with certain participants), the audio data generated by the microphones may be analyzed or processed. For example, microphones may be directional and have increased sensitivity to sound coming from one or more specific directions compared to sound coming from other directions. In such instances, a directional response or a beampattern may be formed. Determining the directions of distinct speech allows the participants to be associated with the audio signals (or speech) generated by the microphones, respectively. By identifying the participants, or disambiguating between the speech of the participants, an accurate transcript may be generated. Furthermore, an identity of the participants may be determined using voiceprint and/or voice recognition techniques. This advantageously enables participants to be distinguished from one another, even in instances where the participants move around and/or talk simultaneously. More particularly, a device may include one or more microphones to generate one or more audio signals indicative of the speech received from participants in an environment, such as a conference room, and speech processing components to process the audio signals. In some situations, participants may speak at the same time and the audio signals may represent the speech of multiple participants; to produce a transcript, the speech of each participant may be isolated from that of the others. To isolate the speech of each participant, or remove unwanted audio from each audio signal, the device may be equipped with a beamforming module, an echo cancellation module, and/or other signal processing components to attenuate audio attributable to an echo, noise, double talk, or speech from other participants.
For example, utilizing characteristics of the audio signals generated by the microphones (e.g., energy, signal level, etc.) directions of speech, or a direction from which the speech is received, may be determined. In some instances, using beamforming techniques, directional beams formed by processing audio signals may be used to determine the direction from which the speech originated. In some instances, beamforming techniques are utilized to analyze the audio signals for determining the presence of speaking participants. Beamforming or spatial filtering is a signal processing technique for directional signal reception. Signals generated by the microphones may be processed in such a way that signals at particular angles experience constructive interference while others experience destructive interference. The beamforming techniques form multiple directional signals corresponding to different directions or orientations within the environment associated with speech. As the speech is received from a particular direction, the directional signal (e.g., formed beam) associated with that direction tends to exhibit more energy or signal strength than the other signals (or beams), thereby arriving at the direction of the participants, respectively. The beam that exhibits the greatest energy is selected and a direction to the participant is determined from that beam. This process may repeat to determine the direction, presence, position, or location of each participant within the environment. For example, as the microphones receive audio, the directional beams formed by processing signals may indicate a direction of the participants within the environment (i.e., as sources of sound). Given that the direction tends to exhibit more energy or signal strength than the other signals (or beams), participants within the environment may be determined. In this manner, the audio signals generated by a respective microphone may be processed using the audio signals from other microphones to determine a presence of each participant, and attenuate speech of other participants. This process may repeat to determine the presence of distinct participants within the environment, or to disambiguate the participants from one another. In another implementation, directionality or the presence of different participants may be ascertained by measuring time differences when the participant speech reached the microphones. By way of example, envision that the environment includes a first participant and a second participant. As the first participant and the second participant speak, the microphones generate audio data indicative of the speech, and transmit audio signals representative of the audio data (or speech). However, a first microphone located closest to the first participant may detect the audio of the first participant first compared to remaining microphones. Additionally, the first microphone may detect an increased energy or signal level compared to the remaining microphones. This determination may indicate the presence of the first participant adjacent or proximal to the first microphone. Similarly, a second microphone located closest to the second participant may detect the audio of the second participant first and/or may detect the audio of the second participant at an increased energy or signal. 
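As a non-limiting sketch of the beam-selection idea described above, the following Python code forms simple delay-and-sum beams for a small microphone array over a set of candidate directions and returns the direction whose beam output carries the most energy. The array geometry, the angular grid, and the use of whole-sample delays are simplifying assumptions, not the device's actual beamformer.

import numpy as np

def steer_and_sum(mic_signals, mic_positions, angle_deg, fs, c=343.0):
    """Delay-and-sum beam for one candidate far-field direction (2-D array).
    Delays are rounded to whole samples for simplicity."""
    direction = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
    delays = mic_positions @ direction / c          # seconds per microphone
    delays = delays - delays.min()
    beam = np.zeros(len(mic_signals[0]), dtype=np.float64)
    for sig, d in zip(mic_signals, delays):
        shift = int(round(d * fs))
        if shift < len(sig):
            beam[: len(sig) - shift] += sig[shift:]
    return beam / len(mic_signals)

def strongest_direction(mic_signals, mic_positions, fs, angles=range(0, 360, 10)):
    """Return the candidate angle whose beam output has the greatest energy."""
    energies = {a: float(np.sum(steer_and_sum(mic_signals, mic_positions, a, fs) ** 2))
                for a in angles}
    return max(energies, key=energies.get)

# Example: a 5 cm square array of four microphones, 1 s of 16 kHz audio.
fs = 16000
mics = np.array([[0.0, 0.0], [0.05, 0.0], [0.05, 0.05], [0.0, 0.05]])
signals = [np.random.randn(fs) for _ in mics]
print(strongest_direction(signals, mics, fs))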
As such, given that speech is directional and attenuates over distance, beamforming techniques may be used to identify the presence of the first participant and the second participant, or that there are two participants within the environment. In some instances, voice processing techniques such as same voice detection may be used for determining the presence of multiple participants. For example, the device (or a communicatively coupled device) may compare the audio data (and/or signals) generated by the first microphone and the second microphone to determine similarities and/or differences therebetween. These similarities and/or differences may indicate or be used to determine the number of distinct participants within an environment and to disambiguate the participants from one another. By identifying the participants, or disambiguating the speech of each participant, the audio signals (or data) may be attenuated to isolate the speech of each participant. In such instances, the audio processing techniques may filter out or attenuate noise to generate a processed audio signal that represents the speech of each participant. Therein, the processed audio signal may substantially represent the speech of a single participant. For example, continuing with the above example, as the first microphone may be located closest to the first participant, the speech of the second participant may be attenuated from the audio signal generated by the first microphone. In some instances, the audio data generated and the audio signals transmitted by the other microphones may be utilized to isolate the speech of the first participant. For example, the audio data generated by the second microphone, or other microphones of the device, may be used to attenuate the speech of the second participant from the audio data generated by the first microphone (e.g., using same voice detection). As a result, the speech of the first participant may be isolated from the speech of the other participants in the room. This process may repeat to identify the number of participants in the environment. In some instances, other audio signal processing modules may be implemented to reduce noise, identify same voice, double-talk, or echo, and/or to attenuate any signal components associated with sound other than the speech of the associated participant. In turn, after processing the audio signals, a clean, high-quality audio signal may be generated for each participant. Such processed audio signals, which represent or correspond to the speech of a single participant, may be used when generating the transcript. In some instances, certain microphones may be associated with respective participants. In future instances, the identification of or disambiguation between the participants may be determined through the association of the participants with the microphones. For example, after determining that the first microphone first receives or captures the audio of the first participant, and/or at the highest energy level, the first microphone may be associated with the first participant. Similarly, after determining that the second microphone first receives or captures the audio of the second participant, and/or at the highest energy level, the second microphone may be associated with the second participant. As the microphones continue to receive audio, audio corresponding to the first participant may be determined (or identified) and audio corresponding to the second participant may be determined (or identified).
Moreover, processing techniques may attenuate noise and/or audio from the other participants, such that the processed audio signal from the first microphone substantially represents the speech of the first participant. In some instances, mapping information may be used to assign the audio data to respective participants. In some instances, participants may be associated with virtual microphones. For example, if a particular device or environment includes two microphones, but three participants, one or more of the participant(s) may be associated with virtual microphones that represent a combination of actual microphones. For example, a third participant may be associated with fifty percent of the first microphone and fifty percent of the second microphone, in instances where the third participant is halfway between the first microphone and the second microphone. If the third participant moves closer to the first microphone, then the third participant may be associated with eighty percent of the first microphone and twenty percent of the second microphone. Here, this “virtual” microphone may then be associated with the third participant. That is, because the device or the environment includes more participants than microphones, these virtual microphones may be associated with respective participants. In turn, the virtual microphones may be used to generate audio data that represents the speech of the third participant. For example, the device (or a communicatively coupled device), may use eighty percent of the output of the first microphone and combine that with twenty percent of the output of the second microphone to generate an audio signal that represents the speech of the third participant. After microphones receive and/or generate audio data, and/or after the devices process the audio data using beamforming or other techniques, the device may transmit audio signals to a remote system, speech processing service, or transcription service. By transmitting each of the audio signals, separately, which substantially corresponds to speech of a single participant, the transcription service may process the audio signals to determine words associated with the speech of each participant. That is, by transmitting the audio signals of each microphone, the transcription service may determine the speech of each participant by analyzing the individual audio signals. However, in some instances, the transcription service may perform signal processing to determine the number of participants within the environment and/or to disambiguate the speech of the participants using the audio data captured at the microphones. In some instances, the transcription service may verify the number of participants determined by the device. For example, the transcription service may receive audio signals from the device to determine the number of distinct participants and/or to disambiguate the speech of the participants, for verifying and/or confirming the determination made at the device. After processing the audio signals, the transcription service may generate a transcript of the meeting. For example, using the processed audio signals, the utterances, words, phrases, or speech of the participants may be determined. Knowing which participants spoke, which audio is associated with each participant, as well as their respective utterances, allows for an accurate transcript of the meeting to be generated. 
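Purely as a non-limiting illustration of the virtual-microphone idea described above, the following sketch mixes physical microphone signals with fractional weights (for example, eighty percent of the first microphone and twenty percent of the second). The weights, signal lengths, and sample values are hypothetical.

import numpy as np

def virtual_microphone(mic_signals, weights):
    """Combine physical microphone signals into one virtual-microphone signal.
    weights are fractional contributions and should sum to 1."""
    weights = np.asarray(weights, dtype=np.float64)
    stacked = np.vstack(mic_signals).astype(np.float64)
    return weights @ stacked           # shape: (n_samples,)

# Third participant halfway between mic 1 and mic 2, then moving toward mic 1.
mic1 = np.random.randn(16000)
mic2 = np.random.randn(16000)
halfway = virtual_microphone([mic1, mic2], [0.5, 0.5])
closer_to_mic1 = virtual_microphone([mic1, mic2], [0.8, 0.2])
print(halfway.shape, closer_to_mic1.shape)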
That is, by using the directional microphones to determine the participants, or identifying the participants within the environment, the transcript may identify which participants spoke and their respective speech. At this point, the first microphone may be associated with the first participant and the second microphone may be associated with the second participant. Additionally, participants may be associated with virtual microphones. That is, after attenuating noise, the speech of the first participant and the second participant may be determined, respectively. In some instances, after associating participants with the microphones, an identity of the participants may be determined. To determine an identity of the participants, audio signatures (e.g., acoustic fingerprint, voice signature, voiceprint, etc.) associated with the audio signals may be compared against audio signatures stored in association with participant profiles. For example, an audio signature of the audio signal corresponding to the first participant may be compared against stored audio signatures to determine an identity of the first participant. Each signature may uniquely identify a participant's voice based on a combination of one or more of volume, pitch, tone, frequency, and the like. If a similarity between the audio signal and a stored audio signature is greater than a threshold, an identity of the participant may be determined (e.g., using an identifier associated with the stored audio signature). Knowing the identity of the participants allows for the transcript to be annotated. For example, after comparing the audio signatures, the identity or name of the first participant may be John and the identity or name of the second participant may be Pamela. Therein, the transcript may indicate speech corresponding to John and speech corresponding to Pamela, respectively. In some instances, the identity of the participants may also be used to indicate to other participants within the meeting the identity of the speaking participant. For example, if John and Pamela are in a conference room, and another participant is located at a remote location, an indication may be provided to the other participant (e.g., to a device that the other participant is using) to indicate whether John and/or Pamela is/are speaking. In this sense, the other participants may receive an indication of which participant(s), among the participants in the meeting, is/are speaking. In some instances, the transcript may be utilized to determine one or more action item(s) or task(s). For example, the transcription service may be configured to analyze the transcript to identify commands of the participants, and to perform, in some examples along with one or more other computing devices, various operations such as scheduling meetings, setting reminders, ordering goods, and so forth. As the transcripts identify which participant spoke, or which portions of the transcript were spoken by which participants, task(s) may be created. For example, during the meeting, the first participant (e.g., John) may utter a phrase, such as “Please remind me to schedule a company meeting.” After associating this speech (or request) with John, an action item may be generated for John that reminds him to schedule a meeting. Accordingly, knowing the identity of the participants allows for a complete transcript to be generated, indicating which participants spoke and what they said, as well as allowing commands, or action item(s), to be associated with each participant.
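By way of a non-limiting sketch of the signature comparison described above, the following Python snippet scores an observed audio signature against stored participant profiles by cosine similarity and returns an identity only when the best score clears a threshold. The embedding vectors, names, and threshold are hypothetical; a real system would derive such signatures with a dedicated speaker-embedding or voiceprint model.

import numpy as np

def identify_speaker(signature, profiles, threshold=0.75):
    """Return the profile name whose stored signature is most similar to the
    observed one, or None when no similarity clears the threshold."""
    best_name, best_score = None, -1.0
    for name, stored in profiles.items():
        score = np.dot(signature, stored) / (
            np.linalg.norm(signature) * np.linalg.norm(stored) + 1e-10)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

profiles = {"John": np.array([0.9, 0.1, 0.3]), "Pamela": np.array([0.1, 0.8, 0.4])}
print(identify_speaker(np.array([0.85, 0.15, 0.32]), profiles))   # John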
As discussed above, the devices may include various components to process audio, such as speech-processing components, to analyze speech of the participants. In some examples, the devices may have relatively low functionality with respect to processing the audio. For example, the devices may include pre-processing components to perform less complicated processing on the audio, such as beamforming components, echo-cancellation components, wake-word detection components, and so forth. Additionally, and/or alternatively, in some instances, the devices may be configured to perform speech recognition, such as automatic speech recognition (ASR), natural language processing (NLP), and/or natural language understanding (NLU), on the audio signals to identify words or phrases associated with the speech of the participant(s), as well as an intent associated with the words or phrases, or may be configured to provide the audio data to another device (e.g., a remote service such as remote system for performing the ASR, NLU, and/or NLP on the audio data). In such examples, the devices may serve as an interface or “middle man” between a remote system and the participants. In this way, the more intensive processing involved in speech processing may be performed using resources of the remote systems, which may increase the performance of the speech-processing techniques utilized on audio data generated by the devices. For example, while the devices may be configured with components for determining metadata associated with the audio data (e.g., SNR values, timestamp data, etc.), in some examples the devices may relay audio signals to the transcription service which performs processing techniques to determine the identity of participants and/or generate a transcript. The remote system may perform ASR on the audio signals to identify speech, translate speech into text, and/or analyze the text to identify intents, context, commands, etc. Therein, the transcript may be generated as a result of the speech processing. However, any combination of processing may be performed by the devices and/or the remote system to generate the transcript of the meeting. Although the above discussion relates to determining the presence or identification of two participants sharing a device within an environment, the techniques discussed herein may be utilized to identify any number of participants within the environment, or associating audio with any number of participants. Still, environments may include more than one device that captures speech. For example, external microphones may be coupled to a device for capturing audio of the participants. In such instances, the audio data generated by the microphone(s), of each device, may be processed and compared to determine a number of participants engaged in the meeting and to disambiguate their respective speech from one another to generate a transcript of the meeting. Still the transcription service may receive audio signals from any number of devices for use in generating transcripts. For example, the transcription service may receive audio signal(s) from another device operated by a third participant engaged in the meeting. Therein, the transcript may indicate the speech of the first participant, the second participant, and the third participant, respectively. The transcription service may therefore receive audio signals from any number of devices, associated with one or more participants, and across one or more distinct environments, for generating transcripts of the meeting. 
Therein, the transcript may be distributed to the participants and/or action item(s) may be identified. As such, the present disclosure is directed to generating transcripts of a meeting and determining the respective participants associated with speech uttered during the meeting. In some instances, the audio obtained during the meeting may be processed to filter noise and obtain clean (high-quality) audio data for use in generating a transcript. In this sense, the audio data (or signals) may be compared to one another in an iterative process to identify the number of distinct participants and to generate processed audio signals that represent the speech of the respective participants. Therein, in some instances, the individual audio streams or audio data generated by the microphones may be processed to identify the participants (e.g., speakers) corresponding to the audio data. For example, audio signatures derived from the audio data may be compared against stored signatures associated with the participants to annotate the transcript. The present disclosure provides an overall understanding of the principles of the structure, function, device, and system disclosed herein. One or more examples of the present disclosure are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand and appreciate that the devices, the systems, and/or the methods specifically described herein and illustrated in the accompanying drawings are non-limiting embodiments. The features illustrated or described in connection with one embodiment, or instance, may be combined with the features of other embodiments or instances. Such modifications and variations are intended to be included within the scope of the disclosure and appended claims. FIG. 1 illustrates a schematic diagram 100 of an illustrative environment 102 in which participants engage in a meeting using one or more devices. For example, within the environment 102, a first participant 104(1) and a second participant 104(2) may utter first speech 106(1) and second speech 106(2), respectively. A device 108(1) within the environment 102 detects the first speech 106(1) and the second speech 106(2). In this example, the environment 102 may be a room or an office and the first participant 104(1) and the second participant 104(2) may interact with the device 108(1). In some instances, collectively, the first participant 104(1) and the second participant 104(2) may be referred to herein as “the participants 104,” which utilize the device 108(1) for engaging within the meeting. However, although the environment 102 is shown including two participants, the environment 102 may include any number of participants (e.g., three, four, five, etc.) and the techniques and processes discussed herein may extend to identify or disambiguate between the participants. The device 108(1) may communicatively couple to a transcription service 110 that functions to generate transcripts of the meeting. The transcription service 110 may also act as a host for the meeting and/or distribute content or media source(s) (e.g., audio, video, etc.) to other participants within the meeting. For example, the transcription service 110 may allow the participants 104 to communicate with a third participant 112 located remote from the environment 102 and interacting with a device 108(2). However, in some instances, the meeting may only include the participants 104 within the environment 102, and the participants 104 may not engage with remote participants (i.e., the third participant 112).
In some instances, the device 108(2) may include similar components and/or a similar functionality as the device 108(1). In some instances, the device 108(1) and the device 108(2) may be collectively referred to herein as “the devices 108” or individually, “the device 108.” The devices 108 may be communicatively coupled to the transcription service 110 and one another over a network 114. The network 114 may include any viable communication technology, such as wired and/or wireless modalities and/or technologies. The network 114 may include any combination of Personal Area Networks (PANs), Local Area Networks (LANs), Campus Area Networks (CANs), Metropolitan Area Networks (MANs), extranets, intranets, the Internet, short-range wireless communication networks (e.g., ZigBee, Bluetooth, etc.), Wide Area Networks (WANs)—both centralized and/or distributed—and/or any combination, permutation, and/or aggregation thereof. As introduced above, the first participant 104(1), the second participant 104(2), and the third participant 112 may engage in the meeting. The meeting may include video and/or audio conferencing, communication sessions, teleconferencing, and/or other online environments in which participants communicate with one another (e.g., chat rooms) either remotely or at the same location. In FIG. 1, the participants 104 are shown communicating with the third participant 112 via the device 108(1). The device 108(1) may be configured to provide feedback or messages to the participants 104, such as speech of the third participant 112. In some instances, the device 108(1) may be configured to record audio and/or video of the meeting, or of the participants 104, for generating transcripts of the meeting between the participants 104. The devices 108 may be one or more devices, such as but not limited to a smart phone, a smart watch, a personal computer (“PC”), desktop workstation, laptop computer, tablet computer, notebook computer, personal digital assistant (“PDA”), electronic-book reader, game console, set-top box, consumer electronics device, server computer, a telephone, a telephone conferencing device, video conferencing device, or any other type of computing device capable of connecting to the network 114. Interface(s) 116 of the device 108(1) and interface(s) 118 of the transcription service 110 are also provided to facilitate connection to, and data transmission over, the network 114. The device 108(1) is equipped with an array of microphones 120 for capturing verbal input, utterances, or speech (e.g., the first speech 106(1) and the second speech 106(2)) of the participants 104, as well as any other sounds in the environment 102. Although multiple microphones 120 are discussed, in some instances, the devices 108 may be embodied with only one microphone. Additionally, the microphones 120 may be external microphones that are not physical components of the device 108(1), but which are communicatively coupled to the device 108(1) (e.g., Bluetooth or hard-wired, such as USB, A/V jack, etc.). The device 108 may utilize same voice or speech detection, beamforming, and noise cancellation functions to provide individual audio streams to the transcription service 110 so that automated transcriptions are better facilitated. For example, the device 108(1) may include speech-processing component(s) 122 stored within memory 124 of the device 108(1) and which process audio signal representations of the speech received or captured by the microphones 120.
Processor(s) 128 may power the components of the device 108(1), such as components stored in the memory 124, and/or perform various operations described herein. In some instances, the speech-processing component(s) 122 may include a wake word engine, speech recognition, natural language processing, echo cancellation, noise reduction, beamforming, and the like to enable speech processing. In instances where multiple participants share a device, such as in the environment 102, audio processing techniques may be utilized to distinguish participants from one another. As the participants 104 within the environment 102 speak, each of the microphones 120 may generate corresponding audio data and/or audio signals. The device 108(1) detects when the participants 104 begin talking, and the microphones 120 may each receive audio of the participants 104 at different times and/or at different energy levels. These characteristics may be used to ascertain the direction of the participants 104 or of discrete sources of sound within the environment 102. In one implementation, the speech-processing component(s) 122 may include a beamforming module or component used to process audio signals generated by the microphones 120. Directional beams formed by processing the audio signals may be used to determine the direction from which the speech originated. Therein, sources of sound may be determined, and these sources may be associated with the respective participants 104 in the environment 102. In another implementation, directionality may be ascertained by measuring time differences as to when utterances of the participants 104 reach the microphones 120. For example, as the first participant 104(1) speaks, a first microphone of the microphones 120 located closest to the first participant 104(1) may first detect the first speech 106(1) of the first participant 104(1). The remaining microphones 120 may receive the first speech 106(1) at various delays, or offsets. Additionally, the first microphone located closest to the first participant 104(1) may detect audio of the first speech 106(1) at an increased energy or signal level. For example, the speech-processing component(s) 122 may identify the participants 104 based upon the power level from each of the microphones 120. In some instances, the speech-processing component(s) 122 may compute the power level of the audio signals from the microphones 120 and rank order them in decreasing order of signal power. The speech-processing component(s) 122 may then select or identify a predetermined number of microphones with the greatest signal power. Each one of the identified microphones 120 may then be associated with one of the participants 104. This determination may indicate that the first participant 104(1) is speaking, or a direction of the first participant 104(1) relative to the device 108(1). This process may repeat for the speech captured by the remaining microphones 120 to determine the presence of additional participants within the environment 102, such as the second participant 104(2). As such, within the environment 102, the presence of distinct participants may be determined as well as their relative location or direction from the device 108(1). In some instances, the audio processing techniques may involve same voice detection for identifying similar speech within the audio signals and determining a discrete number of participants. For example, the audio data generated by the microphones 120 may be compared with one another to determine similarities and/or differences.
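As a non-limiting sketch of the power-ranking step described above, the following snippet computes mean signal power per microphone, ranks the microphones in decreasing order, and keeps the top N to associate with active participants. The gains, microphone count, and function names are illustrative assumptions.

import numpy as np

def rank_microphones(mic_signals, top_n):
    """Rank microphones by mean signal power (decreasing) and return the
    indices of the top_n microphones, one to associate with each participant."""
    powers = [float(np.mean(np.square(np.asarray(sig, dtype=np.float64))))
              for sig in mic_signals]
    order = np.argsort(powers)[::-1]           # highest power first
    return [int(i) for i in order[:top_n]], powers

# Two participants, four microphones: mics 0 and 2 carry the loudest speech.
signals = [np.random.randn(16000) * g for g in (2.0, 0.3, 1.5, 0.2)]
selected, _ = rank_microphones(signals, top_n=2)
print(selected)    # e.g. [0, 2]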
By comparing frequencies, pitch, amplitude, and/or other characteristics of the audio data and/or signals, similarities and/or differences may be determined. By identifying the similarities and/or differences, this comparison may indicate the number of discrete sources of sound within the environment 102. These discrete sources of sound may be used to indicate the number of participants present within the environment 102. As such, by comparing characteristics of the audio data and/or signals, the speech of the participants 104 may be disambiguated from one another and the number of participants may be determined. Moreover, in instances where the participants 104 speak at the same time, each of the microphones 120 may generate respective audio representing the speech of the participants 104. As noted above, beamforming or other audio processing techniques may be used to determine that two participants are speaking simultaneously, based on the comparison and/or processing of the audio (or signals). Determining a presence of the participants 104, or distinguishing the speech between the participants 104, allows for the differentiation of the participants 104 within the environment 102. The device 108(1) may therefore include components that adaptively “hone in” on active participants and capture speech signals emanating therefrom. This improves the perceptual quality and intelligibility of such speech signals, even in instances where the participants 104 are moving around the environment 102 in which the device 108(1) is being utilized, or when two or more active participants are speaking simultaneously. In some instances, once the participants 104 are distinguished from one another, the participants 104 may be associated with corresponding audio signals generated by the microphones 120 and/or associated with respective microphones 120. The audio data (or signal) generated by the microphone closest to each participant may be selected for speech processing. For example, continuing with the above example, the first microphone and the second microphone may be chosen based on their corresponding audio having the highest signal strength for the first participant 104(1) and the second participant 104(2), respectively. The first microphone may be associated with the first participant 104(1) and the second microphone may be associated with the second participant 104(2). The audio data generated by the first microphone may be processed for determining speech of the first participant 104(1), while the audio data generated by the second microphone may be processed for determining speech of the second participant 104(2). Therein, the speech-processing component(s) 122 may attenuate noise to isolate the speech of the participants 104 within the environment 102. However, as noted above, participants may be associated with virtual microphones in instances where the environment 102 includes a greater number of participants than microphones (e.g., not every participant may be associated with a physical microphone). Here, virtual microphones may be associated with the participant(s), such that the speech of the participants may be determined using a combination of the audio data generated by the microphones 120. For example, audio representing the speech of an additional participant within the environment 102 may be generated using a certain combination of the audio data generated by the microphones (e.g., forty percent of the first microphone and sixty percent of the second microphone).
Virtual microphones therefore allow a combination of microphones to be used to generate audio data associated with additional participants in the environment 102. In some instances, the speech-processing component(s) 122 may filter out or attenuate noise to generate processed audio data or a processed audio signal that substantially represents the speech of each of the participants 104. For example, as the first microphone may be located closest to the first participant 104(1), the speech of the second participant 104(2) may be attenuated from the audio data generated by the first microphone (or using other audio data generated by additional microphones in the environment 102). In some instances, this may be accomplished using echo cancellation, same voice detection, noise reduction, and/or other techniques. In other words, as the speech of the second participant 104(2) may be received at the first microphone, this speech may be attenuated from the audio data generated by the first microphone to obtain processed audio data that substantially represents the speech of the first participant 104(1). As a result, the speech of the first participant 104(1) may be isolated from that of the other participants in the environment. Therein, by processing the audio data, a clean, high-quality audio signal may be generated for each of the participants 104. In some instances, after determining the presence of the participants 104 within the environment 102, and which microphones are closest to or associated with the participants 104, respectively, the device 108(1) may transmit audio signals to the transcription service 110 for advanced signal processing. In some instances, the advanced signal processing performed by the transcription service 110 may better sort out the speech in the environment 102 and thus increase the quality of the transcripts. In some instances, the device 108(1) separately transmits the audio signals to the transcription service 110 over the network 114. For example, in instances where the device 108(1) detects two participants (i.e., the participants 104), the device 108(1) may transmit, to the transcription service 110, an audio signal 126(1) generated by the first microphone and an audio signal 126(2) generated by the second microphone. That is, after determining which audio data generated by the microphones 120 corresponds to different participants, the device 108(1) may transmit an audio signal representative of the audio data, separately, to the transcription service 110. However, in some instances, the device 108(1) may transmit all of the audio data generated by the microphones 120, respectively. For example, in instances where the device 108(1) includes four microphones 120, the device 108(1) may separately transmit first, second, third, and fourth audio signals generated by the microphones 120. Therein, the transcription service 110 may identify the audio associated with each participant, or which microphones are associated with the participants 104. Additionally, or alternatively, in some instances, the device 108(1) may transmit the audio signals captured by the microphones 120 as a packet to the transcription service 110. The audio signals may be embedded with metadata that indicates or identifies which portions of the audio correspond to the audio signal 126(1), the audio signal 126(2), and so forth. In some instances, the device 108(1) may process the audio generated by the microphones 120 using the speech-processing component(s) 122 in whole or in part.
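One conventional way to realize the attenuation described above (removing the second participant's leaked speech from the first microphone using another microphone's audio as a reference) is adaptive interference cancellation. The following normalized least-mean-squares (NLMS) sketch is a non-limiting illustration of that general idea rather than the specific processing chain used by the device 108(1); the filter length, step size, and synthetic signals are assumptions.

import numpy as np

def nlms_cancel(primary, reference, taps=64, mu=0.5, eps=1e-8):
    """Adaptively estimate the reference-correlated component in the primary
    signal and subtract it, leaving mostly the nearby participant's speech."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    buf = np.zeros(taps)
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        est = w @ buf                       # interference estimate
        err = primary[n] - est              # cleaned sample
        w += mu * err * buf / (buf @ buf + eps)
        out[n] = err
    return out

# Primary mic = first participant plus leaked second participant; the
# reference mic is dominated by the second participant (synthetic data).
rng = np.random.default_rng(0)
second = rng.standard_normal(16000)
first = rng.standard_normal(16000) * 0.5
primary = first + 0.8 * second
cleaned = nlms_cancel(primary, second)
print(np.var(primary), np.var(cleaned))     # variance drops as leakage is removed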
In some cases, some or all of the speech processing is performed by the transcription service110. Accordingly, in some instances, the device108(1) may send audio signals or data, or a partially processed version of the audio signals or data, to the transcription service110, where the audio signals or data are more fully processed. In some instances, the transcription service may perform a verification stage to check or confirm the processing performed by the device. For example, the device108(1) may determine the presence of two participants within the environment102, and correspondingly, transmit the audio signals126(1) and126(2) to the transcription service110. Additionally, or alternatively, the device108(1) may transmit audio signals generated by all microphones of the device108(1), and/or the transcription service110may receive audio signals from all microphones in the environment102. Therein, the transcription service110may utilize the audio signals for determining the number of distinct participants in the environment102to confirm or correct the determination made by the device108(1). As such, the transcription service110in some instances may function to confirm the processing and/or results of the device108(1). The transcription service110may include cloud services hosted, for example, on one or more servers. These servers may be arranged in any number of ways, such as server farms, stacks, and the like that are commonly used in data centers. In some examples, the transcription service110may include one or more processor(s)130and memory132storing various components. The processor(s)130may power the components of the transcription service110, such as components stored in the memory132. The transcription service110may include components such as, for example, a speech-processing system134, a speaker identification component136, a transcription component138, and/or a distribution component140for performing the operations discussed herein. It should be understood that while the speech-processing system134and the other components are depicted as separate from each other inFIG.1, some or all of the components may be a part of the same system. The speech-processing system134may receive the audio signal126(1) and the audio signal126(2) from the device108(1) for processing. Additionally, the speech-processing system134may receive audio signal126(3) generated by the device108(2) (or the microphone(s) of the device108(2)), that represents third speech106(3) of the third participant112. However, as shown, the third participant112may be the only participant utilizing the device108(2), and thus, speech or audio received from the device108(2) may be associated with the third participant112. The speech-processing system134may include an automatic speech recognition component (ASR)142and/or a natural language understanding component (NLU)144. For example, the ASR component142may process the audio signal126(1)-(3) to generate textual data corresponding to the first speech106(1), the second speech106(2), and the third speech106(3), respectively. In some examples, the ASR component142may generate ASR confidence scores representing the likelihood that a particular set of words of the textual data matches those uttered in the speech106(1)-(3), respectively. For example, the ASR component142may determine a confidence or likelihood that a particular word which matches the sounds would be included in the sentence at the specified location (e.g., using a language or grammar model). 
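Loosely sketched, the ASR confidence scoring described above can be thought of as combining an acoustic score with a language-model score and keeping the most likely hypothesis. The dataclass, the weighting, and the example scores below are assumptions made for illustration only, not the scoring actually used by the ASR component142.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    acoustic_score: float   # how well the words match the sounds
    lm_score: float         # how likely the word sequence is under a language/grammar model

def best_hypothesis(hypotheses, lm_weight=0.6):
    """Pick the textual interpretation with the highest combined confidence."""
    return max(
        hypotheses,
        key=lambda h: (1 - lm_weight) * h.acoustic_score + lm_weight * h.lm_score,
    )

candidates = [
    Hypothesis("schedule a meeting with Bob", acoustic_score=0.82, lm_score=0.90),
    Hypothesis("schedule a meeting with Rob", acoustic_score=0.84, lm_score=0.55),
]
print(best_hypothesis(candidates).text)   # -> "schedule a meeting with Bob"
```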
Thus, each potential textual interpretation (e.g., hypothesis) of the first speech106(1), the second speech106(2), and the third speech106(3) is associated with an ASR confidence score. The ASR component142may then return the textual data and, in various examples, the textual data may be sent to the NLU component144to be analyzed or processed. The NLU component144may determine an intent or otherwise assist in determining contextual information of the first speech106(1), the second speech106(2), and/or the third speech106(3). For example, if during the meeting the first participant104(1) issued a command such as “schedule a meeting with Bob,” the NLU component144may determine that the intent of the first participant104(1) is to schedule a meeting with Bob. After ASR and/or NLU processing, the transcription service110may generate a transcript146of the meeting between participants, such as the first participant104(1), the second participant104(2), and the third participant112. In some instances, the transcription service110may include a transcription component138for generating the transcript146, and which utilizes the audio signal126(1)-(3). In some instances, the transcription component138may generate the transcript146after ASR and/or NLU processing has been performed. By receiving the individual audio streams of the microphones120, or the audio signal126(1)-(3) separately, the transcription service110may generate transcripts that represent the speech of each of the three participants within the meeting. For example, by separating or disambiguating the speech of participants, the respective speech of the first participant104(1), the second participant104(2), and the third participant112may be determined. In some instances, after determining the transcripts of the participants individually, the transcription service110may generate the transcript146, which combines the respective speech of the participants engaged within the meeting. In doing so, time stamps may be compared such that the transcript146represents a chronological order of the dialogue or discussion that took place during the meeting. For example, as shown inFIG.1, the transcript146may individually identify the participants and their associated words, phrases, or speech. The transcript146may be stored within a transcript database148, which includes transcripts of meetings. In some instances, the participants may access the transcripts146within the transcript database148or the transcripts146may be automatically sent to participants after the meeting has concluded. The transcripts146may also be sent to people who were unable to attend the meeting. In some instances, the transcript146may be generated at the conclusion of the meeting, or may be generated in real time as the meeting is in progress. The audio signal126(1)-(3) may be utilized by the devices108and/or the transcription service110to perform speaker identification and/or determine the presence of distinct participants. For example, the speaker identification component136may obtain speech signals (e.g., the audio signal126(1)-(3)) originating from different participants to identify a particular participant associated with each speech signal. This identification may generate information used to assign each speech signal to an identified participant. As shown, the memory132may store or otherwise have access to participant profiles150, which include various data associated with participants engaged in the meeting. 
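As for the chronological ordering of the transcript146mentioned above, a minimal sketch might merge per-participant utterances by their time stamps; the Utterance layout and the sample dialogue are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    start_time: float   # seconds from the start of the meeting
    text: str

def build_transcript(per_participant_utterances):
    """Merge per-participant utterance lists into one chronologically ordered transcript."""
    merged = [u for utterances in per_participant_utterances for u in utterances]
    merged.sort(key=lambda u: u.start_time)
    return "\n".join(f"{u.speaker}: {u.text}" for u in merged)

john = [Utterance("John", 2.1, "Let's review the roadmap."),
        Utterance("John", 30.4, "Schedule a meeting with Bob.")]
pamela = [Utterance("Pamela", 12.7, "The launch date moved to June.")]
luke = [Utterance("Luke", 21.0, "I can own the follow-ups.")]

print(build_transcript([john, pamela, luke]))
```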
The memory124of the device108(1) may further store the participant profiles150. In some instances, the participant profiles150may include data relating to schedules of participants, identifiers associated with participants (e.g., username), devices of participants, contact information (e.g., email), and so forth. The schedules may be accessed for use in determining which participants are engaged in the meeting (e.g., meeting invite), which may further be used to assist in identifying which participants are speaking. Additionally, a given participant profile150may include one or more reference audio signatures that may be distinctive to the participant associated with the participant profile150. The one or more reference audio signatures may be used to identify which participants are speaking to the devices108, respectively, which participants are associated with respective audio data (or signals) received from the device108(2), or which participants are associated with respective audio data (or signals) generated by the microphones120of each device. For example, in some instances, the speaker identification component136may analyze a sample audio signature from the audio signal126(1)-(3) in association with the reference audio signatures to determine whether the sample audio signature corresponds to at least one of the reference audio signatures. A confidence value associated with such a determination may also be determined. In some instances, the participant profiles150may be queried for the reference audio signatures and a candidate set of reference audio signatures may be identified. The speaker identification component136may then analyze the candidate set of reference audio signatures against the sample audio signature from the audio signal126(1)-(3) to determine a confidence value associated with how closely the sample audio signature corresponds to each or some of the reference audio signatures. The reference audio signature with the most favorable confidence value may be selected and may indicate which participant profile150the audio data is associated with. Therein, a predicted or presumed identity of the participant may be determined. Upon determining the identity, the transcript146may be updated to indicate which speech corresponds to respective participants. For example, the speaker identification component136may determine that the first participant104(1) includes a first identity of John, the second participant104(2) includes a second identity of Pamela, and the third participant112includes a third identity of Luke. After determining the identity, as shown inFIG.1, the transcript146may indicate speech associated with each of the participants. In some instances, the transcript146may be parsed to identify key words that indicate an action item(s). For example, after the meeting, action item(s) or task(s) may be created for participants that represent follow-up tasks that are to be performed by participants of the meeting, respectively (e.g., schedule meeting, book trip, etc.). In some instances, in addition to performing beamforming or other audio processing techniques to determine the presence of distinct participants within the environment102, or distinguish between speech of the participants104within the environment102, the transcription service110may compare the audio signatures of the audio signal126(1) and the audio signal126(2). 
For example, the transcription service110may analyze the audio signal126(1) and the audio signal126(2) against audio signatures to determine the presence of different participants within the environment102. In this sense, the techniques discussed herein may use beamforming techniques and/or speaker identification techniques for determining the presence of the participants104within the environment102, for determining that multiple participants are utilizing the device108(1) within the environment102, or for otherwise disambiguating speech emanating within the environment and generated by the participants104, respectively. The memory132of the transcription service110is further shown including an audio data database152that stores audio data (e.g., the audio signal126(1)-(3)) received from the devices108. The audio data database152may therefore store a recording of the meeting for use in generating the transcript146. Additionally, the transcription service110may store other forms of content or media associated with the meeting, such as video data. As discussed above, the transcription service110, or systems and/or components thereof, supports communications between participants engaged in the meeting. The transcription service110, or another system and/or service, may function to deliver audio, or other forms of media sources, to devices within the meeting. For example, the transcription service110is shown including the distribution component140for distributing media source(s) (or content) amongst participants of the meeting, such as the devices108. The distribution component140may receive the audio signal126(3) from the device108(2) and transmit the audio signal126(3) to the device108(1) for output. The device108(1) may include loudspeaker(s)154for outputting the audio signal126(3), or may include other output components (e.g., display, lights, etc.). The loudspeaker(s)154may be physical components of the device108(1) and/or the loudspeaker(s)154may be coupled to the device108(1) through wireless or wired communication (e.g., Bluetooth, USB, etc.). The device108(1) may also be connected to home audio systems for outputting audio. The transcription service110may store mapping information156in the memory132, which may include information that maps each audio signal associated with each identified participant received by the transcription service110to a corresponding microphone within the environment102(or other environments). That is, as noted above, each of the participants may be associated with, or mapped to, a respective microphone or associated with respective audio signals received from the microphones120of the devices108, respectively. Upon receiving audio signals, the transcription service110may access the mapping information156to associate the audio signals with respective participants within the meeting. The mapping information156may also store information associated with the virtual microphones, or what combination of microphone(s) (or audio data generated therefrom) is associated with respective participants. Additionally, in some instances, other inputs and/or data may be used for determining the presence and/or identity of the participants104. For example, the devices108or the environments may include cameras that capture image data of participants for use in determining a number of participants, which participants speak, and/or an identity of participants. 
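The mapping information156described above might be pictured, under assumed field names, as a small record per participant listing the microphone(s), and weights in the case of virtual microphones, whose audio represents that participant's speech:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MicrophoneMapping:
    """Maps a participant to the microphone(s) whose audio represents their speech.

    A physical microphone has a single weight of 1.0; a virtual microphone is
    described as a weighted combination of physical microphones.
    """
    participant_id: str
    weights: Dict[str, float] = field(default_factory=dict)

mapping_information = [
    MicrophoneMapping("participant_104_1", {"microphone_200_1": 1.0}),
    MicrophoneMapping("participant_104_2", {"microphone_200_2": 1.0}),
    MicrophoneMapping("additional_participant", {"microphone_200_1": 0.4,
                                                 "microphone_200_2": 0.6}),
]
```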
In some instances, the devices108may transmit the image and/or video data to the transcription service110indicating the participants speaking. This image and/or video data may then be used to associate content within the transcript146with the correct participant. In some instances, the transcription service110may perform facial recognition analysis to identify the participants and speakers of the meeting. In some instances, if the devices108and/or the transcription service110are unable to distinguish between the participants104, or are unable to recognize different participants within the environment102, the device108(1) may output commands and/or instructions. For instance, the device108(1) may output a request for the participants104to move apart from one another (e.g., spatially distribute), for the participants104to speak one at a time, or may ask the participants104to state their names. Thereafter, the device108(1) may be able to disambiguate between the participants104and/or recognize the presence of the different participants104. In some instances, at the start of the meeting, the participants104may individually identify themselves and the microphone120closest to each respective participant may be associated with that participant for identification. In some instances, the device108may display certain appearance states based on the identity of participants, or which participant(s) are speaking. For example, the device108may include lighting elements that illuminate to different colors and/or patterns based on which participant is speaking. Such indications may be used to inform other participants in the meeting which participant is speaking and/or the identity of that participant. In some instances, the device108may additionally or alternatively include a display that presents identifying information of the speaking participant. In some instances, the transcription service110may generate translations for output or output audio interpretations in instances where participants of the meeting speak in more than one language. For example, if some participants speak English and some participants speak Italian, the transcription service110may translate the audio data, generate translated audio data that represents an interpretation of that audio data, and then transmit the translated audio data to the devices in the meeting for rebroadcasting. This rebroadcasting may translate the speech of participants into a common language for understanding by the participants in the meeting. FIG.2illustrates the device108(1), or an example device, for capturing audio within a meeting. Using the audio, the transcript146of the meeting may be generated. As shown, the device108(1) may include four microphones120spatially distributed on or around a top (or first end) of the device108(1). The microphones120, in some instances, may include a first microphone200(1), a second microphone200(2), a third microphone200(3), and a fourth microphone200(4). The audio data, or audio signals, generated by the microphones120may be utilized to identify discrete sources of sound emanating within the environment102, such as the first participant104(1) and the second participant104(2). For example, beamforming techniques may be used to identify that the environment102includes the first participant104(1) and the second participant104(2), or that the first participant104(1) and the second participant104(2) are speaking simultaneously. 
Additionally, spatially distributing the microphones120may assist in identifying the participants104and/or disambiguating the participants104from one another. As shown inFIG.2, the device108(1) may capture speech from participants using the microphones120. For example, the microphones120may capture the first speech106(1) and the second speech106(2). However,FIG.2illustrates that the first microphone200(1) may be located proximate or nearest the first speech106(1), while the second microphone200(2) may be located proximate or nearest the second speech106(2). As discussed above, these microphones may be respectively associated with the first participant104(1) and the second participant104(2) for use in generating the transcript146. That is, in some instances, after attenuating background noise or speech of other participants (e.g., using same voice detection), an audio signal generated by the first microphone200(1) may be used to formulate a transcript of the first participant104(1), while an audio signal generated by the second microphone200(2) may be used to formulate a transcript of the second participant104(2). In such instances, as the first microphone200(1) may capture audio of the first participant104(1) at the highest energy, or highest signal level, after processing, this audio data may be used to determine the speech of the first participant104(1). Similarly, as the second microphone200(2) may capture audio of the second participant104(2) at the highest energy, or highest signal level, after processing, this audio data may be used to determine the speech of the second participant104(2). To further illustrate, the first microphone200(1) may capture first audio representative of the first speech106(1). Using the audio data (or signals) generated by the other microphones120(e.g., the second microphone200(2), the third microphone200(3), and/or the fourth microphone200(4)), the device108(1) may isolate the speech of the first participant104(1). In one approach, the device108(1) (or another communicatively coupled device, system, or service) may include a beamforming component to analyze signals received from the microphones200(1)-(4) (i.e., the microphones120). As the speech is received from a particular direction, the directional signal (e.g., formed beam) associated with that direction tends to exhibit more energy or signal strength than the other signals (or beams), thereby indicating the direction of the participants speaking. Additionally, or alternatively, other techniques may be employed to determine a location or direction of the participants, or the number of distinct participants within the environment102. For instance, a timing component may be configured to analyze signals from the microphones200(1)-(4) to produce multiple time values indicative of timing differences between arrivals of the speech at the microphones200(1)-(4). The time difference of arrival values may be analyzed to ascertain the direction of the participants104and the approximate location of the participant(s) within the environment102. Triangulation, and the comparison of energy levels between microphones, may determine the presence of more than one participant and the location of the participant(s) relative to the device108(1). Isolating the first speech106(1) of the first participant104(1) may create a processed audio signal used to generate the transcript146. 
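The time-difference-of-arrival analysis described above can be sketched with a plain cross-correlation between two microphone signals; the sample rate, the synthetic signals, and the use of simple cross-correlation (rather than a full beamformer or GCC-PHAT) are assumptions for illustration.

```python
import numpy as np

def time_difference_of_arrival(sig_a, sig_b, sample_rate=16000):
    """Estimate the arrival-time offset between two microphones.

    Uses the lag of the peak of the full cross-correlation. A negative result
    means sig_b lags sig_a, i.e., the sound reached microphone A first.
    """
    correlation = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(correlation) - (len(sig_b) - 1)   # lag in samples
    return lag / sample_rate                          # lag in seconds

# Example: sig_b is sig_a delayed by 5 samples, so the sound reached mic A first;
# the function returns roughly -5 / 16000 = -0.0003 seconds.
rng = np.random.default_rng(0)
sig_a = rng.standard_normal(1000)
sig_b = np.concatenate([np.zeros(5), sig_a])[:1000]
print(time_difference_of_arrival(sig_a, sig_b))
```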
As part of this process, the first audio corresponding to the first speech106(1) may be associated with the first participant104(1) and/or the first microphone200(1) may be associated with the first participant104(1). Therein, subsequent audio data generated by the first microphone200(1) may be associated with the first participant104(1), and background noise, echo, or speech of other participants (e.g., the second participant104(2)) may be attenuated. Such processes may utilize same voice or speech detection across the audio data generated by the microphones200(1)-(4) to attenuate and/or filter out audio other than that of the first participant104(1) (e.g., the first speech106(1)). Therein, ASR and/or NLU may be performed on the audio signals to determine utterances of the first participant104(1). This process may repeat for second audio received by the second microphone200(2), that represents the second speech106(2) of the second participant104(2). More generally, the audio captured by the microphones120may be processed for determining or disambiguating the speech of any and all participants engaged in the meeting. After isolating the speech of participants, and performing ASR and/or NLU, the transcript146may be generated. The transcript146may indicate utterances made by the first participant104(1) and the second participant104(2), as well as utterances captured by devices in remote locations that indicate speech of additional participants. That is, the transcription service110may receive audio signals from a plurality of devices within the environment102, or at remote locations, for generating the transcript146. In this sense, the device108(1) may represent just one device that receives audio for use in generating the transcript146, or that obtains a recording of the meeting. For example, microphones located elsewhere in the environment102may be used to capture audio and/or personal devices carried by respective participants (e.g., held in their hands or pockets) may be used to capture audio. Although the device108(1) is illustrated and discussed as having certain components, the device108(1) may be an input/output device configured to record audio and/or video, receive voice queries, commands, and/or utterances and provide data to one or more of the services and/or other applications. For example, one or more cameras may capture video data within the environment102for use in determining the presence of the participants104. The device108(1) may also include one or more presentation devices (e.g., a video screen, speakers, etc.) that may be utilized to present sound and/or video to the participants104. FIGS.3-8illustrate various processes related to determining participants within a meeting and generating transcripts of the meeting. The processes described herein are illustrated as collections of blocks in logical flow diagrams, which represent a sequence of operations, some or all of which may be implemented in hardware, software, or a combination thereof. In the context of software, the blocks may represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, program the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the blocks are described should not be construed as a limitation, unless specifically noted. 
Any number of the described blocks may be combined in any order and/or in parallel to implement the process, or alternative processes, and not all of the blocks need be executed. For discussion purposes, the processes are described with reference to the environments, architectures, and systems described in the examples herein, such as, for example, those described with respect toFIGS.1and2, although the processes may be implemented in a wide variety of other environments, architectures, and systems. FIG.3illustrates an example process300for receiving audio, processing the audio, and transmitting the audio to generate transcripts. At302, the process300may receive a first audio signal generated by a first microphone. For example, the device108(1) may receive a first audio signal generated by the first microphone200(1). At304, the process300may receive a second audio signal generated by a second microphone. For example, the device108(1) may receive a second audio signal generated by the second microphone200(2). At306, the process300may receive an nth audio signal generated by an nth microphone. For example, the device108(1) may receive the nth audio signal generated by an nth microphone of the device108(1) (e.g., the third microphone200(3), the fourth microphone200(4), etc.). The nth microphone may also be separate from the device108(1) (e.g., part of a separate device, standalone microphone, etc.). At308, the process300may process the first audio signal, the second audio signal, and/or the nth audio signal. For example, after receiving the first audio signal126(1), the second audio signal126(2), and/or the nth audio signal, the process300may perform various techniques associated with processing the audio and determining participants associated with the audio, and/or determining which audio corresponds to participants within the meeting. By way of example, the processing techniques may include beamforming, acoustic echo cancellation, triangulation, same voice detection, and/or time of arrival. As discussed above, these processing techniques function to determine the participants104within the environment102, and/or which participants are substantially or primarily associated with the audio signal and/or the microphones120, for use in generating the transcript146. At310, the process300may determine that a first participant is associated with the first microphone and/or first processed audio signal. For example, as part of processing the audio received from the microphones120, the process300may determine that speech of the first participant104(1) (e.g., the first speech106(1)) is associated with the first microphone200(1) (e.g., based on an energy level, beamforming, etc.). Such determination, after processing the audio to attenuate noise or speech of other participants, may be utilized to identify the speech of the first participant104(1). For example, the techniques may process the audio to substantially cancel acoustic echoes and substantially reduce double talk. Noise reduction may also be provided to process the audio signals to substantially reduce noise originating from sources other than an associated participant. In this manner, audio signals may be processed to identify times where echoes are present, where double talk is likely, where background noise is present, and attempt to reduce these external factors to isolate and focus on the speech of the near participant. 
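The energy-level association mentioned at310might look, in a minimal sketch, like choosing the microphone whose audio carries a participant's speech segment at the highest RMS energy; the dictionary layout, segment indices, and random placeholder signals are assumptions.

```python
import numpy as np

def rms_energy(samples):
    samples = np.asarray(samples, dtype=float)
    return float(np.sqrt(np.mean(samples ** 2)))

def associate_participant_with_microphone(mic_signals, segment):
    """Pick the microphone that captured a speech segment at the highest energy.

    mic_signals: dict mapping microphone id -> full 1-D sample array.
    segment: (start_index, end_index) of the participant's speech within those arrays.
    """
    start, end = segment
    energies = {mic_id: rms_energy(sig[start:end]) for mic_id, sig in mic_signals.items()}
    return max(energies, key=energies.get)

mics = {"microphone_200_1": np.random.randn(16000),
        "microphone_200_2": 0.2 * np.random.randn(16000)}
print(associate_participant_with_microphone(mics, (4000, 8000)))  # very likely microphone_200_1
```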
By isolating signals indicative of the speech from the near participant, better signal quality is provided to enable more accurate interpretation of the speech. Therein, after attenuating the noise of other sources within the environment102, other than the first participant104(1), the first processed audio signal may correspond to the speech of the first participant104(1). At312, the process300may determine that a second participant is associated with the second microphone and/or second processed audio signal. For example, as part of processing the audio received from the microphones120, the process300may determine that speech of the second participant104(2) (e.g., the second speech106(2)) is associated with the second microphone200(2) (e.g., based on an energy level, beamforming, etc.). Such determination, after processing the audio to attenuate noise or speech of other participants, may be utilized to identify the speech of the second participant104(2). Therein, after attenuating the noise of other sources within the environment102, other than the second participant104(2), the second processed audio signal may correspond to the speech of the second participant104(2). At314, the process300may determine that an nth participant is associated with the nth microphone and/or an nth processed audio signal. For example, as part of processing the audio received from the microphones120, the process300may determine that speech of an nth participant is associated with the nth microphone. In some instances, the nth microphone may be a physical microphone of the device108(1), or may represent a virtual microphone that corresponds to audio data generated across multiple microphones. Therein, after attenuating the noise of other sources within the environment102, other than the nth participant, the nth processed audio signal may correspond to the speech of the nth participant. At316, the process300may transmit the first processed audio data, or a first processed audio signal. For example, after determining that the first processed audio signal or that the first microphone200(1) corresponds to the first participant104(1), the device108(1) may transmit the first processed audio signal (e.g., the audio signal126(1)) to the transcription service110. At318, the process300may transmit the second processed audio data, or a second processed audio signal. For example, after determining that the second processed audio signal or that the second microphone200(2) corresponds to the second participant104(2), the device108(1) may transmit the second processed audio signal (e.g., the audio signal126(2)) to the transcription service110. At320, the process300may transmit the nth processed audio data, or an nth processed audio signal. For example, after determining that the nth processed audio signal or that the nth microphone corresponds to the nth participant, the device108(1) may transmit the nth processed audio signal to the transcription service110. In some instances, the device108(1) may perform the processing of the transcription service110on the audio to determine, identify, and associate the audio (or signals) and/or microphones with participants. FIG.4illustrates an example process400for generating transcripts of a meeting. At402, the process400may receive a first processed audio signal (or data) corresponding to a first participant. For example, the transcription service110may receive the audio signal126(1) corresponding to speech of the first participant104(1), as captured by the device108(1). 
In some instances, the audio signal126(1) received by the transcription service110may already be processed for removing noise or audio from sources other than the first participant104(1). At404, the process400may associate a first microphone with the first participant. For example, the transcription service110may store the mapping information156that indicates the first microphone200(1) of the device108(1) is associated with the first participant104(1). Such association may indicate that the first microphone200(1) is nearest the first participant104(1) or substantially captures speech of the first participant104(1). Such association may be utilized when generating the transcript146of the meeting for identifying or determining the first speech106(1) of the first participant104(1). At406, the process400may receive a second processed audio signal (or data) corresponding to a second participant. For example, the transcription service110may receive the audio signal126(2) corresponding to speech of the second participant104(2), as captured by the device108(1). In some instances, the audio signal126(2) received by the transcription service110may already be processed for removing noise or audio from sources other than the second participant104(2). At408, the process400may associate a second microphone with the second participant. For example, the transcription service110may store the mapping information156that indicates the second microphone200(2) of the device108(1) is associated with the second participant104(2). Such association may indicate that the second microphone200(2) is nearest the second participant104(2) or substantially captures speech of the second participant104(2). Such association may be utilized when generating the transcript146of the meeting for identifying or determining the second speech106(2) of the second participant104(2). At410, the process400may receive a third processed audio signal (or data) corresponding to a third participant. For example, the transcription service110may receive the audio signal126(3) corresponding to the third speech106(3) of the third participant112, as captured by the device108(2). At412, the process400may associate a third microphone with the third participant. For example, the transcription service110may store the mapping information156that indicates a third microphone of the device108(2), or a microphone of the device108(2), is associated with the third participant112. Such association may indicate that the audio signal126(3) received from the device108(2) corresponds to the third speech106(3) or utterances of the third participant112. At414, the process400may determine whether additional processed audio signals (or data) are received. For example, the transcription service110may determine whether additional audio signals (or data) are received from the devices108engaged in the meeting or whether the meeting has concluded. In some instances, the transcription service110may continuously receive audio signals/data from the devices108throughout the meeting, or may receive the audio signal/data at the conclusion of the meeting for generating the transcript146. If the process400determines that no additional processed audio signals/data are received, the process400may follow the “NO” route and proceed to416. At416, the process400may generate a transcript of the meeting, which may represent the speech or utterances of the first participant, the second participant, and/or the third participant. 
For example, the transcription service110may perform ASR and/or NLU on the audio data (e.g., the audio signal126(1)-(3)) to generate the transcript146. In some instances, the transcription service110may utilize components, such as the transcription component138, for processing the audio signal126(1)-(3) and generating corresponding text associated with the speech106(1)-(3). Such text may be used to generate the transcript146of the meeting, and the utterances of the respective participants. Alternatively, if the process400at414determines that additional processed audio signals (or data) are received, or that the meeting has not concluded, the process400may follow the “YES” route and proceed to418. At418, the process400may associate the additional processed audio signals (or data) with the first participant, the second participant, or the third participant. For example, after receiving the additional processed audio signals (or data), the transcription service110may determine the originating source of the audio signals, or which microphone generated and/or received the audio associated with the additional processed audio signals. Such determination may indicate whether the additional audio signals, or data therein, are associated with the first participant104(1), the second participant104(2), or the third participant112. That is, the association of participants to respective microphones (e.g., using the mapping information156), may be used to determine which participant the additional audio signal(s) correspond to, or who is associated with, the additional audio signal(s). From418, the process400may loop to414and generate the transcript146if additional audio signal(s) are not received. As such, the process400illustrates a scenario whereby the transcription service110generates the transcript146of the meeting. After generating the transcript146, the transcription service110may store the transcript146and/or transmit the transcript146to participants of the meeting. FIG.5illustrates an example process500for receiving audio of a meeting, processing the audio data, and generating transcripts. At502, the process500may receive a first audio signal representative of first audio data generated by a first microphone, a second audio signal representative of second audio data generated by a second microphone, and/or an nth audio signal representative of nth audio data. For example, the transcription service110may receive, from the device108(1), the first audio signal126(1) and/or the second audio signal126(2). The transcription service110, however, may receive additional nth audio data generated by the device108(1), the device108(2), or other microphones within one or more environments in which the meeting takes place. In some instances, the transcription service110may receive any number of audio signals generated by microphones, and which are utilized for capturing utterances or speech of participants engaged in the meeting. At504, the process500may process the first audio signal (or data therein), the second audio signal (or data therein), and/or the nth audio signal (or data therein). For example, the transcription service110may process the audio data to determine a number of distinct participants engaged in the meeting, a number of distinct participants within the environment102(e.g., the first participant104(1) and the second participant104(2)), and/or to disambiguate the participants. 
As discussed above, such processing may include beamforming, time of arrival, noise cancellation, same voice detection, and/or a comparison of energy/signal levels for disambiguating speech of the participants. At506, the process500may determine a first participant associated with the first processed audio signal. For example, as part of processing the audio signal126, the transcription service110may determine to associate the first processed audio signal with the first participant104(1). In other words, the first processed audio signal (or data therein) may represent the speech of the first participant104(1), or a microphone that generated the audio signal126(1) is associated with the first participant104(1). At508, the process500may determine a second participant associated with the second processed audio signal. For example, as part of processing the audio signal126, the transcription service110may determine to associate the second processed audio signal with the second participant104(2). In other words, the second processed audio signal (or data therein) may represent the speech of the second participant104(2), or a microphone that generated the audio signal126(2) is associated with the second participant104(2). At510, the process500may determine an nth participant associated with an nth processed audio signal. For example, as part of processing the audio signals, the transcription service110may determine to associate the nth processed audio signal with the nth participant. The nth processed audio signal (or data therein) may represent the speech of the nth participant, or a microphone that generated the nth audio signal is associated with the nth participant. The nth audio signal may also be generated as a combination of audio signals from multiple microphones (e.g., virtual microphones). At510, the process500may generate a transcript of the meeting. For example, the transcription component138of the transcription service110may generate the transcript146. The transcript146may also be generated utilizing ASR and/or NLU techniques. In some instances, as part of generating the transcript, action item(s) may be identified. For example, the transcript146may be parsed for key words and/or key phrases to identify action items discussed during the meeting. In some instances, tasks may be generated that correspond to the action items. FIGS.6and7illustrate an example process for determining an identity of participants engaged in a meeting. At602, the process600may receive a first audio signal generated by a first microphone within an environment. For example, the transcription service110may receive, from the device108(1), the audio signal126(1) generated by the first microphone200(1). At604, the process600may receive a second audio signal generated by a second microphone within the environment. For example, the transcription service110may receive, from the device108(1), the audio signal126(2) generated by the second microphone200(2). At606, the process600may process the first audio signal to generate a first processed audio signal (or data). For example, components of the transcription service110may attenuate noise of other participants within the environment102such that the first processed audio signal substantially represents utterances or speech of the first participant104(1). 
At608, the process600may compare a signature of the first processed audio signal to previously stored signatures that are associated with participants to determine a first similarity between the signature of the first processed audio signal and the previously stored audio signatures. For example, the audio signatures stored in association with the participant profiles150may be compared to an audio signature of the first processed audio signal. This may include comparing a volume, pitch, frequency, tone, and/or other audio characteristic(s) of the first processed audio signal to the stored signatures. In some instances, the speaker identification component136may determine the first similarity. At610, the process600may determine whether the first similarity is greater than a first threshold. If not, meaning that the signature of the generated first processed audio signal does not match well with the selected signature to which it was compared, then the process600may follow the “NO” route and loop back to608to compare the signature of the first processed audio signal to another previously generated signature associated with a different participant. If, however, the calculated similarity is greater than the first threshold, meaning that the signature of the generated signal and the selected signature are strong matches, then the process600may follow the “YES” route and proceed to612. At612, the process600may determine a first identity of a first participant within the environment. For example, based on determining a match (e.g., above the first threshold) between the audio signature of the first processed audio signal and the previously stored audio signature, the process600may determine an identity of a participant (e.g., the first participant104(1)) associated with that previously stored audio signature. At614, the process600may process the second audio signal to generate a second processed audio signal. For example, components of the transcription service110may attenuate noise of other participants within the environment102such that the second processed audio signal substantially represents utterances of the second participant104(2). At616, the process600may compare a signature of the second processed audio signal to the previously stored signatures associated with participants to determine a second similarity between the signature of the second processed audio signal and the previously stored audio signatures. For example, audio signatures stored in association with the participant profiles150may be compared to an audio signature of the second processed audio signal. As discussed above, this may include comparing a volume, pitch, frequency, tone, and/or other audio characteristic(s) of the second processed audio signal to the stored signatures. In some instances, the speaker identification component136may determine the second similarity. At618, the process600determines whether the second similarity is greater than a second threshold. If not, meaning that the signature of the second processed audio signal does not match well with the selected signature to which it was compared, then the process600may follow the “NO” route and loop back to616to compare the signature of the generated second processed audio signal to another previously generated signature associated with a different participant. 
If, however, the calculated similarity is greater than the second threshold, meaning that the signature of the generated signal and the selected signature are strong matches, then the process600may follow the “YES” route and proceed to620. At620, the process600may determine a second identity of a second participant within the environment. For example, based on determining a match (e.g., above the second threshold) between the audio signature of the second processed audio signal and the previously stored audio signature, the process600may determine an identity of a participant (e.g., the second participant104(2)) associated with that previously stored audio signature. From620, the process600may proceed to “A” ofFIG.6whereby at622, shown inFIG.7, the process600may associate the first processed audio signal (or data) with the first participant. For example, based on the first similarity being greater than the first threshold, the process600may determine that the first processed audio signal is associated with the first participant104(1), or speech of the first participant104(1). At624, the process600may associate the second processed audio signal (or data) with the second participant. For example, based on the second similarity being greater than the second threshold, the process600may determine that the second processed audio signal is associated with the second participant104(2), or speech of the second participant104(2). At626, the process600may generate a transcript of the meeting, representing the respective utterances of the first participant and the second participant (or other participants within the environment102and/or engaged in the meeting). For example, the transcription component138of the transcription service110may generate the transcript146. The transcript146may be generated utilizing ASR and/or NLU techniques. In some instances, generally, the speaker identification component136may analyze a candidate set of reference audio signatures against the audio signature from the first and/or second processed audio signals to determine a confidence value associated with how closely the sample audio signature corresponds to each or some of the reference audio signatures. The reference audio signature with the most favorable confidence value may be selected and may indicate which participant profile150the audio data is associated with. In some instances, the process600may narrow a set of candidate reference audio signatures based on information about who was invited or who is attending the meeting (e.g., based on the participants accepting the meeting invite). From there, the transcription service110may determine participants within the meeting and may then compare the audio signatures of those participants to determine which participant said what during the meeting. Furthermore, in some instances, other forms of data may be used for identifying the participants, such as facial recognition obtained from cameras within the environment102. FIG.8illustrates an example process800for performing iterative operations to determine the number of participants within an environment, or for disambiguating participants from one another within an environment. At802, the process800may receive a first audio signal generated by a first microphone within an environment. For example, the transcription service110may receive a first audio signal generated by a first microphone of the device108(1) within the environment102. 
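Before continuing with the process800, the speaker identification just described (narrowing the candidate reference signatures to the meeting invitees and selecting the one with the most favorable confidence value) might be sketched as follows; the use of cosine similarity over signature vectors, the threshold, and the profile layout are assumptions rather than details of the speaker identification component136.

```python
import numpy as np

def identify_speaker(sample_signature, participant_profiles, invitees, threshold=0.75):
    """Pick the invited participant whose stored reference signature best matches a sample.

    participant_profiles: dict mapping participant name -> reference signature vector.
    invitees: names attached to the meeting invite, used to narrow the candidate set.
    Returns (name, confidence), or (None, best_confidence) if nothing clears the threshold.
    """
    invited = set(invitees)
    candidates = {name: sig for name, sig in participant_profiles.items() if name in invited}
    best_name, best_conf = None, -1.0
    for name, reference in candidates.items():
        a = np.asarray(sample_signature, dtype=float)
        b = np.asarray(reference, dtype=float)
        conf = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # cosine similarity
        if conf > best_conf:
            best_name, best_conf = name, conf
    return (best_name, best_conf) if best_conf >= threshold else (None, best_conf)
```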
At804, the process800may receive a second audio signal generated by a second microphone within the environment. For example, the transcription service110may receive a second audio signal generated by a second microphone of the device108(1), or of another device, in the environment102. At806, the process800may compare the first audio signal and/or the second audio signal. For example, audio processing components of the transcription service110may compare the first audio signal and the second audio signal to identify similarities and/or differences therebetween. In some instances, comparing the first audio signal and the second audio signal may include comparing frequencies, amplitudes, pitch, and/or other audio characteristics to identify the similarities and/or differences. At808, the process800may determine whether there is a similarity and/or a difference between the first audio signal and the second audio signal. For example, the transcription service110, based on comparing the first audio signal and the second audio signal, may determine a portion of the first audio signal that corresponds to a portion of the second audio signal, or vice versa, that represents the same speech or sound. For example, the first microphone and the second microphone may receive the same audio but at different energy levels. The comparison of the first audio signal and the second audio signal may therefore identify the portions of the speech that were received at the microphones, respectively. If at808the process800determines that there is not a similarity between the first audio signal and the second audio signal, then the process800may follow the “NO” route and proceed to810whereby the process800may determine a number of participants within the environment102based on the number of similarities and/or differences. Alternatively, if at808the process800determines that there are similarities and/or differences, the process800may follow the “YES” route and proceed to812, whereby the process800may associate the similarity and/or difference with a participant. For example, the transcription service110may associate the same audio, or the portion of the same audio represented within the first audio signal and the second audio signal, with a participant. This portion, as noted above, may represent the same speech of the participant as captured by the respective microphones in the environment102. At814, the process800may filter the similarity and/or the difference from the first audio signal and/or the second audio signal. For example, based on determining the portion of the first audio signal and the portion of the second audio signal that correspond to speech of a participant, that speech (or audio) may be filtered from the audio signals. Filtering this speech from the audio signals, respectively, may be used to identify additional participants within the environment102. That is, as shown, from814, the process800may loop to806whereby the process800may compare the first audio signal and the second audio signal. However, in this instance, the first audio signal and the second audio signal may be compared after filtering out the speech of the participant within the environment. Therein, the comparison of the filtered first audio signal and the filtered second audio signal may be used to identify additional participants within the environment102. At the conclusion, after there are no additional similarities and/or differences, a number of participants may be determined. 
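One loose interpretation of the iterative comparison in the process800is sketched below: frames where both microphones pick up speech are characterized by their inter-microphone energy ratio, and distinct ratio clusters are counted as distinct participants. The frame length, noise floor, clustering tolerance, and the greedy one-dimensional clustering are all assumptions; a deployed system would more likely rely on beamforming or speaker embeddings.

```python
import numpy as np

def estimate_participant_count(sig_a, sig_b, frame_len=512, noise_floor=1e-3, cluster_db=3.0):
    """Roughly count distinct talkers from two microphone signals.

    Frames with speech on both microphones are characterized by the energy ratio
    between the microphones (in dB); talkers at different positions tend to produce
    different ratios, so the number of ratio clusters estimates the participant count.
    """
    n_frames = min(len(sig_a), len(sig_b)) // frame_len
    ratios = []
    for i in range(n_frames):
        a = np.asarray(sig_a[i * frame_len:(i + 1) * frame_len], dtype=float)
        b = np.asarray(sig_b[i * frame_len:(i + 1) * frame_len], dtype=float)
        energy_a, energy_b = np.mean(a ** 2), np.mean(b ** 2)
        if energy_a > noise_floor and energy_b > noise_floor:   # speech active on both mics
            ratios.append(10 * np.log10(energy_a / energy_b))
    if not ratios:
        return 0
    # Greedy one-dimensional clustering: a ratio within `cluster_db` of the first
    # ratio in the current cluster is treated as the same talker; otherwise a new
    # talker (cluster) is assumed.
    cluster_starts = []
    for r in sorted(ratios):
        if not cluster_starts or abs(r - cluster_starts[-1]) > cluster_db:
            cluster_starts.append(r)
    return len(cluster_starts)
```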
Additionally, each of these similarities and/or differences, or the portions of the audio signals that are filtered out, may be used for generating a transcription of the meeting and/or associating microphones with participants. Furthermore, participants may be associated with virtual microphones, or the combination of audio signals across microphones, to determine a speech signal used to generate corresponding audio and/or data for the participant. Although the process800is discussed as being performed by the transcription service110, some or all of the audio processing may be carried out by the device108(1). Additionally, more than two audio signals may be received from the environment102for determining the presence of the participants and/or disambiguating between the participants. FIG.9is a system and network diagram that shows an illustrative operating environment900that includes a service provider network902. The service provider network902may be configured to implement aspects of the functionality described herein, such as the functions of the transcription service110to generate the transcripts146. The service provider network902may provide computing resources, like virtual machine (VM) instances and storage, on a permanent or an as-needed basis. The computing resources provided by the service provider network902may include data processing resources, data storage resources, networking resources, data communication resources, network services, and the like. Among other types of functionality, the computing resources provided by the service provider network902may be utilized to implement the various services and components described above. Each type of computing resource provided by the service provider network902may be general-purpose or may be available in a number of specific configurations. For example, data processing resources may be available as physical computers or VM instances in a number of different configurations. The VM instances may be configured to execute applications, including web servers, application servers, media servers, database servers, gaming applications, and/or other types of programs. Data storage resources may include file storage devices, block storage devices, and the like. The service provider network902may also be configured to provide other types of computing resources not mentioned specifically herein. The computing resources provided by the service provider network902may be enabled in one embodiment by one or more data centers904A-904N (which might be referred to herein singularly as “a data center904” or in the plural as “the data centers904”). The data centers904are facilities utilized to house and operate computer systems and associated components. The data centers904typically include redundant and backup power, communications, cooling, and security systems. The data centers904may also be located in geographically disparate locations, or regions806. One illustrative embodiment for a data center904that may be utilized to implement the technologies disclosed herein will be described below with regard toFIG.10. The transcription service110may utilize the service provider network902and may access the computing resources provided by the service provider network902over any wired and/or wireless network(s)908(such as the network114), which may be a wide area communication network (“WAN”), such as the Internet, an intranet or an Internet service provider (“ISP”) network or a combination of such networks. 
For example, and without limitation, the devices108engaged in the meeting may transmit audio data (or other data, information, content, etc.) to the service provider network902, or computing resources thereof, by way of the network(s)908. It should be appreciated that a local-area network (“LAN”), the Internet, or any other networking topology known in the art that connects the data centers904to remote clients and other users may be utilized. It should also be appreciated that combinations of such networks may also be utilized. The transcription service110may be offered as a service by the service provider network902and may manage the deployment of computing resources of the service provider network902when generating the transcripts146within the transcript database148, as described herein. FIG.10is a computing system diagram1000that illustrates one configuration for the data center904that implements aspects of the technologies disclosed herein. The example data center904shown inFIG.10includes several server computers1002A-1002F (which might be referred to herein singularly as “a server computer1002” or in the plural as “the server computers1002”) for providing computing resources1004A-1004E. The server computers1002may be standard tower, rack-mount, or blade server computers configured appropriately for providing the computing resources described herein (illustrated inFIG.10as the computing resources1004A-1004E). The computing resources provided by the service provider network902may be data processing resources such as VM instances or hardware computing systems, database clusters, computing clusters, storage clusters, data storage resources, database resources, networking resources, and others. Some of the server computers1002may also be configured to execute a resource manager1006capable of instantiating and/or managing the computing resources. In the case of VM instances, for example, the resource manager1006may be a hypervisor or another type of program configured to enable the execution of multiple VM instances on a single server computer1002. The server computers1002in the data center904may also be configured to provide network services and other types of services. In the example data center904shown inFIG.10, an appropriate LAN908is also utilized to interconnect the server computers1002A-1002F. It should be appreciated that the configuration and network topology described herein have been greatly simplified and that many more computing systems, software components, networks, and networking devices may be utilized to interconnect the various computing systems disclosed herein and to provide the functionality described above. Appropriate load balancing devices or other types of network infrastructure components may also be utilized for balancing a load between each of the data centers904A-904N, between each of the server computers1002A-1002F in each data center904, and, potentially, between computing resources in each of the server computers1002. It should be appreciated that the configuration of the data center904described with reference toFIG.10is merely illustrative and that other implementations may be utilized. The data center904shown inFIG.10also includes a server computer1002F that may execute some or all of the software components described above. 
For example, and without limitation, the server computer1002F (and the other server computers1002) may generally correspond to a server/computing device configured to execute components including, without limitation, the transcription service110that manages the generation of the transcripts146, as described herein, and/or the other software components described above. The server computer1002F may also be configured to execute other components and/or to store data for providing some or all of the functionality described herein. In this regard, it should be appreciated that the components illustrated inFIG.9as executing on the server computer1002F may execute on many other physical or virtual servers in the data centers904in various embodiments. Thus, the data center904inFIG.9may also include a plurality of server computers1002that execute a fleet of VM instances. FIG.10shows an example computer architecture for a computer1100capable of executing program components for implementing the functionality described above. The computer architecture shown inFIG.10illustrates a server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and may be utilized to execute any of the software components presented herein. In some examples, the computer1100may correspond to one or more computing devices that implements the components and/or services described inFIG.1(e.g., the devices108, the transcription service110, etc.). The computer1100includes a baseboard1102, or “motherboard,” which is a printed circuit board to which a multitude of components or devices may be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)1104operate in conjunction with a chipset1106. The CPUs1104may be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer1100. The CPUs1104perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements may be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset1106provides an interface between the CPUs1104and the remainder of the components and devices on the baseboard1102. The chipset1106may provide an interface to a random-access memory (RAM)1108, used as the main memory in the computer1100. The chipset1106may further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)1110or non-volatile RAM (“NVRAM”) for storing basic routines that help to startup the computer1100and to transfer information between the various components and devices. The ROM1110or NVRAM may also store other software components necessary for the operation of the computer1100in accordance with the configurations described herein. The computer1100may operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the LAN908. 
The chipset1106may include functionality for providing network connectivity through a network interface controller (NIC)1112, such as a gigabit Ethernet adapter. The NIC1112is capable of connecting the computer1100to other computing devices over the LAN908(or the network(s)908). It should be appreciated that multiple NICs1112may be present in the computer1100, connecting the computer1100to other types of networks and remote computer systems. The computer1100may be connected to a mass storage device1114that provides non-volatile storage for the computer1100. The mass storage device1114may store an operating system, programs, and/or components including, without limitation, the transcription service110that generates the transcripts146, as described herein, and data, which have been described in greater detail herein. The mass storage device1114may be connected to the computer1100through a storage controller1118connected to the chipset1106. The mass storage device1114may consist of one or more physical storage units. The storage controller1118may interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer1100may store data on the mass storage device1114by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state may depend on various factors, in different embodiments of this description. Examples of such factors may include, but are not limited to, the technology used to implement the physical storage units, whether the mass storage device1114is characterized as primary or secondary storage, and the like. For example, the computer1100may store information to the mass storage device1114by issuing instructions through the storage controller1118to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer1100may further read information from the mass storage device1114by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device1114described above, the computer1100may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that may be accessed by the computer1100. In some examples, the operations performed by the service provider network902, and or any components and/or services included therein, may be carried out by the processor(s)128and/or130. 
By way of example, and not limitation, as discussed herein, memory, such as the memory124and/or132, or computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store the desired information in a non-transitory fashion. The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. As mentioned briefly above, the mass storage device1114may store an operating system utilized to control the operation of the computer1100. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system may comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems may also be utilized. The mass storage device1114may store other system or application programs and data utilized by the computer1100. In one embodiment, the mass storage device1114or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer1100, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computer1100by specifying how the CPUs1104transition between states, as described above. According to one embodiment, the computer1100has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer1100, perform the various processes described above with regard toFIGS.3-7. The computer1100may also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computer1100may also include one or more input/output controllers1118for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller1118may provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computer1100might not include all of the components shown inFIG.10, may include other components that are not explicitly shown inFIG.10, or might utilize an architecture completely different than that shown inFIG.10. 
While various examples and embodiments are described individually herein, the examples and embodiments may be combined, rearranged, and modified to arrive at other variations within the scope of this disclosure. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims.
107,898
11862169
DETAILED DESCRIPTION As mentioned above, transcribed speech may be provided to the participants of the voice interaction in real-time to compensate for barriers to efficient communication which arise from differences in native languages and accents of the participants. Additionally, the transcribed speech may also be provided to the enterprise for performing text-based backend analytics and processing. Presently, STT transcription takes place at the enterprise end which requires significant computational resources and is not specifically tailored to the particular speech characteristics of different users. Therefore, there is a need for a system and method for performing STT transcriptions that addresses the challenges facing call centers. Today's customers often prefer to avail the services of contact centers through powerful devices such as smartphones and smart tablets as opposed to using landline equipment. The power of these devices evolves through their extensible and customizable nature providing the capability to utilize diverse applications (apps) which can be installed and executed locally on the device. The complexity of these apps varies from simple, such as those offering music or game services, to highly-complex, such as personal assistants or STT transcribers. Advancements in technologies related to the power of smart devices and the speed and accuracy of STT engines have made it possible to perform STT transcriptions at customer endpoints in real-time with sufficient accuracy. There is a need for a method and system which utilizes the increasing power of user endpoint devices such that STT transcriptions can take place locally on user endpoint devices rather than a server associated with an enterprise or contact center. Embodiments of the present disclosure provide a computing system and method for utilizing user endpoint devices for performing STT transcriptions of calls between a user/customer and an agent of an enterprise. The invention provides a way to take advantage of the power of millions of smart devices to improve call center profitability as well as the quality of the calls. The invention allows the enterprise to reduce costs as it removes the need for costly third-party STT engines. At the same time, performing STT at the user endpoint device saves the enterprise significant computational resources, which are especially valuable in today's cloud computing era. An embodiment of the present disclosure enhances the way in which voice interaction between a user and an enterprise takes place by augmenting the voice interaction with the transcribed speech in real-time. The transcribed text may be displayed on one or both ends of the audio communication and may improve the quality of the call by providing a transcription of the call in a language preferable to the agent and/or user. An embodiment of the invention comprises a method wherein a transcription of the call is provided in multiple languages. These languages may include a related language of either party, a related language of the enterprise and/or operating company, and/or a default system language to be used for backend analytics and processing. A “related language” could be a native language or a preferred language of a participant of the call. 
In an embodiment of the present disclosure, the enterprise is a call center and a transcript of the user's voice is provided to a display on a device an agent of the call center is using to facilitate the call, e.g., to be displayed within a desktop application on an agent's computer station. The language may be provided to the agent in a preferred language such as the agent's mother tongue or native language, as well as any other language in which the agent might feel comfortable with for the purposes of the call. From a quality perspective, the present disclosure permits a more productive STT transcription since the user endpoint device may be better tuned, configured, and/or customized for transcription of the user's particular language and accent. Advanced STT engines are capable of improving their transcription capability by learning a particular user's voice characteristics. Additionally, a user may manually adjust his or her particular accent settings to improve the effectiveness of the STT transcription. For example, a person of Asian descent may select Asian-English as his or her accent, and such a selection might permit the STT engine to tailor its algorithm accordingly to improve transcription quality and efficiency. Embodiments of the present disclosure aim to utilize this aspect of STT engines and powerful user endpoint devices to improve the performance in addition to the profitability of calls between a user and an enterprise. Thus, the present disclosure provides a system and method for utilizing a processor associated with a user endpoint device to perform at least one STT transcription of at least a portion of a voice interaction produced during an audio communication (call) between a user and an agent associated with an enterprise. This call may be initiated by the user associated with the user endpoint device using, for example, a smart app or web application. The user endpoint device may be a smartphone, another smart device such as a tablet, or any other device capable of initiating calls using Voice Over Internet Protocol (VoIP), Web Real-Time Communication (WebRTC), or other technological means which permit the audio of a call to be shared with an STT engine for transcription, and also allows for multi-channel communication. Furthermore, the call may comprise an audio communication alone or an audio communication in conjunction with audiovisual communication. The processor associated with the user endpoint device may utilize any available STT engine on the market. Speech recognition (also known as voice recognition) is the process of converting spoken words into computer text. Also known as Speech-to-text (STT) as used herein, STT can be accomplished using a variety of available software applications and “engines” to transcribe audio conversations in one spoken language into textual renditions in a variety of different languages. Additionally, STT not only handles different languages, but also dialects within individual languages. The embodiments of the present disclosure contemplate such functionality of the STT engine of disclosed system and method. Thus, as described herein, STT transcription may be performed to a preferred or related language comprising different languages or even dialects within different languages. Furthermore, as is known in the field of voice recognition, an STT engine can include machine learning components that allow that engine to be trained for a particular speaker such that the accuracy of the STT transcription can be improved. 
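To make the preceding point concrete, the following is a minimal, illustrative sketch of an endpoint-side wrapper that applies a user-selected accent and language profile before transcription. The engine object and its transcribe call are stand-in assumptions for whatever STT engine the device actually uses; no specific vendor API is implied.

```python
# Hypothetical sketch: an endpoint-side STT wrapper that carries the user's
# language and accent settings so the underlying engine can tailor its models.
class EndpointSTT:
    def __init__(self, engine, language="en-US", accent_profile=None):
        self.engine = engine                  # assumed STT engine object
        self.language = language
        self.accent_profile = accent_profile  # e.g., "en-IN" (assumed label)

    def set_accent(self, accent_profile: str) -> None:
        """Let the user manually adjust accent settings to improve transcription."""
        self.accent_profile = accent_profile

    def transcribe(self, audio_chunk: bytes) -> str:
        # Pass both the base language and the accent hint to the engine
        # (the keyword arguments are assumptions, not a real engine API).
        return self.engine.transcribe(
            audio_chunk,
            language=self.language,
            accent=self.accent_profile,
        )
```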
In one embodiment, for example, a method comprises an audio communication between a user endpoint device and an agent associated with an enterprise server, to which the enterprise server routes the call. The audio communication may comprise a voice interaction between the user and the agent. The call may be initiated through the use of an audio-capable application for initiating calls between multiple parties. For example, the call may be initiated using Skype, Google Duo, a customized application of the enterprise, or even through a web page associated with a web application, e.g., using a web browser plug-in. In the context of the present disclosure, it is to be understood that a step taken by the user endpoint device may be a step performed by the user endpoint device in conjunction with the audio-capable application used to initiate and facilitate the call. If no agent is available to take the call, the call can be placed in a wait queue in the enterprise and, for example, on-hold music or announcements can be played back to the user. When an agent of the enterprise becomes available, the call is taken out of the wait queue and assigned to the available agent. As mentioned, the call may be made through the use of a web browser plug-in which is used to initiate audio communications between multiple parties. Alternatively, WebRTC provides web browsers and mobile applications the ability to communicate in audio and video in real-time without the need for a plug-in. Alternatively, the call may be initiated through the use of a customized application associated with the enterprise, e.g., one created by the enterprise or specifically for the enterprise. The customized application may be downloaded and installed on the user's endpoint device. Additionally, the call may also be initiated using a traditional phone application. In this instance, the operating system, e.g., Google Android, iOS, etc., may share the audio data with a helper application for performing the STT transcription. To facilitate this functionality, a separate channel may be initialized to transmit the transcribed text concurrently with the voice data being transmitted on the voice channel set up by the traditional phone application. The system and method may further comprise a step wherein a determination is made regarding what language the STT should transcribe the at least the portion of the voice interaction. In order to do this, the user endpoint device may, upon initiating the call, send an inquiry to the enterprise server to determine at least one related language in which to transcribe the voice interaction. Alternatively, the determination of the related language may take place after the call is assigned to an agent associated with the enterprise. The possible related languages include a native language of the agent and/or a preferred language of the agent, call center, and/or operating company. Additionally, this inquiry may also check to determine what language to transcribe the voice interaction for the purposes of backend analytics by the enterprise and/or operating company. In a different example, the end user device may, upon initiating the call, provide to the enterprise server a list of languages for which it has the capability of transcribing audio or speech data. Embodiments of the present disclosure will be illustrated below in conjunction with an exemplary communication system, e.g., the Avaya Aura® system. 
Although well suited for use with, e.g., a system having an Automatic Call Distribution (ACD) or other similar contact processing switch, embodiments of the present disclosure are not limited to any particular type of communication system switch or configuration of system elements. Those skilled in the art will recognize the disclosed techniques may be used in any communication application in which it is desirable to provide improved contact processing. The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably. The term “automatic” and variations thereof, as used herein, refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material”. The term “computer-readable medium” as used herein refers to any tangible storage and/or transmission medium that participate in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, a solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, embodiments may include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software embodiments of the present disclosure are stored. The terms “determine”, “calculate” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique. 
The term “module” as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element. Also, while the present disclosure is described in terms of exemplary embodiments, it should be appreciated those individual aspects of the present disclosure can be separately claimed. A module that performs a function also may be referred to as being configured to perform the function, e.g., a data module that receives data also may be described as being configured to receive data. Configuration to perform a function may include, for example: providing and executing computer code that performs the function; providing provisionable configuration parameters that control, limit, or enable capabilities of the module (e.g., setting a flag, setting permissions, setting threshold levels used at decision points, etc.); providing a physical connection, such as a jumper to select an option, or to enable/disable an option; attaching a physical communication link; enabling a wireless communication link; energizing a circuit that performs the function (e.g., providing power to a transceiver circuit in order to receive data); and so forth. The term “switch” or “server” as used herein should be understood to include a Private Branch Exchange (PBX), an ACD, an enterprise switch, an enterprise server, or other type of telecommunications system switch or server, as well as other types of processor-based communication control devices such as media servers, computers, adjuncts, etc. FIG.1Ashows an illustrative embodiment of the present disclosure. A contact center100comprises a server110, a set of data stores or databases114containing contact or customer related information, resource or agent related information and other information that may enhance the value and efficiency of the contact processing, and a plurality of servers, namely a voice mail server118, an Interactive Voice Response unit (e.g., IVR)122, and other servers126, a switch130, a plurality of working agents operating packet-switched (first) communication devices134-1-N (such as computer work stations or personal computers), and/or circuit-switched (second) communication devices138-1-M, all interconnected by a Local Area Network (LAN)142, (or Wide Area Network (WAN)). In another embodiment of the present disclosure, the customer and agent related information may be replicated over multiple repositories. The servers may be connected via optional communication lines146to the switch130. As will be appreciated, the other servers126may also include a scanner (which is normally not connected to the switch130or Web Server), VoIP software, video call software, voice messaging software, an IP voice server, a fax server, a web server, an email server, and the like. The switch130is connected via a plurality of trunks to a circuit-switched network150(e.g., Public Switch Telephone Network (PSTN)) and via link(s)154to the second communication devices138-1-M. A security gateway158is positioned between the server110and a packet-switched network162to process communications passing between the server110and the packet-switched network162. In an embodiment of the present disclosure, the security gateway158(as shown inFIG.1A) may be a G700 Media Gateway™ from Avaya Inc., or may be implemented as hardware such as via an adjunct processor (as shown) or as a chip in the server110. 
The switch130and/or server110may be any architecture for directing contacts to one or more communication devices. In some embodiments of the present disclosure, the switch130may perform load-balancing functions by allocating incoming or outgoing contacts among a plurality of logically and/or geographically distinct contact centers. Illustratively, the switch130and/or server110may be a modified form of the subscriber-premises equipment sold by Avaya Inc. under the names Definity™ Private-Branch Exchange (PBX) based ACD system, MultiVantage™ PBX, Communication Manager™, S8300™ media server and any other media servers, SIP Enabled Services™, Intelligent Presence Server™, and/or Avaya Interaction Center™, and any other products or solutions offered by Avaya or another company. Typically, the switch130/server110is a stored-program-controlled system that conventionally includes interfaces to external communication links, a communications switching fabric, service circuits (e.g., tone generators, announcement circuits, etc.), memory for storing control programs and data, and a processor (i.e., a computer) for executing the stored control programs to control the interfaces and the fabric and to provide ACD functionality. Other types of known switches and servers are well known in the art and therefore not described in detail herein. The first communication devices134-1-N are packet-switched and may include, for example, IP hardphones such as the 4600 Series IP Phones™ from Avaya, Inc., IP softphones such as an IP Softphone™ from Avaya Inc., Personal Digital Assistants (PDAs), Personal Computers (PCs), laptops, packet-based H.320 video phones and conferencing units, packet-based voice messaging and response units, packet-based traditional computer telephony adjuncts, peer-to-peer based communication devices, and any other communication device. The second communication devices138-1-M are circuit-switched devices. Each of the second communication devices138-1-M corresponds to one of a set of internal extensions Ext-1-M, respectively. The second communication devices138-1-M may include, for example, wired and wireless telephones, PDAs, H.320 videophones and conferencing units, voice messaging and response units, traditional computer telephony adjuncts, and any other communication devices. It should be noted that the embodiments of the present disclosure do not require any particular type of information transport medium between switch, or server and first and second communication devices, i.e., the embodiments of the present disclosure may be implemented with any desired type of transport medium as well as combinations of different types of transport channels. The packet-switched network162may be any data and/or distributed processing network, such as the Internet. The packet-switched network162typically includes proxies (not shown), registrars (not shown), and routers (not shown) for managing packet flows. The packet-switched network162as shown inFIG.1Ais in communication with a first communication device166via a security gateway170, and the circuit-switched network150with an external second communication device174. In one configuration, the server110, the packet-switched network162, and the first communication devices134-1-N are Session Initiation Protocol (SIP) compatible and may include interfaces for various other protocols such as the Lightweight Directory Access Protocol (LDAP), H.248, H.323, Simple Mail Transfer Protocol (SMTP), IMAP4, ISDN, E1/T1, and analog line or trunk. 
It should be emphasized that the configuration of the switch 130, the server 110, user communication devices, and other elements as shown in FIG. 1A is for purposes of illustration only and should not be construed as limiting embodiments of the present disclosure to any particular arrangement of elements. Further, the server 110 is notified via the LAN 142 of an incoming service request or work item by the communications component (e.g., the switch 130, a fax server, an email server, a web server, and/or other servers) receiving the incoming service request as shown in FIG. 1A. The incoming service request is held by the receiving telecommunications component until the server 110 forwards instructions to the component to forward or route the contact to a specific contact center resource, such as the IVR unit 122, the voice mail server 118, and/or a first or second telecommunication device 134-1-N, 138-1-M associated with a selected agent. FIG. 1B illustrates, at a relatively high level of hardware abstraction, a block diagram of a server such as the server 110, in accordance with an embodiment of the present disclosure. The server 110 may include an internal communication interface 151 that interconnects a processor 157, a memory 155, and a communication interface circuit 159. The communication interface circuit 159 may include a receiver and transmitter (not shown) to communicate with other elements of the contact center 100 such as the switch 130, the security gateway 158, the LAN 142, and so forth. By use of programming code and data stored in the memory 155, the processor 157 may be programmed to carry out various functions of the server 110. Although embodiments are discussed with reference to a client-server architecture, it is to be understood that the principles of embodiments of the present disclosure apply to other network architectures. For example, embodiments of the present disclosure apply to peer-to-peer networks, such as those envisioned by the Session Initiation Protocol (SIP). In the client-server model or paradigm, network services and the programs used by end users to access the services are described. The client side provides a user with an interface for requesting services from the network, and the server side is responsible for accepting user requests for services and providing the services transparently to the user. By contrast, in the peer-to-peer model or paradigm, each networked host runs both the client and server parts of an application program. Additionally, embodiments of the present disclosure do not require the presence of packet- or circuit-switched networks. FIG. 2 depicts a high-level flowchart of the method disclosed herein. At step 202, a user may initiate an audio communication, using a user endpoint device, with an enterprise, particularly with an agent associated with an agent device to which an enterprise server routes the audio communication. At step 204, during the audio communication, the user endpoint device may perform multilingual STT of at least a portion of the voice interaction to produce transcribed speech. At step 206, during the audio communication, the user endpoint device may transmit at least the portion of the voice interaction and the transcribed speech to the enterprise server. The frequency of transmission may depend on a number of factors. However, embodiments in accordance with the present disclosure contemplate the sending of the transcribed speech in a manner that appears to mimic the audio conversation, or audio communication, in real-time.
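A hedged sketch of the high-level endpoint flow of FIG. 2 (steps 202-206) is shown below: initiate the call, transcribe portions of the voice interaction locally into the target languages, and stream both the audio and the transcribed text toward the enterprise server in near real time. The session and engine objects and their methods are placeholders, not a real SDK.

```python
# Illustrative endpoint loop, assuming placeholder session/engine objects.
def run_call(session, stt_engine, target_languages):
    session.initiate()                      # step 202: audio communication set up
    for audio_chunk in session.audio_frames():
        transcripts = {
            lang: stt_engine.transcribe(audio_chunk, language=lang)
            for lang in target_languages    # step 204: multilingual STT
        }
        session.send_voice(audio_chunk)     # step 206: audio on the voice channel
        session.send_text(transcripts)      # step 206: text on the digital channel
```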
One benefit of the transcribed speech is to assist the user associated with the user endpoint device and an agent associated with an agent device to which the enterprise server routes the audio communication. As such, performing the STT and transmitting the STT in real-time or near real-time can enhance the understanding of the audio conversation by one or both of the parties, and thus achieve a more productive conversation and better outcome of the interaction. FIGS.3A and3Bdepict flowcharts showing embodiments of the steps which may be performed by the user endpoint device upon initiating the audio communication. While the flowcharts and corresponding discussion are in relation to particular sequences of events, changes, additions, and omissions to this sequence can occur without materially affecting the operation of embodiments of the present disclosure. The illustrated flowcharts3A and3B are examples of two methods in accordance with the principles of the present disclosure. While the steps may be shown sequentially and occurring in a particular order, one of ordinary skill will recognize that some of the steps may be performed in a different order and some of the steps may be performed in parallel as well. FIG.3Adepicts a flowchart202ashowing steps which may be performed by the user endpoint device upon initiating the audio communication. Once the user generates an outbound call towards an enterprise, the following steps may occur: First, the user endpoint device may determine whether or not it is capable of performing STT transcription at step302a. Alternatively, while not shown inFIG.3A, the user endpoint device may also initiate this determination in response to a request from the enterprise server, e.g., the server may send a request for the user endpoint device to respond with whether or not it is capable of performing STT. Two non-inclusive means for the user endpoint device to accomplish the STT transcription include (1) performing the task locally on the user endpoint device and (2) offloading the task to an external computer or server. The latter approach would involve the user endpoint device accessing services provided by the computer or server. The determination as to whether the user endpoint device is capable of performing the STT transcription locally on the device may comprise identifying the computational resources available on the user endpoint device. For example, this determination may involve checking the available processing power, memory, internet connectivity, and/or other performance metrics of the user endpoint device. Based on this information and the computational resources required for carrying out STT transcription, the user endpoint device may determine whether or not it can perform the task locally. Additionally, the determination of whether or not the user endpoint device is capable of performing STT may involve checking to ensure the device has STT software installed, or the ability to immediately download and install an STT engine for carrying out the transcription task. Additionally, and/or alternatively, the method of determining whether or not the user endpoint device is capable of performing STT transcription may comprise determining whether it has access to a third-party service such as Google Translate for which it can offload the STT transcription. Another potential means of accomplishing the STT transcription would be to offload the task onto an external server such as a home computer or a network server. 
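Before turning to the external-server option in more detail, the following is a simplified sketch, under assumed resource thresholds, of the capability check of step 302a: inspect local resources and installed STT software, and otherwise check whether an external STT service is reachable for offloading. The probe functions on the device object are hypothetical stand-ins for platform-specific calls.

```python
# Hedged sketch of the STT capability determination (step 302a).
def can_transcribe_locally(device) -> bool:
    return (
        device.free_memory_mb() >= 512      # assumed minimum for an STT engine
        and device.cpu_headroom() >= 0.25   # assumed spare processing power
        and device.has_stt_engine()         # engine installed or downloadable
    )


def determine_stt_capability(device, external_stt=None) -> str:
    """Return 'local', 'offload', or 'none' for this voice interaction."""
    if can_transcribe_locally(device):
        return "local"
    if external_stt is not None and external_stt.is_reachable():
        return "offload"                    # e.g., a third-party or home/network server
    return "none"                           # the enterprise server must transcribe
```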
Utilizing an external server may require the user endpoint device to first determine whether it is capable of accessing the services of the external server. Once the determination has been made as to whether or not the user endpoint device is capable of performing STT transcription, the next step 304a may comprise the user endpoint device transmitting this determination to the enterprise server and, subsequently or concurrently, determining the desired languages into which it is to transcribe the voice interaction at step 306a. FIG. 4A depicts a flowchart 306a showing a set of steps taken by the user endpoint device in determining the desired transcription languages. First, at step 402a, the user endpoint device may determine a set of transcription languages in which it is capable of performing STT. At step 404a, this set of transcription languages may then be transmitted to the enterprise server along with an inquiry, at step 406a, requesting identification of the desired transcription languages. In response, the user endpoint device may receive from the enterprise server a set of desired transcription languages, as shown at step 408a. The user endpoint device may experience a temporary hold prior to receiving the set of desired transcription languages, as the enterprise server may need to take additional steps prior to making this determination. For example, a call center may first need to assign the call to an agent before determining the related languages to identify for STT transcription at the user endpoint device. Next, at step 410a, the user endpoint device may determine a related language associated with the user of the user endpoint device. This step may involve the user endpoint device referencing an STT language setting configured by the user on an application installed on the user endpoint device. Alternatively, the user endpoint device may query the user to select a language for the STT transcription. Following this step, the user endpoint device is considered to have initialized the audio communication, and the method proceeds with step 204 of FIG. 2. FIG. 3B depicts a flowchart 202b showing another embodiment of the steps which may be performed by the user endpoint device upon initiating the audio communication. The primary difference between this embodiment and the former embodiment is the point in the method at which the determination of the desired languages for STT transcription is performed. In this embodiment, the user endpoint device may determine the desired languages for STT transcription prior to, or concurrently with, determining whether it has the capability to perform STT, considering the computational resources available, the availability of STT software, and/or access to external means to which it may offload the STT transcription task. Therefore, in this embodiment, rather than send the possible STT languages to the enterprise server, the method may start out with step 306b to determine the desired transcription languages, which comprises the steps shown in the flowchart depicted in FIG. 4B. First, at step 406b, the user endpoint device may send an inquiry to the enterprise server requesting identification of the desired transcription languages. Next, in response to sending this inquiry, the user endpoint device may receive identification of the desired transcription languages from the enterprise server at step 408b.
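The exchange just described (FIG. 4A, which the FIG. 4B variant merely reorders) can be sketched as follows. This is an illustrative sketch only; the server and user objects and their request/response methods are assumptions standing in for whatever signaling mechanism the application uses.

```python
# Hedged sketch of the language-determination exchange (steps 402a-410a).
def determine_transcription_languages(device, server, user) -> set[str]:
    capable = set(device.supported_stt_languages())   # step 402a
    server.send_capabilities(sorted(capable))         # step 404a
    server.request_desired_languages()                # step 406a
    # Step 408a: the response may be delayed until the call leaves the wait
    # queue and an agent has been assigned.
    desired = set(server.receive_desired_languages())
    # Step 410a: add the user's related language (configured setting or prompt).
    related = user.preferred_language() or device.configured_stt_language()
    desired.add(related)
    return desired & capable   # only languages the endpoint can actually transcribe
```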
The user endpoint device may experience a temporary hold prior to receiving the identification of transcription languages as the enterprise server may need to take additional steps prior to making this determination. For example, a call center may first need to assign the call to an agent before determining the related languages to identify for STT transcription at the user endpoint device. The user endpoint device may then, at step410b, determine a related language associated with the user to complete the set of desired transcription languages before going on to the STT capability determination step302binFIG.3B. As described above for the embodiment shown inFIG.4A, this step may involve the user endpoint device referencing an STT language setting configured by the user, or querying the user to select a language for the STT transcription. The user endpoint device may then determine whether or not it is capable of performing STT transcription, and moreover, whether it is capable of performing STT to the desired transcription languages. As discussed above, this will involve the similar steps as outlined in the former embodiment, but with the additional step of checking to see whether or not the user endpoint device is capable of STT transcription to the desired languages. Once this determination has been made, the user endpoint device may transmit this determination to the enterprise server, as shown by step304binFIG.3B. Following this step, the user endpoint device is considered to have initialized the audio communication and the method proceeds with step204ofFIG.2. In another embodiment, if STT transcription is to take place on the user endpoint device, through negotiation with the enterprise server, the customized app queries if the current language used for transcription is also the preferred language of the agent, and if not, the user endpoint device will query the preferred language of the agent. Once the user endpoint device receives the agent's preferred language, it will switch the transcription language accordingly. In another embodiment, the agent and/or the user may be capable of changing the STT transcription language during the audio communication. For example, a user-interface widget or menu item that is displayed on the WebRTC screen or the app screen may be available which allows a participant of the call to change the current transcription language if the participant determines a different language is more preferable. After a language selection is made, the user endpoint device may inform the user whether or not that language is available for STT transcription by the user endpoint device. As a call progresses, a participant of the call may determine, for example based on the subject matter of the call, that they would prefer to have a transcription in a different and/or additional language. An embodiment of the present disclosure includes such a functionality. In a further embodiment, the user endpoint device may ask if there is a more-related language that the agent and/or user prefers for the STT. In either of the embodiments just described, or any other embodiment consistent with the principles of the present disclosure, the user endpoint device may also determine whether it is capable of concurrently performing multiple STTs on at least a portion of the voice interaction. 
To determine whether or not the device can perform multiple STTs concurrently, the same performance metrics of the device may be identified as with determining whether or not the device is capable of performing a single STT, although multiple STTs will require more computational resources. In order to concurrently perform multiple STTs, the user endpoint device may be able to at least dedicate the necessary processing power and memory requirements to the separate STT tasks such that they can be performed in parallel. If it is determined that the user endpoint device is unable to perform the desired STT, or if the device is unable to determine whether or not it has such a capability, the STT transcription may fallback to a server associated with the enterprise. Transmitting this determination to the enterprise server may notify it to instantiate an STT transcription instance for the current voice interaction at the enterprise server. If it is determined that the user endpoint device is capable of handling the desired STT task, an instance of STT will be activated by the user endpoint device. Transmitting this determination to the enterprise server may notify it to not instantiate an STT transcription instance for the current voice interaction at the enterprise server. Steps204and206ofFIG.2take place during the audio communication, and are further explained in the flowchart depicted inFIG.5. As discussed above and shown in step502, the audio communication may comprise a voice interaction between a user and an agent associated with an enterprise. The user may, via the user endpoint device, provide first speech. The agent associated with the enterprise may provide second speech. At least a portion of the voice interaction is transcribed in real-time by an STT engine to produce a first transcribed speech in a first language and a second transcribed speech in a second language at step504. The portion of the voice interaction to be transcribed may include the first speech provided by the user, the second speech provided by the agent, or both. The present disclosure also envisions the user endpoint device performing, concurrently, more than two STT transcriptions into more than two languages. This aspect of the invention realizes the varying interests and backgrounds of users, enterprises, operating companies that might make use of enterprises, and the agents associated with enterprises which take part in the call. Therefore, the power of the user endpoint device may be leveraged for transcribing the voice interaction into a number of different languages to address the varying interests of the involved parties. One limiting factor on the number of STT transcriptions performed might be the number of desired languages in which the enterprise and/or the user prefer to have a transcription of the voice interaction. Another limiting factor may be the computational resources available to the STT engine. As smart devices become more powerful, and STT software becomes more advanced, the number of STT transcriptions and languages in which STT engines can concurrently transcribe may increase. Therefore, this disclosure does not present a limit on the number of STT transcriptions or languages in which the voice interaction may be transcribed. A discussion outlining embodiments of potential STT languages is presented in more detail later in the specification. 
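The concurrency check and server fallback described above can be illustrated with the following sketch. The per-instance resource figures and the device and server methods are assumptions for illustration, not measured values or a real API.

```python
# Hedged sketch: decide whether the endpoint can run several STT instances in
# parallel and tell the enterprise server whether to instantiate its own STT.
PER_STT_MEMORY_MB = 256    # assumed footprint of one STT instance
PER_STT_CPU_SHARE = 0.20   # assumed CPU share per concurrent instance


def can_run_concurrent_stt(device, n_languages: int) -> bool:
    try:
        return (
            device.free_memory_mb() >= n_languages * PER_STT_MEMORY_MB
            and device.cpu_headroom() >= n_languages * PER_STT_CPU_SHARE
        )
    except Exception:
        # If the device cannot even make the determination, treat it as unable.
        return False


def negotiate_stt_location(device, server, languages) -> None:
    if can_run_concurrent_stt(device, len(languages)):
        server.notify(stt_at_endpoint=True)    # server skips its own STT instance
        device.start_stt(languages)
    else:
        server.notify(stt_at_endpoint=False)   # server instantiates STT for this call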
As noted above, for a voice interaction, a first STT can be performed by the user endpoint device on at least a portion of the voice interaction and a second STT can also be performed by the user endpoint device on the at least a portion of the voice interaction. More particularly, the at least a portion of the voice interaction can include first speech provided by the user and second speech provided by the agent. In a further embodiment in accordance with the principles of the present disclosure, the user endpoint device may be assisted by the agent device and/or the enterprise server with performing STT of the first speech provided by the user, the second speech provided by the agent, or both. For example, the enterprise server may perform STT of both the speech provided by the user and the speech provided by the agent. Alternatively, the agent device may be responsible for performing STT of the speech provided by the user and the speech provided by the agent. In either case, the agent device and/or the enterprise server can negotiate with the user endpoint device about the languages in which it is capable of performing STT. Thus, with respect to embodiments in which the user endpoint device advertises and negotiates its potential STT capabilities with the agent device and/or the enterprise server, these roles can be reversed when the agent device or the enterprise server assists with the STT tasks. As such, in accordance with this embodiment, the three entities (user endpoint device, agent device, and enterprise server) may all advertise and negotiate the languages for which they have STT capabilities so that a determination can be made amongst them as to (a) which entity will be responsible for performing STT on which portions of the audio communication and (b) in what languages such STT tasks will occur. In a particular embodiment, some of the computing load of performing STT can be shared by the agent device, with which the user endpoint device can cooperate to perform the remaining STT. The agent device receives the second speech from the agent associated with that agent device, while the user endpoint device receives the first speech from the user associated with that user endpoint device. As described herein, the user endpoint device can, for example, perform STT of the first speech in one or more different languages and transmit that STT along with at least the first speech to the enterprise server, which will forward it to the agent device for display to the agent associated with that agent device. However, STT of the second speech can be performed by the agent device such that the second speech and the corresponding STT transcription can be transmitted by the agent device to the enterprise server and then on to the user endpoint device for display to the user. Thus, in accordance with this embodiment, the user endpoint device may perform STT on first speech provided by the user to be transmitted to the enterprise server, while an STT transcription of second speech provided by the agent may be performed by an agent device to which the enterprise server routes the audio communication. The enterprise server may then transmit the second transcribed speech to the user endpoint device. The user endpoint device may collaborate with the enterprise server to facilitate the exchange of the transcribed speeches and to coordinate which languages are used for the different transcribed speeches.
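One way such a negotiation might be resolved is sketched below: each entity advertises the languages it can transcribe, and each speaker's portion is assigned to the device closest to that speaker when possible, with the enterprise server as a fallback. The data model here is an assumption for illustration, not the claimed method itself.

```python
# Hedged sketch of dividing STT responsibilities among the three entities.
def assign_stt_tasks(capabilities: dict, desired: dict) -> dict:
    """
    capabilities: {"endpoint": {...}, "agent_device": {...}, "server": {...}},
                  mapping each entity to the set of languages it can transcribe.
    desired:      {"first_speech": ["fr"], "second_speech": ["hi"]},
                  languages wanted for the user's and the agent's speech.
    Returns a mapping of (portion, language) -> responsible entity.
    """
    preferred_owner = {"first_speech": "endpoint", "second_speech": "agent_device"}
    assignments = {}
    for portion, languages in desired.items():
        for lang in languages:
            owner = preferred_owner[portion]
            if lang not in capabilities.get(owner, set()):
                # Fall back to any other entity advertising this language.
                owner = next(
                    (e for e, langs in capabilities.items() if lang in langs),
                    "server",
                )
            assignments[(portion, lang)] = owner
    return assignments
```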
One of ordinary skill will also recognize that the present embodiments contemplate that the user endpoint device may perform STT on the second speech it may receive in audio format from the agent device/enterprise server. Similarly, the agent device can perform the STT of the first speech that it may receive in audio format from the user endpoint device/enterprise server. For example, the first speech provided by the user can be transcribed by the user endpoint device into a language selected by the agent and communicated to the agent device. Alternatively, the user endpoint device may not have the capability of STT in the selected language, or one of the selected languages, and so the agent device can be responsible for performing STT of that first speech in one or more of the selected languages. In a similar manner, the agent device may not have the capability of performing STT of the second speech provided by the agent into a language selected or identified by the user. Thus, in this case, the user endpoint device may perform the STT of the second speech. The performance of STT transcription at the enterprise server and/or on an agent device to which the enterprise server routes the audio communication may involve the enterprise server and/or agent device taking the steps outlined in the foregoing and succeeding discussion with respect to the user endpoint device. A method which facilitates STT transcription on both ends of the audio communication may be associated with a processor of the enterprise server, which may communicate with the agent device and user endpoint device to designate which device may be responsible for STT transcription, and furthermore, which portion of the voice interaction, or the audio communication, each device may be responsible for transcribing. It may be preferable for the devices to perform STT transcriptions over the corresponding portion of the voice interaction in which they are receiving from the respective call participant (i.e., the user endpoint device receives a corresponding portion of the voice interaction from the user). Such an approach may allow the devices to perform STT on portions of the voice interaction in which they are more tailored to handle. The STT engine and associated algorithmic processes at the agent device or enterprise server may become more familiar with the agent's voice in a similar way to which the user endpoint device may be better tuned, configured, and/or customized for transcription of the user's particular language and accent. Therefore, the quality of the transcription of the agent's voice may be improved, and moreover, the productivity of the call. The desired benefit of offloading of at least some of the computing load of performing STT can be accomplished as well. In other words, or to summarize, for each interaction, the user endpoint device, the contact center server (which can be a subcomponent of enterprise server) and the agent device can form a ‘federated subsystem’ in which they will collaborate and communicate to distribute/divide the work load among themselves to make the best use of each other's capabilities and resources for that interaction under the current circumstances each different entity is experiencing. In at least some instances, the contact center server, for example, may have the role of arbiter when more than one solution appears to be similarly effective. The term “best” can be defined differently by different enterprises. 
Some enterprises may prioritize accuracy over speed and maintain historical data indicating that STT of certain languages or dialects is best performed by one of the entities as compared to the others. Alternatively, some enterprises may prioritize the speed of performing STT such that the contemplated federated subsystem will determine how to distribute tasks so that they are able to be performed as quickly as possible. In some instances, the "best use" of resources can vary throughout the day such that, as the workload on the contact center server, for example, varies, more tasks are offloaded to be performed by the user endpoint device or the agent device. An embodiment of the present disclosure comprises a determination of the STT capability of the user endpoint device as a quantitative metric that can be used in determining how many STTs may be performed. A further embodiment comprises assigning weights to different languages based on the computational resources required for a transcription to or from each language. The number of STTs may be a function of the languages associated with the audio communication and the desired transcription languages. A set of weights may be developed for potential pairs of spoken-language and transcribed-text combinations. For example, a conversation between a user and an agent may be taking place in English, and the desired transcription languages may be Hindi, French, and Chinese. It may be determined that the user endpoint device has an overall STT capability of "20", and that the English-Hindi, English-French, and English-Chinese transcriptions have weights of 4, 6, and 9, respectively. Therefore, in this example, the user endpoint device would be considered capable of performing the desired STT transcriptions because the STT transcription weights add up to 19, which is below the STT capability limit of 20. The above embodiment may be accomplished through the use of a data look-up table containing a list of potential STT spoken-language/transcribed-text pairs and a weight for each corresponding to the computational difficulty of the respective transcription. This embodiment may comprise the application or webpage referencing the data look-up table to determine the weights associated with the identified languages. The application or webpage may then sum the weights and compare the sum to the STT capability of the device. In a further embodiment, the STT languages may be selected by the user endpoint device so as to utilize as much of the STT capability as possible based on the identified languages. Thus, referring back to the above example, if the user endpoint device had an STT capability of 15, the user endpoint device may proceed with the English-Chinese and English-French transcriptions. Alternatively, the user endpoint device may take further steps to determine a subset of transcriptions from those initially identified. In another embodiment of the present disclosure, the user endpoint device may receive from the enterprise server an identification of STT languages weighted by their priority. In this instance, the user endpoint device may prioritize its STT language selection based on the priority specified by the enterprise server rather than on maximizing the use of the computational resources available for STT transcription.
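The weighting example above works out as follows in a small worked sketch, assuming a simple look-up table of (spoken language, transcription language) weights. The weight values mirror the English-Hindi/French/Chinese example in the text; the greedy selection is one possible way to use as much of the capability as possible and is illustrative only.

```python
# Worked sketch of the weighted STT capability check.
STT_WEIGHTS = {
    ("en", "hi"): 4,
    ("en", "fr"): 6,
    ("en", "zh"): 9,
}


def within_capability(spoken: str, targets: list[str], capability: int) -> bool:
    total = sum(STT_WEIGHTS[(spoken, t)] for t in targets)
    return total <= capability


def pick_targets(spoken: str, targets: list[str], capability: int) -> list[str]:
    """Greedy selection that uses as much of the capability as possible."""
    chosen, used = [], 0
    # Consider the heaviest transcriptions first so capacity is not wasted.
    for t in sorted(targets, key=lambda t: STT_WEIGHTS[(spoken, t)], reverse=True):
        w = STT_WEIGHTS[(spoken, t)]
        if used + w <= capability:
            chosen.append(t)
            used += w
    return chosen


# With capability 20: 4 + 6 + 9 = 19, so all three transcriptions fit.
assert within_capability("en", ["hi", "fr", "zh"], 20)
# With capability 15: the English-Chinese (9) and English-French (6) pair fits.
assert pick_targets("en", ["hi", "fr", "zh"], 15) == ["zh", "fr"]
```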
Once at least a portion of the voice interaction has been transcribed by the STT engine associated with the user endpoint device, the user endpoint device transmits to the enterprise server: (1) at least the first transcribed speech and (2) at least the corresponding portion of the voice interaction. In an embodiment of the present disclosure, the voice interaction is transmitted in full while the STT transcription and transmitted transcribed speech may comprise a portion of the voice interaction. For example, the user endpoint device may perform STT of the user's voice, and send the transcribed speech along with the audio of the full voice interaction to the enterprise server. Through the use of SIP, VoIP, WebRTC, or similar communication technology capable of transmitting data via multiple channels, the first and second speech may be converted into a digital signal and transcribed through the use of an STT engine available to the processor associated with the user endpoint device. Further, these communication means may be utilized to transmit the voice portion of the interaction after it has been converted into a digital signal. The transcribed speech may be transmitted through a separate digital channel established between the user endpoint device application or web app and the enterprise server. The WebRTC and VoIP technologies discussed above permit multi-channel communication between call participants. Accordingly, the system and method disclosed herein may make use of a voice channel, to exchange the speech produced by each participant, and a digital channel, to transmit the STT transcriptions from the user endpoint device to the enterprise server. The voice channel may still be a “digital channel” based on the manner in which the voice data is formatted and transmitted on that channel. However, for clarity, the content of the voice channel is audio or voice data and, therefore, can be referred to simply as the “voice channel” as an easy way to distinguish the voice channel from other digital channels. The illustration provided inFIG.6provides a schematic overview of an embodiment of the present disclosure. The audio of the voice interaction, comprising first speech provided by the user through the user endpoint device602, and second speech provided by the agent through a device associated with the agent604, may be transmitted between the user and the agent via a voice channel606established when the audio communication is initialized. Transcribed speech may be transmitted via a digital channel608established when the audio communication is initialized.FIG.6illustrates an embodiment of the present disclosure wherein at least a first transcribed speech is transmitted to the enterprise server110using the established digital channel608and the first and second speech are transmitted via the established voice channel606. Further, via the established voice channel606, the enterprise server110may receive the first speech provided by the user, and then route the first speech to the device associated with the agent604. The enterprise server110may receive the second speech provided by the agent and route it to the user endpoint device602via the established voice channel606. The second transcribed speech may be presented on a display apparatus associated with the user endpoint device610in a second language that may have been preconfigured as a related language of the user. 
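The separation of the voice path and the transcription path described above can be sketched as follows. The CallSession and LoggingTransport names below are hypothetical stand-ins for whatever SIP/VoIP/WebRTC transport actually carries the voice channel606and the digital channel608; no specific library API is assumed.

import json
import time

class LoggingTransport:
    """Stand-in transport used only to make this sketch self-contained."""
    def __init__(self, name):
        self.name = name
    def send(self, data: bytes):
        print(f"[{self.name}] sent {len(data)} bytes")

class CallSession:
    def __init__(self, voice_channel, data_channel):
        self.voice_channel = voice_channel   # analogous to the voice channel
        self.data_channel = data_channel     # analogous to the digital channel

    def send_audio(self, pcm_frame: bytes):
        # The full voice interaction always travels on the voice channel.
        self.voice_channel.send(pcm_frame)

    def send_transcription(self, speaker: str, language: str, text: str):
        # Transcribed speech travels on the separate digital channel.
        payload = {"speaker": speaker, "language": language,
                   "text": text, "timestamp": time.time()}
        self.data_channel.send(json.dumps(payload).encode("utf-8"))

session = CallSession(LoggingTransport("voice"), LoggingTransport("data"))
session.send_audio(b"\x00" * 320)                      # one 20 ms PCM frame
session.send_transcription("user", "en", "Hello, I need help with my order.")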
The related language of the user may be a preferred language such as the user's mother tongue, i.e., the user's native language, or alternatively a language with which the user feels more acquainted for the particular subject matter of the conversation. The first transcribed speech may be presented on a display apparatus associated with the agent's device612, and may be in a first language which is a related language of the agent. The related language of the agent may be a preferred language such as the agent's mother tongue, i.e., the agent's native language, or alternatively a language that the agent feels comfortable with for the particular subject matter of the conversation. For example, the agent may be able to converse with an English-speaking user in English but be more acquainted with a different language and thus prefer a transcription of the user's voice in the language with which he or she feels more acquainted. Alternatively, the agent may simply be more acquainted with a different accent than that of the user, and thus may prefer a transcription of the speech into text in the language in which the user is speaking, but without the accent.

The above-described embodiment is particularly useful, for example, during times when the load on a contact center suddenly increases and agents are employed on a temporary basis and/or swapped between different agent groups based on the requirements of the call center. In this situation, which is commonly encountered during festival and holiday times of the year such as Christmas, there is a high probability of a Hindi-speaking agent being assigned to a call in English, or vice versa.

The present disclosure also addresses the desire for an enterprise to have a transcription of the voice interaction in its own related language. For example, the enterprise may be a contact center located in the United States, providing service for an operating company based in France, and employing a Hindi-speaking agent. In this example, the contact center may prefer to have a transcript of the interaction in English for its records. The present disclosure also addresses the desire for an operating company, on whose behalf an enterprise such as a contact center operates, to have a transcription of the voice interaction in its own related language. Referring back to the above example, the French company may prefer to have a transcription of the voice interaction in French. Therefore, in this example, the STT engine associated with the user endpoint device may transcribe the voice interaction into at least an English transcription, a French transcription, and possibly, for the agent, a Hindi transcription. With user devices becoming more powerful, it may be possible to transcribe voice interactions into multiple languages locally on the user endpoint device. Finally, and significantly, the present disclosure also addresses the need for enterprises and operating companies to have a transcription of the voice interaction in a default system language for the purposes of backend analytics and processing. Once the transcription in a default system language is transmitted to the enterprise server, it may be archived, recorded, and analyzed for further processing. Standard internal circuits of user endpoint devices, such as networking and Bluetooth components, are able to receive and transmit such data. The exemplary embodiments of this present disclosure have been described in relation to a contact center.
However, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the present disclosure. Specific details are set forth by use of the embodiments to provide an understanding of the present disclosure. It should, however, be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific embodiments set forth herein. Furthermore, while the exemplary embodiments of the present disclosure illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a switch, server, and/or adjunct, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description that, for reasons of computational efficiency, the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system. For example, the various components can be located in a switch such as a PBX and media server, in a gateway, in one or more communications devices, at one or more users' premises, or some combination thereof. Similarly, one or more functional portions of the system could be distributed between a telecommunications device(s) and an associated computing device.

Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire, and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Also, while the flowcharts have been discussed and illustrated in relation to a particular sequence of events, changes, additions, and omissions to this sequence can occur without materially affecting the operation of embodiments of the present disclosure.

A number of variations and modifications of the present disclosure can be used. It would be possible to provide for some features of the present disclosure without providing others. For example, in one alternative embodiment of the present disclosure, the systems and methods of this present disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, a special purpose computer, any comparable means, or the like.
In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this present disclosure. Exemplary hardware that can be used for the present disclosure includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, non-volatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.

In yet another embodiment of the present disclosure, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with embodiments of the present disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.

In yet another embodiment of the present disclosure, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this present disclosure can be implemented as a program embedded on a personal computer, such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.

Although the present disclosure describes components and functions implemented in the embodiments with reference to particular standards and protocols, it is not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein, and other similar standards and protocols not mentioned herein, are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present disclosure. The present disclosure, in various embodiments, configurations, and aspects, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, sub-combinations, and subsets thereof. Those of skill in the art will understand how to make and use the present disclosure after understanding the present disclosure.
The present disclosure, in various embodiments, configurations, and aspects, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments, configurations, or aspects hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease of implementation, and/or reducing cost of implementation. While much of the foregoing discussion relates to implementations on a server associated with an enterprise, it is to be appreciated that the user endpoint device would be capable of performing in the same manner. Standard networking circuits available on common user endpoint devices may be utilized for the receipt and transmission of data.

The foregoing discussion has been presented for purposes of illustration and description. It is not intended to limit the present invention to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features are grouped together in one or more embodiments, configurations, or aspects for the purpose of streamlining the disclosure. The features of the embodiments, configurations, or aspects may be combined in alternate embodiments, configurations, or aspects other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the present invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment, configuration, or aspect. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the present disclosure. Moreover, though the disclosure herein has included description of one or more embodiments, configurations, or aspects and certain variations and modifications, other variations, combinations, and modifications are within the scope of the present invention, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments, configurations, or aspects to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter. Having thus described the present application in detail and by reference to embodiments and drawings thereof, it will be apparent that modifications and variations are possible without departing from the scope defined in the appended claims.
11862170
DETAILED DESCRIPTION Automatic speech recognition (ASR) is a field of computer science, artificial intelligence, and linguistics concerned with transforming audio data associated with speech into text representative of that speech. Similarly, natural language understanding (NLU) is a field of computer science, artificial intelligence, and linguistics concerned with enabling computers to derive meaning from text input containing natural language. ASR and NLU are often used together as part of a speech processing system. Text-to-speech (TTS) is a field concerned with transforming textual data into audio data that is synthesized to resemble human speech.

Certain systems may be configured to perform actions responsive to user inputs. For example, for the user input of "Alexa, play Adele music," a system may output music sung by an artist named Adele. For further example, for the user input of "Alexa, what is the weather," a system may output synthesized speech representing weather information for a geographic location of the user. In a further example, for the user input of "Alexa, send a message to John," a system may capture spoken message content and cause same to be output via a device registered to "John." A system may receive a user input requesting the system to perform a particular action when a particular event occurs. For example, a user input may be "Alexa, tell me when I receive an email from Joe," and the system may create and store registration data that causes the system to generate and send a notification to the user when an email from Joe is received. Another user input may be "notify me when my prescription for Asthma is ready for pickup," and the system may create and store registration data that causes the system to generate and send a notification to the user when the prescription is ready.

In some cases, the user may not want other persons to know certain information that may be included in an output that is generated in response to an event occurring. For example, the user may not want a communal, smart speaker device to simply make an announcement, for all who are nearby to hear, when a medical prescription is ready for pickup. The improved system of the present disclosure determines when an output, generated in response to an event occurring, may include or, in some embodiments, likely includes private, confidential, personal, or otherwise sensitive data, and applies an appropriate privacy control before outputting so that the sensitive data is not broadcast without user authentication. In some cases, when the user requests to receive a notification when an event occurs, and the system determines that the request relates to outputting sensitive data, the system may ask the user to set a privacy control for receiving the output in the future when the event occurs. For example, the user may say "Alexa, tell me when my prescription for Asthma is ready." The system may respond "please provide a password to receive the notification in the future." In some cases, when the indicated event occurs and the system determines that the responsive output includes sensitive data, the system may modify the output to not include the sensitive data. The system may also ask the user to provide authentication data to receive the sensitive data. For example, the system may determine that a prescription for Asthma is ready for pickup, and may output the following announcement to the user "you have a medical notification.
To receive further details, please provide voice authentication.” In other cases, the user may provide privacy control settings when requesting to receive a notification, and the system may generate an output according to the privacy control settings when the event occurs. For example, the user may say “Alexa, notify me when I receive an email from Joe.” The system may respond “Ok, I will notify you. Do you want to enable password protection for this notification,” the user may respond “yes” and provide a password. When an email from Joe is received, the system may output the following notification “you have a new email. Please provide a password to receive details.” In some cases, the user may provide content-based privacy controls. For example, the user input may be “Alexa, do not announce any of my medical or prescription information without authentication.” The system may determine that an output relating to medical or prescription information is generated (in response to an event occurring or in response to a user request), and apply the privacy controls. In some embodiments, the system may also apply privacy controls to when responding to an incoming user request. For example, the user input may be “Alexa, what is on my calendar today?” or “Give me details on my appointment today.” The system may determine that the user's calendar has a doctor appointment, and that outputting information relating to the appointment may include sensitive data. In this case, the system may respond “you have an appointment at 2 PM today. To receive more details please provide authentication.” FIG.1Aillustrates a system configured to respond to a user request to receive an output in the future along with sensitive data controls for a user according to embodiments of the present disclosure.FIG.1Billustrates a system configured to generate output data, using sensitive data controls, when an event occurs according to embodiments of the present disclosure. Although the figures and discussion herein illustrate certain operational steps of the system in a particular order, the steps described may be performed in a different order (as well as certain steps removed or added) without departing from the intent of the disclosure. As illustrated inFIGS.1A and1B, the system may include device110local to a user5, in communication with one or more systems120across one or more networks199. The system(s)120receives (132) audio data representing a user input. The audio data may include a user input/utterance spoken by the user5and captured by the device110. The system(s)120determines (134), using the audio data, that the user's intent is to receive an output when an event occurs. The system(s)120may perform automatic speech recognition (ASR) processing on the audio data to determine text data representing the user input. Further details on how the system(s)120may perform ASR processing are described below in relation toFIG.2. The system(s)120may perform natural language understanding (NLU) processing on the text data to determine an intent corresponding to the user input. The system(s)120may determine that user's intent is to receive data or to cause the system to perform an action in the future in response to an occurrence of an event. The system(s)120may determine, using NLU and the text data, a trigger for executing an action. 
For example, if the user input is “notify me when I receive an email,” the system(s)120, using NLU, may determine that the intent is to receive a notification in the future, the trigger for receiving the notification is when the user receives an email, and the action to be executed when the user receives an email is to send the user a notification. As another example, the user input may be “turn on the lights when I arrive home,” and the system(s)120, using NLU, may determine that the intent is to perform a smart-home action in the future, the trigger is when the user arrives home, and the action to be executed is turning on the lights. As another example, the user input may be “tell me when my prescription is ready,” and the system(s)120, using NLU, may determine that the intent is to receive a notification in the future, the trigger is when the user's prescription is ready, and the action to be executed when the prescription is ready is to send the user a notification. The system(s) determines (136) that the user input relates to receiving sensitive data. The system(s)120may determine that an output, which may be provided in the future in response to the occurrence of an event, may include sensitive data. The system(s)120may determine that the output includes sensitive data by processing the trigger and/or the action to be executed. For example, if the action is to receive a notification when a prescription is ready, the output that may be generated by the system(s)120in the future may be “your prescription for ______ is ready” or “your prescription is ready.” The system(s)120determines that the medical information, such as prescription information, that may be included in the output, is sensitive data that a user may not want other persons to know. Other information that may be determined by the system(s)120as being sensitive data includes, but is not limited to, medical information, health-related information, personal identification, personal correspondence, and age-related content (e.g., adult content). In some embodiments, the system(s)120may determine that an output includes sensitive data by performing natural language understanding (NLU)/semantic analysis (e.g., processing information representing the natural language input to determine its meaning in a computer recognizable form). The system(s)120may perform NLU/semantic analysis using data representing the user input, the intent, and/or data representing a potential output to be presented in response to the event occurring, and may determine that one or more words in the output corresponds to sensitive data. Using NLU/semantic analysis, the system(s)120may determine that one or more words in the output corresponds to sensitive data based on understanding the meaning of the word(s) by relating syntactic structures, from the levels of phrases, clauses, sentences and paragraphs to the level of the writing as a whole, to their natural language meanings. For example, the user input may relate to receiving the user's bank account balance, and the system(s)120may determine, using the user input, the output and semantic analysis, that the output includes sensitive data relating to the user's account balance. In some embodiments, the system(s)120uses NLU/semantic analysis to determine that word(s) in the output correspond to sensitive data categories. 
For example, the system(s)120may determine that the word “prescription” corresponds to the medical data category, the word “account balance” corresponds to the financial/banking data category, etc. The system(s)120requests (138) the user to provide an input for privacy control. The system(s)120stores the input and applies the privacy control in the future when presenting the output indicating event occurrence. For example, the system(s)120may request the user to provide a pin or password to receive the notification regarding the user's prescription or other medical information. The system(s)120stores (140) registration data with the privacy control. The system(s)120may store the registration data including the trigger, the action and the privacy control input in profile storage (e.g.,270) and associate it with the user profile for the user5. When the system(s)120determines an event occurred triggering the stored action, the system(s)120also determines to apply privacy control to the responsive output. In some cases, the system(s)120may output a notification without including sensitive data. For example, the system(s)120may output synthesized speech representing “you have a medical notification.” The system(s)120may also request the user to provide the privacy control input to authenticate/verify the user to receive the sensitive data. For example, the system(s)120may output synthesized speech representing “provide your password to receive further details.” In some embodiments, the system(s)120may ask the user5if any privacy controls are to be applied to the output that the user wants to receive. For example, the user input may be “notify me when I receive an email from Joe,” and the system(s)120may respond “I will notify you. Do you want to enable password protection for this notification?” The user5may respond “yes” and provide a password. When the system(s)120determines that the user5received an email from Joe, the system(s)120may output “You received a new email. Please provide the password to receive further information.” In response to receiving the correct password, the system(s)120may further output “you have received an email from Joe.” The operations ofFIG.1Amay be performed during a first time period. The operations ofFIG.1Bmay be performed during a second time period, subsequent to the first time period. Referring toFIG.1B, the system(s)120may determine when providing an output in response to stored registration data that the output includes sensitive data and that privacy controls should be applied. The system(s)120determines (150) event data indicating occurrence of an event. The event data may be received from any of the components of the system(s)120or from a skill system225. The event data, for example, may indicate that the user5received an email, or a prescription is ready for pickup, or that the user arrived at home. The system(s)120determines (152) that the event data triggers an output with respect to the user profile, using the trigger data and action data associated with the user profile of user5. The system(s)120determines (154) that first output data includes sensitive data, where the first output data may be responsive to the event occurrence or indicating the event occurrence to the user. The first output data may be determined by the system(s)120using the stored action data associated with the trigger data. 
For example, the first output data for the user request to receive a notification when the user's prescription for Asthma is ready may be an announcement including synthesized speech notifying the user "your prescription for Asthma is ready." In other examples, the first output data may be text to be displayed on a device screen, text to be provided to the user via a push notification, text to be delivered via a message (SMS, email, voice message, etc.), or the first output data may take other forms of output. Based on the first output data including prescription information, the system(s)120determines that the first output data includes sensitive data. In some embodiments, the system(s)120may determine that the first output data includes sensitive data by performing natural language understanding (NLU) and semantic analysis using the first output data. The system(s)120may determine that one or more words in the first output data corresponds to sensitive data based on understanding the meaning of the word(s) by relating syntactic structures, from the levels of phrases, clauses, sentences and paragraphs to the level of the writing as a whole, to their natural language meanings. For example, the user input may relate to receiving the user's bank account balance, and the system(s)120may determine, using the user input, the output and semantic analysis, that the output includes sensitive data relating to the user's account balance. In some embodiments, the system(s)120uses NLU and semantic analysis to determine that word(s) in the first output data correspond to sensitive data categories. For example, the system(s)120may determine that the word "prescription" corresponds to the medical data category, the word "account balance" corresponds to the financial/banking data category, etc.

The system(s)120determines (156) second output data that does not include the sensitive data, where the second output data may include an indication of the event occurrence without details that relate to the sensitive data or may be a general notification so that the user may provide authentication data to receive the sensitive data. For example, the second output data may be synthesized speech notifying the user "you have a medical notification" or "you have a prescription notification" but does not include what the prescription is for. The second output data may correspond to the output responsive to the user request with respect to the action the user wanted to perform in response to the event occurrence. For example, the first output data is a notification based on the user wanting to be notified when an event occurred, and the second output data is also a notification. Thus, the second output data includes non-sensitive data. In some embodiments, the system(s)120may use natural language generation (NLG) to determine the second output data to not include the sensitive data of the first output data. Using NLG techniques, the system(s)120may determine a summary of the sensitive data such that the second output data does not include the sensitive data. In some embodiments, the system(s)120may use NLU and semantic analysis to determine the words/portion of the first output data that relates to non-sensitive data, and use that portion to determine the second output data. In some embodiments, the first/second output data may cause the device110to output an announcement. In some embodiments, the first/second output data may cause the device110to display text.
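The derivation of second output data from first output data can be sketched in Python as follows. The keyword-to-category map below is a hypothetical stand-in for the NLU/semantic analysis and NLG processing described above, and the category names and phrasing are illustrative only.

# Hypothetical keyword-to-category map standing in for NLU/semantic analysis.
SENSITIVE_CATEGORIES = {
    "prescription": "medical",
    "asthma": "medical",
    "account balance": "financial",
    "email from": "personal correspondence",
}

def detect_sensitive_category(output_text: str):
    """Return the first sensitive data category found in the output, if any."""
    lowered = output_text.lower()
    for phrase, category in SENSITIVE_CATEGORIES.items():
        if phrase in lowered:
            return category
    return None

def build_outputs(first_output: str):
    """Produce (first_output, second_output); the second omits sensitive detail."""
    category = detect_sensitive_category(first_output)
    if category is None:
        return first_output, first_output      # nothing to redact
    # Generic notification standing in for the NLG-based summary described above.
    second_output = (f"You have a {category} notification. "
                     "Please provide authentication to receive further details.")
    return first_output, second_output

first, second = build_outputs("Your prescription for Asthma is ready for pickup.")
print(second)   # generic medical notification without the prescription details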
In some embodiments, the first/second output data may cause the device110to present a visual output (e.g., a yellow light ring, an icon, etc.) or an audible output (e.g., a chirp, etc.). In some embodiments, the first/second output data may cause the device110to receive a message (SMS, email, voice message, etc.), a push notification, or other forms of output. The system(s)120sends (158) the second output data to the device110for presenting to the user5. The second output data may also request the user to provide an authentication input if the user wants to receive additional information. The system(s)120receives (160) authentication data from the user that satisfies a privacy control. The system(s)120may use the authentication data to authenticate the user identity using user profile data associated with the user5. For example, the authentication data requested by the system(s)120and provided by the user5may be a voice verification, a fingerprint, facial recognition, password or pin protection, input/approval in response to a push notification, other types of input via a device (e.g., pressing a button on the device, selecting a button/option displayed on a screen, etc.) or other types of authentication data. The system(s)120may compare the received authentication data with the profile data (e.g., voice, fingerprint, facial data, password, etc.) associated with the user5to authenticate the user identity. Details on how the authentication data is processed and the user identity is authenticated are described in relation toFIGS.5and6. The system(s)120sends (162) third output data to the device, where the third output data includes the sensitive data. The sensitive data is provided to the user in response to authenticating the user. In some embodiments, the user may specify how the sensitive data should be presented. For example, the user may indicate (via a voice input, graphical user interface input, or other types of input) that the sensitive data is announced via a speaker, displayed on a screen, provided via a message (SMS, email, push notification, etc.) or other provided to the user in another manner. The user may also indicate via which device the sensitive data is to be provided, for example, via a smartphone, a speech-controlled device, a smartwatch, or any of the devices110shown inFIG.9. In some embodiments, the system(s)120may store registration data including privacy control settings (as described with respect to operation140ofFIG.1A), and at operation160the system(s)120may determine that the received authentication data satisfies the privacy control associated with the registration data. In some embodiments, the registration data may not be associated with a privacy control, and the system(s)120may determine which privacy control to apply prior to presenting the sensitive data to the user. The system(s)120may determine to use a privacy control corresponding to the type of user recognition data already stored/available for the user profile. For example, the system(s)120may determine to use a privacy control that requires the user to provide voice authentication (instead of a fingerprint) because the user profile already includes voice recognition data for the user5(and does not include fingerprint data for the user5). The system(s)120may also determine a type of privacy control to be satisfied for the user to receive the sensitive data. The system(s)120may determine the type of privacy control based on the type of the sensitive data to be presented to the user. 
For example, if the sensitive data relates to banking or financial information, then the system(s)120may require a fingerprint, and if the sensitive data relates to personal correspondence, then the system(s)120may require a password. The system may operate using various components as illustrated inFIG.2. The various components may be located on the same or different physical devices. Communication between various components may occur directly or across a network(s)199. An audio capture component(s), such as a microphone or array of microphones of a device110, captures audio11. The device110processes audio data, representing the audio11, to determine whether speech is detected. The device110may use various techniques to determine whether audio data includes speech. In some examples, the device110may apply voice activity detection (VAD) techniques. Such techniques may determine whether speech is present in audio data based on various quantitative aspects of the audio data, such as the spectral slope between one or more frames of the audio data; the energy levels of the audio data in one or more spectral bands; the signal-to-noise ratios of the audio data in one or more spectral bands; or other quantitative aspects. In other examples, the device110may implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees. In still other examples, the device110may apply Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM) techniques to compare the audio data to one or more acoustic models in storage, which acoustic models may include models corresponding to speech, noise (e.g., environmental noise or background noise), or silence. Still other techniques may be used to determine whether speech is present in audio data. Once speech is detected in audio data representing the audio11, the device110may use a wakeword detection component220to perform wakeword detection to determine when a user intends to speak an input to the device110. An example wakeword is “Alexa.” Wakeword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, the audio data, representing the audio11, is analyzed to determine if specific characteristics of the audio data match preconfigured acoustic waveforms, audio signatures, or other data to determine if the audio data “matches” stored audio data corresponding to a wakeword. Thus, the wakeword detection component220may compare audio data to stored models or data to detect a wakeword. One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode audio signals, with wakeword searching being conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources. Another approach for wakeword detection builds HMMs for each wakeword and non-wakeword speech signals, respectively. The non-wakeword speech includes other spoken words, background noise, etc. There can be one or more HMMs built to model the non-wakeword speech characteristics, which are named filler models. Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on wakeword presence. This approach can be extended to include discriminative information by incorporating a hybrid DNN-HMM decoding framework. 
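One of the voice activity detection techniques listed above, the energy-threshold approach, can be sketched as follows; the spectral-slope, classifier, and HMM/GMM techniques are not shown. The threshold values are illustrative assumptions, not parameters taken from the disclosure.

import math

def frame_energy_db(frame):
    """Root-mean-square energy of one audio frame, in decibels."""
    if not frame:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def detect_speech(frames, noise_floor_db=-50.0, margin_db=12.0, min_speech_frames=3):
    """Flag speech when enough consecutive frames rise above the noise floor.

    noise_floor_db, margin_db, and min_speech_frames are illustrative values.
    """
    consecutive = 0
    for frame in frames:
        if frame_energy_db(frame) > noise_floor_db + margin_db:
            consecutive += 1
            if consecutive >= min_speech_frames:
                return True
        else:
            consecutive = 0
    return False

silence = [[0.0005] * 160 for _ in range(10)]
speech = silence + [[0.2] * 160 for _ in range(5)]
print(detect_speech(silence), detect_speech(speech))   # False True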
In another example, the wakeword detection component220may be built on deep neural network (DNN)/recursive neural network (RNN) structures directly, without HMM being involved. Such an architecture may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for DNN, or using RNN. Follow-on posterior threshold tuning or smoothing is applied for decision making. Other techniques for wakeword detection, such as those known in the art, may also be used.

Once the wakeword is detected, the device110may "wake" and begin transmitting audio data211, representing the audio11, to the system(s)120. The audio data211may include data corresponding to the wakeword, or the portion of the audio corresponding to the wakeword may be removed by the device110prior to sending the audio data211to the system(s)120. In some cases, the audio11may be an utterance from the user5relating to a request to receive an output when an event occurs. For example, the audio11may represent the utterance "Alexa, tell me when I get an email from ______" or "Alexa, tell me when my prescription for ______ is ready for pickup at the pharmacy." The system(s)120may perform the steps described in connection withFIG.1Ato store registration data using sensitive data controls. In other cases, the audio11may be an utterance from the user5relating to a code or password required to receive sensitive data. For example, the audio11may represent an alphanumeric code that the user5set, and the system(s)120may perform the steps described in connection withFIG.1Bto generate output data and output sensitive data when the correct code is provided by the user5.

Upon receipt by the system(s)120, the audio data211may be sent to an orchestrator component230. The orchestrator component230may include memory and logic that enables the orchestrator component230to transmit various pieces and forms of data to various components of the system, as well as perform other operations as described herein. The orchestrator component230sends the input audio data211to an ASR component250that transcribes the input audio data211into input text data representing one or more hypotheses representing speech contained in the input audio data211. The text data output by the ASR component250may thus represent one or more than one (e.g., in the form of an N-best list) ASR hypotheses representing speech represented in the audio data211. The ASR component250interprets the speech in the audio data211based on a similarity between the audio data211and pre-established language models. For example, the ASR component250may compare the audio data211with models for sounds (e.g., subword units, such as phonemes, etc.) and sequences of sounds to identify words that match the sequence of sounds of the speech represented in the audio data211. The ASR component250outputs text data representing one or more ASR hypotheses. The ASR component250may also output respective scores for the one or more ASR hypotheses. Such text data and scores may be output, for example, following language model operations by the ASR component250. Thus, the text data output by the ASR component250may include a top scoring ASR hypothesis or may include an N-best list of ASR hypotheses. An N-best list may additionally include a respective score associated with each ASR hypothesis represented therein. Each score may indicate a confidence of ASR processing performed to generate the ASR hypothesis with which the score is associated.
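An N-best list of ASR hypotheses with associated confidence scores might be represented as sketched below; the class and field names are illustrative and the scores are invented for the example.

from dataclasses import dataclass
from typing import List

@dataclass
class AsrHypothesis:
    text: str
    score: float      # confidence of the ASR processing for this hypothesis

def top_hypothesis(n_best: List[AsrHypothesis]) -> AsrHypothesis:
    """Return the top-scoring hypothesis from an N-best list."""
    return max(n_best, key=lambda h: h.score)

n_best = [
    AsrHypothesis("tell me when my prescription for asthma is ready", 0.92),
    AsrHypothesis("tell me when my prescription for asthma is reading", 0.61),
    AsrHypothesis("tell me when my subscription for asthma is ready", 0.48),
]
print(top_hypothesis(n_best).text)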
Further details of the ASR processing are included below. The device110may send text data213to the system(s)120. Upon receipt by the system(s)120, the text data213may be sent to the orchestrator component230, which may send the text data213to the NLU component260. The text data213may be derived from an input(s) provided by the user5via an application/app on the device110, where the user5may use the application/app to request output when an event occurs (as described in connection withFIG.1A). The text data213, for example, may be "notify me when I get an email from ______" or "tell me when my prescription for ______ is ready for pickup at the pharmacy."

The NLU component260receives the ASR hypothesis/hypotheses (i.e., text data) and attempts to make a semantic interpretation of the phrase(s) or statement(s) represented therein. That is, the NLU component260determines one or more meanings associated with the phrase(s) or statement(s) represented in the text data based on words represented in the text data. The NLU component260determines an intent representing an action that a user desires be performed as well as pieces of the text data that allow a device (e.g., the device110, the system(s)120, a skill290, a skill system(s)225, etc.) to execute the intent. For example, if the text data corresponds to "play Adele music," the NLU component260may determine an intent that the system(s)120output music and may identify "Adele" as an artist. For further example, if the text data corresponds to "what is the weather," the NLU component260may determine an intent that the system(s)120output weather information associated with a geographic location of the device110. In another example, if the text data corresponds to "turn off the lights," the NLU component260may determine an intent that the system(s)120turn off lights associated with the device(s)110or the user(s)5. The NLU component260may send NLU results data (which may include tagged text data, indicators of intent, etc.) to the orchestrator component230. The orchestrator component230may send the NLU results data to a skill(s)290. If the NLU results data includes a single NLU hypothesis, the orchestrator component230may send the NLU results data to the skill(s)290associated with the NLU hypothesis. If the NLU results data includes an N-best list of NLU hypotheses, the orchestrator component230may send the top scoring NLU hypothesis to a skill(s)290associated with the top scoring NLU hypothesis.

A "skill" may be software running on the system(s)120that is akin to a software application running on a traditional computing device. That is, a skill290may enable the system(s)120to execute specific functionality in order to provide data or produce some other requested output. The system(s)120may be configured with more than one skill290. For example, a weather service skill may enable the system(s)120to provide weather information, a car service skill may enable the system(s)120to book a trip with respect to a taxi or ride sharing service, a restaurant skill may enable the system(s)120to order a pizza with respect to the restaurant's online ordering system, etc. A skill290may operate in conjunction with the system(s)120and other devices, such as the device110, in order to complete certain functions. Inputs to a skill290may come from speech processing interactions or through other interactions or input sources. A skill290may include hardware, software, firmware, or the like that may be dedicated to a particular skill290or shared among different skills290.
In addition or alternatively to being implemented by the system(s)120, a skill290may be implemented by a skill system(s)225. Such may enable a skill system(s)225to execute specific functionality in order to provide data or perform some other action requested by a user. Types of skills include home automation skills (e.g., skills that enable a user to control home devices such as lights, door locks, cameras, thermostats, etc.), entertainment device skills (e.g., skills that enable a user to control entertainment devices such as smart televisions), video skills, flash briefing skills, as well as custom skills that are not associated with any pre-configured type of skill. The system(s)120may be configured with a single skill290dedicated to interacting with more than one skill system225. Unless expressly stated otherwise, reference to a skill, skill device, or skill component may include a skill290operated by the system(s)120and/or skill operated by the skill system(s)225. Moreover, the functionality described herein as a skill may be referred to using many different terms, such as an action, bot, app, or the like. The system(s)120may include a TTS component280that generates audio data (e.g., synthesized speech) from text data using one or more different methods. Text data input to the TTS component280may come from a skill290, the orchestrator component230, or another component of the system(s)120. In one method of synthesis called unit selection, the TTS component280matches text data against a database of recorded speech. The TTS component280selects matching units of recorded speech and concatenates the units together to form audio data. In another method of synthesis called parametric synthesis, the TTS component280varies parameters such as frequency, volume, and noise to create audio data including an artificial speech waveform. Parametric synthesis uses a computerized voice generator, sometimes called a vocoder. The system(s)120may include profile storage270. The profile storage270may include a variety of information related to individual users, groups of users, devices, etc. that interact with the system(s)120. A “profile” refers to a set of data associated with a user, device, etc. The data of a profile may include preferences specific to the user, device, etc.; input and output capabilities of the device; internet connectivity information; user bibliographic information; registration data; as well as other information. The profile storage270may include one or more user profiles, with each user profile being associated with a different user identifier. Each user profile may include various user identifying information. Each user profile may also include preferences of the user and/or one or more device identifiers, representing one or more devices registered to the user. The profile storage270may include one or more group profiles. Each group profile may be associated with a different group profile identifier. A group profile may be specific to a group of users. That is, a group profile may be associated with two or more individual user profiles. For example, a group profile may be a household profile that is associated with user profiles associated with multiple users of a single household. A group profile may include preferences shared by all the user profiles associated therewith. Each user profile associated with a group profile may additionally include preferences specific to the user associated therewith. 
That is, each user profile may include preferences unique from one or more other user profiles associated with the same group profile. A user profile may be a stand-alone profile or may be associated with a group profile. A group profile may include one or more device profiles representing one or more devices associated with the group profile. The profile storage270may include one or more device profiles. Each device profile may be associated with a different device identifier. Each device profile may include various device identifying information. Each device profile may also include one or more user identifiers, representing one or more user profiles associated with the device profile. For example, a household device's profile may include the user identifiers of users of the household.

The profile storage270may include registration data, identified by the respective user profiles, corresponding to a user's request to receive an output. For example, the profile storage270may include trigger information (indicating when an action is to be executed) and action information (indicating the action that is to be executed). The profile storage270may also include information indicating the privacy preferences set by the user for receiving sensitive data. For example, the profile storage270may include the code set by the user such that, when the system(s)120receives it, the system(s)120may provide the sensitive data to the user. The system may be configured to incorporate user permissions and may only perform activities disclosed herein if approved by a user. As such, the systems, devices, components, and techniques described herein would typically be configured to restrict processing where appropriate and only process user information in a manner that ensures compliance with all appropriate laws, regulations, standards, and the like. The system and techniques can be implemented on a geographic basis to ensure compliance with laws in various jurisdictions and entities in which the components of the system and/or user are located.

The ASR engine may return an N-best list of paths along with their respective recognition scores, corresponding to the top N paths as determined by the ASR engine. An application (such as a program or component either internal or external to the ASR component250) that receives the N-best list may then perform further operations or analysis on the list given the associated recognition scores. For example, the N-best list may be used in correcting errors and training various options and processing conditions of the ASR module250. The ASR engine may compare the actual correct utterance with the best result and with other results on the N-best list to determine why incorrect recognitions received certain recognition scores. The ASR engine may correct its approach (and may update information in the ASR models) to reduce the recognition scores of incorrect approaches in future processing attempts.

The system(s)120may also include a notification manager275. The notification manager275may process a user request to receive data, information or another output in the future based on occurrence of an event. The notification manager275may store the corresponding trigger data and the action data in the profile storage270. The privacy control component285may process the user input to determine if the type of trigger data requires privacy controls and/or whether the user provides any privacy preferences with respect to receiving the notification.
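A simplified sketch of how the notification manager275might store registration data (trigger data, action data, and an associated privacy control) in the profile storage270and later match incoming event data is given below. The field names and matching logic are assumptions made for illustration, not a description of the disclosed components.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Registration:
    trigger: Dict[str, str]                # e.g., {"type": "email_received", "sender": "Joe"}
    action: str                            # e.g., "generate_notification"
    privacy_control: Optional[str] = None  # e.g., "password", "voice", "fingerprint"

@dataclass
class UserProfile:
    user_id: str
    registrations: List[Registration] = field(default_factory=list)

def register(profile: UserProfile, trigger, action, privacy_control=None):
    profile.registrations.append(Registration(trigger, action, privacy_control))

def triggered_actions(profile: UserProfile, event: Dict[str, str]):
    """Return registrations whose trigger fields are all satisfied by the event."""
    return [r for r in profile.registrations
            if all(event.get(k) == v for k, v in r.trigger.items())]

profile = UserProfile("user-5")
register(profile,
         {"type": "email_received", "sender": "Joe"},
         "generate_notification",
         privacy_control="password")
event = {"type": "email_received", "sender": "Joe", "subject": "lunch"}
for reg in triggered_actions(profile, event):
    print(reg.action, "(privacy control:", reg.privacy_control, ")")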
The notification manager275may process event data from the skill(s)290to determine whether an action is triggered. The privacy control component285may determine if any privacy controls/preferences are to be applied when executing the action. FIG.3illustrates how NLU processing is performed on text data. Generally, the NLU component260attempts to make a semantic interpretation of text data input thereto. That is, the NLU component260determines the meaning behind text data based on the individual words and/or phrases represented therein. The NLU component260interprets text data to derive an intent of the user as well as pieces of the text data that allow a device (e.g., the device110, the system(s)120, skill system(s)225, etc.) to complete that action. The NLU component260may process text data including several ASR hypotheses. The NLU component260may process all (or a portion of) the ASR hypotheses input therein. Even though the ASR component250may output multiple ASR hypotheses, the NLU component260may be configured to only process with respect to the top scoring ASR hypothesis. The NLU component260may include one or more recognizers363. Each recognizer363may be associated with a different domain (e.g., smart home, video, music, weather, custom, etc.). Each recognizer363may process with respect to text data input to the NLU component260. Each recognizer363may operate at least partially in parallel with other recognizers363of the NLU component260. Each recognizer363may include a named entity recognition (NER) component362. The NER component362attempts to identify grammars and lexical information that may be used to construe meaning with respect to text data input therein. The NER component362identifies portions of text data that correspond to a named entity that may be applicable to processing performed by a domain. The NER component362(or other component of the NLU component260) may also determine whether a word refers to an entity whose identity is not explicitly mentioned in the text data, for example “him,” “her,” “it” or other anaphora, exophora or the like. Each recognizer363, and more specifically each NER component362, may be associated with a particular grammar model and/or database373, a particular set of intents/actions374, and a particular personalized lexicon386. Each gazetteer384may include skill-indexed lexical information associated with a particular user and/or device110. For example, a Gazetteer A (384a) includes skill-indexed lexical information386aato386an.A user's music skill lexical information might include album titles, artist names, and song names, for example, whereas a user's contact list skill lexical information might include the names of contacts. Since every user's music collection and contact list is presumably different, this personalized information improves entity resolution. An NER component362applies grammar models376and lexical information386to determine a mention of one or more entities in text data. In this manner, the NER component362identifies “slots” (corresponding to one or more particular words in text data) that may be used for later processing. The NER component362may also label each slot with a type (e.g., noun, place, city, artist name, song name, etc.). Each grammar model376includes the names of entities (i.e., nouns) commonly found in speech about the particular domain to which the grammar model376relates, whereas the lexical information386is personalized to the user and/or the device110from which the user input originated. 
For example, a grammar model376associated with a shopping domain may include a database of words commonly used when people discuss shopping. Each recognizer363may also include an intent classification (IC) component364. An IC component364parses text data to determine an intent(s). An intent represents an action a user desires be performed. An IC component364may communicate with a database374of words linked to intents. For example, a music intent database may link words and phrases such as “quiet,” “volume off,” and “mute” to a <Mute> intent. An IC component364identifies potential intents by comparing words and phrases in text data to the words and phrases in an intents database374. The intents identifiable by a specific IC component364are linked to domain-specific grammar frameworks376with “slots” to be filled. Each slot of a grammar framework376corresponds to a portion of text data that the system believes corresponds to an entity. For example, a grammar framework376corresponding to a <PlayMusic> intent may correspond to sentence structures such as “Play {Artist Name},” “Play {Album Name},” “Play {Song name},” “Play {Song name} by {Artist Name},” etc. However, to make resolution more flexible, grammar frameworks376may not be structured as sentences, but rather based on associating slots with grammatical tags. For example, an NER component362may parse text data to identify words as subject, object, verb, preposition, etc. based on grammar rules and/or models prior to recognizing named entities in the text data. An IC component364(e.g., implemented by the same recognizer363as the NER component362) may use the identified verb to identify an intent. The NER component362may then determine a grammar model376associated with the identified intent. For example, a grammar model376for an intent corresponding to <PlayMusic> may specify a list of slots applicable to play the identified “object” and any object modifier (e.g., a prepositional phrase), such as {Artist Name}, {Album Name}, {Song name}, etc. The NER component362may then search corresponding fields in a lexicon386, attempting to match words and phrases in text data the NER component362previously tagged as a grammatical object or object modifier with those identified in the lexicon386. An NER component362may perform semantic tagging, which is the labeling of a word or combination of words according to their type/semantic meaning. An NER component362may parse text data using heuristic grammar rules, or a model may be constructed using techniques such as hidden Markov models, maximum entropy models, log linear models, conditional random fields (CRF), and the like. For example, an NER component362implemented by a music recognizer may parse and tag text data corresponding to “play mother's little helper by the rolling stones” as {Verb}:“Play,” {Object}:“mother's little helper,” {Object Preposition}:“by,” and {Object Modifier}:“the rolling stones.” The NER component362identifies “Play” as a verb, which an IC component364may determine corresponds to a <PlayMusic> intent. At this stage, no determination has been made as to the meaning of “mother's little helper” and “the rolling stones,” but based on grammar rules and models, the NER component362has determined the text of these phrases relates to the grammatical object (i.e., entity) of the user input represented in the text data. 
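The slot tagging and intent classification just described may be illustrated with the following deliberately simplified sketch. The keyword table, regular expression, and tag names are illustrative stand-ins for the intents database374, grammar models376, and lexicons386, not an actual implementation:

```python
import re
from typing import Optional

# Hypothetical, toy stand-in for the intents database 374: words/phrases linked to intents.
INTENT_KEYWORDS = {
    "<PlayMusic>": ["play"],
    "<Mute>": ["quiet", "volume off", "mute"],
}


def classify_intent(text: str) -> Optional[str]:
    """Toy IC step: match the leading words of the utterance against the keyword lists."""
    lowered = text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(lowered.startswith(keyword) for keyword in keywords):
            return intent
    return None


def tag_slots(text: str) -> dict:
    """Toy NER step: split a 'play X by Y' style utterance into grammatical slots."""
    match = re.match(r"(?i)(play)\s+(.+?)(?:\s+by\s+(.+))?$", text.strip())
    if not match:
        return {}
    verb, obj, modifier = match.groups()
    slots = {"{Verb}": verb, "{Object}": obj}
    if modifier:
        slots["{Object Preposition}"] = "by"
        slots["{Object Modifier}"] = modifier
    return slots


utterance = "play mother's little helper by the rolling stones"
print(classify_intent(utterance))  # <PlayMusic>
print(tag_slots(utterance))        # {'{Verb}': 'play', '{Object}': "mother's little helper", ...}
```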
The frameworks linked to the intent are then used to determine what database fields should be searched to determine the meaning of these phrases, such as searching a user's gazetteer384for similarity with the framework slots. For example, a framework for a <PlayMusic> intent might indicate to attempt to resolve the identified object based on {Artist Name}, {Album Name}, and {Song name}, and another framework for the same intent might indicate to attempt to resolve the object modifier based on {Artist Name}, and resolve the object based on {Album Name} and {Song Name} linked to the identified {Artist Name}. If the search of the gazetteer384does not resolve a slot/field using gazetteer information, the NER component362may search a database of generic words (e.g., in the knowledge base372). For example, if the text data includes “play songs by the rolling stones,” after failing to determine an album name or song name called “songs” by “the rolling stones,” the NER component362may search the database for the word “songs.” In the alternative, generic words may be checked before the gazetteer information, or both may be tried, potentially producing two different results. An NER component362may tag text data to attribute meaning thereto. For example, an NER component362may tag “play mother's little helper by the rolling stones” as:{domain} Music, {intent} <PlayMusic>, {artist name} rolling stones, {media type} SONG, and {song title} mother's little helper. For further example, the NER component362may tag “play songs by the rolling stones” as:{domain} Music, {intent} <PlayMusic>, {artist name} rolling stones, and {media type} SONG. FIG.4Ais a conceptual diagram of the notification manager275to generate registration data using sensitive data controls according to embodiments of the present disclosure. A user may provide input to receive a notification when an event occurs, where the system(s)120may perform an action indicated by the user when the indicated event occurs. As described above in relation toFIG.1A, the user5may provide a voice input that may be processed by the system(s)120using the ASR component250to determine text data. In other cases, the user5may provide an input via an app and the system(s)120may determine text data213representing information relating to the user request. The NLU component260may process the text data, as described above in relation toFIG.3, to determine the user's intent to receive an output when an event occurs. The NLU component260may also process the text data to determine trigger data402indicating when an indicated action is to be executed/triggered and may also determine action data404indicating the action that is to be executed. For example, the user input text data may be “notify me when I get an email from Joe.” The NLU component260may determine that “notify me when” indicates an intent to receive an output when an event occurs, “when I get an email from Joe” indicates the trigger data and the “notify” indicates the action data. In this case, the trigger data402may include <trigger:receive email>, <trigger:from ‘Joe’>, and the action data404may include <generate notification>. In another example, the user input text data may be “tell me when my prescription for Asthma is ready.” The NLU component260may determine that “tell me when” indicates an intent to receive an output when an event occurs, “when my prescription for Asthma is ready” indicates the trigger and “tell me” indicates the action. 
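A simplified sketch of how trigger data402and action data404might be extracted from the two example requests is shown below; the patterns and the string representation of the triggers are illustrative assumptions only:

```python
import re


# Hypothetical extraction of trigger data 402 and action data 404 from the example requests.
def extract_trigger_and_action(text: str) -> tuple[list[str], list[str]]:
    lowered = text.lower()
    email_match = re.search(r"when i get an email from (\w+)", lowered)
    if email_match:
        return (
            ["<trigger:receive email>", f"<trigger:from '{email_match.group(1).title()}'>"],
            ["<generate notification>"],
        )
    rx_match = re.search(r"when my prescription for (\w+) is ready", lowered)
    if rx_match:
        return (
            ["<trigger:prescription ready>", f"<trigger:prescription for '{rx_match.group(1).title()}'>"],
            ["<generate notification>"],
        )
    return ([], [])


print(extract_trigger_and_action("notify me when I get an email from Joe"))
print(extract_trigger_and_action("tell me when my prescription for Asthma is ready"))
```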
In this case, the trigger data402may include <trigger:prescription ready>, <trigger:prescription for ‘Asthma’>, and the action data404may include <generate notification>. The NLU component260may also provide other data406representing one or more NLU hypotheses determined by the NLU component260, the text data representing the user input, context data relating to user input, and other data. In some embodiments, the trigger data402, action data404and other data406may be provided by another component of the system. For example, a skill developer or another type of user may want to create a notification that is sent to end-users' (e.g., user5) devices when an event occurs, and may provide the system(s)120the trigger data402, action data404, and other data406. The privacy control component285may process the trigger data402, the action data404and other data406to determine (at decision block410) if one or more privacy controls should be offered to the user for the particular request to receive an output when an event occurs. The privacy control component285may determine whether privacy controls should be offered based on the type of trigger and/or the type of action corresponding to the user request. For example, if a trigger or action relates to data that is considered private, confidential or otherwise sensitive, then the privacy control component285may ask if the user wants to set any privacy controls for receiving the output. The privacy control component285may generate output text data, which may be processed by the TTS component280to generate output audio data representing the synthesized speech “do you want to set any privacy controls for this notification?” The output audio data may be sent to the device110for output. The user may respond affirmatively and may provide input representing the privacy control to be used for the notification. The system(s)120(using ASR and NLU) may process the input to determine the privacy input data420. The notification manager275may determine the registration data415as including the privacy input420. If the user responds in the negative, and does not provide any privacy controls, then the notification manager275may generate the registration data415without privacy controls. In the case where the privacy control component285determines that the trigger and/or action is not the type where privacy controls are needed, then the notification manager275generates the registration data415without any privacy controls. The privacy input420may include the type of authentication required to receive sensitive data or other data indicated by the user. In some embodiments, the types of authentication that may be used by the system(s)120include, but are not limited to, voice recognition, facial recognition, fingerprint authentication, retinal scan, other types of biometric identifications, pin/password, and other types of input. The types of authentication may also include denial/approval via a push notification, selection input via a device, such as pressing a button on the device, selecting a button/element displayed on a screen, providing a gesture, and other types of user interactions. In some embodiments, the privacy control component285may request approval from the user to present the sensitive data via a device, and the user may provide the approval by providing a voice input, a fingerprint, or other types of biometric information, a pin code/password, a selection input (by pressing a button on the device, selecting an option displayed on the screen, etc.),
providing a gesture, or other forms of input to indicate approval. The authentication data may be provided by the user via a second device other than the first device that presented the notification/output triggered by the event occurrence. The second device may include a companion application for the system(s)120, or may be a retinal scanner or other biometric data scanner. For example, a speech-controlled device110a(the first device) may output “you have a medical notification. Please provide <authentication data>,” and the user may provide the requested authentication data (fingerprint, approval in response to a push notification, facial scan, etc.) via a smartphone110b(the second device). The system(s)120may then present the sensitive data via the first device, the second device, or a third device, as determined by the system(s)120or as specified by the user. The privacy input420may also include, where applicable, the input required to receive the sensitive/indicated data. For example, if the authentication type is entry of a pin/code/password, then the privacy input420may include the particular pin/code/password. If the authentication type is a form of biometric identification, the privacy input420may include the user's biometric data. In other embodiments, the privacy input420may not include the user's biometric data; rather, the system(s)120may use the biometric data stored in the profile storage270to authenticate the user and provide the sensitive data to the user. The registration data415may include the trigger data402, the action data404and other data, and the registration data415may be stored in the profile storage270. Where applicable, the registration data415may also include the privacy input420representing the type of authentication (e.g., fingerprint, voice identification, pin/password, etc.) required and/or the input (e.g., code, password, etc.) required to receive sensitive data. In some embodiments, the registration data415may also include frequency data indicating the number of times an output is to be provided when the event occurs. For example, if a user input is “tell me each time it rains this week,” then the frequency data may indicate “each time for 7 days.” Another example user input may be “notify me the next two times my prescription is ready,” and the frequency data may be “two/twice.” Another example user input may be “notify me when my package is delivered,” and the system may determine that the user wants to receive a one-time notification, determining the frequency data to be “one.” In some embodiments, the notification manager275may ask the user if he/she wants to apply any privacy controls, regardless of the trigger type or action type. The user may provide the privacy input data420, and the notification manager275may store the registration data415as including the privacy input data420. In some cases, the user may indicate privacy controls to be applied when providing the input to generate the registration data. For example, the user input may be “Alexa, tell me when I get an email from Joe, and require a passcode before notifying me.” The privacy control component285may determine, from the NLU data, that the user input includes privacy controls, and may determine the privacy input data420from the user input. The notification manager275may store the registration data415as including the privacy input data420. 
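One simple way the frequency data discussed above might be derived from the phrasing of the user input is sketched below; the patterns and returned values are illustrative assumptions rather than a disclosed implementation:

```python
import re


def determine_frequency(user_input: str) -> str:
    """Toy mapping from request phrasing to frequency data for the registration data."""
    text = user_input.lower()
    if "each time" in text or "every time" in text:
        # e.g. "tell me each time it rains this week" -> recurring for the stated window
        return "each time for 7 days" if "this week" in text else "each time"
    match = re.search(r"next (two|three|\d+) times?", text)
    if match:
        word_to_num = {"two": "2", "three": "3"}
        return word_to_num.get(match.group(1), match.group(1))
    # Default assumption: a one-time notification, e.g. "notify me when my package is delivered"
    return "one"


print(determine_frequency("tell me each time it rains this week"))                    # each time for 7 days
print(determine_frequency("notify me the next two times my prescription is ready"))   # 2
print(determine_frequency("notify me when my package is delivered"))                  # one
```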
FIG.4Bis a conceptual diagram of the notification manager275to generate output data, using sensitive data controls, when an event occurs according to embodiments of the present disclosure. The notification manager275determines when an event triggers an output according to the user request (as described in relation toFIG.1B). A skill component290may provide event data450indicating occurrence of an event. The notification manager275may process the event data450and the trigger data402from the profile storage270to determine if the event data450triggers an action associated with the trigger data402. If the action is triggered, then the notification manager275determines content data460and action data462to be provided to the user in response to the event occurring. The privacy control component285processes the content data460and the action data462to determine one or more privacy controls associated with the output. In some embodiments, the privacy control component285may determine that privacy controls should be applied based on the content of the output, for example, when the privacy control component285determines that the output includes sensitive data. In a non-limiting example, the notification manager275may have generated registration data as described above based on the user saying “Alexa, notify me when I get an email from Joe.” A skill component290may generate event data450based on the user profile receiving an email. In this case, the event data450may include the name/email address of the sender (e.g., Joe). The notification manager275may determine, using the event data450and the trigger data402, that the user wants to be notified of this event because it is an email from Joe. The notification manager275may generate the action data462indicating a notification is to be sent to one or more devices associated with the user profile. The notification manager275may generate the content data460indicating the content of the notification as being “you have received an email from Joe.” In this case, the privacy control component285may determine that the content data460and the action data462do not indicate that any privacy controls need to be applied because no sensitive data is being outputted. The privacy control component285may also determine that the user did not specify any privacy controls that should be applied in this case. In another example, the notification manager275may have generated registration data as described above based on the user saying “Alexa, tell me when my prescription for Asthma is ready.” A skill component290may generate event data450based on data indicating that a prescription associated with the user profile is ready for pickup at a pharmacy. Using the event data450and the trigger data402, the notification manager275may determine that the user wants to be notified of this event. The notification manager275may generate the action data462indicating a notification is to be sent to one or more devices associated with the user profile. The notification manager275may generate the content data460indicating the content of the notification as being “your prescription for Asthma is ready for pickup.” The privacy control component285may process this content data460, determine that it includes sensitive data relating to medical information, and determine to apply a privacy control to the output such that the output/notification does not include the sensitive data. 
For example, the privacy control component285may output a notification including “you have a prescription ready” or “you have a medical/pharmacy notification.” In this manner, the notification manager275determines that the user may not want other persons to know their private medical information, and limits the information provided in the notification. Continuing with the example, the privacy control component285, in another case, may process the action data462and may determine that an audio notification is to be provided, such that, other persons near the device110that is outputting the notification may be able to hear the content of the notification. Based on the device and the type of notification to be provided, the privacy control component285may determine to apply privacy controls to the output. In another case, the privacy control component285may process the action data462and determine that a visual notification is to be provided to a mobile device110associated with the user profile. The privacy control component285may determine that the mobile device110is designated as a personal device with respect to the user profile. Based on this, the privacy control component285may determine to not apply any privacy controls to the output, even though the content data460may include sensitive data, because the notification is being provided to a personal user device. In some embodiments, the privacy control component285may determine the privacy control associated with the registration data by retrieving privacy control data from the profile storage270associated with the user profile, where the privacy control data indicates the privacy settings defined/specified by the user when providing the user request. In a non-limiting example, the notification manager275may have stored registration data as described above based on the user saying “Alexa, tell me when I receive an email from Joe.” The system(s)120may respond by asking the user if they want to set any privacy controls:“do you want to enable password/pin protection for this notification?” The user may respond “yes” and may provide a password/pin. The notification manager275stores the provided password/pin in profile storage270and associates it with the stored registration data. When event data450indicating that an email from Joe is received is processed by the notification manager275to determine the content data460and the action data462, the privacy control component285determines that there is a privacy control (password/pin protection) associated with this notification. The privacy control component285generates the output470accordingly, by requesting the user for the password/pin. For example, the system(s)120may notify the user of a new email without providing details until the password/pin is received, and may output “you have a new email. Please provide your password/pin to receive more details.” In this manner, a user-specified privacy control is applied by the notification manager275when an output is triggered based on an event occurrence. The privacy control component285may determine output data corresponding to the triggered event (e.g., in response to the user request “tell me when my prescription for Asthma is ready” the notification output may be “your prescription for Asthma is ready.”). 
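The password/pin example above may be sketched, in highly simplified form, as follows. The class name, fields, and messages are hypothetical and are used only to illustrate withholding details until the code stored with the registration data415is supplied:

```python
from dataclasses import dataclass
from typing import Optional


# Hypothetical gate mirroring the password/pin example: details of a triggered notification
# are withheld until the code stored with the registration data is supplied.
@dataclass
class PrivacyGate:
    required_pin: Optional[str]  # pin stored with the registration data, if any

    def initial_output(self, full_content: str, generic_content: str) -> str:
        if self.required_pin is None:
            return full_content
        return generic_content + " Please provide your password/pin to receive more details."

    def detailed_output(self, full_content: str, supplied_pin: str) -> str:
        if self.required_pin is not None and supplied_pin != self.required_pin:
            return "The provided password/pin could not be verified."
        return full_content


gate = PrivacyGate(required_pin="4321")
print(gate.initial_output("You have a new email from Joe.", "You have a new email."))
print(gate.detailed_output("You have a new email from Joe.", supplied_pin="4321"))
```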
In applying the privacy controls, the privacy control component285modifies the output data to generate the output470, where the modified output470does not include the sensitive data (e.g., output470may be the notification “you have a medical notification.”). In some embodiments, the modified output470may also include a request for authentication data to receive the sensitive data (e.g., output470may include “please authenticate using voice to receive additional information regarding your prescription.”). In some embodiments, the privacy control component285may employ NLG techniques to determine output data including non-sensitive data or not including the sensitive data. The privacy control component285may generate a summary of the sensitive data such that the summary does not include the sensitive data. Using NLG techniques and the summary of the sensitive data, the privacy control component285may generate the modified output470corresponding to non-sensitive data. The privacy control component285may determine data that generally refers to the sensitive data or the category of the sensitive data. Using NLG techniques and the general reference to the sensitive data, the privacy control component285may generate the modified output470corresponding to non-sensitive data. The privacy control component285may determine the modified output470by removing or deleting the sensitive data from the original output that includes the sensitive data. The privacy control component285, as described herein, is configured to determine whether privacy controls should be applied when a user requests to receive an output in the future when an event occurs, and also determine whether privacy controls should be applied when an output is provided to the user. During generation of the registration data, the privacy control component285may process the trigger data402and the action data404to determine whether privacy controls should be applied and offered to the user based on determining that an output may include sensitive data. During generation of output data in response to an event occurrence, the privacy control component285may process the content data460and the action data462to determine if privacy controls should be applied to the output based on the output including sensitive data. Although the described examples refer to applying privacy controls to a user request/intent to receive an output in the future when an event occurs, it should be understood that the functionalities of the system(s)120and the privacy control component285can be performed with respect to the system(s)120generating an output for presentation to the user. In such cases, the orchestrator230may provide the output data to the privacy control component285, and the privacy control component285(as described above) may determine that the output data includes sensitive data or causes the device to output/display/announce sensitive data. The privacy control component285may determine to apply a privacy control (as described herein) to ensure that sensitive data is not outputted without authenticating the user identity or without user approval. The privacy control component285may determine an output includes sensitive data using various methods. In some embodiments, the privacy control component285may be a rule-based engine that determines, based on the type of trigger, type of action, and/or type of data to be outputted, whether privacy controls should be applied. 
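The output-modification step described above might, in a highly simplified form, look like the following sketch, which substitutes a small keyword table for the NLG techniques mentioned; the terms, categories, and messages are illustrative assumptions:

```python
# Hypothetical, highly simplified stand-in for the redaction/generalization step:
# sensitive terms are replaced by a reference to their category, and a request for
# authentication is appended. A real system would rely on NLG rather than templates.
SENSITIVE_TERMS = {
    "asthma": "medical",
    "prescription": "medical",
    "joe": "personal correspondence",
}


def redact_output(output_text: str) -> str:
    categories = {
        category
        for term, category in SENSITIVE_TERMS.items()
        if term in output_text.lower()
    }
    if not categories:
        return output_text  # nothing sensitive detected; leave the output unchanged
    category = sorted(categories)[0]
    return (
        f"You have a {category} notification. "
        "Please authenticate to receive additional information."
    )


print(redact_output("Your prescription for Asthma is ready for pickup."))
# You have a medical notification. Please authenticate to receive additional information.
print(redact_output("It will rain tomorrow."))  # unchanged
```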
Examples of when privacy control may be applied include when the output data relates to or includes medical information, health-related information, adult content, private/personal correspondence information, personal identification information, etc. In some embodiments, the privacy control component285may be a machine-learning model configured to determine whether the trigger data402, the action data404, the content data460, and/or the action data462indicate that privacy controls should be applied for the particular output. The machine-learning model may be trained using trigger data, action data and/or content data labeled as requiring privacy control, and trigger data, action data and content data labeled as not requiring privacy control. The machine-learning model may process the trigger data402, the action data404, the content data460and the action data462and determine, based on the similarity between them and the training data, whether privacy controls should be applied in the particular case. In some embodiments, the privacy control component285may determine whether privacy controls should be applied for the particular output based on the type/category of the trigger data/event. For example, for outputs/events that relate to health or medical information/category, the privacy control component285may always offer privacy controls to the user. In another example, for outputs/events that relate to a taxi/ride booking, the privacy control component285may not offer any privacy controls to the user. In some embodiments, the privacy control component285may determine whether privacy controls should be applied for the trigger data402, the action data404, the content data460, and/or the action data462based on whether other users have applied privacy controls to similar outputs. For example, if other users frequently request privacy controls for outputs relating to certain smart-home functions, such as unlocking the front door or another door within the home, then the privacy control component285may determine to apply privacy controls (when generating the registration data or when generating the output in response to event occurrence) to an output causing a door associated with the user profile to be unlocked. In some embodiments, the privacy control component285may determine the form of privacy control to apply based on the trigger data402or the action data404. The form of privacy control refers to a type of authentication that a user may provide to receive sensitive data or data indicated by the user that requires authentication. The type of privacy control may involve modifying the content of the output, sending the output to a particular device (based on user presence data, personal device designation, device type, etc.), and may also involve modifying the type of output (e.g., send a visual notification instead of an audible notification). The type of user authentication may depend upon the type of sensitive data to be included in the output. User authentication may also depend upon the type of data being accessed. Each type of output and/or type of data may have a threshold confidence associated therewith. The threshold confidence may be used by the system to determine one or more data input techniques to use to authenticate the user. The privacy control may additionally be configured according to a contextual situation of a user. 
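By way of a non-limiting illustration, a rule-based privacy determination of the kind described above, together with a per-category authentication type and threshold confidence, might be sketched as follows; the categories, authentication types, and threshold values are assumptions made for illustration only:

```python
# Hypothetical rule table: output/trigger categories mapped to whether privacy controls
# apply, which authentication type to request, and a required confidence threshold.
PRIVACY_RULES = {
    "medical":        {"apply": True,  "auth_type": "voice_recognition", "threshold": 0.90},
    "banking":        {"apply": True,  "auth_type": "fingerprint",       "threshold": 0.95},
    "correspondence": {"apply": True,  "auth_type": "password",          "threshold": 0.80},
    "weather":        {"apply": False, "auth_type": None,                "threshold": 0.0},
    "ride_booking":   {"apply": False, "auth_type": None,                "threshold": 0.0},
}


def privacy_decision(category: str) -> dict:
    """Return the privacy-control decision for a trigger/content category."""
    # Unknown categories default to offering privacy controls, a conservative assumption.
    return PRIVACY_RULES.get(
        category, {"apply": True, "auth_type": "password", "threshold": 0.80}
    )


print(privacy_decision("medical"))       # privacy controls with voice recognition
print(privacy_decision("ride_booking"))  # no privacy controls offered
```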
If the user is located a threshold distance away from a device, user authentication may involve analyzing speech captured by a microphone or microphone array and/or analyzing one or more images captured by a camera. If the user is, instead, located within a threshold distance of a device, user authentication may involve analyzing an input passcode and/or analyzing input biometric data. Various other combinations of user authentication techniques may be used. The system(s)120may determine threshold user authentication confidence score data that may represent a threshold user authentication confidence score required prior to providing user access to the sensitive data. Each type of sensitive data may have a different user authentication type that must be satisfied. For example, sensitive data corresponding to banking information may have a first user authentication type (e.g., fingerprint recognition), sensitive data corresponding to personal correspondence may have a second user authentication type (e.g., password), etc. The user authentication type may be specific to the data included in the output. For example, if the output includes information (such as name) related to a prescription, then the user authentication type may be voice recognition, whereas if the output did not include the name of the prescription, then the user authentication type may be a password or other user identification data. The system(s)120may determine the user authentication type based on the output device type and the capabilities of the output device type. For example, a speech-controlled device may be capable of capturing audio and/or image data, a wearable device (e.g., a smart watch) may capture a pulse, a mobile device may be capable of capturing a fingerprint, a facial scan, or a retina scan, a keyboard may capture a password, etc. In some embodiments, the user may specify how the sensitive data should be presented. For example, the user may indicate (via a voice input, graphical user interface input, or other types of input) that the sensitive data is announced via a speaker, displayed on a screen, provided via a message (SMS, email, push notification, etc.), or otherwise provided to the user in another manner. The user may also indicate via which device the sensitive data is to be provided, for example, via a smartphone, a speech-controlled device, a smartwatch, or any of the devices110shown inFIG.9. One or more of the herein described system(s)120components may implement one or more trained machine learning models. Various machine learning techniques may be used to train and operate such models. Such techniques may include, for example, neural networks (such as deep neural networks and/or recurrent neural networks), inference engines, trained classifiers, etc. Examples of trained classifiers include Support Vector Machines (SVMs), neural networks, decision trees, AdaBoost (short for “Adaptive Boosting”) combined with decision trees, and random forests. Focusing on SVM as an example, SVM is a supervised learning model with associated learning algorithms that analyze data and recognize patterns in the data, and which are commonly used for classification and regression analysis. 
Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. More complex SVM models may be built with the training set identifying more than two categories, with the SVM determining which category is most similar to input data. An SVM model may be mapped so that the examples of the separate categories are divided by clear gaps. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gaps they fall on. Classifiers may issue a “score” indicating which category the data most closely matches. The score may provide an indication of how closely the data matches the category. In order to apply the machine learning techniques, the machine learning processes themselves need to be trained. Training a machine learning component such as, in this case, one of the trained models, requires establishing a “ground truth” for the training examples. In machine learning, the term “ground truth” refers to the accuracy of a training set's classification for supervised learning techniques. Various techniques may be used to train the models including backpropagation, statistical learning, supervised learning, semi-supervised learning, stochastic learning, or other known techniques. Neural networks may also be used to perform ASR processing including acoustic model processing and language model processing. In the case where an acoustic model uses a neural network, each node of the neural network input layer may represent an acoustic feature of a feature vector of acoustic features, such as those that may be output after the first pass of performing speech recognition, and each node of the output layer represents a score corresponding to a subword unit (such as a phone, triphone, etc.) and/or associated states that may correspond to the sound represented by the feature vector. For a given input to the neural network, it outputs a number of potential outputs each with an assigned score representing a probability that the particular output is the correct output given the particular input. The top scoring output of an acoustic model neural network may then be fed into an HMM which may determine transitions between sounds prior to passing the results to a language model. In the case where a language model uses a neural network, each node of the neural network input layer may represent a previous word and each node of the output layer may represent a potential next word as determined by the trained neural network language model. As a language model may be configured as a recurrent neural network which incorporates some history of words processed by the neural network, the prediction of the potential next word may be based on previous words in an utterance and not just on the most recent word. The language model neural network may also output weighted predictions for the next word. Processing by a neural network is determined by the learned weights on each node input and the structure of the network. Given a particular input, the neural network determines the output one layer at a time until the output layer of the entire network is calculated. Connection weights may be initially learned by the neural network during training, where given inputs are associated with known outputs. In a set of training data, a variety of training examples are fed into the network. 
Each example typically sets the weights of the correct connections from input to output to 1 and gives all connections a weight of 0. In another embodiment, the initial connection weights are assigned randomly. As examples in the training data are processed by the neural network, an input may be sent to the network and compared with the associated output to determine how the network performance compares to the target performance. Using a training technique, such as back propagation, the weights of the neural network may be updated to reduce errors made by the neural network when processing the training data. The system(s)120may include a user recognition component295that recognizes one or more users using a variety of data. As illustrated inFIG.5, the user recognition component295may include one or more subcomponents including a vision component508, an audio component510, a biometric component512, a radio frequency (RF) component514, a machine learning (ML) component516, and a recognition confidence component518. In some instances, the user recognition component295may monitor data and determinations from one or more subcomponents to determine an identity of one or more users associated with data input to the system(s)120. The user recognition component295may output user recognition data595, which may include a user identifier associated with a user the user recognition component295believes originated data input to the system(s)120. The user recognition data595may be used to inform processes performed by various components of the system(s)120. The vision component508may receive data from one or more sensors capable of providing images (e.g., cameras) or sensors indicating motion (e.g., motion sensors). The vision component508can perform facial recognition or image analysis to determine an identity of a user and to associate that identity with a user profile associated with the user. In some instances, when a user is facing a camera, the vision component508may perform facial recognition and identify the user with a high degree of confidence. In other instances, the vision component508may have a low degree of confidence of an identity of a user, and the user recognition component295may utilize determinations from additional components to determine an identity of a user. The vision component508can be used in conjunction with other components to determine an identity of a user. For example, the user recognition component295may use data from the vision component508with data from the audio component510to identify what user's face appears to be speaking at the same time audio is captured by a device110the user is facing for purposes of identifying a user who spoke an input to the system(s)120. The overall system of the present disclosure may include biometric sensors that transmit data to the biometric component512. For example, the biometric component512may receive data corresponding to fingerprints, iris or retina scans, thermal scans, weights of users, a size of a user, pressure (e.g., within floor sensors), etc., and may determine a biometric profile corresponding to a user. The biometric component512may distinguish between a user and sound from a television, for example. Thus, the biometric component512may incorporate biometric information into a confidence level for determining an identity of a user. Biometric information output by the biometric component512can be associated with specific user profile data such that the biometric information uniquely identifies a user profile of a user. 
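By way of illustration, the determinations of the individual subcomponents might be combined into per-user scores as in the following sketch; the weights, component names, and values are hypothetical, and the actual recognition confidence component518is discussed further below:

```python
# Hypothetical fusion of per-subcomponent confidences (vision 508, audio 510,
# biometric 512, RF 514, ML 516) into a single per-user score, loosely in the
# spirit of the recognition confidence component 518 discussed below.
SUBCOMPONENT_WEIGHTS = {"vision": 0.3, "audio": 0.3, "biometric": 0.25, "rf": 0.1, "ml": 0.05}


def fuse_confidences(per_user_scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """per_user_scores maps user_id -> {subcomponent: confidence in [0, 1]}."""
    fused = {}
    for user_id, scores in per_user_scores.items():
        fused[user_id] = sum(
            SUBCOMPONENT_WEIGHTS[name] * value
            for name, value in scores.items()
            if name in SUBCOMPONENT_WEIGHTS
        )
    return fused


scores = {
    "user_a": {"vision": 0.9, "audio": 0.8, "biometric": 0.0, "rf": 0.7, "ml": 0.6},
    "user_b": {"vision": 0.1, "audio": 0.3, "biometric": 0.0, "rf": 0.2, "ml": 0.4},
}
print(fuse_confidences(scores))  # user_a scores well above user_b
```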
The RF component514may use RF localization to track devices that a user may carry or wear. For example, a user (and a user profile associated with the user) may be associated with a device. The device may emit RF signals (e.g., Wi-Fi, Bluetooth®, etc.). A device may detect the signal and indicate to the RF component514the strength of the signal (e.g., as a received signal strength indication (RSSI)). The RF component514may use the RSSI to determine an identity of a user (with an associated confidence level). In some instances, the RF component514may determine that a received RF signal is associated with a mobile device that is associated with a particular user identifier. In some instances, a device110may include some RF or other detection processing capabilities so that a user who speaks an input may scan, tap, or otherwise acknowledge his/her personal device (such as a phone) to the device110. In this manner, the user may “register” with the system(s)120for purposes of the system(s)120determining who spoke a particular input. Such a registration may occur prior to, during, or after speaking of an input. The ML component516may track the behavior of various users as a factor in determining a confidence level of the identity of the user. By way of example, a user may adhere to a regular schedule such that the user is at a first location during the day (e.g., at work or at school). In this example, the ML component516would factor in past behavior and/or trends in determining the identity of the user that provided input to the system(s)120. Thus, the ML component516may use historical data and/or usage patterns over time to increase or decrease a confidence level of an identity of a user. In at least some instances, the recognition confidence component518receives determinations from the various components508,510,512,514, and516, and may determine a final confidence level associated with the identity of a user. In some instances, the confidence level may determine whether an action is performed in response to a user input. For example, if a user input includes a request to unlock a door, a confidence level may need to be above a threshold that may be higher than a threshold confidence level needed to perform a user request associated with playing a playlist or sending a message. The confidence level or other score data may be included in the user recognition data595. The audio component510may receive data from one or more sensors capable of providing an audio signal (e.g., one or more microphones) to facilitate recognition of a user. The audio component510may perform audio recognition on an audio signal to determine an identity of the user and associated user identifier. In some instances, aspects of the system(s)120may be configured at a computing device (e.g., a local server). Thus, in some instances, the audio component510operating on a computing device may analyze all sound to facilitate recognition of a user. In some instances, the audio component510may perform voice recognition to determine an identity of a user. The audio component510may also perform user identification based on audio data211input into the system(s)120for speech processing. The audio component510may determine scores indicating whether speech in the audio data211originated from particular users. 
For example, a first score may indicate a likelihood that speech in the audio data211originated from a first user associated with a first user identifier, a second score may indicate a likelihood that speech in the audio data211originated from a second user associated with a second user identifier, etc. The audio component510may perform user recognition by comparing speech characteristics represented in the audio data211to stored speech characteristics of users (e.g., stored voice profiles associated with the device110that captured the spoken user input). FIG.6illustrates user recognition processing as may be performed by the user recognition component295. The ASR component250performs ASR processing on ASR feature vector data650. ASR confidence data607may be passed to the user recognition component295. The user recognition component295performs user recognition using various data including the user recognition feature vector data640, feature vectors605representing voice profiles of users of the system(s)120, the ASR confidence data607, and other data609. The user recognition component295may output the user recognition data595, which reflects a certain confidence that the user input was spoken by one or more particular users. The user recognition data595may include one or more user identifiers (e.g., corresponding to one or more voice profiles). Each user identifier in the user recognition data595may be associated with a respective confidence value, representing a likelihood that the user input corresponds to the user identifier. A confidence value may be a numeric or binned value. The feature vector(s)605input to the user recognition component295may correspond to one or more voice profiles. The user recognition component295may use the feature vector(s)605to compare against the user recognition feature vector640, representing the present user input, to determine whether the user recognition feature vector640corresponds to one or more of the feature vectors605of the voice profiles. Each feature vector605may be the same size as the user recognition feature vector640. To perform user recognition, the user recognition component295may determine the device110from which the audio data211originated. For example, the audio data211may be associated with metadata including a device identifier representing the device110. Either the device110or the system(s)120may generate the metadata. The system(s)120may determine a group profile identifier associated with the device identifier, may determine user identifiers associated with the group profile identifier, and may include the group profile identifier and/or the user identifiers in the metadata. The system(s)120may associate the metadata with the user recognition feature vector640produced from the audio data211. The user recognition component295may send a signal to voice profile storage685, with the signal requesting only audio data and/or feature vectors605(depending on whether audio data and/or corresponding feature vectors are stored) associated with the device identifier, the group profile identifier, and/or the user identifiers represented in the metadata. This limits the universe of possible feature vectors605the user recognition component295considers at runtime and thus decreases the amount of time to perform user recognition processing by decreasing the amount of feature vectors605needed to be processed. 
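A simplified sketch of the metadata-based narrowing described above is shown below; the profile record fields and matching rules are illustrative assumptions:

```python
from dataclasses import dataclass


# Hypothetical record for a stored voice profile feature vector 605 and its associations.
@dataclass
class StoredProfile:
    user_id: str
    group_id: str
    device_ids: list[str]
    feature_vector: list[float]


def candidate_profiles(profiles: list[StoredProfile], metadata: dict) -> list[StoredProfile]:
    """Keep only profiles associated with the device/group/user identifiers in the metadata,
    shrinking the universe of feature vectors compared at runtime."""
    device_id = metadata.get("device_id")
    group_id = metadata.get("group_id")
    user_ids = set(metadata.get("user_ids", []))
    return [
        p for p in profiles
        if (device_id in p.device_ids) or (p.group_id == group_id) or (p.user_id in user_ids)
    ]


profiles = [
    StoredProfile("user_a", "household_1", ["kitchen_echo"], [0.1, 0.2]),
    StoredProfile("user_b", "household_1", ["bedroom_echo"], [0.3, 0.1]),
    StoredProfile("user_c", "household_2", ["office_echo"], [0.9, 0.8]),
]
print(candidate_profiles(profiles, {"device_id": "kitchen_echo", "group_id": "household_1"}))
```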
Alternatively, the user recognition component295may access all (or some other subset of) the audio data and/or feature vectors605available to the user recognition component295. However, accessing all audio data and/or feature vectors605will likely increase the amount of time needed to perform user recognition processing based on the magnitude of audio data and/or feature vectors605to be processed. If the user recognition component295receives audio data from the voice profile storage685, the user recognition component295may generate one or more feature vectors605corresponding to the received audio data. The user recognition component295may attempt to identify the user that spoke the speech represented in the audio data211by comparing the user recognition feature vector640to the feature vector(s)605. The user recognition component295may include a scoring component622that determines respective scores indicating whether the user input (represented by the user recognition feature vector640) was spoken by one or more particular users (represented by the feature vector(s)605). The user recognition component295may also include a confidence component624that determines an overall accuracy of user recognition processing (such as those of the scoring component622) and/or an individual confidence value with respect to each user potentially identified by the scoring component622. The output from the scoring component622may include a different confidence value for each received feature vector605. For example, the output may include a first confidence value for a first feature vector605a(representing a first voice profile), a second confidence value for a second feature vector605b(representing a second voice profile), etc. Although illustrated as two separate components, the scoring component622and the confidence component624may be combined into a single component or may be separated into more than two components. The scoring component622and the confidence component624may implement one or more trained machine learning models (such as neural networks, classifiers, etc.) as known in the art. For example, the scoring component622may use probabilistic linear discriminant analysis (PLDA) techniques. PLDA scoring determines how likely it is that the user recognition feature vector640corresponds to a particular feature vector605. The PLDA scoring may generate a confidence value for each feature vector605considered and may output a list of confidence values associated with respective user identifiers. The scoring component622may also use other techniques, such as GMMs, generative Bayesian models, or the like, to determine confidence values. The confidence component624may input various data including information about the ASR confidence607, speech length (e.g., number of frames or other measured length of the user input), audio condition/quality data (such as signal-to-interference data or other metric data), fingerprint data, image data, or other factors to consider how confident the user recognition component295is with regard to the confidence values linking users to the user input. The confidence component624may also consider the confidence values and associated identifiers output by the scoring component622. For example, the confidence component624may determine that a lower ASR confidence607, or poor audio quality, or other factors, may result in a lower confidence of the user recognition component295. 
Conversely, a higher ASR confidence607, or better audio quality, or other factors, may result in a higher confidence of the user recognition component295. Precise determination of the confidence may depend on configuration and training of the confidence component624and the model(s) implemented thereby. The confidence component624may operate using a number of different machine learning models/techniques such as GMM, neural networks, etc. For example, the confidence component624may be a classifier configured to map a score output by the scoring component622to a confidence value. The user recognition component295may output user recognition data595specific to one or more user identifiers. For example, the user recognition component295may output user recognition data595with respect to each received feature vector605. The user recognition data595may include numeric confidence values (e.g., 0.0-1.0, 0-1000, or whatever scale the system is configured to operate on). Thus, the user recognition data595may include an n-best list of potential users with numeric confidence values (e.g., user identifier 123—0.2, user identifier 234—0.8). Alternatively or in addition, the user recognition data595may include binned confidence values. For example, a computed recognition score of a first range (e.g., 0.0-0.33) may be output as “low,” a computed recognition score of a second range (e.g., 0.34-0.66) may be output as “medium,” and a computed recognition score of a third range (e.g., 0.67-1.0) may be output as “high.” The user recognition component295may output an n-best list of user identifiers with binned confidence values (e.g., user identifier 123—low, user identifier 234—high). Combined binned and numeric confidence value outputs are also possible. Rather than a list of identifiers and their respective confidence values, the user recognition data595may only include information related to the top scoring identifier as determined by the user recognition component295. The user recognition component295may also output an overall confidence value that the individual confidence values are correct, where the overall confidence value indicates how confident the user recognition component295is in the output results. The confidence component624may determine the overall confidence value. The confidence component624may determine differences between individual confidence values when determining the user recognition data595. For example, if a difference between a first confidence value and a second confidence value is large, and the first confidence value is above a threshold confidence value, then the user recognition component295is able to recognize a first user (associated with the feature vector605associated with the first confidence value) as the user that spoke the user input with a higher confidence than if the difference between the confidence values were smaller. The user recognition component295may perform thresholding to avoid incorrect user recognition data595being output. For example, the user recognition component295may compare a confidence value output by the confidence component624to a threshold confidence value. If the confidence value does not satisfy (e.g., does not meet or exceed) the threshold confidence value, the user recognition component295may not output user recognition data595, or may only include in that data595an indicator that a user that spoke the user input could not be recognized. 
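The binning and thresholding behavior described above may be sketched as follows, using the example score ranges given above and an assumed threshold value:

```python
def bin_confidence(score: float) -> str:
    """Map a numeric recognition score to a binned value using the example ranges above."""
    if score <= 0.33:
        return "low"
    if score <= 0.66:
        return "medium"
    return "high"


def threshold_n_best(n_best: dict[str, float], threshold: float = 0.7) -> dict[str, str]:
    """Drop candidates whose confidence does not satisfy the threshold, then bin the rest."""
    kept = {user: bin_confidence(score) for user, score in n_best.items() if score >= threshold}
    return kept or {"unrecognized": "n/a"}  # nothing satisfied the threshold


print(threshold_n_best({"user identifier 123": 0.2, "user identifier 234": 0.8}))
# {'user identifier 234': 'high'}
```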
Further, the user recognition component295may not output user recognition data595until enough user recognition feature vector data640is accumulated and processed to verify a user above a threshold confidence value. Thus, the user recognition component295may wait until a sufficient threshold quantity of audio data of the user input has been processed before outputting user recognition data595. The quantity of received audio data may also be considered by the confidence component624. The user recognition component295may be defaulted to output binned (e.g., low, medium, high) user recognition confidence values. However, such may be problematic in certain situations. For example, if the user recognition component295computes a single binned confidence value for multiple feature vectors605, the system may not be able to determine which particular user originated the user input. In this situation, the user recognition component295may override its default setting and output numeric confidence values. This enables the system to determine that the user associated with the highest numeric confidence value originated the user input. The user recognition component295may use other data609to inform user recognition processing. A trained model(s) or other component of the user recognition component295may be trained to take other data609as an input feature when performing user recognition processing. Other data609may include a variety of data types depending on system configuration and may be made available from other sensors, devices, or storage. The other data609may include a time of day at which the audio data211was generated by the device110or received from the device110, a day of a week in which the audio data211was generated by the device110or received from the device110, etc. The other data609may include image data or video data. For example, facial recognition may be performed on image data or video data received from the device110from which the audio data211was received (or another device). Facial recognition may be performed by the user recognition component295. The output of facial recognition processing may be used by the user recognition component295. That is, facial recognition output data may be used in conjunction with the comparison of the user recognition feature vector640and one or more feature vectors605to perform more accurate user recognition processing. The other data609may include location data of the device110. The location data may be specific to a building within which the device110is located. For example, if the device110is located in user A's bedroom, such location may increase a user recognition confidence value associated with user A and/or decrease a user recognition confidence value associated with user B. The other data609may include data indicating a type of the device110. Different types of devices may include, for example, a smart watch, a smart phone, a tablet, and a vehicle. The type of the device110may be indicated in a profile associated with the device110. For example, if the device110from which the audio data211was received is a smart watch or vehicle belonging to a user A, the fact that the device110belongs to user A may increase a user recognition confidence value associated with user A and/or decrease a user recognition confidence value associated with user B. The other data609may include geographic coordinate data associated with the device110. For example, a group profile associated with a vehicle may indicate multiple users (e.g., user A and user B). 
The vehicle may include a global positioning system (GPS) indicating latitude and longitude coordinates of the vehicle when the vehicle generated the audio data211. As such, if the vehicle is located at a coordinate corresponding to a work location/building of user A, such may increase a user recognition confidence value associated with user A and/or decrease user recognition confidence values of all other users indicated in a group profile associated with the vehicle. A profile associated with the device110may indicate global coordinates and associated locations (e.g., work, home, etc.). One or more user profiles may also or alternatively indicate the global coordinates. The other data609may include data representing activity of a particular user that may be useful in performing user recognition processing. For example, a user may have recently entered a code to disable a home security alarm. A device110, represented in a group profile associated with the home, may have generated the audio data211. The other data609may reflect signals from the home security alarm about the disabling user, time of disabling, etc. If a mobile device (such as a smart phone, Tile, dongle, or other device) known to be associated with a particular user is detected proximate to (for example physically close to, connected to the same WiFi network as, or otherwise nearby) the device110, this may be reflected in the other data609and considered by the user recognition component295. Depending on system configuration, the other data609may be configured to be included in the user recognition feature vector data640so that all the data relating to the user input to be processed by the scoring component622may be included in a single feature vector. Alternatively, the other data609may be reflected in one or more different data structures to be processed by the scoring component622. FIG.7is a block diagram conceptually illustrating a device110that may be used with the system.FIG.8is a block diagram conceptually illustrating example components of a remote device, such as the system(s)120, which may assist with ASR processing, NLU processing, etc., and the skill system(s)225. A system (120/225) may include one or more servers. A “server” as used herein may refer to a traditional server as understood in a server/client computing structure but may also refer to a number of different computing components that may assist with the operations discussed herein. For example, a server may include one or more physical computing components (such as a rack server) that are connected to other devices/components either physically and/or over a network and is capable of performing computing operations. A server may also include one or more virtual machines that emulates a computer system and is run on one or across multiple devices. A server may also include other combinations of hardware, software, firmware, or the like to perform operations discussed herein. The server(s) may be configured to operate using one or more of a client-server model, a computer bureau model, grid computing techniques, fog computing techniques, mainframe techniques, utility computing techniques, a peer-to-peer model, sandbox techniques, or other computing techniques. Multiple systems (120/225) may be included in the overall system of the present disclosure, such as one or more systems120for performing ASR processing, one or more systems120for performing NLU processing, one or more skill systems225for performing actions responsive to user inputs, etc. 
In operation, each of these systems may include computer-readable and computer-executable instructions that reside on the respective device (120/225), as will be discussed further below. Each of these devices (110/120/225) may include one or more controllers/processors (704/804), which may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory (706/806) for storing data and instructions of the respective device. The memories (706/806) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive memory (MRAM), and/or other types of memory. Each device (110/120/225) may also include a data storage component (708/808) for storing data and controller/processor-executable instructions. Each data storage component (708/808) may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each device (110/120/225) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces (702/802). Computer instructions for operating each device (110/120/225) and its various components may be executed by the respective device's controller(s)/processor(s) (704/804), using the memory (706/806) as temporary “working” storage at runtime. A device's computer instructions may be stored in a non-transitory manner in non-volatile memory (706/806), storage (708/808), or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software. Each device (110/120/225) includes input/output device interfaces (702/802). A variety of components may be connected through the input/output device interfaces (702/802), as will be discussed further below. Additionally, each device (110/120/225) may include an address/data bus (724/824) for conveying data among components of the respective device. Each component within a device (110/120/225) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus (724/824). Referring toFIG.7, the device110may include input/output device interfaces702that connect to a variety of components such as an audio output component such as a speaker712, a wired headset or a wireless headset (not illustrated), or other component capable of outputting audio. The device110may also include an audio capture component. The audio capture component may be, for example, a microphone720or array of microphones, a wired headset or a wireless headset (not illustrated), etc. If an array of microphones is included, approximate distance to a sound's point of origin may be determined by acoustic localization based on time and amplitude differences between sounds captured by different microphones of the array. The device110may additionally include a display716for displaying content. The device110may further include a camera718. Via antenna(s)714, the input/output device interfaces702may connect to one or more networks199via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, 4G network, 5G network, etc. 
A wired connection such as Ethernet may also be supported. Through the network(s)199, the system may be distributed across a networked environment. The I/O device interface (702/802) may also include communication components that allow data to be exchanged between devices such as different physical servers in a collection of servers or other components. The components of the device(s)110, the system(s)120, or the skill system(s)225may include their own dedicated processors, memory, and/or storage. Alternatively, one or more of the components of the device(s)110, the system(s)120, or the skill system(s)225may utilize the I/O interfaces (702/802), processor(s) (704/804), memory (706/806), and/or storage (708/808) of the device(s)110system(s)120, or the skill system(s)225, respectively. Thus, the ASR component250may have its own I/O interface(s), processor(s), memory, and/or storage; the NLU component260may have its own I/O interface(s), processor(s), memory, and/or storage; and so forth for the various components discussed herein. As noted above, multiple devices may be employed in a single system. In such a multi-device system, each of the devices may include different components for performing different aspects of the system's processing. The multiple devices may include overlapping components. The components of the device110, the system(s)120, and the skill system(s)225, as described herein, are illustrative, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system. As illustrated inFIG.9, multiple devices (110a-110j,120,225) may contain components of the system and the devices may be connected over a network(s)199. The network(s)199may include a local or private network or may include a wide network such as the Internet. Devices may be connected to the network(s)199through either wired or wireless connections. For example, a speech-detection device110a,a smart phone110b,a smart watch110c,a tablet computer110d,a vehicle110e,a display device110f,a smart television110g,a washer/dryer110h,a refrigerator110i,and/or a toaster110jmay be connected to the network(s)199through a wireless service provider, over a WiFi or cellular network connection, or the like. Other devices are included as network-connected support devices, such as the system(s)120, the skill system(s)225, and/or others. The support devices may connect to the network(s)199through a wired connection or wireless connection. Networked devices may capture audio using one-or-more built-in or connected microphones or other audio capture devices, with processing performed by ASR components, NLU components, or other components of the same device or another device connected via the network(s)199, such as the ASR component250, the NLU component260, etc. of one or more systems120. The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, and distributed computing environments. The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. 
Persons having ordinary skill in the field of computers and speech processing should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art that the disclosure may be practiced without some or all of the specific details and steps disclosed herein. Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, components of the system may be implemented in firmware or hardware, such as an acoustic front end (AFE), which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)). Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
110,561
11862171
DETAILED DESCRIPTION Various embodiments are generally directed to techniques for improving the accuracy of speech-to-text conversion and efficacy of associated text analytics. More specifically, a framework for the derivation of insights into the content of pieces of speech audio may incorporate a chain of pre-processing, processing and post-processing operations that are selected to provide improved insights. During pre-processing, as an alternative to the commonplace approach of simply dividing speech audio into equal-length segments without regard to its content, a combination of pause detection techniques is used to identify likely sentence pauses. Additionally, speaker diarization may also be performed to identify likely changes between speakers. The speech audio is then divided into speech segments at likely sentence pauses and/or at likely speaker changes so that the resulting speech segments are more likely to contain the pronunciations of complete sentences by individual speakers. During speech-to-text processing, the derived probability distributions associated with the identification of more likely graphemes (e.g., text characters representing phonemes) and/or pauses by an acoustic model, as well as the probability distributions associated with the identification of more likely n-grams by a language model, are used in identifying the sentences spoken in the speech audio to generate a corresponding transcript. During text analytics post-processing, the corresponding transcript is analyzed to select words that are pertinent to identifying topics or sentiments about topics, and/or analyzed along with other transcripts to identify relationships between different pieces of speech audio. Turning to the pre-processing operations, as will be familiar to those skilled in the art, many of the components employed in performing many of the processing operations of speech-to-text conversion (e.g., acoustic feature detection, acoustic models, language models, etc.) have capacity limits on how large a portion of speech audio is able to be accepted as input. Thus, speech audio must be divided into smaller portions that fit within such capacity limits. As part of an improved approach to dividing speech audio into segments, a combination of multiple pause detection techniques is used to provide improved identification of pauses in the speech audio that are likely to be pauses between sentences to enable the division of the speech audio into segments at least at the midpoints within such likely sentence pauses. By dividing speech audio at least at midpoints within likely sentence pauses to form the segments, each segment is caused to include a higher proportion of complete pronunciations of whole phonemes, whole words, whole phrases and/or whole sentences, thereby enabling greater accuracy in the performance of subsequent processing operations. Also, with fewer phonemes and/or other speech parts being split across the divides between pairs of adjacent segments, there are fewer fragments of phonemes or other speech parts to potentially cause the errant identification of extra text characters and/or words that aren't actually present. Thus, such improvements in the identification of likely sentence pauses during pre-processing serve to enable corresponding improvements in subsequent processing operations to identify text characters, whole words, phrases and/or sentences.
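As a minimal sketch of the segmentation idea described above (and assuming, purely for illustration, that likely sentence pauses are available as (start_sample, end_sample) pairs), the following Python fragment divides a run of audio samples at the midpoints of those pauses rather than into fixed-length pieces.

```python
# Minimal sketch of dividing speech audio into segments at the midpoints of
# likely sentence pauses, rather than into fixed-length pieces. The pause
# list format (start_sample, end_sample) is an assumption for illustration.

def split_at_pause_midpoints(samples, likely_pauses):
    """Return segments of `samples` cut at the midpoint of each likely pause."""
    cut_points = sorted((start + end) // 2 for start, end in likely_pauses)
    segments, previous = [], 0
    for cut in cut_points:
        segments.append(samples[previous:cut])
        previous = cut
    segments.append(samples[previous:])
    return segments

if __name__ == "__main__":
    audio = list(range(100))          # stand-in for audio samples
    pauses = [(18, 26), (55, 65)]     # likely sentence pauses, in samples
    print([len(segment) for segment in split_at_pause_midpoints(audio, pauses)])
    # -> [22, 38, 40]
```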
As will be familiar to those skilled in the art, there are many linguistic characteristics that vary greatly among the wide variety of languages that are spoken around the world. By way of example, the manner in which combinations of tone, volume, generation of vowels versus consonants, etc., are used to form words may differ greatly between languages. However, the manner in which the relative lengths of pauses are used to separate sounds within words, to separate words within sentences, and to separate sentences tend to be quite similar. More specifically, the relatively short lengths of pauses between sounds within words tend to arise more out of the time needed to reposition portions of the vocal tract when transitioning from producing one sound to producing another sound amidst pronouncing a word. In contrast, the somewhat longer lengths of pauses between words tend to be dictated more by linguistic rules that provide a mechanism to enable a listener to hear the pronunciations of individual words more easily. Similarly, the still longer lengths of pauses between sentences also tend to be dictated by linguistic rules that provide a mechanism to make clear where the speaking of one sentence ends, and the speaking of the next sentence begins. Thus, the ability to identify pauses and/or to distinguish among pauses within words, pauses between words and/or pauses between sentences may be used by each of the multiple pause detection techniques to identify likely sentence pauses at which speech audio may be divided into segments in a manner that may be independent of the language that is spoken. In preparation for the performance of the multiple pause detection techniques, the speech audio may be initially divided into equal-length chunks. The full set of chunks of the speech audio may then be provided as an input to each of multiple pause detection techniques, which may be performed, at least partially in parallel, to each independently generate its corresponding data structure specifying its corresponding set of likely sentence pauses present within the speech audio. In some embodiments, the multiple pause detection techniques may include an adaptive peak amplitude (APA) pause detection technique in which a peak amplitude is separately determined for each chunk of the speech audio, with a threshold amplitude being derived therefrom that is used to distinguish pauses from speech sounds. More precisely, the peak amplitude that occurs within each chunk is measured, and then a preselected percentile amplitude across all of peak amplitudes of all of the chunks is derived to become a threshold amplitude. With the threshold amplitude so derived, all of the chunks with a peak amplitude above the threshold amplitude are deemed to be speech chunks, while all of the chunks with a peak amplitude below the threshold amplitude are deemed to be pause chunks. In this way, the threshold amplitude used in distinguishing pauses from speech sounds is caused to be adaptive to provide some degree of resiliency in addressing differences in speech audio amplitude and/or in audio noise levels that may thwart the typical use of a fixed threshold amplitude to distinguish between pauses and speech sounds. Another adaptive mechanism may then be used to distinguish a pause occurring between sentences from other shorter pauses occurring between words or occurring within words, as well as to distinguish from still other shorter pauses that may occur as a result of various anomalies in capturing the speech audio. 
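Before turning to that adaptive mechanism, the chunk-level thresholding just described may be pictured with the following minimal Python sketch, which assumes the chunks are already available as lists of samples; the percentile value is a tunable, purely illustrative parameter.

```python
# Minimal sketch of the adaptive peak amplitude idea described above: measure
# the peak amplitude of each equal-length chunk, take a preselected percentile
# of those peaks as the threshold, and label chunks as speech or pause. The
# percentile choice here is an assumption made only for illustration.

def label_chunks_by_adaptive_threshold(chunks, percentile=20):
    """Return a 'speech'/'pause' label per chunk using an adaptive threshold."""
    peaks = [max(abs(sample) for sample in chunk) for chunk in chunks]
    ranked = sorted(peaks)
    index = min(len(ranked) - 1, int(len(ranked) * percentile / 100))
    threshold = ranked[index]
    return ["pause" if peak < threshold else "speech" for peak in peaks]

if __name__ == "__main__":
    chunks = [[0.9, -0.7], [0.02, -0.01], [0.8, 0.6], [0.03, 0.0], [0.7, -0.9]]
    print(label_chunks_by_adaptive_threshold(chunks, percentile=40))
    # -> ['speech', 'pause', 'speech', 'pause', 'speech']
```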
Starting at the beginning of the speech audio, a window that covers a preselected quantity of temporally adjacent chunks may be shifted across the length of the speech audio, starting with the earliest chunk and proceeding through temporally adjacent chunks toward the temporally latest chunk. More specifically, with the window positioned to begin with the earliest chunk, measurements of the lengths of each identified pause within the window may be taken to identify the longest pause thereamong (i.e., the pause made up of the longest set of consecutive pause chunks). The longest pause that is so identified within the window may then be deemed likely to be a sentence pause. The window may then be shifted away from the earliest chunk and along the speech audio so as to cause the window to now begin with the chunk just after the just-identified likely sentence pause. With the window so repositioned, again, measurements of the lengths of each identified pause within the window may be taken to again identify the longest pause thereamong. Again, the longest pause that is so identified within the window may be deemed likely to be a sentence pause. This may be repeated until the window has been shifted along the entirety of the length of the speech audio to the temporally latest chunk. An indication of each of the pauses that has been deemed a likely sentence pause may be added to a set of indications of likely sentence pauses identified by the APA pause detection technique, which may be stored as a distinct data structure. The length of the window may be selected to ensure that there cannot be a distance between any adjacent pair of likely sentence pauses that is greater than a capacity limitation that may be present in subsequent processing. Alternatively or additionally, it may be that instances of any adjacent pair of likely sentence pauses that are closer to each other than a predetermined threshold period of time are not permitted. Wherever such a pair of all-too-close adjacent likely sentence pauses might occur, one or the other may be removed from (or not be permitted to be added to) the set of indications of likely sentence pauses identified by the APA pause detection technique. Alternatively or additionally, in some embodiments, the multiple pause detection techniques may include the use of a connectionist temporal classification (CTC) pause detection technique in which instances of consecutive blank symbols (sometimes also referred to as “non-alphabetical symbols” or “artificial symbols”) generated by a CTC output of an acoustic model neural network trained to implement an acoustic model are used to identify likely sentence pauses. Such an acoustic model neural network incorporating a CTC output would normally be used to identify likely graphemes, such as text characters representing likely phoneme(s), in speech audio based on various acoustic features that are identified as present therein. In such normal use, the CTC output serves to augment the probabilistic indications of such text characters (graphemes) that are generated by the acoustic model neural network with blank symbols that serve to identify instances of consecutive occurrences of the same text character (e.g., the pair of “s” characters in the word “chess”), despite the absence of an acoustic feature that would specifically indicate such a situation (e.g., no acoustic feature in the pronunciation of the “s” sound in the word “chess” that indicates that there are two consecutive “s” characters therein). 
However, it has been observed through experimentation that the CTC output of such an acoustic model neural network may also be useful in identifying sentence pauses, as it has been observed that its CTC output has a tendency to generate relatively long strings of consecutive blank symbols that tend to correspond to where sentence pauses occur. In using such an acoustic model neural network for the detection of sentence pauses, each chunk is provided to the acoustic model neural network as an input, and the CTC output for that chunk is monitored for occurrences of strings of consecutive blank symbols, and the length of each such string is compared to a threshold blank string length. Each string of consecutive blank symbols that is at least as long as the threshold blank string length may be deemed to correspond to what is likely a sentence pause. In some embodiments, the threshold blank string length may be derived during training of the acoustic model neural network to implement an acoustic model, and/or during testing of the results of that training. Portions of speech audio that are known to include pauses between sentences may be provided as input to the acoustic model neural network and the lengths of the strings of consecutive blank symbols that are output may be monitored to determine what the threshold blank string length should be. Regardless of the exact manner in which the threshold blank string length is arrived at, an indication of each of the pauses that has been deemed a likely sentence pause may be added to the set of indications of likely sentence pauses identified by the CTC pause detection technique, which may be stored as a distinct data structure. It should be noted that, in some embodiments, the same acoustic model neural network with CTC output that is employed in the CTC pause detection technique during pre-processing may also be employed during the subsequent processing to perform the function for which it was trained. Specifically, that same acoustic model neural network may be used to identify likely text characters from acoustic features detected in the speech audio, including using its CTC output to augment such probabilistic indications of text characters with blank symbols indicative of instances in which there are likely consecutive occurrences of the same text character. In some embodiments, following the completion of the performances of all of the multiple pause detection techniques, the resulting multiple sets of indications of likely sentence pauses may then be combined in any of a variety of ways to generate a single set of indications that describe the manner in which the speech audio is to be divided into segments based on likely sentence pauses. However, in other embodiments, it may be that the multiple sets of indication of likely sentence pauses may, instead, be used as an input to the performance of at least one speaker diarization technique to identify instances in the speech audio at which there is a change in speaker(s). As will be familiar to those skilled in the art, while there may be instances in a conversation among two or more speakers in which at least a subset of sentence pauses may also mark instances in which there is a change in who is speaking, it is also not uncommon for there to be instances in a conversation among two or more speakers in which there are overlapping speakers, such as instances where one speaker starts speaking while not waiting for another to finish speaking. 
As a result, there may be instances where there are changes in who is speaking that are not coincident with any form of pause. Therefore, it may be deemed desirable to use at least one speaker diarization technique to identify instances in the speech audio at which it is likely there was a change in speakers to further enhance the segmentation of the speech audio that is to be performed in preparation for the subsequent speech-to-text processing operations. In some embodiments, a speaker diarization technique that may be used may include the use of a speaker diarization neural network that has been trained to generate speaker vectors that are each indicative of various vocal characteristics of a speaker (or of a combination of speakers). More precisely, such a speaker diarization neural network may be trained to derive binary values that each indicate the presence or absence of a particular vocal characteristic, and/or to derive numeric values that each indicate a measure (e.g., a level) associated with a particular vocal characteristic. These binary and/or numeric values of various vocal characteristics may be combined into a speaker vector (e.g., a one-dimensional array of those binary and/or numeric values). In a manner somewhat similar to each of the aforedescribed pause detection techniques, it may be that the speech audio is, again, divided into equal-length chunks. Following this division into chunks, each chunk may be further divided into fragments. Following this division into fragments, the separate sets of indications of likely sentence pauses derived by each of the pause detection techniques may then be used to identify, within each chunk, any fragments that likely include a sentence pause such that there is at least a portion of the speech audio within such fragments that likely does not include speech sounds. Such “non-speech” fragments may then be removed from each chunk. Following such removal of non-speech fragment(s) from each chunk, each remaining fragment of each chunk may then be provided as an input to the speaker diarization neural network so that a separate speaker vector is generated by the speaker diarization neural network for each fragment. For each chunk, the speaker vectors that are generated from the fragments within that chunk may be used together to identify all of the speakers who spoke within the portion of speech audio represented by that chunk, as well as each occurrence of a change in speakers that occurred during that portion of speech audio. It is envisioned that each speaker vector will include binary and/or numerical values for each of numerous vocal characteristics such that each speaker vector may effectively represent a point in a multi-dimensional space. Indeed, a clustering technique may be used where the clustering of points corresponding to the speaker vectors may be used to identify individual speakers (or combinations of speakers). In such clustering, there may be a threshold distance between points that may be used, at least initially, to distinguish between points that belong together in a single cluster that is associated with a single speaker (or a single combination of speakers), and points that belong to different clusters. Alternatively or additionally, there may be a threshold number of occurrences of outlier points that must be identified and that must be closely clustered enough for a new speaker to be deemed as having been identified.
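A minimal sketch of such threshold-based clustering of speaker vectors, carried out fragment by fragment as elaborated in the next paragraph, might look like the following. The two-dimensional vectors, the distance threshold, and the single-pass centroid update are deliberate simplifications made only for illustration.

```python
# Minimal sketch of clustering per-fragment speaker vectors to flag likely
# speaker changes: a fragment whose vector is farther than a threshold from
# every existing cluster centroid starts a new cluster, and the transition is
# recorded as a likely speaker change.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def likely_speaker_changes(speaker_vectors, threshold=1.0):
    clusters = []        # list of (centroid, count) pairs
    changes = []         # fragment indices where the active cluster changes
    last_cluster = None
    for index, vector in enumerate(speaker_vectors):
        best = None
        if clusters:
            distances = [euclidean(vector, centroid) for centroid, _ in clusters]
            best = min(range(len(clusters)), key=lambda k: distances[k])
        if best is None or distances[best] > threshold:
            clusters.append((list(vector), 1))    # outlier: start a new cluster
            best = len(clusters) - 1
        else:
            centroid, count = clusters[best]      # running centroid update
            clusters[best] = (
                [(c * count + v) / (count + 1) for c, v in zip(centroid, vector)],
                count + 1,
            )
        if last_cluster is not None and best != last_cluster:
            changes.append(index)
        last_cluster = best
    return changes

if __name__ == "__main__":
    vectors = [(0.1, 0.0), (0.2, 0.1), (3.0, 3.1), (3.1, 3.0), (0.15, 0.05)]
    print(likely_speaker_changes(vectors, threshold=1.0))   # -> [2, 4]
```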
Such clustering may be carried out in a chronological order in which the point associated with each speaker vector is plotted in an order that proceeds from the earliest fragment within a chunk to the latest fragment within that chunk. In this way, there may be one or more initial clusters that develop from the speaker vectors of the earliest fragments in a chunk. The one or more initial clusters may correspond to one or more speakers who were speaking at the start of the portion of speech audio represented by the chunk. As speaker vectors associated with increasingly later fragments are also plotted, a change in speakers may become evident where there ceases to be further points added to existing cluster(s), and/or as there begin to be points added that begin to form new cluster(s). For each instance in which a speaker begins speaking and/or in which a speaker ceases speaking, an indication of a likely speaker change may be added to a set of indications of likely speaker changes, which may be stored as a distinct data structure. Following the completion of the performances of the multiple pause detection techniques, and following the completion of the performance of the at least one speaker diarization technique, the resulting sets of indications of likely sentence pauses and likely speaker changes may then be combined in any of a variety of ways to generate a single set of segmentation indications that describe the manner in which the speech audio is to be divided into segments. In some embodiments, such a single set of segmentation indications may be implemented as a set of indications of each location in the speech audio at which a division between segments is to occur, thereby indicating where each segment of speech audio begins and/or ends. The manner in which the multiple sets of indications of likely sentence pauses and of likely speaker changes are combined to derive such a single set of segmentation indications may include the use of relative weighting factors for at least the multiple sets of likely sentence pauses that may be dynamically adjusted based on levels of audio noise detected as being present within the speech audio. This may be done in recognition of each of the different pause detection techniques being more or less susceptible than others to audio noise. Thus, the multiple sets of indications of likely sentence pauses may be combined, first, to derive a single set of indications of likely sentence pauses within the speech audio. It should be noted that, where more than one speaker diarization technique was used, a similar approach of using relative weighting may be applied in combining multiple sets of indications of speaker changes to derive a single set of indications of speaker changes within the speech audio. Then, the single set of indications of likely sentence pauses and the single set of indications of likely speaker changes may be combined to derive the single set of segmentation indications. Upon completion of the pre-processing operations, including segmentation based on a combination of likely sentence pauses and likely speaker changes, there may be no further use made of the chunks into which the speech audio was initially divided, and those chunks may be discarded from storage. Instead, the speech audio may be divided, again, to form speech segments, where each such division between two segments occurs at the midpoint of one of the likely sentence pauses and/or of one of the likely speaker changes. 
Thus, unlike the chunks of speech audio used in the pre-processing operations, each of the speech segments generated for the text-to-speech processing operations is more likely to contain the pronunciation of an entire sentence as spoken by a speaker, thereby decreasing the likelihood that the pronunciations of words may be split across segments, and increasing the likelihood that the entire context of each word will be present within a single segment. In this way, each speech segment is more likely to contain a more complete set of the acoustic information needed to identify graphemes, phonemes, text characters, words, phrases, sentences etc. in the speech-to-text processing operations, thereby enabling greater accuracy in doing so. Turning to the speech-to-text processing operations, each of the speech segments may be provided as input to a feature detector, in which the speech audio within each speech segment is searched for any instances of a pre-selected set of particular acoustic features. It may be that multiple instances of the feature detector are executed, at least partially in parallel, across multiple threads of execution within a single device, and/or across multiple node devices. As part of such feature detection, each speech segment may be divided into multiple speech frames that are each of an equal temporal length, and each speech frame of a speech segment may be provided, one at a time, as input to a feature detector. As each instance of an acoustic feature is identified within a speech frame, an indication of the type of acoustic feature identified and when it occurs within the span of time covered by the speech frame may be stored within the feature vector that corresponds to the speech frame. The feature vectors for each speech segment may then be used by a combination of acoustic and language models to identify spoken words and generate a transcript. More precisely, the feature vectors for each speech segment may be provided as input to an acoustic model. The acoustic model may be implemented using any of a variety of technologies, including and not limited to, a neural network, a hidden Markov model, or a finite state machine. It may be that multiple instances of the acoustic model are instantiated and used, at least partially in parallel, across multiple threads of execution within a single device, and/or across multiple node devices. Based on the acoustic features that are identified by each feature vector as present within its corresponding speech frame, the acoustic model may generate probability distributions of the grapheme(s) that were spoken within each speech frame, and/or of the pauses that occurred within each speech frame. Such probability distributions may then be grouped in temporal order to form sets of probability distributions that correspond to the speech segments, and each such set may then be provided as input to a decoder that is implemented using an n-gram language model. Using such a set of probability distributions, and using the contextual information inherently provided by their temporal ordering, the decoder may identify the most likely combinations of words spoken to form sentences (or at least phrases) within the corresponding speech segment. In this way, the decoder may derive a transcript of what was spoken in the speech audio, and such a transcript may be stored in a manner that is associated with the speech audio for future reference. 
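As a minimal sketch of the final decoding step, the following fragment collapses per-frame grapheme probability distributions (of the kind an acoustic model might emit) into text by taking the most probable symbol per frame, merging repeats, and dropping blanks. A real decoder would instead run a beam search against an n-gram language model as described above; the tiny grapheme set and probabilities here are illustrative only.

```python
# Minimal greedy decode of per-frame grapheme probability distributions.
# "_" stands for the blank symbol; the grapheme set is illustrative.

GRAPHEMES = ["_", "a", "c", "t"]

def greedy_decode(frame_distributions):
    best = [GRAPHEMES[max(range(len(d)), key=lambda i: d[i])]
            for d in frame_distributions]
    text, previous = [], None
    for symbol in best:
        if symbol != previous and symbol != "_":
            text.append(symbol)
        previous = symbol
    return "".join(text)

if __name__ == "__main__":
    # Each row: probabilities for ["_", "a", "c", "t"] in one speech frame.
    frames = [
        [0.1, 0.1, 0.7, 0.1],   # c
        [0.1, 0.7, 0.1, 0.1],   # a
        [0.7, 0.1, 0.1, 0.1],   # blank
        [0.1, 0.1, 0.1, 0.7],   # t
        [0.1, 0.1, 0.1, 0.7],   # t (repeat, collapsed)
    ]
    print(greedy_decode(frames))   # -> "cat"
```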
As will be familiar to those skilled in the art, it has become commonplace (at least in speech recognition systems having sufficient processing and storage resources) to employ a two-stage combination of an acoustic model and a language model to identify the words spoken in speech audio based on the identified acoustic features. In such speech recognition systems, the acoustic model is typically relied upon to perform a first pass at identifying words that are likely to be the ones that were spoken, and the language model is typically relied upon to perform the next and final pass by refining the identification of such spoken words such that the words identified by the language model are the ones from which a transcript is generated. Such a two-stage use of a combination of acoustic and language models has proven to be significantly more accurate in performing speech recognition than the earlier commonplace practice of applying an acoustic model, alone. However, while the reduction in errors in speech recognition that has been achieved through using such a two-pass combination of acoustic and language models is significant, even this reduced error rate is still frequently undesirably high enough as to have merited further efforts over a number of years to further reduce it. A possible source of this still elevated error rate, at least in some situations, has been such reliance on using a language model to always perform the final pass to provide the final identification of each word spoken in speech audio. It should be remembered that a good language model is usually one that closely models a language as that language is used correctly. Thus, part of the still elevated error rate may arise from the fact that a person may make mistakes in vocabulary and/or syntax when speaking, while the language model may tend to fight against correctly identifying that person's words as actually spoken as it effectively attempts to enforce its model of what that person's words should have been. As illustrated by at least this one example, there can be situations in which it may be desirable to rely more on an acoustic model, than on a language model, to correctly identify spoken words. It has long been recognized that an acoustic model can be highly accurate in identifying spoken words where the pronunciation of words is of sufficient clarity, and where the acoustic conditions associated with the reception of those spoken words are sufficiently favorable (e.g., sufficiently free of noise). As will be familiar to those skilled in the art, the longstanding practice of reliance on a language model to provide the final identification of words was largely influenced by a need to accommodate less ideal conditions in which the pronunciation of words may not be as clear and/or where the acoustic conditions may not be so favorable. In such situations, gaps may occur in the reception of spoken words, and on many of such occasions, a language model can compensate for such instances of missing acoustic information. To further improve upon the error rate of such typical two-stage use of a combination of an acoustic model and a language model, some embodiments may dynamically vary the relative weighting assigned to each of the acoustic model and the language model per-word based on the degree of uncertainty in the per-grapheme probability distributions output by the acoustic model for each word. 
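One way to picture this dynamic per-word weighting, which is elaborated in the paragraph that follows, is the minimal sketch below: the perplexity of each grapheme probability distribution for a word is compared against a threshold to decide whether the acoustic model or the language model is relied upon for that word. The threshold value and the use of a simple average are assumptions made only for illustration.

```python
# Minimal sketch of perplexity-driven per-word weighting between an acoustic
# model and a language model. Threshold and averaging rule are illustrative.
import math

def perplexity(distribution):
    """Perplexity = 2 ** entropy (in bits) of a probability distribution."""
    entropy = -sum(p * math.log2(p) for p in distribution if p > 0)
    return 2 ** entropy

def choose_model(grapheme_distributions, threshold=2.0):
    """Return which model to rely on for this word, plus the average perplexity."""
    average = (sum(perplexity(d) for d in grapheme_distributions)
               / len(grapheme_distributions))
    return ("acoustic" if average < threshold else "language"), average

if __name__ == "__main__":
    confident = [[0.9, 0.05, 0.05], [0.85, 0.1, 0.05]]
    uncertain = [[0.4, 0.35, 0.25], [0.34, 0.33, 0.33]]
    print(choose_model(confident))   # low perplexity -> rely on acoustic model
    print(choose_model(uncertain))   # high perplexity -> rely on language model
```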
Stated differently, it may be that the probability distributions of graphemes that are output by the acoustic model for a single word are analyzed to derive a corresponding degree of perplexity for each probability distribution. Such a degree of perplexity may serve as an indication of the degree to which a probability distribution presents an indefinite indication of which utterance occurred during a corresponding portion of speech audio. Where the degree of perplexity of probability distributions for graphemes associated with a word are deemed to be lower than a pre-determined threshold, then greater weight may be dynamically assigned to the identification of that word based on those probability distributions such that the acoustic model is relied upon to identify that word. However, where the degree of perplexity of such probability distributions associated with a word are deemed to be higher than a pre-determined threshold, then greater weight may be dynamically assigned to the identification of that word based on the language model. In some embodiments, both of the acoustic model and the language model may always be utilized in combination for each spoken word, regardless of whether the per-word determination is made in a manner that gives greater weight to relying more on the acoustic model or to the language model to identify a word. Thus, the beam searches associated with such use of a language model implemented with an n-gram corpus may always be performed regardless of such dynamic per-word assignment of relative weighting. In some of such embodiments, it may be that the probability (and/or another measure or statistic) associated with the word identified by the language model is used as an input to the dynamic per-word relative weighting in addition to the degree of perplexity derived for the probability distributions for the corresponding graphemes. Alternatively, in other embodiments, it may be that the language model is not used to provide any input to the dynamic per-word relative weighting. In such other embodiments, such a situation may provide the opportunity to entirely refrain from consuming processing and/or storage resources to perform beam searches associated with using the language model if the results of the dynamic per-word relative weighting are such that the results of using the language model will not be used. In this way, use of the language model may be made contingent on such dynamic per-word relative weighting. Regarding the use of a language model as part of the speech-to-text processing operations, as will be readily recognized by those skilled in the art, when using a language model based on a corpus of n-grams, it is generally accepted that a larger n-gram corpus is capable of achieving higher accuracies in speech-to-text operations than a smaller one. However, as will also be familiar to those skilled in the art, each increase of one word in the quantity of words that may be included in each n-gram can result in an exponential increase in the size of the n-gram corpus. As a result, it has become commonplace to limit the quantity of words that may be included in each n-gram to 4, 5 or 6 words to avoid so overtaxing available processing and/or storage resources of typical computing devices as to become impractical for use. 
To overcome such limitations, the processing and storage resources of multiple node devices may be employed in particular ways that make more efficient use of distributed processing to make the use of a larger n-gram corpus more practical. More specifically, in preparation for performing beam searches of a relatively large n-gram corpus of an n-gram language model, complete copies of such a relatively large n-gram corpus may be distributed among the multiple node devices such that each is caused to locally store the complete n-gram corpus. Proceeding in temporal order through probability distributions of graphemes that may have been pronounced throughout speech segment, the control device may derive candidate sets of n-grams to be searched for within the n-gram corpus to retrieve their corresponding probabilities. As each such n-gram candidate set is derived, the control device may provide it to all of the node devices2300to which the n-gram corpus has been provided to enable beam searches for each of the different candidate n-grams to be searched for, at least partially in parallel. As part of causing different ones of the n-grams to be searched for by different ones of the node devices, a modulo calculation may be used based on identifiers assigned to each of the node devices to enable each node device to independently determine which one(s) of the n-grams within the n-gram candidate set will be searched for therein. Alternatively, the n-gram searches may be distributed among multiple execution threads of processor(s) within a single device (e.g., the control device or a single node device). As each of the node devices completes the beam search(es) for its corresponding one(s) of the candidate n-grams, indications of the relative probabilities of occurrence for each n-gram may be provided to the control device to enable the control device to identify the next word that was most likely spoken in the speech segment, and accordingly, to identify the next word to be added to the transcript of what was spoken in the speech audio. Upon completion of the transcript, the transcript may be stored by the control device within the one or more storage devices as a text data set that may be subsequently retrieved and analyzed to derive various insights therefrom, as previously discussed. In a further effort to make the use of a relatively large n-gram corpus more practical, the corpus data sets may be generated to employ a two-dimensional (2D) array data structure, instead of the more conventional ASCII text file data structure of the widely known and used “ARPA” text format originally introduced by Doug B. Paul of the Massachusetts Institute of Technology. Avoiding the use of such a relatively unstructured text format obviates the need to use text parsing routines that can greatly decrease the speed of access to individual n-grams, and/or individual words within individual n-grams. In this way, the speed with which the n-gram corpus is able to be generated, put through deduplication, and used in beam searches may be greatly increased. Still further, in deriving probabilities for the occurrence of each n-gram, a novel technique may be used for deriving a backoff value that is relatively simple to perform, and that is better suited to the larger n-gram corpuses that may be made practical to use by way of the various approaches described herein. 
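A minimal sketch of the modulo-based division of lookups, assuming each node device holds a complete copy of a (tiny, illustrative) n-gram corpus keyed by word tuples, might look like the following; the corpus contents, probabilities, and node count are placeholders rather than a depiction of any particular corpus format.

```python
# Minimal sketch of distributing candidate n-gram lookups across node devices:
# each node holds a full copy of the corpus and uses a modulo of the candidate
# position and its own node identifier to pick which candidates it looks up.

NGRAM_CORPUS = {                      # stand-in for each node's local copy
    ("the", "speech", "audio"): 0.012,
    ("the", "speech", "segment"): 0.009,
    ("the", "speech", "signal"): 0.007,
}

def lookups_for_node(candidates, node_id, node_count):
    """Return {n-gram: probability} for the candidates assigned to this node."""
    return {ngram: NGRAM_CORPUS.get(ngram, 0.0)
            for position, ngram in enumerate(candidates)
            if position % node_count == node_id}

if __name__ == "__main__":
    candidates = list(NGRAM_CORPUS) + [("the", "speech", "chunk")]
    partial = [lookups_for_node(candidates, node, node_count=2) for node in range(2)]
    merged = {ngram: prob for part in partial for ngram, prob in part.items()}
    print("next word:", max(merged, key=merged.get)[-1])   # -> "audio"
```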
Regardless of the exact manner in which each word spoken in speech audio is identified through use of an acoustic model and/or through the use of a language model, and regardless of the size and/or format of the n-gram corpus that may be used, the length of transcript(s) that are generated from speech audio may advantageously or adversely affect automated text analyses that may be subsequently performed in post-processing (e.g., analyses to identify topics, to identify sentiments of topics, and/or to identify other related pieces of speech audio and/or transcripts generated therefrom). From experimentation and observation, it has been found that, generally, many forms of automated text analyses are able to be more successfully used with longer transcripts. More specifically, it has been found that shorter transcripts tend to cause an overemphasis on the more frequently used words in a language, even after removal of non-content stopwords, with the result that analyses to derive topics and/or other insights of a transcript tend to produce less useful results. To counteract this, in some embodiments, all of the text of speech audio on which speech-to-text processing has been performed may be stored and/or otherwise handled as a single transcript, thereby increasing the likelihood of generating longer transcripts. However, where the speech audio is sufficiently long as to include multiple presentations and/or conversations on unrelated subjects, automated text analyses performed on a single transcript encompassing such lengthy and varied speech audio may also produce less useful results. Thus, in some embodiments, rules concerning lengths of transcripts and/or acoustic features such as relatively lengthy pauses may be used to bring about the generation of lengths and/or quantities of transcripts for each piece of speech audio that are more amenable to providing useful results from automated text analyses. Turning to the text analytics post-processing operations, the resulting one or more transcripts of the speech audio may be provided to one or more text analyzers to derive, based on such factors as the frequency with which each word was spoken, such insights as topic(s) spoken about, relative importance of topics, sentiments expressed concerning each topic, etc. It may be that each such stored transcript(s) may be accompanied in storage with metadata indicative of such insights. Alternatively or additionally, it may be that such insights are used to identify other transcript(s) generated from other pieces of speech audio that are deemed to be related. In embodiments in which a distributed processing system is used that includes multiple node devices, various one(s) of the pre-processing, text-to-speech processing and/or post-processing operations within the framework may be performed in a manner that is distributed across those multiple node devices to improve the efficiency with which those operations are able to be performed. As will be explained in greater detail, such improvements in efficiency may also enable improvements in the handling of data such that greater use may be made of contextual information to provide improved results. By way of example, each of the different pause detection techniques may be performed within a separate one of the node devices, at least partially in parallel, such that a different one of the corresponding set of likely sentence pauses may be independently derived within each such node device. 
Also by way of example, multiple instances of the feature detector may be executed across the multiple node devices, and the speech segments may be distributed thereamong to enable speech detection to be performed with multiple ones of the speech segments at least partially in parallel. Further, along with the multiple instances of the feature detector, multiple instances of the acoustic model may be instantiated across the multiple node devices, thereby enabling the feature vectors derived from a speech segment by an instance of the feature detector within a node device to be directly provided to the corresponding instance of the acoustic model within the node device to enable the derivation of the set of probability distributions that correspond to that speech segment. Also by way of example, multiple copies of the n-gram corpus may be distributed among the multiple node devices to enable each beam search across multiple n-grams for each next word in a sentence to be performed in a distributed manner without need of communication among the node devices. With general reference to notations and nomenclature used herein, portions of the detailed description that follows may be presented in terms of program procedures executed by a processor of a machine or of multiple networked machines. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical communications capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to what is communicated as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Further, these manipulations are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. However, no such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein that form part of one or more embodiments. Rather, these operations are machine operations. Useful machines for performing operations of various embodiments include machines selectively activated or configured by a routine stored within that is written in accordance with the teachings herein, and/or include apparatus specially constructed for the required purpose. Various embodiments also relate to apparatus or systems for performing these operations. These apparatus may be specially constructed for the required purpose or may include a general purpose computer. The required structure for a variety of these machines will appear from the description given. Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. 
It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives within the scope of the claims. Systems depicted in some of the figures may be provided in various configurations. In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in a cloud computing system and/or a fog computing system. FIG.1is a block diagram that provides an illustration of the hardware components of a data transmission network100, according to embodiments of the present technology. Data transmission network100is a specialized computer system that may be used for processing large amounts of data where a large number of computer processing cycles are required. Data transmission network100may also include computing environment114. Computing environment114may be a specialized computer or other machine that processes the data received within the data transmission network100. Data transmission network100also includes one or more network devices102. Network devices102may include client devices that attempt to communicate with computing environment114. For example, network devices102may send data to the computing environment114to be processed, may send signals to the computing environment114to control different aspects of the computing environment or the data it is processing, among other reasons. Network devices102may interact with the computing environment114through a number of ways, such as, for example, over one or more networks108. As shown inFIG.1, computing environment114may include one or more other systems. For example, computing environment114may include a database system118and/or a communications grid120. In other embodiments, network devices may provide a large amount of data, either all at once or streaming over a period of time (e.g., using event stream processing (ESP), described further with respect toFIGS.8-10), to the computing environment114via networks108. For example, network devices102may include network computers, sensors, databases, or other devices that may transmit or otherwise provide data to computing environment114. For example, network devices may include local area network devices, such as routers, hubs, switches, or other computer networking devices. These devices may provide a variety of stored or generated data, such as network data or data specific to the network devices themselves. Network devices may also include sensors that monitor their environment or other devices to collect data regarding that environment or those devices, and such network devices may provide data they collect over time. Network devices may also include devices within the internet of things, such as devices within a home automation network. Some of these devices may be referred to as edge devices, and may involve edge computing circuitry. Data may be transmitted by network devices directly to computing environment114or to network-attached data stores, such as network-attached data stores110for storage so that the data may be retrieved later by the computing environment114or other portions of data transmission network100. Data transmission network100may also include one or more network-attached data stores110. 
Network-attached data stores110are used to store data to be processed by the computing environment114as well as any intermediate or final data generated by the computing system in non-volatile memory. However in certain embodiments, the configuration of the computing environment114allows its operations to be performed such that intermediate and final data results can be stored solely in volatile memory (e.g., RAM), without a requirement that intermediate or final data results be stored to non-volatile types of memory (e.g., disk). This can be useful in certain situations, such as when the computing environment114receives ad hoc queries from a user and when responses, which are generated by processing large amounts of data, need to be generated on-the-fly. In this non-limiting situation, the computing environment114may be configured to retain the processed information within memory so that responses can be generated for the user at different levels of detail as well as allow a user to interactively query against this information. Network-attached data stores may store a variety of different types of data organized in a variety of different ways and from a variety of different sources. For example, network-attached data storage may include storage other than primary storage located within computing environment114that is directly accessible by processors located therein. Network-attached data storage may include secondary, tertiary or auxiliary storage, such as large hard drives, servers, virtual memory, among other types. Storage devices may include portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing data. A machine-readable storage medium or computer-readable storage medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals. Examples of a non-transitory medium may include, for example, a magnetic disk or tape, optical storage media such as compact disk or digital versatile disk, flash memory, memory or memory devices. A computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, among others. Furthermore, the data stores may hold a variety of different types of data. For example, network-attached data stores110may hold unstructured (e.g., raw) data, such as manufacturing data (e.g., a database containing records identifying products being manufactured with parameter data for each product, such as colors and models) or product sales databases (e.g., a database containing individual data records identifying details of individual product sales). The unstructured data may be presented to the computing environment114in different forms such as a flat file or a conglomerate of data records, and may have data values and accompanying time stamps. 
The computing environment114may be used to analyze the unstructured data in a variety of ways to determine the best way to structure (e.g., hierarchically) that data, such that the structured data is tailored to a type of further analysis that a user wishes to perform on the data. For example, after being processed, the unstructured time-stamped data may be aggregated by time (e.g., into daily time period units) to generate time series data and/or structured hierarchically according to one or more dimensions (e.g., parameters, attributes, and/or variables). For example, data may be stored in a hierarchical data structure, such as a ROLAP or MOLAP database, or may be stored in another tabular form, such as in a flat-hierarchy form. Data transmission network100may also include one or more server farms106. Computing environment114may route select communications or data to the one or more server farms106or one or more servers within the server farms. Server farms106can be configured to provide information in a predetermined manner. For example, server farms106may access data to transmit in response to a communication. Server farms106may be separately housed from each other device within data transmission network100, such as computing environment114, and/or may be part of a device or system. Server farms106may host a variety of different types of data processing as part of data transmission network100. Server farms106may receive a variety of different data from network devices, from computing environment114, from cloud network116, or from other sources. The data may have been obtained or collected from one or more sensors, as inputs from a control database, or may have been received as inputs from an external system or device. Server farms106may assist in processing the data by turning raw data into processed data based on one or more rules implemented by the server farms. For example, sensor data may be analyzed to determine changes in an environment over time or in real time. Data transmission network100may also include one or more cloud networks116. Cloud network116may include a cloud infrastructure system that provides cloud services. In certain embodiments, services provided by the cloud network116may include a host of services that are made available to users of the cloud infrastructure system on demand. Cloud network116is shown inFIG.1as being connected to computing environment114(and therefore having computing environment114as its client or user), but cloud network116may be connected to or utilized by any of the devices inFIG.1. Services provided by the cloud network can dynamically scale to meet the needs of its users. The cloud network116may include one or more computers, servers, and/or systems. In some embodiments, the computers, servers, and/or systems that make up the cloud network116are different from the user's own on-premises computers, servers, and/or systems. For example, the cloud network116may host an application, and a user may, via a communication network such as the Internet, on demand, order and use the application. While each device, server, and system inFIG.1is shown as a single device, it will be appreciated that multiple devices may instead be used. For example, a set of network devices can be used to transmit various communications from a single user, or remote server140may include a server stack. As another example, data may be processed as part of computing environment114. 
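For illustration only, the following is a minimal Python sketch (not part of the described system) of aggregating time-stamped records into daily time-period units to form a simple time series; the record layout and field names are assumptions made for this example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical flat, time-stamped records such as those described above.
raw_records = [
    {"timestamp": "2024-01-01T09:15:00", "value": 4.2},
    {"timestamp": "2024-01-01T17:40:00", "value": 3.8},
    {"timestamp": "2024-01-02T08:05:00", "value": 5.1},
]

def aggregate_daily(records):
    """Group time-stamped values into daily time-period units (a simple time series)."""
    buckets = defaultdict(list)
    for rec in records:
        day = datetime.fromisoformat(rec["timestamp"]).date()
        buckets[day].append(rec["value"])
    # One aggregate value per day, ordered by date.
    return {day: sum(vals) / len(vals) for day, vals in sorted(buckets.items())}

print(aggregate_daily(raw_records))
```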
Each communication within data transmission network100(e.g., between client devices, between servers106and computing environment114or between a server and a device) may occur over one or more networks108. Networks108may include one or more of a variety of different types of networks, including a wireless network, a wired network, or a combination of a wired and wireless network. Examples of suitable networks include the Internet, a personal area network, a local area network (LAN), a wide area network (WAN), or a wireless local area network (WLAN). A wireless network may include a wireless interface or combination of wireless interfaces. As an example, a network in the one or more networks108may include a short-range communication channel, such as a BLUETOOTH® communication channel or a BLUETOOTH® Low Energy communication channel. A wired network may include a wired interface. The wired and/or wireless networks may be implemented using routers, access points, bridges, gateways, or the like, to connect devices in the network114, as will be further described with respect toFIG.2. The one or more networks108can be incorporated entirely within or can include an intranet, an extranet, or a combination thereof. In one embodiment, communications between two or more systems and/or devices can be achieved by a secure communications protocol, such as secure sockets layer (SSL) or transport layer security (TLS). In addition, data and/or transactional details may be encrypted. Some aspects may utilize the Internet of Things (IoT), where things (e.g., machines, devices, phones, sensors) can be connected to networks and the data from these things can be collected and processed within the things and/or external to the things. For example, the IoT can include sensors in many different devices, and high value analytics can be applied to identify hidden relationships and drive increased efficiencies. This can apply to both big data analytics and real-time (e.g., ESP) analytics. This will be described further below with respect toFIG.2. As noted, computing environment114may include a communications grid120and a transmission network database system118. Communications grid120may be a grid-based computing system for processing large amounts of data. The transmission network database system118may be for managing, storing, and retrieving large amounts of data that are distributed to and stored in the one or more network-attached data stores110or other data stores that reside at different locations within the transmission network database system118. The compute nodes in the grid-based computing system120and the transmission network database system118may share the same processor hardware, such as processors that are located within computing environment114. FIG.2illustrates an example network including an example set of devices communicating with each other over an exchange system and via a network, according to embodiments of the present technology. As noted, each communication within data transmission network100may occur over one or more networks. System200includes a network device204configured to communicate with a variety of types of client devices, for example client devices230, over a variety of types of communication channels. As shown inFIG.2, network device204can transmit a communication over a network (e.g., a cellular network via a base station210). The communication can be routed to another network device, such as network devices205-209, via base station210. 
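For illustration only, the following minimal sketch shows a client connection secured with a protocol such as TLS, using Python's standard ssl and socket modules; the host name and port are placeholders and not part of the described network.

```python
import socket
import ssl

# Placeholder endpoint; a real deployment would use its own host and port.
HOST, PORT = "example.com", 443

context = ssl.create_default_context()  # verifies certificates and negotiates TLS

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # Any application data sent from this point on is encrypted in transit.
        tls_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(256))
```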
The communication can also be routed to computing environment214via base station210. For example, network device204may collect data either from its surrounding environment or from other network devices (such as network devices205-209) and transmit that data to computing environment214. Although network devices204-209are shown inFIG.2as a mobile phone, laptop computer, tablet computer, temperature sensor, motion sensor, and audio sensor, respectively, the network devices may be or include sensors that detect aspects of their environment. For example, the network devices may include sensors such as water sensors, power sensors, electrical current sensors, chemical sensors, optical sensors, pressure sensors, geographic or position sensors (e.g., GPS), velocity sensors, acceleration sensors, flow rate sensors, among others. Examples of characteristics that may be sensed include force, torque, load, strain, position, temperature, air pressure, fluid flow, chemical properties, resistance, electromagnetic fields, radiation, irradiance, proximity, acoustics, moisture, distance, speed, vibrations, acceleration, electrical potential, and electrical current, among others. The sensors may be mounted to various components used as part of a variety of different types of systems (e.g., an oil drilling operation). The network devices may detect and record data related to the environment that they monitor, and transmit that data to computing environment214. As noted, one type of system that may include various sensors that collect data to be processed and/or transmitted to a computing environment according to certain embodiments includes an oil drilling system. For example, the one or more drilling operation sensors may include surface sensors that measure a hook load, a fluid rate, a temperature and a density in and out of the wellbore, a standpipe pressure, a surface torque, a rotation speed of a drill pipe, a rate of penetration, a mechanical specific energy, etc., and downhole sensors that measure a rotation speed of a bit, fluid densities, downhole torque, downhole vibration (axial, tangential, lateral), a weight applied at a drill bit, an annular pressure, a differential pressure, an azimuth, an inclination, a dog leg severity, a measured depth, a vertical depth, a downhole temperature, etc. Besides the raw data collected directly by the sensors, other data may include parameters either developed by the sensors or assigned to the system by a client or other controlling device. For example, one or more drilling operation control parameters may control settings such as a mud motor speed to flow ratio, a bit diameter, a predicted formation top, seismic data, weather data, etc. Other data may be generated using physical models such as an earth model, a weather model, a seismic model, a bottom hole assembly model, a well plan model, an annular friction model, etc. In addition to sensor and control settings, predicted outputs of, for example, the rate of penetration, mechanical specific energy, hook load, flow in fluid rate, flow out fluid rate, pump pressure, surface torque, rotation speed of the drill pipe, annular pressure, annular friction pressure, annular temperature, equivalent circulating density, etc. may also be stored in the data warehouse. 
In another example, another type of system that may include various sensors that collect data to be processed and/or transmitted to a computing environment according to certain embodiments includes a home automation or similar automated network in a different environment, such as an office space, school, public space, sports venue, or a variety of other locations. Network devices in such an automated network may include network devices that allow a user to access, control, and/or configure various home appliances located within the user's home (e.g., a television, radio, light, fan, humidifier, sensor, microwave, iron, and/or the like), or outside of the user's home (e.g., exterior motion sensors, exterior lighting, garage door openers, sprinkler systems, or the like). For example, network device102may include a home automation switch that may be coupled with a home appliance. In another embodiment, a network device can allow a user to access, control, and/or configure devices, such as office-related devices (e.g., copy machine, printer, or fax machine), audio and/or video related devices (e.g., a receiver, a speaker, a projector, a DVD player, or a television), media-playback devices (e.g., a compact disc player or the like), computing devices (e.g., a home computer, a laptop computer, a tablet, a personal digital assistant (PDA), a computing device, or a wearable device), lighting devices (e.g., a lamp or recessed lighting), devices associated with a security system, devices associated with an alarm system, devices that can be operated in an automobile (e.g., radio devices, navigation devices), and/or the like. Data may be collected from such various sensors in raw form, or data may be processed by the sensors to create parameters or other data either developed by the sensors based on the raw data or assigned to the system by a client or other controlling device. In another example, another type of system that may include various sensors that collect data to be processed and/or transmitted to a computing environment according to certain embodiments includes a power or energy grid. A variety of different network devices may be included in an energy grid, such as various devices within one or more power plants, energy farms (e.g., wind farm, solar farm, among others), energy storage facilities, factories, homes and businesses of consumers, among others. One or more of such devices may include one or more sensors that detect energy gain or loss, electrical input or output or loss, and a variety of other efficiencies. These sensors may collect data to inform users of how the energy grid, and individual devices within the grid, may be functioning and how they may be made more efficient. Network device sensors may also perform processing on data they collect before transmitting the data to the computing environment114, or before deciding whether to transmit data to the computing environment114. For example, network devices may determine whether data collected meets certain rules, for example by comparing the data, or values calculated from the data, to one or more thresholds. The network device may use this data and/or comparisons to determine if the data should be transmitted to the computing environment214for further use or processing. Computing environment214may include machines220and240. Although computing environment214is shown inFIG.2as having two machines,220and240, computing environment214may have only one machine or may have more than two machines. 
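For illustration only, the following minimal sketch shows the kind of threshold rule an edge network device might apply to sensed values before deciding whether to transmit them; the threshold value and reading format are assumptions made for this example.

```python
# Hypothetical edge-device rule: only forward readings that cross a threshold.
TEMPERATURE_THRESHOLD_C = 80.0

def should_transmit(reading_c, threshold=TEMPERATURE_THRESHOLD_C):
    """Return True when a sensed value meets the rule for sending it upstream."""
    return reading_c >= threshold

def process_readings(readings):
    # Keep low-value readings local; transmit only the ones that matter.
    return [r for r in readings if should_transmit(r)]

print(process_readings([72.5, 81.3, 79.9, 95.0]))  # -> [81.3, 95.0]
```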
The machines that make up computing environment214may include specialized computers, servers, or other machines that are configured to individually and/or collectively process large amounts of data. The computing environment214may also include storage devices that include one or more databases of structured data, such as data organized in one or more hierarchies, or unstructured data. The databases may communicate with the processing devices within computing environment214to distribute data to them. Since network devices may transmit data to computing environment214, that data may be received by the computing environment214and subsequently stored within those storage devices. Data used by computing environment214may also be stored in data stores235, which may also be a part of or connected to computing environment214. Computing environment214can communicate with various devices via one or more routers225or other inter-network or intra-network connection components. For example, computing environment214may communicate with devices230via one or more routers225. Computing environment214may collect, analyze and/or store data from or pertaining to communications, client device operations, client rules, and/or user-associated actions stored at one or more data stores235. Such data may influence communication routing to the devices within computing environment214, how data is stored or processed within computing environment214, among other actions. Notably, various other devices can further be used to influence communication routing and/or processing between devices within computing environment214and with devices outside of computing environment214. For example, as shown inFIG.2, computing environment214may include a web server240. Thus, computing environment214can retrieve data of interest, such as client information (e.g., product information, client rules, etc.), technical product details, news, current or predicted weather, and so on. In addition to computing environment214collecting data (e.g., as received from network devices, such as sensors, and client devices or other sources) to be processed as part of a big data analytics project, it may also receive data in real time as part of a streaming analytics environment. As noted, data may be collected using a variety of sources as communicated via different kinds of networks or locally. Such data may be received on a real-time streaming basis. For example, network devices may receive data periodically from network device sensors as the sensors continuously sense, monitor and track changes in their environments. Devices within computing environment214may also perform pre-analysis on data it receives to determine if the data received should be processed as part of an ongoing project. The data received and collected by computing environment214, no matter what the source or method or timing of receipt, may be processed over a period of time for a client to determine results data based on the client's needs and rules. FIG.3illustrates a representation of a conceptual model of a communications protocol system, according to embodiments of the present technology. More specifically,FIG.3identifies operation of a computing environment in an Open Systems Interaction model that corresponds to various connection components. 
The model300shows, for example, how a computing environment, such as computing environment314(or computing environment214inFIG.2), may communicate with other devices in its network, and control how communications between the computing environment and other devices are executed and under what conditions. The model can include layers301-307. The layers are arranged in a stack. Each layer in the stack serves the layer one level higher than it (except for the application layer, which is the highest layer), and is served by the layer one level below it (except for the physical layer, which is the lowest layer). The physical layer is the lowest layer because it receives and transmits raw bits of data, and is the farthest layer from the user in a communications system. On the other hand, the application layer is the highest layer because it interacts directly with a software application. As noted, the model includes a physical layer301. Physical layer301represents physical communication, and can define parameters of that physical communication. For example, such physical communication may come in the form of electrical, optical, or electromagnetic signals. Physical layer301also defines protocols that may control communications within a data transmission network. Link layer302defines links and mechanisms used to transmit (i.e., move) data across a network. The link layer302manages node-to-node communications, such as within a grid computing environment. Link layer302can detect and correct errors (e.g., transmission errors in the physical layer301). Link layer302can also include a media access control (MAC) layer and logical link control (LLC) layer. Network layer303defines the protocol for routing within a network. In other words, the network layer coordinates transferring data across nodes in the same network (e.g., a grid computing environment). Network layer303can also define the processes used to structure local addressing within the network. Transport layer304can manage the transmission of data and the quality of the transmission and/or receipt of that data. Transport layer304can provide a protocol for transferring data, such as, for example, a Transmission Control Protocol (TCP). Transport layer304can assemble and disassemble data frames for transmission. The transport layer can also detect transmission errors occurring in the layers below it. Session layer305can establish, maintain, and manage communication connections between devices on a network. In other words, the session layer controls the dialogues or nature of communications between network devices on the network. The session layer may also establish checkpointing, adjournment, termination, and restart procedures. Presentation layer306can provide translation for communications between the application and network layers. In other words, this layer may encrypt, decrypt, and/or format data based on data types and/or encodings known to be accepted by an application or network layer. Application layer307interacts directly with software applications and end users, and manages communications between them. Application layer307can identify destinations, local resource states or availability and/or communication content or formatting using the applications. Intra-network connection components321and322are shown to operate in lower levels, such as physical layer301and link layer302, respectively. For example, a hub can operate in the physical layer, a switch can operate in the link layer, and a router can operate in the network layer. 
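For illustration only, the following short sketch lists the layered model described above together with example components that commonly operate at each level; the component names are illustrative and not part of the described system.

```python
# Illustrative mapping of the layered model described above to example components.
LAYERS = [
    (1, "physical",     "hub"),
    (2, "link",         "switch"),
    (3, "network",      "router"),
    (4, "transport",    "TCP endpoint"),
    (5, "session",      "connection manager"),
    (6, "presentation", "encryption/format translation"),
    (7, "application",  "end-user software"),
]

def serves(lower, higher):
    """Each layer serves the layer one level above it and is served by the one below."""
    return higher[0] - lower[0] == 1

for low, high in zip(LAYERS, LAYERS[1:]):
    print(f"{low[1]} layer serves {high[1]} layer: {serves(low, high)}")
```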
Inter-network connection components323and328are shown to operate on higher levels, such as layers303-307. For example, routers can operate in the network layer and network devices can operate in the transport, session, presentation, and application layers. As noted, a computing environment314can interact with and/or operate on, in various embodiments, one, more, all or any of the various layers. For example, computing environment314can interact with a hub (e.g., via the link layer) so as to adjust which devices the hub communicates with. The physical layer may be served by the link layer, so it may implement such data from the link layer. For example, the computing environment314may control which devices it will receive data from. For example, if the computing environment314knows that a certain network device has turned off, broken, or otherwise become unavailable or unreliable, the computing environment314may instruct the hub to prevent any data from being transmitted to the computing environment314from that network device. Such a process may be beneficial to avoid receiving data that is inaccurate or that has been influenced by an uncontrolled environment. As another example, computing environment314can communicate with a bridge, switch, router or gateway and influence which device within the system (e.g., system200) the component selects as a destination. In some embodiments, computing environment314can interact with various layers by exchanging communications with equipment operating on a particular layer by routing or modifying existing communications. In another embodiment, such as in a grid computing environment, a node may determine how data within the environment should be routed (e.g., which node should receive certain data) based on certain parameters or information provided by other layers within the model. As noted, the computing environment314may be a part of a communications grid environment, the communications of which may be implemented as shown in the protocol ofFIG.3. For example, referring back toFIG.2, one or more of machines220and240may be part of a communications grid computing environment. A gridded computing environment may be employed in a distributed system with non-interactive workloads where data resides in memory on the machines, or compute nodes. In such an environment, analytic code, instead of a database management system, controls the processing performed by the nodes. Data is co-located by pre-distributing it to the grid nodes, and the analytic code on each node loads the local data into memory. Each node may be assigned a particular task such as a portion of a processing project, or to organize or control other nodes within the grid. FIG.4illustrates a communications grid computing system400including a variety of control and worker nodes, according to embodiments of the present technology. Communications grid computing system400includes three control nodes and one or more worker nodes. Communications grid computing system400includes control nodes402,404, and406. The control nodes are communicatively connected via communication paths451,453, and455. Therefore, the control nodes may transmit information (e.g., related to the communications grid or notifications), to and receive information from each other. Although communications grid computing system400is shown inFIG.4as including three control nodes, the communications grid may include more or less than three control nodes. 
Communications grid computing system (or just “communications grid”)400also includes one or more worker nodes. Shown inFIG.4are six worker nodes410-420. AlthoughFIG.4shows six worker nodes, a communications grid according to embodiments of the present technology may include more or fewer than six worker nodes. The number of worker nodes included in a communications grid may depend upon the size of the project or data set being processed by the communications grid, the capacity of each worker node, the time designated for the communications grid to complete the project, among others. Each worker node within the communications grid400may be connected (wired or wirelessly, and directly or indirectly) to control nodes402-406. Therefore, each worker node may receive information from the control nodes (e.g., an instruction to perform work on a project) and may transmit information to the control nodes (e.g., a result from work performed on a project). Furthermore, worker nodes may communicate with each other (either directly or indirectly). For example, worker nodes may transmit data between each other related to a job being performed or an individual task within a job being performed by that worker node. However, in certain embodiments, worker nodes may not, for example, be connected (communicatively or otherwise) to certain other worker nodes. In an embodiment, worker nodes may only be able to communicate with the control node that controls them, and may not be able to communicate with other worker nodes in the communications grid, whether they are other worker nodes controlled by the control node that controls the worker node, or worker nodes that are controlled by other control nodes in the communications grid. A control node may connect with an external device with which the control node may communicate (e.g., a grid user, such as a server or computer, may connect to a controller of the grid). For example, a server or computer may connect to control nodes and may transmit a project or job to the node. The project may include a data set. The data set may be of any size. Once the control node receives such a project including a large data set, the control node may distribute the data set or projects related to the data set to be performed by worker nodes. Alternatively, for a project including a large data set, the data set may be received or stored by a machine other than a control node (e.g., a HADOOP® standard-compliant data node employing the HADOOP® Distributed File System, or HDFS). Control nodes may maintain knowledge of the status of the nodes in the grid (i.e., grid status information), accept work requests from clients, subdivide the work across worker nodes, and coordinate the worker nodes, among other responsibilities. Worker nodes may accept work requests from a control node and provide the control node with results of the work performed by the worker node. A grid may be started from a single node (e.g., a machine, computer, server, etc.). This first node may be assigned or may start as the primary control node that will control any additional nodes that enter the grid. When a project is submitted for execution (e.g., by a client or a controller of the grid), it may be assigned to a set of nodes. After the nodes are assigned to a project, a data structure (i.e., a communicator) may be created. The communicator may be used by the project for information to be shared between the project codes running on each node. A communication handle may be created on each node. 
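For illustration only, the following minimal sketch shows one simple way a control node could subdivide a data set across worker nodes; the round-robin split and the worker names are assumptions made for this example, not the described scheduling method.

```python
# Minimal sketch: a control node splitting a large data set into portions
# and assigning each portion to a worker node (names are illustrative).
def distribute_work(data_set, worker_nodes):
    """Return a mapping of worker id -> slice of the data set to process."""
    assignments = {w: [] for w in worker_nodes}
    for i, record in enumerate(data_set):
        worker = worker_nodes[i % len(worker_nodes)]   # simple round-robin split
        assignments[worker].append(record)
    return assignments

workers = ["worker-410", "worker-412", "worker-414"]
print(distribute_work(list(range(10)), workers))
```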
A handle, for example, is a reference to the communicator that is valid within a single process on a single node, and the handle may be used when requesting communications between nodes. A control node, such as control node402, may be designated as the primary control node. A server, computer or other external device may connect to the primary control node. Once the control node receives a project, the primary control node may distribute portions of the project to its worker nodes for execution. For example, when a project is initiated on communications grid400, primary control node402controls the work to be performed for the project in order to complete the project as requested or instructed. The primary control node may distribute work to the worker nodes based on various factors, such as which subsets or portions of projects may be completed most efficiently and in the correct amount of time. For example, a worker node may perform analysis on a portion of data that is already local (e.g., stored on) the worker node. The primary control node also coordinates and processes the results of the work performed by each worker node after each worker node executes and completes its job. For example, the primary control node may receive a result from one or more worker nodes, and the control node may organize (e.g., collect and assemble) the results received and compile them to produce a complete result for the project received from the end user. Any remaining control nodes, such as control nodes404and406, may be assigned as backup control nodes for the project. In an embodiment, backup control nodes may not control any portion of the project. Instead, backup control nodes may serve as a backup for the primary control node and take over as primary control node if the primary control node were to fail. If a communications grid were to include only a single control node, and the control node were to fail (e.g., the control node is shut off or breaks) then the communications grid as a whole may fail and any project or job being run on the communications grid may fail and may not complete. While the project may be run again, such a failure may cause a delay (severe delay in some cases, such as overnight delay) in completion of the project. Therefore, a grid with multiple control nodes, including a backup control node, may be beneficial. To add another node or machine to the grid, the primary control node may open a pair of listening sockets, for example. A socket may be used to accept work requests from clients, and the second socket may be used to accept connections from other grid nodes. The primary control node may be provided with a list of other nodes (e.g., other machines, computers, servers) that will participate in the grid, and the role that each node will fill in the grid. Upon startup of the primary control node (e.g., the first node on the grid), the primary control node may use a network protocol to start the server process on every other node in the grid. Command line parameters, for example, may inform each node of one or more pieces of information, such as: the role that the node will have in the grid, the host name of the primary control node, the port number on which the primary control node is accepting connections from peer nodes, among others. The information may also be provided in a configuration file, transmitted over a secure shell tunnel, recovered from a configuration server, among others. 
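For illustration only, the following sketch shows how command line parameters of the kind described above could inform a node of its role, the primary control node's host name, and the port for peer connections; the flag names and values are assumptions, not a documented interface.

```python
import argparse

# Sketch of start-up parameters a grid node might receive, as described above.
parser = argparse.ArgumentParser(description="Start a grid node")
parser.add_argument("--role", choices=["primary", "backup", "worker"], required=True,
                    help="role this node will have in the grid")
parser.add_argument("--primary-host", required=True,
                    help="host name of the primary control node")
parser.add_argument("--primary-port", type=int, required=True,
                    help="port on which the primary control node accepts peer connections")

# Example invocation with illustrative values.
args = parser.parse_args(["--role", "worker",
                          "--primary-host", "ctl-402.grid.local",
                          "--primary-port", "5555"])
print(args.role, args.primary_host, args.primary_port)
```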
While the other machines in the grid may not initially know about the configuration of the grid, that information may also be sent to each other node by the primary control node. Updates of the grid information may also be subsequently sent to those nodes. For any control node other than the primary control node added to the grid, the control node may open three sockets. The first socket may accept work requests from clients, the second socket may accept connections from other grid members, and the third socket may connect (e.g., permanently) to the primary control node. When a control node (e.g., primary control node) receives a connection from another control node, it first checks to see if the peer node is in the list of configured nodes in the grid. If it is not on the list, the control node may clear the connection. If it is on the list, it may then attempt to authenticate the connection. If authentication is successful, the authenticating node may transmit information to its peer, such as the port number on which a node is listening for connections, the host name of the node, information about how to authenticate the node, among other information. When a node, such as the new control node, receives information about another active node, it will check to see if it already has a connection to that other node. If it does not have a connection to that node, it may then establish a connection to that control node. Any worker node added to the grid may establish a connection to the primary control node and any other control nodes on the grid. After establishing the connection, it may authenticate itself to the grid (e.g., any control nodes, including both primary and backup, or a server or user controlling the grid). After successful authentication, the worker node may accept configuration information from the control node. When a node joins a communications grid (e.g., when the node is powered on or connected to an existing node on the grid or both), the node is assigned (e.g., by an operating system of the grid) a universally unique identifier (UUID). This unique identifier may help other nodes and external entities (devices, users, etc.) to identify the node and distinguish it from other nodes. When a node is connected to the grid, the node may share its unique identifier with the other nodes in the grid. Since each node may share its unique identifier, each node may know the unique identifier of every other node on the grid. Unique identifiers may also designate a hierarchy of each of the nodes (e.g., backup control nodes) within the grid. For example, the unique identifiers of each of the backup control nodes may be stored in a list of backup control nodes to indicate an order in which the backup control nodes will take over for a failed primary control node to become a new primary control node. However, a hierarchy of nodes may also be determined using methods other than using the unique identifiers of the nodes. For example, the hierarchy may be predetermined, or may be assigned based on other predetermined factors. The grid may add new machines at any time (e.g., initiated from any control node). Upon adding a new node to the grid, the control node may first add the new node to its table of grid nodes. The control node may also then notify every other control node about the new node. The nodes receiving the notification may acknowledge that they have updated their configuration information. 
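For illustration only, the following minimal sketch assigns a universally unique identifier to each node when it is created and keeps backup control nodes in an ordered list that could indicate failover order; the class and field names are assumptions made for this example.

```python
import uuid

# Sketch: each node is assigned a universally unique identifier when it joins,
# and backup control nodes are kept in an ordered list for failover.
class GridNode:
    def __init__(self, role):
        self.role = role
        self.node_id = uuid.uuid4()   # unique identifier shared with other nodes

backup_control_nodes = [GridNode("backup-control") for _ in range(2)]

# The list order indicates which backup would take over first if the primary fails.
failover_order = [str(node.node_id) for node in backup_control_nodes]
print(failover_order)
```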
Primary control node402may, for example, transmit one or more communications to backup control nodes404and406(and, for example, to other control or worker nodes within the communications grid). Such communications may be sent periodically, at fixed time intervals, between known fixed stages of the project's execution, among other protocols. The communications transmitted by primary control node402may be of varied types and may include a variety of types of information. For example, primary control node402may transmit snapshots (e.g., status information) of the communications grid so that backup control node404always has a recent snapshot of the communications grid. The snapshot or grid status may include, for example, the structure of the grid (including, for example, the worker nodes in the grid, unique identifiers of the nodes, or their relationships with the primary control node) and the status of a project (including, for example, the status of each worker node's portion of the project). The snapshot may also include analysis or results received from worker nodes in the communications grid. The backup control nodes may receive and store the backup data received from the primary control node. The backup control nodes may transmit a request for such a snapshot (or other information) from the primary control node, or the primary control node may send such information periodically to the backup control nodes. As noted, the backup data may allow the backup control node to take over as primary control node if the primary control node fails without requiring the grid to start the project over from scratch. If the primary control node fails, the backup control node that will take over as primary control node may retrieve the most recent version of the snapshot received from the primary control node and use the snapshot to continue the project from the stage of the project indicated by the backup data. This may prevent failure of the project as a whole. A backup control node may use various methods to determine that the primary control node has failed. In one example of such a method, the primary control node may transmit (e.g., periodically) a communication to the backup control node that indicates that the primary control node is working and has not failed, such as a heartbeat communication. The backup control node may determine that the primary control node has failed if the backup control node has not received a heartbeat communication for a certain predetermined period of time. Alternatively, a backup control node may also receive a communication from the primary control node itself (before it failed) or from a worker node that the primary control node has failed, for example because the primary control node has failed to communicate with the worker node. Different methods may be performed to determine which backup control node of a set of backup control nodes (e.g., backup control nodes404and406) will take over for failed primary control node402and become the new primary control node. For example, the new primary control node may be chosen based on a ranking or “hierarchy” of backup control nodes based on their unique identifiers. In an alternative embodiment, a backup control node may be assigned to be the new primary control node by another device in the communications grid or from an external device (e.g., a system infrastructure or an end user, such as a server or computer, controlling the communications grid). 
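For illustration only, the following minimal sketch shows the heartbeat-timeout check described above, in which a backup control node treats the primary as failed when no heartbeat has arrived within a predetermined period; the timeout value and class names are assumptions.

```python
import time

# Sketch: a backup control node assumes the primary has failed if no heartbeat
# communication arrives within a predetermined window.
HEARTBEAT_TIMEOUT_S = 10.0

class BackupControlNode:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called whenever a heartbeat communication is received from the primary.
        self.last_heartbeat = time.monotonic()

    def primary_failed(self):
        return (time.monotonic() - self.last_heartbeat) > HEARTBEAT_TIMEOUT_S

backup = BackupControlNode()
print(backup.primary_failed())   # False immediately after a heartbeat
```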
In another alternative embodiment, the backup control node that takes over as the new primary control node may be designated based on bandwidth or other statistics about the communications grid. A worker node within the communications grid may also fail. If a worker node fails, work being performed by the failed worker node may be redistributed amongst the operational worker nodes. In an alternative embodiment, the primary control node may transmit a communication to each of the operable worker nodes still on the communications grid indicating that each of the worker nodes should purposefully fail also. After each of the worker nodes fail, they may each retrieve their most recent saved checkpoint of their status and restart the project from that checkpoint to minimize lost progress on the project being executed. FIG.5illustrates a flow chart showing an example process500for adjusting a communications grid or a work project in a communications grid after a failure of a node, according to embodiments of the present technology. The process may include, for example, receiving grid status information including a project status of a portion of a project being executed by a node in the communications grid, as described in operation502. For example, a control node (e.g., a backup control node connected to a primary control node and a worker node on a communications grid) may receive grid status information, where the grid status information includes a project status of the primary control node or a project status of the worker node. The project status of the primary control node and the project status of the worker node may include a status of one or more portions of a project being executed by the primary and worker nodes in the communications grid. The process may also include storing the grid status information, as described in operation504. For example, a control node (e.g., a backup control node) may store the received grid status information locally within the control node. Alternatively, the grid status information may be sent to another device for storage where the control node may have access to the information. The process may also include receiving a failure communication corresponding to a node in the communications grid in operation506. For example, a node may receive a failure communication including an indication that the primary control node has failed, prompting a backup control node to take over for the primary control node. In an alternative embodiment, a node may receive a failure communication indicating that a worker node has failed, prompting a control node to reassign the work being performed by the worker node. The process may also include reassigning a node or a portion of the project being executed by the failed node, as described in operation508. For example, a control node may designate the backup control node as a new primary control node based on the failure communication upon receiving the failure communication. If the failed node is a worker node, a control node may identify a project status of the failed worker node using the snapshot of the communications grid, where the project status of the failed worker node includes a status of a portion of the project being executed by the failed worker node at the time of the failure. 
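For illustration only, the following minimal sketch shows how a grid snapshot could be used to redistribute a failed worker node's unfinished portion of a project among the remaining operational workers; the snapshot layout, task statuses, and node names are assumptions made for this example.

```python
# Sketch: using the most recent grid snapshot to reassign a failed worker's
# unfinished portion of the project to the remaining operational workers.
def reassign_failed_worker(snapshot, failed_worker, operational_workers):
    """Return a new assignment map with the failed worker's pending work redistributed."""
    pending = [task for task in snapshot["assignments"].get(failed_worker, [])
               if task["status"] != "complete"]
    new_assignments = {w: list(snapshot["assignments"].get(w, []))
                       for w in operational_workers}
    for i, task in enumerate(pending):
        target = operational_workers[i % len(operational_workers)]
        new_assignments[target].append(task)
    return new_assignments

snapshot = {"assignments": {"w1": [{"id": 1, "status": "complete"},
                                   {"id": 2, "status": "running"}],
                            "w2": [{"id": 3, "status": "running"}]}}
print(reassign_failed_worker(snapshot, "w1", ["w2"]))
```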
The process may also include receiving updated grid status information based on the reassignment, as described in operation510, and transmitting a set of instructions based on the updated grid status information to one or more nodes in the communications grid, as described in operation512. The updated grid status information may include an updated project status of the primary control node or an updated project status of the worker node. The updated information may be transmitted to the other nodes in the grid to update their stale stored information. FIG.6illustrates a portion of a communications grid computing system600including a control node and a worker node, according to embodiments of the present technology. Communications grid computing system600includes one control node (control node602) and one worker node (worker node610) for purposes of illustration, but may include more worker and/or control nodes. The control node602is communicatively connected to worker node610via communication path650. Therefore, control node602may transmit information (e.g., related to the communications grid or notifications) to and receive information from worker node610via path650. Similar toFIG.4, communications grid computing system (or just “communications grid”)600includes data processing nodes (control node602and worker node610). Nodes602and610include multi-core data processors. Each node602and610includes a grid-enabled software component (GESC)620that executes on the data processor associated with that node and interfaces with buffer memory622also associated with that node. Each node602and610includes database management software (DBMS)628that executes on a database server (not shown) at control node602and on a database server (not shown) at worker node610. Each node also includes a data store624. Data stores624, similar to network-attached data stores110inFIG.1and data stores235inFIG.2, are used to store data to be processed by the nodes in the computing environment. Data stores624may also store any intermediate or final data generated by the computing system after being processed, for example in non-volatile memory. However, in certain embodiments, the configuration of the grid computing environment allows its operations to be performed such that intermediate and final data results can be stored solely in volatile memory (e.g., RAM), without a requirement that intermediate or final data results be stored to non-volatile types of memory. Storing such data in volatile memory may be useful in certain situations, such as when the grid receives queries (e.g., ad hoc) from a client and when responses, which are generated by processing large amounts of data, need to be generated quickly or on-the-fly. In such a situation, the grid may be configured to retain the data within memory so that responses can be generated at different levels of detail and so that a client may interactively query against this information. Each node also includes a user-defined function (UDF)626. The UDF provides a mechanism for the DBMS628to transfer data to or receive data from the database stored in the data stores624that are managed by the DBMS. For example, UDF626can be invoked by the DBMS to provide data to the GESC for processing. The UDF626may establish a socket connection (not shown) with the GESC to transfer the data. Alternatively, the UDF626can transfer data to the GESC by writing data to shared memory accessible by both the UDF and the GESC. 
The GESC620at the nodes602and610may be connected via a network, such as network108shown inFIG.1. Therefore, nodes602and610can communicate with each other via the network using a predetermined communication protocol such as, for example, the Message Passing Interface (MPI). Each GESC620can engage in point-to-point communication with the GESC at another node or in collective communication with multiple GESCs via the network. The GESC620at each node may contain identical (or nearly identical) software instructions. Each node may be capable of operating as either a control node or a worker node. The GESC at the control node602can communicate, over a communication path652, with a client device630. More specifically, control node602may communicate with client application632hosted by the client device630to receive queries and to respond to those queries after processing large amounts of data. DBMS628may control the creation, maintenance, and use of a database or data structure (not shown) within a node602or610. The database may organize data stored in data stores624. The DBMS628at control node602may accept requests for data and transfer the appropriate data for the request. With such a process, collections of data may be distributed across multiple physical locations. In this example, each node602and610stores a portion of the total data managed by the management system in its associated data store624. Furthermore, the DBMS may be responsible for protecting against data loss using replication techniques. Replication includes providing a backup copy of data stored on one node on one or more other nodes. Therefore, if one node fails, the data from the failed node can be recovered from a replicated copy residing at another node. However, as described herein with respect toFIG.4, data or status information for each node in the communications grid may also be shared with each node on the grid. FIG.7illustrates a flow chart showing an example method700for executing a project within a grid computing system, according to embodiments of the present technology. As described with respect toFIG.6, the GESC at the control node may exchange data with a client device (e.g., client device630) to receive queries for executing a project and to respond to those queries after large amounts of data have been processed. The query may be transmitted to the control node, where the query may include a request for executing a project, as described in operation702. The query can contain instructions on the type of data analysis to be performed in the project and whether the project should be executed using the grid-based computing environment, as shown in operation704. To initiate the project, the control node may determine if the query requests use of the grid-based computing environment to execute the project. If the determination is no, then the control node initiates execution of the project in a solo environment (e.g., at the control node), as described in operation710. If the determination is yes, the control node may initiate execution of the project in the grid-based computing environment, as described in operation706. In such a situation, the request may include a requested configuration of the grid. For example, the request may include a number of control nodes and a number of worker nodes to be used in the grid when executing the project. After the project has been completed, the control node may transmit results of the analysis yielded by the grid, as described in operation708. 
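For illustration only, the following minimal sketch shows the replication idea described above, in which each write to a node's local store is copied to one or more other nodes so the data survives a node failure; the class and store names are assumptions made for this example.

```python
# Sketch: every write to a node's local data store is copied to one or more
# other nodes so it can be recovered if the node fails.
class ReplicatedStore:
    def __init__(self, local_store, replica_stores):
        self.local = local_store          # e.g., this node's data store
        self.replicas = replica_stores    # backup copies kept on other nodes

    def put(self, key, value):
        self.local[key] = value
        for replica in self.replicas:
            replica[key] = value          # keep the backup copies in sync

    def recover(self, key):
        # If the local copy is lost, the value can be read from a replica.
        return self.local.get(key) or next(
            (r[key] for r in self.replicas if key in r), None)

node_a, node_b = {}, {}
store = ReplicatedStore(node_a, [node_b])
store.put("record-42", {"value": 3.14})
node_a.clear()                            # simulate losing the local node's data
print(store.recover("record-42"))
```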
Whether the project is executed in a solo or grid-based environment, the control node provides the results of the project, as described in operation712. As noted with respect toFIG.2, the computing environments described herein may collect data (e.g., as received from network devices, such as sensors, such as network devices204-209inFIG.2, and client devices or other sources) to be processed as part of a data analytics project, and data may be received in real time as part of a streaming analytics environment (e.g., ESP). Data may be collected using a variety of sources as communicated via different kinds of networks or locally, such as on a real-time streaming basis. For example, network devices may receive data periodically from network device sensors as the sensors continuously sense, monitor and track changes in their environments. More specifically, an increasing number of distributed applications develop or produce continuously flowing data from distributed sources by applying queries to the data before distributing the data to geographically distributed recipients. An event stream processing engine (ESPE) may continuously apply the queries to the data as it is received and determines which entities should receive the data. Client or other devices may also subscribe to the ESPE or other devices processing ESP data so that they can receive data after processing, based on for example the entities determined by the processing engine. For example, client devices230inFIG.2may subscribe to the ESPE in computing environment214. In another example, event subscription devices1024a-c, described further with respect toFIG.10, may also subscribe to the ESPE. The ESPE may determine or define how input data or event streams from network devices or other publishers (e.g., network devices204-209inFIG.2) are transformed into meaningful output data to be consumed by subscribers, such as for example client devices230inFIG.2. FIG.8illustrates a block diagram including components of an Event Stream Processing Engine (ESPE), according to embodiments of the present technology. ESPE800may include one or more projects802. A project may be described as a second-level container in an engine model managed by ESPE800where a thread pool size for the project may be defined by a user. Each project of the one or more projects802may include one or more continuous queries804that contain data flows, which are data transformations of incoming event streams. The one or more continuous queries804may include one or more source windows806and one or more derived windows808. The ESPE may receive streaming data over a period of time related to certain events, such as events or other data sensed by one or more network devices. The ESPE may perform operations associated with processing data created by the one or more devices. For example, the ESPE may receive data from the one or more network devices204-209shown inFIG.2. As noted, the network devices may include sensors that sense different aspects of their environments, and may collect data over time based on those sensed observations. For example, the ESPE may be implemented within one or more of machines220and240shown inFIG.2. The ESPE may be implemented within such a machine by an ESP application. An ESP application may embed an ESPE with its own dedicated thread pool or pools into its application space where the main application thread can do application-specific work and the ESPE processes event streams at least by creating an instance of a model into processing objects. 
The engine container is the top-level container in a model that manages the resources of the one or more projects802. In an illustrative embodiment, for example, there may be only one ESPE800for each instance of the ESP application, and ESPE800may have a unique engine name. Additionally, the one or more projects802may each have unique project names, and each query may have a unique continuous query name and begin with a uniquely named source window of the one or more source windows806. ESPE800may or may not be persistent. Continuous query modeling involves defining directed graphs of windows for event stream manipulation and transformation. A window in the context of event stream manipulation and transformation is a processing node in an event stream processing model. A window in a continuous query can perform aggregations, computations, pattern-matching, and other operations on data flowing through the window. A continuous query may be described as a directed graph of source, relational, pattern matching, and procedural windows. The one or more source windows806and the one or more derived windows808represent continuously executing queries that generate updates to a query result set as new event blocks stream through ESPE800. A directed graph, for example, is a set of nodes connected by edges, where the edges have a direction associated with them. An event object may be described as a packet of data accessible as a collection of fields, with at least one of the fields defined as a key or unique identifier (ID). The event object may be created using a variety of formats including binary, alphanumeric, XML, etc. Each event object may include one or more fields designated as a primary identifier (ID) for the event so ESPE800can support operation codes (opcodes) for events including insert, update, upsert, and delete. Upsert opcodes update the event if the key field already exists; otherwise, the event is inserted. For illustration, an event object may be a packed binary representation of a set of field values and include both metadata and field data associated with an event. The metadata may include an opcode indicating if the event represents an insert, update, delete, or upsert, a set of flags indicating if the event is a normal, partial-update, or a retention generated event from retention policy management, and a set of microsecond timestamps that can be used for latency measurements. An event block object may be described as a grouping or package of event objects. An event stream may be described as a flow of event block objects. A continuous query of the one or more continuous queries804transforms a source event stream made up of streaming event block objects published into ESPE800into one or more output event streams using the one or more source windows806and the one or more derived windows808. A continuous query can also be thought of as data flow modeling. The one or more source windows806are at the top of the directed graph and have no windows feeding into them. Event streams are published into the one or more source windows806, and from there, the event streams may be directed to the next set of connected windows as defined by the directed graph. The one or more derived windows808are all instantiated windows that are not source windows and that have other windows streaming events into them. The one or more derived windows808may perform computations or transformations on the incoming event streams. 
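For illustration only, the following minimal sketch applies the insert, update, upsert, and delete opcodes described above to a keyed result set; the event layout (a dict with opcode, key, and field data) is an assumption made for this example.

```python
# Sketch of the insert/update/upsert/delete opcodes applied to a keyed result set.
def apply_event(result_set, event):
    """Apply one event object (a dict with 'opcode', 'key', and field data) to a keyed store."""
    op, key = event["opcode"], event["key"]
    if op == "insert":
        result_set[key] = event["fields"]
    elif op == "update":
        if key in result_set:
            result_set[key].update(event["fields"])
    elif op == "upsert":
        # Update if the key already exists; otherwise insert.
        result_set.setdefault(key, {}).update(event["fields"])
    elif op == "delete":
        result_set.pop(key, None)
    return result_set

state = {}
apply_event(state, {"opcode": "upsert", "key": "sensor-1", "fields": {"temp": 21.5}})
apply_event(state, {"opcode": "upsert", "key": "sensor-1", "fields": {"temp": 22.0}})
print(state)   # {'sensor-1': {'temp': 22.0}}
```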
The one or more derived windows808transform event streams based on the window type (that is, operators such as join, filter, compute, aggregate, copy, pattern match, procedural, union, etc.) and window settings. As event streams are published into ESPE800, they are continuously queried, and the resulting sets of derived windows in these queries are continuously updated. FIG.9illustrates a flow chart showing an example process including operations performed by an event stream processing engine, according to some embodiments of the present technology. As noted, the ESPE800(or an associated ESP application) defines how input event streams are transformed into meaningful output event streams. More specifically, the ESP application may define how input event streams from publishers (e.g., network devices providing sensed data) are transformed into meaningful output event streams consumed by subscribers (e.g., a data analytics project being executed by a machine or set of machines). Within the application, a user may interact with one or more user interface windows presented to the user in a display under control of the ESPE independently or through a browser application in an order selectable by the user. For example, a user may execute an ESP application, which causes presentation of a first user interface window, which may include a plurality of menus and selectors such as drop-down menus, buttons, text boxes, hyperlinks, etc. associated with the ESP application as understood by a person of skill in the art. As further understood by a person of skill in the art, various operations may be performed in parallel, for example, using a plurality of threads. At operation900, an ESP application may define and start an ESPE, thereby instantiating an ESPE at a device, such as machine220and/or240. In an operation902, the engine container is created. For illustration, ESPE800may be instantiated using a function call that specifies the engine container as a manager for the model. In an operation904, the one or more continuous queries804are instantiated by ESPE800as a model. The one or more continuous queries804may be instantiated with a dedicated thread pool or pools that generate updates as new events stream through ESPE800. For illustration, the one or more continuous queries804may be created to model business processing logic within ESPE800, to predict events within ESPE800, to model a physical system within ESPE800, to predict the physical system state within ESPE800, etc. For example, as noted, ESPE800may be used to support sensor data monitoring and management (e.g., sensing may include force, torque, load, strain, position, temperature, air pressure, fluid flow, chemical properties, resistance, electromagnetic fields, radiation, irradiance, proximity, acoustics, moisture, distance, speed, vibrations, acceleration, electrical potential, or electrical current, etc.). ESPE800may analyze and process events in motion or “event streams.” Instead of storing data and running queries against the stored data, ESPE800may store queries and stream data through them to allow continuous analysis of data as it is received. The one or more source windows806and the one or more derived windows808may be created based on the relational, pattern matching, and procedural algorithms that transform the input event streams into the output event streams to model, simulate, score, test, predict, etc. based on the continuous query model defined and applied to the streamed data. 
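For illustration only, the following toy sketch arranges a source window, a filter window, and an aggregate window into a small directed graph and updates the result as each event streams through, in the spirit of the continuous queries described above; the window classes are assumptions made for this example, not the engine's actual model.

```python
class SourceWindow:
    """Entry point of the directed graph; forwards published events downstream."""
    def __init__(self, downstream):
        self.downstream = downstream
    def publish(self, event):
        for window in self.downstream:
            window.process(event)

class FilterWindow:
    """Derived window that passes only events matching a predicate."""
    def __init__(self, predicate, downstream):
        self.predicate, self.downstream = predicate, downstream
    def process(self, event):
        if self.predicate(event):
            for window in self.downstream:
                window.process(event)

class AggregateWindow:
    """Derived window that maintains a continuously updated result."""
    def __init__(self):
        self.count, self.total = 0, 0.0
    def process(self, event):
        self.count += 1
        self.total += event["value"]

aggregate = AggregateWindow()
query = SourceWindow([FilterWindow(lambda e: e["value"] > 0, [aggregate])])
for v in (3.0, -1.0, 5.0):            # events published into the source window
    query.publish({"value": v})
print(aggregate.total / aggregate.count)   # 4.0
```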
In an operation906, a publish/subscribe (pub/sub) capability is initialized for ESPE800. In an illustrative embodiment, a pub/sub capability is initialized for each project of the one or more projects802. To initialize and enable pub/sub capability for ESPE800, a port number may be provided. Pub/sub clients can use a host name of an ESP device running the ESPE and the port number to establish pub/sub connections to ESPE800. FIG.10illustrates an ESP system1000interfacing between publishing device1022and event subscribing devices1024a-c, according to embodiments of the present technology. ESP system1000may include ESP device or subsystem851, event publishing device1022, an event subscribing device A1024a, an event subscribing device B1024b, and an event subscribing device C1024c. Input event streams are output to ESP device851by publishing device1022. In alternative embodiments, the input event streams may be created by a plurality of publishing devices. The plurality of publishing devices further may publish event streams to other ESP devices. The one or more continuous queries instantiated by ESPE800may analyze and process the input event streams to form output event streams output to event subscribing device A1024a, event subscribing device B1024b, and event subscribing device C1024c. ESP system1000may include a greater or a fewer number of event subscribing devices. Publish-subscribe is a message-oriented interaction paradigm based on indirect addressing. Processed data recipients specify their interest in receiving information from ESPE800by subscribing to specific classes of events, while information sources publish events to ESPE800without directly addressing the receiving parties. ESPE800coordinates the interactions and processes the data. In some cases, the data source receives confirmation that the published information has been received by a data recipient. A publish/subscribe API may be described as a library that enables an event publisher, such as publishing device1022, to publish event streams into ESPE800or an event subscriber, such as event subscribing device A1024a, event subscribing device B1024b, and event subscribing device C1024c, to subscribe to event streams from ESPE800. For illustration, one or more publish/subscribe APIs may be defined. Using the publish/subscribe API, an event publishing application may publish event streams into a running event stream processor project source window of ESPE800, and the event subscription application may subscribe to an event stream processor project source window of ESPE800. The publish/subscribe API provides cross-platform connectivity and endianness compatibility between the ESP application and other networked applications, such as event publishing applications instantiated at publishing device1022, and event subscription applications instantiated at one or more of event subscribing device A1024a, event subscribing device B1024b, and event subscribing device C1024c. Referring back toFIG.9, operation906initializes the publish/subscribe capability of ESPE800. In an operation908, the one or more projects802are started. The one or more started projects may run in the background on an ESP device. In an operation910, an event block object is received from one or more computing devices of the event publishing device1022. ESP subsystem800may include a publishing client1002, ESPE800, a subscribing client A1004, a subscribing client B1006, and a subscribing client C1008.
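The indirect addressing at the heart of publish/subscribe can be shown with a minimal in-process sketch. This is not the publish/subscribe API described above; the PubSubBroker class, the topic name, and the host/port endpoint are hypothetical, and they only illustrate that subscribers register interest in a class of events while publishers never address recipients directly.

from collections import defaultdict
from typing import Callable, Dict, List

class PubSubBroker:
    def __init__(self, host: str, port: int) -> None:
        self.endpoint = f"{host}:{port}"   # pub/sub clients would connect here
        self.subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(callback)   # recipient declares interest

    def publish(self, topic: str, event_block: dict) -> None:
        for callback in self.subscribers[topic]:   # publisher never names recipients
            callback(event_block)

broker = PubSubBroker(host="esp-device.example.com", port=55555)   # hypothetical endpoint
broker.subscribe("sensor-events", lambda block: print("subscriber A got", block))
broker.subscribe("sensor-events", lambda block: print("subscriber B got", block))
broker.publish("sensor-events", {"id": 1, "events": [{"temp": 21.5}]})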
Publishing client1002may be started by an event publishing application executing at publishing device1022using the publish/subscribe API. Subscribing client A1004may be started by an event subscription application A, executing at event subscribing device A1024ausing the publish/subscribe API. Subscribing client B1006may be started by an event subscription application B executing at event subscribing device B1024busing the publish/subscribe API. Subscribing client C1008may be started by an event subscription application C executing at event subscribing device C1024cusing the publish/subscribe API. An event block object containing one or more event objects is injected into a source window of the one or more source windows806from an instance of an event publishing application on event publishing device1022. The event block object may be generated, for example, by the event publishing application and may be received by publishing client1002. A unique ID may be maintained as the event block object is passed between the one or more source windows806and/or the one or more derived windows808of ESPE800, and to subscribing client A1004, subscribing client B1006, and subscribing client C1008and to event subscription device A1024a, event subscription device B1024b, and event subscription device C1024c. Publishing client1002may further generate and include a unique embedded transaction ID in the event block object as the event block object is processed by a continuous query, as well as the unique ID that publishing device1022assigned to the event block object. In an operation912, the event block object is processed through the one or more continuous queries804. In an operation914, the processed event block object is output to one or more computing devices of the event subscribing devices1024a-c. For example, subscribing client A1004, subscribing client B1006, and subscribing client C1008may send the received event block object to event subscription device A1024a, event subscription device B1024b, and event subscription device C1024c, respectively. ESPE800maintains the event block containership aspect of the received event blocks from when the event block is published into a source window and works its way through the directed graph defined by the one or more continuous queries804with the various event translations before being output to subscribers. Subscribers can correlate a group of subscribed events back to a group of published events by comparing the unique ID of the event block object that a publisher, such as publishing device1022, attached to the event block object with the event block ID received by the subscriber. In an operation916, a determination is made concerning whether or not processing is stopped. If processing is not stopped, processing continues in operation910to continue receiving the one or more event streams containing event block objects from, for example, the one or more network devices. If processing is stopped, processing continues in an operation918. In operation918, the started projects are stopped. In operation920, the ESPE is shut down. As noted, in some embodiments, big data is processed for an analytics project after the data is received and stored. In other embodiments, distributed applications process continuously flowing data in real-time from distributed sources by applying queries to the data before distributing the data to geographically distributed recipients.
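Returning to the unique-ID correlation described earlier in this passage, the sketch below shows how a subscriber might match a processed event block back to the block originally published. The EventBlock class, the in-memory registry of published blocks, and the callback names are hypothetical stand-ins, not part of the publish/subscribe API.

from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical event block carrying the publisher-assigned unique ID that is
# preserved as the block moves through source and derived windows.
@dataclass
class EventBlock:
    block_id: int
    events: List[dict] = field(default_factory=list)

published: Dict[int, EventBlock] = {}

def publish(block: EventBlock) -> None:
    published[block.block_id] = block          # remember what was published

def on_subscriber_receive(processed: EventBlock) -> None:
    # Correlate the processed block back to the original publish by unique ID.
    original = published.get(processed.block_id)
    if original is not None:
        print(f"block {processed.block_id}: {len(original.events)} events in, "
              f"{len(processed.events)} events out")

publish(EventBlock(42, [{"temp": 21.5}, {"temp": 22.0}]))
on_subscriber_receive(EventBlock(42, [{"temp_avg": 21.75}]))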
As noted, an event stream processing engine (ESPE) may continuously apply the queries to the data as it is received and determine which entities receive the processed data. This allows for large amounts of data being received and/or collected in a variety of environments to be processed and distributed in real time. For example, as shown with respect toFIG.2, data may be collected from network devices that may include devices within the internet of things, such as devices within a home automation network. However, such data may be collected from a variety of different resources in a variety of different environments. In any such situation, embodiments of the present technology allow for real-time processing of such data. Aspects of the current disclosure provide technical solutions to technical problems, such as computing problems that arise when an ESP device fails which results in a complete service interruption and potentially significant data loss. The data loss can be catastrophic when the streamed data is supporting mission critical operations such as those in support of an ongoing manufacturing or drilling operation. An embodiment of an ESP system achieves a rapid and seamless failover of ESPE running at the plurality of ESP devices without service interruption or data loss, thus significantly improving the reliability of an operational system that relies on the live or real-time processing of the data streams. The event publishing systems, the event subscribing systems, and each ESPE not executing at a failed ESP device are not aware of or affected by the failed ESP device. The ESP system may include thousands of event publishing systems and event subscribing systems. The ESP system keeps the failover logic and awareness within the boundaries of out-messaging network connector and out-messaging network device. In one example embodiment, a system is provided to support a failover when processing event stream processing (ESP) event blocks. The system includes, but is not limited to, an out-messaging network device and a computing device. The computing device includes, but is not limited to, a processor and a computer-readable medium operably coupled to the processor. The processor is configured to execute an ESP engine (ESPE). The computer-readable medium has instructions stored thereon that, when executed by the processor, cause the computing device to support the failover. An event block object is received from the ESPE that includes a unique identifier. A first status of the computing device as active or standby is determined. When the first status is active, a second status of the computing device as newly active or not newly active is determined. Newly active is determined when the computing device is switched from a standby status to an active status. When the second status is newly active, a last published event block object identifier that uniquely identifies a last published event block object is determined. A next event block object is selected from a non-transitory computer-readable medium accessible by the computing device. The next event block object has an event block object identifier that is greater than the determined last published event block object identifier. The selected next event block object is published to an out-messaging network device. When the second status of the computing device is not newly active, the received event block object is published to the out-messaging network device.
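A minimal sketch of this status-driven failover decision follows; the standby case, noted at the start of the next paragraph, is included for completeness. The function names, the in-memory stores standing in for the non-transitory medium, and the out_message helper are hypothetical; the patent describes the behavior, not this code.

from typing import Dict, List, Optional

stored_blocks: Dict[int, dict] = {}      # stand-in for the non-transitory medium
published_ids: List[int] = []            # IDs already sent to the out-messaging device

def out_message(block_id: int, block: dict) -> None:
    published_ids.append(block_id)       # hypothetical out-messaging network publish
    print("published block", block_id)

def handle_block(block_id: int, block: dict, status: str, newly_active: bool) -> None:
    """Decide what to do with a received event block based on failover status."""
    if status == "standby":
        stored_blocks[block_id] = block              # buffer until promoted to active
        return
    if newly_active:
        # Just switched from standby to active: resume after the last published block.
        last_published: Optional[int] = max(published_ids, default=None)
        candidates = sorted(i for i in stored_blocks
                            if last_published is None or i > last_published)
        if candidates:
            next_id = candidates[0]                  # next block with a greater ID
            out_message(next_id, stored_blocks[next_id])
    else:
        out_message(block_id, block)                 # normal active operation

handle_block(7, {"events": []}, status="standby", newly_active=False)   # stored only
handle_block(8, {"events": []}, status="active", newly_active=True)     # resumes at block 7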
When the first status of the computing device is standby, the received event block object is stored in the non-transitory computer-readable medium. FIG.11is a flow chart of an example of a process for generating and using a machine-learning model according to some aspects. Machine learning is a branch of artificial intelligence that relates to mathematical models that can learn from, categorize, and make predictions about data. Such mathematical models, which can be referred to as machine-learning models, can classify input data among two or more classes; cluster input data among two or more groups; predict a result based on input data; identify patterns or trends in input data; identify a distribution of input data in a space; or any combination of these. Examples of machine-learning models can include (i) neural networks; (ii) decision trees, such as classification trees and regression trees; (iii) classifiers, such as Naïve Bayes classifiers, logistic regression classifiers, ridge regression classifiers, random forest classifiers, least absolute shrinkage and selection operator (LASSO) classifiers, and support vector machines; (iv) clusterers, such as k-means clusterers, mean-shift clusterers, and spectral clusterers; (v) factorizers, such as factorization machines, principal component analyzers and kernel principal component analyzers; and (vi) ensembles or other combinations of machine-learning models. In some examples, neural networks can include deep neural networks, feed-forward neural networks, recurrent neural networks, convolutional neural networks, radial basis function (RBF) neural networks, echo state neural networks, long short-term memory neural networks, bi-directional recurrent neural networks, gated neural networks, hierarchical recurrent neural networks, stochastic neural networks, modular neural networks, spiking neural networks, dynamic neural networks, cascading neural networks, neuro-fuzzy neural networks, or any combination of these. Different machine-learning models may be used interchangeably to perform a task. Examples of tasks that can be performed at least partially using machine-learning models include various types of scoring; bioinformatics; cheminformatics; software engineering; fraud detection; customer segmentation; generating online recommendations; adaptive websites; determining customer lifetime value; search engines; placing advertisements in real time or near real time; classifying DNA sequences; affective computing; performing natural language processing and understanding; object recognition and computer vision; robotic locomotion; playing games; optimization and metaheuristics; detecting network intrusions; medical diagnosis and monitoring; or predicting when an asset, such as a machine, will need maintenance. Any number and combination of tools can be used to create machine-learning models. Examples of tools for creating and managing machine-learning models can include SAS® Enterprise Miner, SAS® Rapid Predictive Modeler, SAS® Model Manager, SAS Cloud Analytic Services (CAS)®, and SAS Viya®, all of which are by SAS Institute Inc. of Cary, North Carolina. Machine-learning models can be constructed through an at least partially automated (e.g., with little or no human involvement) process called training. During training, input data can be iteratively supplied to a machine-learning model to enable the machine-learning model to identify patterns related to the input data or to identify relationships between the input data and output data.
With training, the machine-learning model can be transformed from an untrained state to a trained state. Input data can be split into one or more training sets and one or more validation sets, and the training process may be repeated multiple times. The splitting may follow a k-fold cross-validation rule, a leave-one-out rule, a leave-p-out rule, or a holdout rule. An overview of training and using a machine-learning model is described below with respect to the flow chart ofFIG.11. In block1102, training data is received. In some examples, the training data is received from a remote database or a local database, constructed from various subsets of data, or input by a user. The training data can be used in its raw form for training a machine-learning model or pre-processed into another form, which can then be used for training the machine-learning model. For example, the raw form of the training data can be smoothed, truncated, aggregated, clustered, or otherwise manipulated into another form, which can then be used for training the machine-learning model. In block1104, a machine-learning model is trained using the training data. The machine-learning model can be trained in a supervised, unsupervised, or semi-supervised manner. In supervised training, each input in the training data is correlated to a desired output. This desired output may be a scalar, a vector, or a different type of data structure such as text or an image. This may enable the machine-learning model to learn a mapping between the inputs and desired outputs. In unsupervised training, the training data includes inputs, but not desired outputs, so that the machine-learning model has to find structure in the inputs on its own. In semi-supervised training, only some of the inputs in the training data are correlated to desired outputs. In block1106, the machine-learning model is evaluated. For example, an evaluation dataset can be obtained via user input or from a database. The evaluation dataset can include inputs correlated to desired outputs. The inputs can be provided to the machine-learning model and the outputs from the machine-learning model can be compared to the desired outputs. If the outputs from the machine-learning model closely correspond with the desired outputs, the machine-learning model may have a high degree of accuracy. For example, if 90% or more of the outputs from the machine-learning model are the same as the desired outputs in the evaluation dataset, the machine-learning model may have a high degree of accuracy. Otherwise, the machine-learning model may have a low degree of accuracy. The 90% number is an example only. A realistic and desirable accuracy percentage is dependent on the problem and the data. In some examples, if, at1108, the machine-learning model has an inadequate degree of accuracy for a particular task, the process can return to block1104, where the machine-learning model can be further trained using additional training data or otherwise modified to improve accuracy. However, if, at1108, the machine-learning model has an adequate degree of accuracy for the particular task, the process can continue to block1110. In block1110, new data is received. In some examples, the new data is received from a remote database or a local database, constructed from various subsets of data, or input by a user. The new data may be unknown to the machine-learning model. For example, the machine-learning model may not have previously processed or analyzed the new data.
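A compact sketch of the train/evaluate/decide loop of blocks 1102 through 1110 is shown below, using scikit-learn in place of the SAS tools named above. The generated dataset, the logistic regression model, and the 90% threshold (the example figure from the text) are assumptions made only for illustration.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in training data split into a training set and an evaluation set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
accuracy, attempts = 0.0, 0
while accuracy < 0.90 and attempts < 5:          # block 1108: adequate accuracy?
    model.fit(X_train, y_train)                  # block 1104: train (or further train)
    accuracy = accuracy_score(y_eval, model.predict(X_eval))   # block 1106: evaluate
    attempts += 1
    # In a real workflow, additional training data or model changes would be
    # supplied here before looping back to block 1104.

print(f"evaluation accuracy after {attempts} attempt(s): {accuracy:.2f}")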
In block1112, the trained machine-learning model is used to analyze the new data and provide a result. For example, the new data can be provided as input to the trained machine-learning model. The trained machine-learning model can analyze the new data and provide a result that includes a classification of the new data into a particular class, a clustering of the new data into a particular group, a prediction based on the new data, or any combination of these. In block1114, the result is post-processed. For example, the result can be added to, multiplied with, or otherwise combined with other data as part of a job. As another example, the result can be transformed from a first format, such as a time series format, into another format, such as a count series format. Any number and combination of operations can be performed on the result during post-processing. A more specific example of a machine-learning model is the neural network1200shown inFIG.12. The neural network1200is represented as multiple layers of neurons1208that can exchange data between one another via connections1255that may be selectively instantiated thereamong. The layers include an input layer1202for receiving input data provided at inputs1222, one or more hidden layers1204, and an output layer1206for providing a result at outputs1277. The hidden layer(s)1204are referred to as hidden because they may not be directly observable or have their inputs or outputs directly accessible during the normal functioning of the neural network1200. Although the neural network1200is shown as having a specific number of layers and neurons for exemplary purposes, the neural network1200can have any number and combination of layers, and each layer can have any number and combination of neurons. The neurons1208and connections1255thereamong may have numeric weights, which can be tuned during training of the neural network1200. For example, training data can be provided to at least the inputs1222to the input layer1202of the neural network1200, and the neural network1200can use the training data to tune one or more numeric weights of the neural network1200. In some examples, the neural network1200can be trained using backpropagation. Backpropagation can include determining a gradient of a particular numeric weight based on a difference between an actual output of the neural network1200at the outputs1277and a desired output of the neural network1200. Based on the gradient, one or more numeric weights of the neural network1200can be updated to reduce the difference therebetween, thereby increasing the accuracy of the neural network1200. This process can be repeated multiple times to train the neural network1200. For example, this process can be repeated hundreds or thousands of times to train the neural network1200. In some examples, the neural network1200is a feed-forward neural network. In a feed-forward neural network, the connections1255are instantiated and/or weighted so that every neuron1208only propagates an output value to a subsequent layer of the neural network1200. For example, data may only move one direction (forward) from one neuron1208to the next neuron1208in a feed-forward neural network. Such a “forward” direction may be defined as proceeding from the input layer1202through the one or more hidden layers1204, and toward the output layer1206. In other examples, the neural network1200may be a recurrent neural network. 
A recurrent neural network can include one or more feedback loops among the connections1255, thereby allowing data to propagate in both forward and backward directions through the neural network1200. Such a "backward" direction may be defined as proceeding in the opposite direction of forward, such as from the output layer1206through the one or more hidden layers1204, and toward the input layer1202. This can allow for information to persist within the recurrent neural network. For example, a recurrent neural network can determine an output based at least partially on information that the recurrent neural network has seen before, giving the recurrent neural network the ability to use previous input to inform the output. In some examples, the neural network1200operates by receiving a vector of numbers from one layer; transforming the vector of numbers into a new vector of numbers using a matrix of numeric weights, a nonlinearity, or both; and providing the new vector of numbers to a subsequent layer ("subsequent" in the sense of moving "forward") of the neural network1200. Each subsequent layer of the neural network1200can repeat this process until the neural network1200outputs a final result at the outputs1277of the output layer1206. For example, the neural network1200can receive a vector of numbers at the inputs1222of the input layer1202. The neural network1200can multiply the vector of numbers by a matrix of numeric weights to determine a weighted vector. The matrix of numeric weights can be tuned during the training of the neural network1200. The neural network1200can transform the weighted vector using a nonlinearity, such as a sigmoid function or the hyperbolic tangent. In some examples, the nonlinearity can include a rectified linear unit, which can be expressed using the equation y=max(x, 0) where y is the output and x is an input value from the weighted vector. The transformed output can be supplied to a subsequent layer (e.g., a hidden layer1204) of the neural network1200. The subsequent layer of the neural network1200can receive the transformed output, multiply the transformed output by a matrix of numeric weights, apply a nonlinearity, and provide the result to yet another layer of the neural network1200(e.g., another, subsequent, hidden layer1204). This process continues until the neural network1200outputs a final result at the outputs1277of the output layer1206. As also depicted inFIG.12, the neural network1200may be implemented either through the execution of the instructions of one or more routines1244by central processing units (CPUs), or through the use of one or more neuromorphic devices1250that incorporate a set of memristors (or other similar components) that each function to implement one of the neurons1208in hardware. Where multiple neuromorphic devices1250are used, they may be interconnected in a depth-wise manner to enable implementing neural networks with greater quantities of layers, and/or in a width-wise manner to enable implementing neural networks having greater quantities of neurons1208per layer. The neuromorphic device1250may incorporate a storage interface1299by which neural network configuration data1293that is descriptive of various parameters and hyperparameters of the neural network1200may be stored and/or retrieved. More specifically, the neural network configuration data1293may include such parameters as weighting and/or biasing values derived through the training of the neural network1200, as has been described.
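The layer-by-layer transformation described above can be reduced to a few lines: each layer multiplies the incoming vector by a weight matrix and applies a nonlinearity, here the rectified linear unit y = max(x, 0). The layer sizes and the random weights below are stand-ins for values that training would normally tune; this is an illustrative forward pass, not the network of FIG.12.

import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)),    # input layer (4 inputs) -> hidden layer (8 neurons)
           rng.standard_normal((8, 3))]    # hidden layer -> output layer (3 outputs)

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)              # y = max(x, 0)

def forward(inputs: np.ndarray) -> np.ndarray:
    vector = inputs
    for w in weights:                      # "forward": input -> hidden -> output
        vector = relu(vector @ w)          # weighted vector, then nonlinearity
    return vector

print(forward(np.array([0.2, -0.5, 1.0, 0.3])))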
Alternatively or additionally, the neural network configuration data1293may include such hyperparameters as the manner in which the neurons1208are to be interconnected (e.g., feed-forward or recurrent), the trigger function to be implemented within the neurons1208, the quantity of layers and/or the overall quantity of the neurons1208. The neural network configuration data1293may provide such information for more than one neuromorphic device1250where multiple ones have been interconnected to support larger neural networks. Other examples of the present disclosure may include any number and combination of machine-learning models having any number and combination of characteristics. The machine-learning model(s) can be trained in a supervised, semi-supervised, or unsupervised manner, or any combination of these. The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system400discussed above. Implementing some examples of the present disclosure at least in part by using machine-learning models can reduce the total number of processing iterations, time, memory, electrical power, or any combination of these consumed by a computing device when analyzing data. For example, a neural network may more readily identify patterns in data than other approaches. This may enable the neural network to analyze the data using fewer processing cycles and less memory than other approaches, while obtaining a similar or greater level of accuracy. Some machine-learning approaches may be more efficiently and speedily executed and processed with machine-learning specific processors (e.g., not a generic CPU). Such processors may also provide an energy savings when compared to generic CPUs. For example, some of these processors can include a graphical processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), an artificial intelligence (AI) accelerator, a neural computing core, a neural computing engine, a neural processing unit, a purpose-built chip architecture for deep learning, and/or some other machine-learning specific processor that implements a machine learning approach or one or more neural networks using semiconductor (e.g., silicon (Si), gallium arsenide(GaAs)) devices. These processors may also be employed in heterogeneous computing architectures with a number of and/or a variety of different types of cores, engines, nodes, and/or layers to achieve various energy efficiencies, processing speed improvements, data communication speed improvements, and/or data efficiency targets and improvements throughout various parts of the system when compared to a homogeneous computing architecture that employs CPUs for general purpose computing. FIG.13illustrates various aspects of the use of containers1336as a mechanism to allocate processing, storage and/or other resources of a processing system1300to the performance of various analyses. More specifically, in a processing system1300that includes one or more node devices1330(e.g., the aforedescribed grid system400), the processing, storage and/or other resources of each node device1330may be allocated through the instantiation and/or maintenance of multiple containers1336within the node devices1330to support the performance(s) of one or more analyses. 
As each container1336is instantiated, predetermined amounts of processing, storage and/or other resources may be allocated thereto as part of creating an execution environment therein in which one or more executable routines1334may be executed to cause the performance of part or all of each analysis that is requested to be performed. It may be that at least a subset of the containers1336are each allocated a similar combination and amounts of resources so that each is of a similar configuration with a similar range of capabilities, and therefore, are interchangeable. This may be done in embodiments in which it is desired to have at least such a subset of the containers1336already instantiated prior to the receipt of requests to perform analyses, and thus, prior to the specific resource requirements of each of those analyses being known. Alternatively or additionally, it may be that at least a subset of the containers1336are not instantiated until after the processing system1300receives requests to perform analyses where each request may include indications of the resources required for one of those analyses. Such information concerning resource requirements may then be used to guide the selection of resources and/or the amount of each resource allocated to each such container1336. As a result, it may be that one or more of the containers1336are caused to have somewhat specialized configurations such that there may be differing types of containers to support the performance of different analyses and/or different portions of analyses. It may be that the entirety of the logic of a requested analysis is implemented within a single executable routine1334. In such embodiments, it may be that the entirety of that analysis is performed within a single container1336as that single executable routine1334is executed therein. However, it may be that such a single executable routine1334, when executed, is at least intended to cause the instantiation of multiple instances of itself that are intended to be executed at least partially in parallel. This may result in the execution of multiple instances of such an executable routine1334within a single container1336and/or across multiple containers1336. Alternatively or additionally, it may be that the logic of a requested analysis is implemented with multiple differing executable routines1334. In such embodiments, it may be that at least a subset of such differing executable routines1334are executed within a single container1336. However, it may be that the execution of at least a subset of such differing executable routines1334is distributed across multiple containers1336. Where an executable routine1334of an analysis is under development, and/or is under scrutiny to confirm its functionality, it may be that the container1336within which that executable routine1334is to be executed is additionally configured to assist in limiting and/or monitoring aspects of the functionality of that executable routine1334. More specifically, the execution environment provided by such a container1336may be configured to enforce limitations on accesses that are allowed to be made to memory and/or I/O addresses to control what storage locations and/or I/O devices may be accessible to that executable routine1334.
Such limitations may be derived based on comments within the programming code of the executable routine1334and/or other information that describes what functionality the executable routine1334is expected to have, including what memory and/or I/O accesses are expected to be made when the executable routine1334is executed. Then, when the executable routine1334is executed within such a container1336, the accesses that are attempted to be made by the executable routine1334may be monitored to identify any behavior that deviates from what is expected. Where the possibility exists that different executable routines1334may be written in different programming languages, it may be that different subsets of containers1336are configured to support different programming languages. In such embodiments, it may be that each executable routine1334is analyzed to identify what programming language it is written in, and then what container1336is assigned to support the execution of that executable routine1334may be at least partially based on the identified programming language. Where the possibility exists that a single requested analysis may be based on the execution of multiple executable routines1334that may each be written in a different programming language, it may be that at least a subset of the containers1336are configured to support the performance of various data structure and/or data format conversion operations to enable a data object output by one executable routine1334written in one programming language to be accepted as an input to another executable routine1334written in another programming language. As depicted, at least a subset of the containers1336may be instantiated within one or more VMs1331that may be instantiated within one or more node devices1330. Thus, in some embodiments, it may be that the processing, storage and/or other resources of at least one node device1330may be partially allocated through the instantiation of one or more VMs1331, and then in turn, may be further allocated within at least one VM1331through the instantiation of one or more containers1336. In some embodiments, it may be that such a nested allocation of resources may be carried out to effect an allocation of resources based on two differing criteria. By way of example, it may be that the instantiation of VMs1331is used to allocate the resources of a node device1330to multiple users or groups of users in accordance with any of a variety of service agreements by which amounts of processing, storage and/or other resources are paid for each such user or group of users. Then, within each VM1331or set of VMs1331that is allocated to a particular user or group of users, containers1336may be allocated to distribute the resources allocated to each VM1331among various analyses that are requested to be performed by that particular user or group of users. As depicted, where the processing system1300includes more than one node device1330, the processing system1300may also include at least one control device1350within which one or more control routines1354may be executed to control various aspects of the use of the node device(s)1330to perform requested analyses. By way of example, it may be that at least one control routine1354implements logic to control the allocation of the processing, storage and/or other resources of each node device1330to each VM1331and/or container1336that is instantiated therein.
Thus, it may be the control device(s)1350that effects a nested allocation of resources, such as the aforedescribed example allocation of resources based on two differing criteria. As also depicted, the processing system1300may also include one or more distinct requesting devices1370from which requests to perform analyses may be received by the control device(s)1350. Thus, and by way of example, it may be that at least one control routine1354implements logic to monitor for the receipt of requests from authorized users and/or groups of users for various analyses to be performed using the processing, storage and/or other resources of the node device(s)1330of the processing system1300. The control device(s)1350may receive indications of the availability of resources, the status of the performances of analyses that are already underway, and/or still other status information from the node device(s)1330in response to polling, at a recurring interval of time, and/or in response to the occurrence of various preselected events. More specifically, the control device(s)1350may receive indications of status for each container1336, each VM1331and/or each node device1330. At least one control routine1354may implement logic that may use such information to select container(s)1336, VM(s)1331and/or node device(s)1330that are to be used in the execution of the executable routine(s)1334associated with each requested analysis. As further depicted, in some embodiments, the one or more control routines1354may be executed within one or more containers1356and/or within one or more VMs1351that may be instantiated within the one or more control devices1350. It may be that multiple instances of one or more varieties of control routine1354may be executed within separate containers1356, within separate VMs1351and/or within separate control devices1350to better enable parallelized control over parallel performances of requested analyses, to provide improved redundancy against failures for such control functions, and/or to separate differing ones of the control routines1354that perform different functions. By way of example, it may be that multiple instances of a first variety of control routine1354that communicate with the requesting device(s)1370are executed in a first set of containers1356instantiated within a first VM1351, while multiple instances of a second variety of control routine1354that control the allocation of resources of the node device(s)1330are executed in a second set of containers1356instantiated within a second VM1351. It may be that the control of the allocation of resources for performing requested analyses may include deriving an order of performance of portions of each requested analysis based on such factors as data dependencies thereamong, as well as allocating the use of containers1336in a manner that effectuates such a derived order of performance. Where multiple instances of control routine1354are used to control the allocation of resources for performing requested analyses, such as the assignment of individual ones of the containers1336to be used in executing executable routines1334of each of multiple requested analyses, it may be that each requested analysis is assigned to be controlled by just one of the instances of control routine1354. 
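The kind of selection logic a control routine might apply when choosing where to place a container can be sketched as follows. The NodeStatus records, the greedy choice, and the resource figures are hypothetical; the real control device 1350 would obtain status over the network 1399 and may apply far richer scheduling criteria.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical status records of the kind node devices might report.
@dataclass
class NodeStatus:
    name: str
    free_cpus: int
    free_gb: int

def select_node(nodes: List[NodeStatus], cpus: int, gb: int) -> Optional[NodeStatus]:
    """Pick a node with enough free resources for one container of a requested analysis."""
    candidates = [n for n in nodes if n.free_cpus >= cpus and n.free_gb >= gb]
    if not candidates:
        return None
    best = max(candidates, key=lambda n: (n.free_cpus, n.free_gb))   # simple greedy choice
    best.free_cpus -= cpus          # reserve the resources for the new container
    best.free_gb -= gb
    return best

nodes = [NodeStatus("node-1", free_cpus=2, free_gb=8), NodeStatus("node-2", free_cpus=8, free_gb=32)]
print(select_node(nodes, cpus=4, gb=16).name)    # node-2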
Such assignment of each requested analysis to a single instance of control routine1354may be done as part of treating each requested analysis as one or more "ACID transactions" that each have the four properties of atomicity, consistency, isolation and durability such that a single instance of control routine1354is given full control over the entirety of each such transaction to better ensure that each such transaction is either entirely performed or entirely not performed. As will be familiar to those skilled in the art, allowing partial performances to occur may cause cache incoherencies and/or data corruption issues. As additionally depicted, the control device(s)1350may communicate with the requesting device(s)1370and with the node device(s)1330through portions of a network1399extending thereamong. Again, such a network as the depicted network1399may be based on any of a variety of wired and/or wireless technologies, and may employ any of a variety of protocols by which commands, status, data and/or still other varieties of information may be exchanged. It may be that one or more instances of a control routine1354cause the instantiation and maintenance of a web portal or other variety of portal that is based on any of a variety of communication protocols, etc. (e.g., a RESTful API). Through such a portal, requests for the performance of various analyses may be received from requesting device(s)1370, and/or the results of such requested analyses may be provided thereto. Alternatively or additionally, it may be that one or more instances of a control routine1354cause the instantiation and maintenance of a message passing interface and/or message queues. Through such an interface and/or queues, individual containers1336may each be assigned to execute at least one executable routine1334associated with a requested analysis to cause the performance of at least a portion of that analysis. Although not specifically depicted, it may be that at least one control routine1354may include logic to implement a form of management of the containers1336based on the Kubernetes container management platform promulgated by Cloud Native Computing Foundation of San Francisco, CA, USA. In such embodiments, containers1336in which executable routines1334of requested analyses are executed may be instantiated within "pods" (not specifically shown) in which other containers may also be instantiated for the execution of other supporting routines. Such supporting routines may cooperate with control routine(s)1354to implement a communications protocol with the control device(s)1350via the network1399(e.g., a message passing interface, one or more message queues, etc.). Alternatively or additionally, such supporting routines may serve to provide access to one or more storage repositories (not specifically shown) in which at least data objects may be stored for use in performing the requested analyses. FIGS.14A,14B,14C,14D,14E and14F, together, illustrate two different example embodiments of a processing system2000and framework for the performance of multiple operations to convert speech to text and/or to derive insights from such text. Each of these two processing systems2000incorporates one or more storage devices2100that may form a storage grid2001, one or more node devices2300that may form of a node device grid2003, at least one control device2500and/or at least one requesting device2700, all coupled by a network2999. However, aspects of the manner in which the devices2100,2300,2500and/or2700are used to perform these operations differ between these two embodiments.
More specifically,FIGS.14A-Care block diagrams of various aspects of an example embodiment of a distributed processing system2000in which, for each speech data set3100and/or for each text data set3700, the parallel processing of one or more operations is effected through the use of multiple processors2350and/or cores2351of processors2350across multiple node devices2300.FIGS.14D-Fare block diagrams of various aspects of an alternate example of a distributed processing system2000in which, for each speech data set3100and/or each text data set3700, parallel processing of various operations is effected through the use of multiple threads2454across one or more processors2350and/or cores2351of processor(s)2350within a single one of the node devices2300. For both embodiments of the distributed processing system2000ofFIGS.14A-Cand ofFIGS.14D-F, the storage device(s)2100may store one or more speech data sets3100in which speech audio may be stored in any of a variety of digital audio storage formats. Where there are multiple storage devices2100, at least a subset of the one or more speech data sets3100may be stored in a distributed manner in which different portions thereof are stored within different ones of the storage devices2100. As will be explained in greater detail, in support of the performance of pre-processing operations, of speech-to-text processing operations and/or of text analytics post-processing operations, a speech data set3100may be divided into data chunks3110that each represent a chunk of the speech audio of the speech data set3100, and/or may be divided into data segments3140that each represent a speech segment of that speech audio. Those data chunks3110and/or those data segments3140may then be provided to either a single node device2300or multiple ones of the node devices2300, depending on which of the distributed processing systems2000ofFIGS.14A-Cor14D-F is implemented. The storage device(s)2100may also store one or more corpus data sets3400that each represent a language model implemented as a corpus of a particular language, and/or one or more text data sets3700that each represent a transcript of speech audio that may each have been originally stored as a speech data set3100. As with the one or more speech data sets3100, where there are multiple storage devices2100, at least a subset of the one or more corpus data sets3400, and/or at least a subset of the one or more text data sets3700, may be stored in a distributed manner in which different portions thereof are stored within different ones of the storage devices2100. In support of distributed speech-to-text processing operations, and/or in support of distributed text analytics post-processing operations, multiple copies of the entirety of a corpus data set3400may be provided to either multiple node devices2300of the distributed processing system ofFIGS.14A-C, or multiple threads2454of a single one of the node devices2300of the distributed processing system ofFIGS.14D-F. Thus, in support of such operations, the devices2100,2300,2500and/or2700may exchange such portions of a speech data set3100, may exchange copies of a corpus data set3400, and/or may exchange other information concerning speech audio pre-processing operations, speech-to-text conversion and/or text analyses through the network2999. 
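The division of a speech data set into chunks and the distribution of those chunks among node devices can be illustrated with a simple sketch. The fixed chunk length, the round-robin assignment, and the node names are assumptions for illustration only; as noted above, the actual division may instead follow detected speech segments and the actual assignment is coordinated by the control device.

from typing import Dict, List

def split_into_chunks(samples: List[int], chunk_len: int) -> List[List[int]]:
    """Divide decoded audio samples into fixed-length data chunks."""
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]

def assign_round_robin(chunks: List[List[int]], nodes: List[str]) -> Dict[str, List[List[int]]]:
    """Hand out chunks to node devices in round-robin order."""
    assignment: Dict[str, List[List[int]]] = {node: [] for node in nodes}
    for index, chunk in enumerate(chunks):
        assignment[nodes[index % len(nodes)]].append(chunk)
    return assignment

samples = list(range(100))                        # stand-in for decoded audio samples
chunks = split_into_chunks(samples, chunk_len=30)
print({node: len(parts) for node, parts in assign_round_robin(chunks, ["node-A", "node-B"]).items()})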
In various embodiments, the network2999may be a single network that may extend within a single building or other relatively limited area, a combination of connected networks that may extend a considerable distance, and/or may include the Internet. Thus, the network2999may be based on any of a variety (or combination) of communications technologies by which communications may be effected, including without limitation, wired technologies employing electrically and/or optically conductive cabling, and wireless technologies employing infrared, radio frequency (RF) or other forms of wireless transmission. Each of the speech data sets3100may be any of a variety of types of digital data representation of any of a variety of types of speech audio. Such representations of speech audio may include a series of amplitude values of one or more audio channels of any of a variety of bit widths (e.g., 8-bit, 12-bit, 16-bit, 20-bit or 24-bit), captured at any of a variety of sampling rates (e.g., 44.1 kHz, 48 kHz, 88.2 kHz or 96 kHz), and stored in any of a variety of widely used compressed or uncompressed audio data formats (e.g., MP3 (Motion Picture Experts Group layer3), WAV (Waveform Audio), PCM (Pulse-Code Modulation), FLAC (Free Lossless Audio Codec), Dolby Digital or TrueHD of Dolby Laboratories of San Francisco, California, USA, or THX Ultra2 or Select2 of THX Ltd. of San Francisco, California, USA). In some embodiments, the speech data set3100may include other data beyond speech audio, such as corresponding video, corresponding still images (e.g., a corresponding slide show of still images), alternate corresponding speech audio in a different language, etc. In some of such embodiments, the speech data set3100may be any of a variety of types of "container" format or other data format that supports the provision of a multimedia or other combined audio and video presentation (e.g., MP4 of the International Organization for Standardization of Geneva, Switzerland). The speech audio that is so represented within each speech data set3100may include any of a variety of types of speech made up of words that are spoken by one or more speakers, including and not limited to, telephone and/or radio conversations (e.g., telephone service calls, or air traffic control communications), telephone messages or other forms of voice mail, audio from in-person and/or remote conferences, lecture speech, podcasts, audio tracks from entertainment programs that include speech audio (e.g., audio from movies or from musical performances), verbal narrations of stories and/or of events in progress (e.g., narrations of sports events or other news events), and/or verbal commands to local electronic devices and/or to servers providing online services, etc. To be clear, the term "speaker" as used herein to refer to source(s) of the speech audio that is represented by the speech data set(s)3100is envisioned as referring to talking people (human beings). As will be explained in greater detail, various characteristics of the speech sounds produced by the vocal tracts of each such person (along with the language(s) they speak and/or the accent(s) they speak with) may be relied upon in identifying sentence pauses and/or in identifying individual speakers.
However, it should be noted that, in some embodiments, one or more speakers of speech audio represented by a speech data set3100may be a machine-based speaker (e.g., a computer or other electronic device employing text-to-speech synthesizer components to generate synthesized speech sounds). Alternatively or additionally, it may be that one or more speakers of speech audio represented by a speech data set3100may be a non-human animal that may have learned to generate human speech sounds (e.g., a parrot or a great ape). At least a subset of the speech data sets3100stored by the one or more storage devices2100may each represent a stored recording of speech audio that was fully captured at an earlier time. Thus, such speech data set(s)3100may represent speech audio that may have been recorded either relatively recently (e.g., within recent minutes or hours), or long ago (e.g., weeks, months or years earlier). Alternatively or additionally, at least another subset of the speech data sets3100may each represent just a stored portion of speech audio that is still in the process of being captured. Thus, such speech data set(s)3100may serve, at least temporarily, as buffer(s) of portions of ongoing speech audio that have already been captured, with more portions thereof still in the process of being captured. It is envisioned that at least a subset of the speech data sets3100may be sufficiently large in size such that storage and/or processing of the entirety thereof within a single device may be deemed to be at least impractical, if not impossible. Therefore, to facilitate storage and/or processing of such larger speech data sets3100in a distributed manner across multiple devices, each of such larger speech data sets3100may be divided into multiple portions that may be distributed among multiple storage devices2100and/or among multiple node devices2300. In some embodiments, multiple ones of the storage devices2100may be operated together (e.g., as a network-attached drive array, etc.) primarily for the purpose of persistently storing data, such as the one or more speech data sets3100. In such embodiments, the multiple storage devices2100may be capable of exchanging the entirety of a relatively large speech data set3100with multiple node devices2300in a set of data transfers of portions thereof (e.g., data chunks3110thereof, or data segments3140thereof) performed at least partially in parallel through the network2999, and such transfers may be coordinated by the control device2500. In some embodiments, processor(s) of the one or more storage devices2100may each independently implement a local file system by which at least relatively small speech data sets3100may each be stored entirely within a single one of the storage devices2100. Alternatively or additionally, multiple ones of the storage devices2100may cooperate through the network2999to implement a distributed file system to store larger speech data sets3100as multiple portions in a distributed manner across multiple ones of the storage devices2100. As still another alternative, it may be that one or more of the storage devices2100store a combination of whole speech data sets3100that are of relatively small data size such that they are able to be stored entirely within a single storage device2100, and a portion of at least one speech data set3100that is too large in data size to be able to be stored entirely within any single one of the storage devices2100.
Referring more specifically toFIGS.14A-C, and the embodiment of distributed processing system2000depicted therein, each of the multiple node devices2300may incorporate one or more processors2350, one or more neuromorphic devices2355, a storage2360, and/or a network interface2390to couple each of the node devices2300to the network2999. The processor(s)2350may incorporate multiple processing cores2351and/or other features to support the execution of multiple executable routines and/or multiple instances of executable routine(s) across multiple execution threads. The storage2360may store control routines2310,2340and/or2370; one or more data chunks3110; one or more data segments3140; and/or a corpus data set3400. Each of the control routines2310,2340and2370may incorporate a sequence of instructions operative on the processor(s)2350to implement logic to perform various functions. In executing the control routine2310, the processor(s)2350of each of the node devices2300may be caused to perform various pre-processing operations, such as normalization of the digital audio storage format in which the chunk of speech audio within each data chunk3110is stored, speaker diarization to identify which speaker(s) spoke which portions of the speech audio of the speech data set3100, and/or determining the manner in which a speech data set3100is to be divided into data segments3140thereof as input to speech-to-text processing operations. In executing the control routine2340, the processor(s)2350of each of the node devices2300may be caused to perform various speech-to-text processing operations, such as feature detection to identify acoustic features within the speech segment of each data segment3140, using multiple instances of an acoustic model to identify likely graphemes, and/or using multiple instances of an n-gram language model (stored as a corpus data set3400) to assist in identifying likely words to generate a transcript of the speech audio of the speech data set3100, which may then be stored within the one or more storage devices2100as a corresponding text data set3700. In executing the control routine2370, the processor(s)2350of each of the node devices2300may be caused to perform various post-processing operations, such as text analytics to derive various insights concerning the contents of speech audio stored as a speech data set3100, and/or the generation of various visualizations for presenting such insights. Where such visualizations are generated by the node devices2300(and/or by the control device2500), such visualizations may be stored as part of (or in a manner that accompanies) the text metadata3779. However, where such visualizations are to be subsequently generated by the requesting device2700, such generation of such visualizations may be based on the text metadata3779. In performing at least a subset of pre-processing operations, at least a subset of speech-to-text processing operations and/or at least a subset of post-processing operations, the processor(s)2350of multiple ones of the node devices2300may be caused to perform such operations at least partially in parallel for a single speech data set3100and/or a single text data set3700. As has been explained, this may be at least partially due to the size of a speech data set3100. Alternatively or additionally, this may be at least partially due to a need or desire to increase the speed and/or efficiency with which one or more of such operations are performed, regardless of the size of a speech data set3100.
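The staged flow just described, from feature detection through an acoustic model to a language model that yields a transcript, can be pictured with placeholder functions. Every function below is a toy stand-in: a real implementation would compute acoustic features and apply trained acoustic and n-gram language models rather than these trivial rules, and the pipeline here is only meant to show how a data segment passes through the stages.

from typing import List

def detect_features(segment_samples: List[float]) -> List[float]:
    return segment_samples                     # real acoustic features would be derived here

def acoustic_model(features: List[float]) -> List[str]:
    return ["h", "i"]                          # likely graphemes per frame (toy output)

def language_model(graphemes: List[str]) -> str:
    return "".join(graphemes)                  # in reality, a beam search over an n-gram corpus

def segment_to_text(segment_samples: List[float]) -> str:
    return language_model(acoustic_model(detect_features(segment_samples)))

transcript = " ".join(segment_to_text(seg) for seg in [[0.1, 0.2], [0.0, -0.1]])
print(transcript)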
Regardless of the motivation for such parallelism, such at least partially parallel performances of such operations may be coordinated by the control device2500through the network2999. As will also be explained in greater detail, at least a subset of the pre-processing operations, speech-to-text processing operations and/or post-processing operations may employ neural network(s). In embodiments of the node device(s)2300that incorporate the neuromorphic device(s)2355, the neuromorphic device(s)2355may be employed to implement one or more of such neural networks in hardware, and the processor(s)2350may be caused by one or more of the control routine(s)2310,2340and/or2370to configure the neuromorphic device(s)2355to do so. However, in embodiments of the node device(s)2300that do not incorporate the neuromorphic device(s)2355, the processor(s)2350may, as an alternative, be caused to execute routine(s) to implement such neural networks in software. The control device2500may incorporate one or more processors2550, a storage2560, and/or a network interface2590to couple the control device2500to the network2999. The processor(s)2550may incorporate multiple processing cores2551and/or other features to support the execution of multiple executable routines and/or multiple instances of executable routine(s) across multiple execution threads. The storage2560may store control routines2510,2540and/or2570, a resource routine2640, configuration data2335, a text data set3700and/or text metadata3779. Each of the control routines2510,2540and2570, and/or the resource routine2640may incorporate a sequence of instructions operative on the processor(s)2550to implement logic to perform various functions. In executing the resource routine2640, processor(s)2550of the control device2500may be caused to operate the network interface2590to monitor the availability of processing, storage and/or other resources of each of the node devices2300. The processor(s)2550of the control device2500may then use such information to determine what combination of node devices2300is to be employed in performing pre-processing operations and/or speech-to-text processing operations with each speech data set3100, and/or what combination of node devices2300is to be employed in performing post-processing operations with each text data set3700. In executing the control routine2510, it may be that processor(s)2550of the control device2500are caused to operate the network interface2590to coordinate, via the network2999, at least a subset of the pre-processing operations performed, at least partially in parallel, by processors2350of multiple ones of the node devices2300for each speech data set3100as a result of executing corresponding instances of the control routine2310. More specifically, the processor(s)2550may be caused to coordinate the performances of multiple pause detection techniques and/or speaker diarization techniques across multiple ones of the node devices2300.
Alternatively or additionally, as pause sets of indications of likely sentence pauses are derived from the performance of each pause detection technique, and/or as changes sets of indications of likely speaker changes are derived from the performance of at least one speaker diarization technique, it may be that processor(s)2550of the control device2500are caused by the control routine2510to use the pause sets and/or change sets received from node devices2300to derive a segmentation set3119of indications of the manner in which the speech audio of a speech data set3100is to be divided into segments. In executing the control routine2540, it may be that processor(s)2550of the control device2500are caused to operate the network interface2590to coordinate, via the network2999, at least a subset of the speech-to-text processing operations performed, at least partially in parallel, by processors2350of multiple ones of the node devices2300for each speech data set3100as a result of executing corresponding instances of the control routine2340. More specifically, the processor(s)2550may be caused to coordinate the generation of data segments3140(or of sets of data segments3140) among the node devices2300based on the indications of likely sentence pauses within the segmentation set3119derived earlier during pre-processing. Alternatively or additionally, the processor(s)2550may be caused to coordinate the detection of acoustic features within the speech segment of each of the data segments3140, and/or to coordinate the use of multiple instances of an acoustic model to identify likely graphemes across multiple ones of the node devices2300. Alternatively or additionally, as sets of probability distributions of likely graphemes are derived from such use of acoustic models, it may be that the processor(s)2550of the control device2500are caused by the control routine2540to use the sets of probability distributions received from multiple node devices2300as inputs to coordinate beam searches of multiple instances of an n-gram language model across multiple node devices2300(at least partially in parallel) to generate the transcript of the speech audio of the speech data set3100. More specifically, and turning momentarily to a highly simplified example presented inFIG.14C, where the storage device(s)2100store at least a speech data set3100xand another speech data set3100y, it may be that the processor(s)2550of the control device2500are caused by execution of the resource routine2640to monitor the availability of processing, storage and/or other resources of each of the node devices2300. As will be familiar to those skilled in the art, each of the node devices2300may recurringly provide indications of such status to the control device2500via the network2999. The processor(s)2550of the control device2500may use such information to identify a combination of node devices2300(labeled as2300x1and2300x2inFIG.14C) as having sufficient available resources as to be available for use in performing pre-processing and/or speech-to-text processing operations to generate a text data set3700from the speech data set3100x, and may assign those node devices2300to do so. 
Similarly, the processor(s)2550of the control device2500may use such information to identify another combination of node devices2300(labeled as2300y1and2300y2inFIG.14C) as having sufficient available resources as to be available for use in performing pre-processing and/or speech-to-text processing operations to generate another text data set3700from the speech data set3100y, and may assign those node devices to do so. It should be noted that, although the set of node devices2300x1and2300x2assigned to the speech data set3100x, and the set of node devices2300y1and2300y2assigned to the speech data set3100y, are depicted as not including any node devices2300that belong to both sets, it is entirely possible that there may be one or more node devices2300that are identified as having sufficient available resources as to allow their inclusion within more than one of such sets of node devices2300. As will be familiar to those skilled in the art, of the various pre-processing and processing operations that may be performed as part of converting speech to text, beam searches through a corpus that implements a language model have often been found to consume the greatest quantities of processing and/or storage resources, such that the performance of beam searches are often found to be a persistent bottleneck in performances of speech-to-text conversion. In view of this, as also depicted inFIG.14C, and as will be explained in greater detail, it is envisioned that it may be performances of beam searches through a corpus data set3400that may be the one type of speech-to-text operation that would be most useful to arrange to be performed in parallel across multiple node devices2300. In view of this, and as depicted, it may be that multiple instances of at least a beam search component2347of the control routine2340may be executed at least partially in parallel by multiple processors2350across multiple node devices2300for both the speech data set3100xand the speech data set3100y. Returning toFIGS.14A-C, in executing the control routine2570, the processor(s)2550of the control device2500may be caused to operate the network interface2590to coordinate, via the network2999, at least a subset of post-processing operations performed, at least partially in parallel, by processors2350of multiple ones of the node device2300for each text data set3700as a result of executing corresponding instances of the control routine2370. More specifically, the processors2550may be caused to coordinate the distributed use of various forms of text analytics among the node devices2300to derive insights concerning the speech audio of the speech data set3100. Referring more specifically toFIGS.14D-F, and the embodiment of distributed processing system2000depicted therein, each of the multiple node devices2300may incorporate one or more processors2350, one or more neuromorphic devices2355, a storage2360, and/or a network interface2390to couple each of the node devices2300to the network2999. The processor(s)2350may incorporate multiple processing cores2351and/or other features to support the execution of multiple executable routines and/or multiple instances of executable routine(s) across multiple threads2454. The storage2360may store control routines2310,2340and/or2370; a resource routine2440; one or more data chunks3110; one or more data segments3140; a corpus data set3400; a text data set3700and/or text metadata3779. 
Each of the control routines2310,2340and2370, and/or the resource routine2440may incorporate a sequence of instructions operative on the processor(s)2350to implement logic to perform various functions. In executing the resource routine2440, processor(s)2350of a node device2300may be caused to monitor the availability of processing resources (including threads2454), storage resources and/or other resources within that node device2300. The processor(s)2350may then use such information to determine what quantity of threads2454is to be employed in performing pre-processing operations and/or speech-to-text processing operations with each speech data set3100, and/or what quantity of threads2454is to be employed in performing post-processing operations with each text data set3700. In executing the control routine2310, the processor(s)2350of a node device2300may be caused to perform various pre-processing operations using one or more threads2454, such as normalization of the digital audio storage format in which the chunk of speech audio within each data chunk3110is stored, speaker diarization to identify which speaker(s) spoke which portions of the speech audio of the speech data set3100, and/or determining the manner in which a speech data set3100is to be divided into data segments3140thereof as input to speech-to-text processing operations. In executing the control routine2340, the processor(s)2350of a node device2300may be caused to perform various speech-to-text processing operations using one or more threads2454, such as feature detection to identify acoustic features within the speech segment of each data segment3140, using multiple instances of an acoustic model to identify likely graphemes, and/or using multiple instances of an n-gram language model (stored as a corpus data set3400) to assist in identifying likely words to generate a transcript of the speech audio of the speech data set3100, which may then be stored within the one or more storage devices2100as a corresponding text data set3700. In executing the control routine2370, the processor(s)2350of a node device2300may be caused to perform various post-processing operations using one or more threads2454, such as text analytics to derive various insights concerning the contents of speech audio stored as a speech data set3100, and/or the generation of various visualizations for presenting such insights. Where such visualizations are generated by the node device2300, such visualizations may be stored as part of (or in a manner that accompanies) the text metadata3779. However, where such visualizations are to be subsequently generated by the requesting device2700, such generation of such visualizations may be based on the text metadata3779. In performing at least a subset of pre-processing operations, at least a subset of speech-to-text processing operations and/or at least a subset of post-processing operations, the processor(s)2350of a node device2300may be caused to perform such operations at least partially in parallel across multiple threads2454for a single speech data set3100and/or a single text data set3700. As will be explained in greater detail, this may be at least partially due to experimental observations that the performance of particular operations, such as beam searches in speech-to-text processing operations, tends to become a bottleneck, while other operations are able to be performed significantly more quickly.
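As a simplified, non-limiting sketch of such thread-level parallelism, the following Python example distributes hypothetical beam-search work items among worker threads through a queue so that they are consumed in temporal order as threads become available; the work items and the beam_search() placeholder are illustrative assumptions only.

# Minimal sketch of running the beam-search stage across multiple threads within
# one node, while feeding work items through a queue in temporal order.
# The work items and the beam_search() body are hypothetical placeholders.
import queue
import threading

def beam_search(prob_dist_set):
    # Placeholder: a real implementation would search candidate n-grams
    # in the corpus and return the most probable next word.
    return f"word_for_{prob_dist_set}"

def worker(work_queue: queue.Queue, results: dict, lock: threading.Lock):
    while True:
        item = work_queue.get()
        if item is None:          # sentinel: no more work
            work_queue.task_done()
            break
        index, prob_dist_set = item
        word = beam_search(prob_dist_set)
        with lock:
            results[index] = word  # keep temporal order by index
        work_queue.task_done()

def transcribe(prob_dist_sets, num_threads=4):
    work_queue: queue.Queue = queue.Queue()
    results: dict = {}
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=(work_queue, results, lock))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for i, pds in enumerate(prob_dist_sets):   # enqueue in temporal order
        work_queue.put((i, pds))
    for _ in threads:
        work_queue.put(None)                   # one sentinel per worker
    work_queue.join()
    for t in threads:
        t.join()
    return [results[i] for i in sorted(results)]

if __name__ == "__main__":
    print(transcribe(["pds0", "pds1", "pds2", "pds3"]))

A real implementation would perform the corpus searches described elsewhere herein within beam_search(), and could size the pool of worker threads according to the quantity of threads2454determined to be available.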
Again, as will also be explained in greater detail, at least a subset of the pre-processing operations, speech-to-text processing operations and/or post-processing operations may employ neural network(s). In embodiments of the node device(s)2300that incorporate the neuromorphic device(s)2355, the neuromorphic device(s)2355may be employed to implement one or more of such neural networks in hardware, and the processor(s)2350may be caused by one or more of the control routine(s)2310,2340and/or2370to configure the neuromorphic device(s)2355to do so. However, in embodiments of the node device(s)2300that do not incorporate the neuromorphic device(s)2355, the processor(s)2350may, as an alternative, be caused to execute routine(s) to implement such neural networks in software. The control device2500may incorporate one or more processors2550, a storage2560, and/or a network interface2590to couple the control device2500to the network2999. The processor(s)2550may incorporate multiple processing cores2551and/or other features to support the execution of multiple executable routines and/or multiple instances of executable routine(s) across multiple execution threads. The storage2560may store a resource routine2640and/or configuration data2335. The resource routine2640may incorporate a sequence of instructions operative on the processor(s)2550to implement logic to perform various functions. In executing the resource routine2640, processor(s)2550of the control device2500may be caused to operate the network interface2590to monitor the availability of processing, storage and/or other resources of each of the node devices2300. In so doing, the processor(s)2550of the control device2500and the processor(s)2350of each of the node devices2300may be caused by execution of the resource routine2640and of the resource routine2440, respectively, to cooperate to provide the processor(s)2550with indications of whether there are sufficient processing resources available within each node device2300to support the allocation of an appropriate quantity of threads2454to the performance of pre-processing operations and/or speech-to-text processing operations with another speech data set3100, and/or to support the allocation of an appropriate quantity of threads2454to the performance of post-processing operations with another text data set3700. The processor(s)2550may then use such information to determine availability of node devices2300to perform pre-processing operations and/or speech-to-text processing operations with a speech data set3100, and/or availability to perform post-processing operations with a text data set3700. The processor(s)2550of the control device2500may also use such information to determine which single node device2300to assign to perform such pre-processing and/or processing operations with each speech data set3100, and/or which single node device2300to assign to perform such post-processing operations with each text data set3700. More specifically, and turning momentarily to a highly simplified example presented inFIG.14F, where the storage device(s)2100store at least three speech data sets3100x,3100yand3100z, it may be that the processor(s)2550of the control device2500are caused by execution of the resource routine2640to monitor the availability of processing resources (such as threads2454), storage resources and/or other resources of each of the node devices2300.
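A simplified, non-limiting sketch of such availability tracking and single-node assignment is given below in Python; the reported thread counts, the assignment policy (choose the node reporting the most free threads) and all identifiers are assumptions made only for this illustration.

# Minimal sketch of thread-availability reporting and node assignment.
# The resource numbers and the assignment policy are illustrative assumptions only.
from typing import Dict, Optional

class NodeStatus:
    def __init__(self, node_id: str, available_threads: int):
        self.node_id = node_id
        self.available_threads = available_threads

def assign_node(statuses: Dict[str, NodeStatus], threads_needed: int) -> Optional[str]:
    """Pick a single node reporting enough free threads for one speech data set."""
    candidates = [s for s in statuses.values() if s.available_threads >= threads_needed]
    if not candidates:
        return None
    best = max(candidates, key=lambda s: s.available_threads)
    best.available_threads -= threads_needed   # reserve the threads
    return best.node_id

if __name__ == "__main__":
    reported = {
        "node_xy": NodeStatus("node_xy", available_threads=16),
        "node_z": NodeStatus("node_z", available_threads=8),
    }
    # Two speech data sets may land on the same node if it has enough free threads.
    print(assign_node(reported, threads_needed=6))   # node_xy
    print(assign_node(reported, threads_needed=6))   # node_xy again
    print(assign_node(reported, threads_needed=6))   # node_z

Under such a policy, more than one speech data set may be assigned to the same node device when that node device reports sufficient free threads, which is consistent with the example ofFIG.14F continued below.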
As previously discussed, the processor(s)2350within each of the node devices2300may be caused by execution of the resource routine2440to recurringly provide indications of such status (perhaps as indications of available threads2454) to the control device2500via the network2999. The processor(s)2550of the control device2500may use such information to identify a node device2300(labeled as2300xyinFIG.14F) as having sufficient available resources to support a sufficient quantity of threads2454as to be available for use in performing pre-processing and/or speech-to-text processing operations to generate a text data set3700from the speech data set3100x, and may assign that node device2300xyto do so. Similarly, the processor(s)2550of the control device2500may use such information to determine that the same node device2300xyis also available for use in performing pre-processing and/or speech-to-text processing operations to generate another text data set3700from the speech data set3100y, and may assign that node device2300xyto do so. Also, similarly, the processor(s)2550of the control device2500may use such information to determine that another node device2300(labeled as node device2300zinFIG.14F) is available for use in performing pre-processing and/or speech-to-text processing operations to generate still another text data set3700from the speech data set3100z, and may assign that node device2300zto do so. Again, of the various pre-processing and processing operations that may be performed as part of converting speech to text, beam searches through a corpus that implements a language model have often been found to consume the greatest quantities of processing and/or storage resources, such that the performance of beam searches are often found to be a persistent bottleneck in performances of speech-to-text conversion. In view of this, as also depicted inFIG.14F, and as will be explained in greater detail, it is envisioned that it may be performances of beam searches through a corpus data set3400that may be the one type of speech-to-text operation that would be most useful to arrange to be performed in parallel across multiple threads2454. In view of this, and as depicted, it may be that multiple thread pools2450x,2450yand2450zare formed, each made up of multiple threads2454, to enable multiple instances of at least a beam search component2347of the control routine2340to be executed at least partially in parallel for each one of the speech data sets3100x,3100yand3100z, respectively. As depicted, the thread pools2450xand2450yare each formed entirely within the node device2300xy, and the thread pool2450zis formed entirely within the node device2300z. Referring again to both embodiments of the distributed processing system2000ofFIGS.14A-Cand14D-F, the requesting device2700may incorporate one or more of a processor2750, a storage2760, an input device2720, a display2780, and a network interface2790to couple the requesting device2700to the network2999. The storage2760may store a control routine2740, a text data set3700and/or text metadata3779. The control routine2740may incorporate a sequence of instructions operative on the processor2750to implement logic to perform various functions. 
In executing the control routine2740, the processor2750of the requesting device2700may be caused to operate the input device2720and/or the display2780to provide a user interface (UI) by which an operator of the requesting device2700may transmit a request to the control device2500to perform one or more operations that may include speech-to-text conversion of the speech audio represented by a specified one of the speech data sets3100and/or that include the provision of insights concerning the contents of speech audio stored as a specified one of the speech data sets3100. The processor2750may be subsequently caused to similarly provide a UI by which the operator of the requesting device2700is able to view the text of that speech audio upon receipt of its transcript in the form of a text data set3700from the control device2500, and/or is able to view various derived insights concerning the transcript. Again, in some embodiments, such visualizations may have been previously generated and then provided to the requesting device for presentation to convey such insights. Alternatively or additionally, the processor2750may be caused to generate such visualizations from information contained within text metadata3779associated with a text data set3700. FIGS.15A,15B,15C,15D,15E and15F, taken together, illustrate, in greater detail, aspects of one implementation of an end-to-end framework within an embodiment of the distributed processing system2000ofFIGS.14A-Cto provide improved insights into the contents of speech audio. Within this implementation of the end-to-end framework across multiple devices2300and2500, various pieces of information concerning speech audio are routed through multiple processing operations in which data is analyzed and transformed in multiple ways to derive a transcript of the contents of the speech audio, and then to derive insights concerning those contents.FIGS.15A-Billustrates aspects of distributed pre-processing operations that are performed across the control device2500and multiple node devices2300to determine the manner in which speech audio stored as a speech data set3100is to be divided into speech segments (represented as data segments3140), or sets of speech segments3140, for speech-to-text processing operations.FIGS.15C-Dillustrate aspects of distributed speech-to-text processing operations that are performed across the control device2500and multiple node devices2300to generate a transcript (stored as a text data set3700) of what was spoken in the speech audio, including the use of a corpus of a selected language (stored as a corpus data set3400).FIGS.15E-Fillustrate aspects of distributed text analytics post-processing operations that are performed across the control device2500and multiple node devices2300to derive insights (which may be stored as text metadata3379) into the contents of the speech audio and/or to identify transcripts (stored as other text data sets3700) of other related pieces of speech audio. Turning toFIG.15A, a speech data set3100representing speech audio spoken by one or more individuals in a digitally encoded form in storage (e.g., within the storage device(s)2100) may be divided into a set of multiple chunks of the speech audio of equal length, represented as a set of multiple data chunks3110. Such multiple data chunks3110may then be provided to each of multiple node devices2300for pause detection. 
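The division into equal-length chunks just described might be sketched, purely as a non-limiting illustration, as follows; the chunk length in samples is an assumed value rather than one taken from the configuration data2335.

# Minimal sketch of dividing digitized speech audio into equal-length chunks.
# The chunk length is an assumed configuration value, not one from the patent.
from typing import List

def divide_into_chunks(samples: List[float], chunk_len: int) -> List[List[float]]:
    """Split a sequence of audio samples into consecutive chunks of chunk_len samples;
    the final chunk may be shorter if the audio does not divide evenly."""
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]

if __name__ == "__main__":
    audio = [0.0] * 10_000            # stand-in for decoded audio samples
    chunks = divide_into_chunks(audio, chunk_len=1_600)
    print(len(chunks), [len(c) for c in chunks])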
Within each of the multiple node devices2300, a different pause detection technique may be performed to proceed through the multiple chunks of speech audio represented by the multiple data chunks3110to identify the longer pauses that typically occur between sentences. It should be noted that the division of the speech data set3100into the multiple data chunks3110may be necessary to accommodate input data size limitations imposed by one or more of the pause detection techniques. Different components of, and/or different versions of, the control routine2310may be executed within each node device2300of the multiple node devices2300to cause the performance of a different one of the multiple pause detection techniques within each of those node devices2300. As a result, within each of those node devices2300, a different set of likely sentence pauses may be derived. Indications of the separately derived sets of likely sentence pauses may then be provided to the control device2500by each of the multiple node devices2300as a separate pause set3116. Turning toFIG.15B, following the receipt of the multiple pause sets3116, the control device2500may provide copies of the multiple pause sets3116to the at least one node device2300that may perform a speaker diarization technique. Again, just a single speaker diarization technique may be performed in some embodiments, while multiple speaker diarization techniques may be performed in other embodiments. Also in preparation for the performance of at least one speaker diarization technique, the speech data set3100may again be divided into a set of multiple chunks of the speech audio of equal length (again represented as a set of multiple data chunks3110). Such multiple data chunks3110may then be provided to each of the one or more node devices2300that is to perform a speaker diarization technique. Within each node device2300that is to perform a speaker diarization technique, the division of the speech data set3100into multiple data chunks3110may again be necessary to accommodate input data size limitations imposed by a speaker diarization technique. Different components of, and/or different versions of, the control routine2310may be executed within each node device2300of the at least one node device2300that performs a speaker diarization technique to detect instances of a likely change of speaker in the speech audio. As a result, within each node device2300of the at least one node device2300, a different set of likely speaker changes may be derived (although, again, as depicted, it may be that there is just one node device2300that performs a speaker diarization technique, and therefore, just one set of likely speaker changes is derived). Indications of the derived set of likely speaker changes from each speaker diarization technique may then be provided to the control device2500as a separate change set3118. Within the control device2500, the sets of indications of likely sentence pauses from the pause sets3116may be combined in any of a variety of ways to derive a single set of likely sentence pauses. Similarly, if more than one speaker diarization technique was performed, then the sets of indications of likely speaker changes from multiple change sets3118may be similarly combined into a single set of likely speaker changes.
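Because the combining may be done in any of a variety of ways, the following Python sketch is offered only as one non-limiting possibility, in which pauses reported by different techniques are matched by their midpoints and retained when their combined (optionally weighted) support exceeds a threshold; the tolerance, weights and threshold are all assumptions of this illustration.

# Minimal sketch of combining pause sets from multiple detection techniques into a
# single set of likely sentence pauses. The tolerance and the weighted-vote rule
# are illustrative assumptions; the described system leaves the combining method open.
from typing import Dict, List, Tuple

Pause = Tuple[float, float]   # (start_seconds, end_seconds)

def combine_pause_sets(pause_sets: Dict[str, List[Pause]],
                       weights: Dict[str, float],
                       tolerance: float = 0.25,
                       min_score: float = 1.0) -> List[Pause]:
    scored: List[Tuple[Pause, float]] = []
    for technique, pauses in pause_sets.items():
        w = weights.get(technique, 1.0)
        for pause in pauses:
            mid = (pause[0] + pause[1]) / 2
            for i, (existing, score) in enumerate(scored):
                if abs(((existing[0] + existing[1]) / 2) - mid) <= tolerance:
                    scored[i] = (existing, score + w)   # same pause seen by another technique
                    break
            else:
                scored.append((pause, w))
    return sorted(p for p, score in scored if score >= min_score)

if __name__ == "__main__":
    pause_sets = {"apa": [(3.1, 3.6), (8.0, 8.4)], "ctc": [(3.2, 3.5), (12.0, 12.3)]}
    weights = {"apa": 0.6, "ctc": 0.8}
    print(combine_pause_sets(pause_sets, weights))

An analogous combination could be applied to the change sets3118where more than one speaker diarization technique is performed.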
The single set of likely sentence pauses and the single set of likely speaker changes may then both be used to generate a single segmentation set3119of indications of the manner in which the speech data set3100is to be divided into the segments that will be used as inputs to the subsequent speech-to-text processing operations to be performed. Turning toFIG.15C, following such pre-processing operations as are described just above, the same speech data set3100representing the same speech audio may be divided, again, but now into a set of multiple speech segments that are each represented by a data segment3140. Unlike the division into multiple chunks of speech audio that did not in any way take into account the content of the speech audio, the division of the speech audio into multiple speech segments may be based on the indications of where sentence pauses and/or speaker changes have been deemed to be likely to be present within the speech audio, as indicated by the segmentation set3119. Also unlike the provision of the same full set of multiple data chunks3110to each of the multiple node devices2300in which a different segmentation technique was performed, each of multiple node devices2300may be provided with one or more different ones of the data segments3140. Within each of the multiple node devices2300that are provided with at least one of the data segments3140, execution of the control routine2340may cause each such provided data segment3140to be divided into multiple data frames3141of equal length. In so doing, the speech segment represented by each of such data segments3140is divided into multiple speech frames that are each represented by one of the data frames3141. It should be noted that, since each of the data segments3140is likely to be of a different size (as a result of each of the speech segments represented thereby likely being of a different temporal length), the division of each data segment3140into multiple data frames3141may result in different quantities of data frames3141being generated from each data segment3140. Following the division of a data segment3140into multiple data frames3141within each of the multiple node devices2300, each of those data frames3141may then be subjected to feature detection in which the speech frame represented by each is analyzed to identify any occurrences of one or more selected acoustic features therein. For each data frame3141, a corresponding feature vector3142may be generated that includes indications of when each identified acoustic feature was found to have occurred within the corresponding speech frame. Each feature vector3142of the resulting set of feature vectors3142corresponding to the set of data frames3141of a single segment3140may then be provided as an input to an acoustic model that is caused to be implemented within each of the multiple node devices2300by further execution of the control routine2340. The acoustic model may map each occurrence of a particular acoustic feature, or each occurrence of a particular sequence of acoustic features, to one or more graphemes that may have been pronounced and/or to a pause that may have occurred. More specifically, for each feature vector3142, the acoustic model may generate one or more probability distributions of one or more graphemes (which may correspond to one or more phonemes that may be represented by corresponding text character(s)) that were pronounced, and/or one or more pauses that occurred within the corresponding speech frame.
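As a simplified, non-limiting sketch of the framing, feature-detection and acoustic-model steps just described, the Python example below divides a segment into equal-length frames, computes two toy features per frame, and maps them to a toy probability distribution over a handful of graphemes and a pause symbol; the features, the symbol set and the "model" are illustrative assumptions, standing in for the trained acoustic model described herein.

# Minimal sketch of dividing a speech segment into equal-length frames and producing
# a per-frame feature vector and grapheme probability distribution. The feature
# computation and the "acoustic model" here are placeholders, not the actual models.
import math
from typing import Dict, List

def frame_segment(segment: List[float], frame_len: int) -> List[List[float]]:
    return [segment[i:i + frame_len] for i in range(0, len(segment), frame_len)]

def feature_vector(frame: List[float]) -> List[float]:
    """Toy features: root-mean-square energy and zero-crossing count."""
    rms = math.sqrt(sum(x * x for x in frame) / max(len(frame), 1))
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return [rms, float(crossings)]

def acoustic_model(features: List[float]) -> Dict[str, float]:
    """Placeholder: map features to a probability distribution over a few graphemes
    and a pause symbol; a real model would be a trained neural network."""
    rms = features[0]
    pause_prob = 0.9 if rms < 0.01 else 0.1
    speech_prob = 1.0 - pause_prob
    return {"<pause>": pause_prob, "a": speech_prob / 2, "t": speech_prob / 2}

if __name__ == "__main__":
    segment = [0.0] * 400 + [0.3, -0.3] * 200        # silence followed by "speech"
    frames = frame_segment(segment, frame_len=160)
    prob_dist_set = [acoustic_model(feature_vector(f)) for f in frames]
    print(len(frames), prob_dist_set[0], prob_dist_set[-1])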
The probability distributions so derived from all of the feature vectors that correspond to a single speech segment may be assembled together in temporal order to form a single probability distribution set3143that corresponds to that single speech segment. Turning toFIG.15D, each of the probability distribution sets3143, following its generation within a different one of the multiple node devices2300, may then be provided to the control device2500. Also, each of the multiple node devices2300may be provided with a complete copy of a corpus data set3400that includes an n-gram language model. Within the control device2500, execution of the control routine2540may cause the probability distributions of graphemes and/or of pauses within each of the probability distribution sets3143to be analyzed in temporal order to derive a set of up to a pre-selected quantity of candidate words that are each among the words more likely to be the next word that was spoken. Each word of this set of candidate words may then be combined with up to a pre-selected quantity of earlier-identified preceding words to form a corresponding set of candidate n-grams that are to be searched for within the corpus data set3400. The set of candidate n-grams may then be provided to the multiple node devices2300to enable the performance of a beam search through the corpus of the corpus data set3400in a distributed manner across the multiple node devices2300, as will be explained in greater detail. Within each of the multiple node devices2300, in executing the control routine2340, a different subset of the set of candidate n-grams is searched for within the corpus represented by the corpus data set3400, as will also be explained in greater detail. Within each of the multiple node devices2300, as the probability for each candidate n-gram of the subset is retrieved from the corpus of the corpus data set3400as a result of the search, indications of those probabilities may be transmitted back to the control device2500. Within the control device2500, following the receipt of the probabilities for all of the candidate n-grams within the set of candidate n-grams from the node devices2300, the one candidate n-gram within the set that has the highest probability may be identified. In so doing, the corresponding candidate word out of the set of candidate words is selected as being the word that was most likely the next word spoken. That word may then be added to the transcript of the speech audio of the speech data set3100, which may be stored within the control device2500as a text data set3700. Turning toFIG.15E, following the generation of a complete transcript of what was said in the speech audio of the speech data set3100, the transcript may be stored within the one or more storage devices2100as the corresponding text data set3700. The text data set3700may include an identifier of the speech data set3100from which the transcript of the text data set3700was derived. Within the control device2500, in executing the control routine2570, various post-processing analyses may be performed of the text within the transcript to identify such features as the one or more topics that were spoken about, the relative importance of each topic, indications of sentiments, etc.
More specifically, using the transcript of the text data set3700as an input, one or more terms within the transcript (each including one or more words) may be identified as having one or more quantifiable characteristics (e.g., counts of occurrences of each term and/or aggregate counts of multiple terms, degree of relevance of a term within the transcript, degree of strength of positive or negative sentiment about a term, etc.), and/or relational characteristics (e.g., semantic and/or grammatical relationships among terms, whether detected sentiment about a term is positive or negative, etc.). In some embodiments, the entirety of the transcript may be provided to each of multiple ones of the node devices2300to enable each to perform a different post-processing analysis on the entirety of the transcript. As part of one or more of such analyses, sets of n-grams from the transcript may be provided to the multiple node devices2300to be searched for within the corpus data set3400as part of using n-gram probabilities in identifying topics, indications of sentiments about topics, etc. Regardless of the exact types of text analyses that are performed, and regardless of the exact manner in which each text analysis is performed, the various insights that may be derived from such analyses may be assembled as corresponding text metadata3779that may also be stored within the one or more storage devices2100. Turning toFIG.15F, following the derivation of the text metadata3779corresponding to the text data set3700, further execution of the control routine2570may cause the retrieval of text metadata3779corresponding to other text data sets3700that correspond to other speech data sets3100. Such other text metadata3779may be analyzed to identify relationships among words, text chunks, utterances, topics, etc. that may lead to the identification of other text data sets3700generated from other speech data sets3100that may be deemed to be related. In further executing the control routine2570, the control device2500may be caused to provide the text data set3700, the corresponding text metadata3779, and/or text metadata3779of other related speech data set(s)3100and/or text data set(s)3700to the requesting device2700. It may be that the request to provide various insights into what was spoken in the speech audio of the speech data set3100was received by the control device2500from the requesting device2700. In executing the control routine2740, images of the transcript of the text data set3700, various visualizations of aspects of the contents thereof indicated in the corresponding text metadata3779, and/or visualizations of identified relationships to other transcripts of other speech audio may be presented to an operator of the requesting device2700. FIGS.16A,16B,16C,16D,16E and16F, taken together, illustrate, in greater detail, aspects of one implementation of an end-to-end framework within an embodiment of the distributed processing system2000ofFIGS.14D-Fto provide improved insights into the contents of speech audio.
Within this implementation of the end-to-end framework across multiple threads within a single node device2300, various pieces of information concerning speech audio are routed through multiple processing operations in which data is analyzed and transformed in multiple ways to derive a transcript of the contents of the speech audio, and then to derive insights concerning those contents.FIGS.16A-Cillustrate aspects of distributed pre-processing operations that may be performed across multiple threads within a single node device2300to determine the manner in which speech audio stored as a speech data set3100is to be divided into speech segments (represented as data segments3140), or sets of speech segments3140, for speech-to-text processing operations.FIGS.16D-Eillustrate aspects of distributed speech-to-text processing operations that may be performed across multiple threads within a single node device2300to generate a transcript (stored as a text data set3700) of what was spoken in the speech audio, including the use of a corpus of a selected language (stored as a corpus data set3400).FIG.16Fillustrates aspects of distributed text analytics post-processing operations that may be performed across multiple threads within a single node device2300to derive insights (which may be stored as text metadata3379) into the contents of the speech audio and/or to identify transcripts (stored as other text data sets3700) of other related pieces of speech audio. Turning toFIG.16A, a speech data set3100representing speech audio spoken by one or more individuals in a digitally encoded form in storage (e.g., within the storage device(s)2100) may be divided into a set of multiple chunks of the speech audio of equal length, represented as a set of multiple data chunks3110. Such multiple data chunks3110may then be provided to each of one or more threads2454within a single node device2300for pause detection. It may be that within each of the one or more threads2454within a single node device2300, a different pause detection technique may be performed to proceed through the multiple chunks of speech audio represented by the multiple data chunks3110to identify the longer pauses that typically occur between sentences. Again, the division of the speech data set3100into the multiple data chunks3110may be necessary to accommodate input data size limitations imposed by one or more of the pause detection techniques. Different components of, and/or different versions of, the control routine2310may be executed within each of the one or more threads2454to cause the performance of a different one of the multiple pause detection techniques within each of those threads2454. As a result, within each of those threads2454, a different set of likely sentence pauses may be derived. Turning toFIG.16B, the multiple pause sets3116may then be provided to each of one or more threads2545within the same node device2300to perform one or more speaker diarization techniques. Just a single speaker diarization technique may be performed within a single thread2545in some embodiments, while multiple speaker diarization techniques may each be performed within a separate thread2545in other embodiments. Also in preparation for the performance of at least one speaker diarization technique, the speech data set3100may again be divided into a set of multiple chunks of the speech audio of equal length (again represented as a set of multiple data chunks3110).
Such multiple data chunks3110may then be provided to each of the one or more threads2545in which a speaker diarization technique is to be performed. Within each thread2545in which a speaker diarization technique is to be performed, the division of the speech data set3100into multiple data chunks3110may again be necessary to accommodate input data size limitations imposed by a speaker diarization technique. Different components of, and/or different versions of, the control routine2310may be executed within each thread2545of the one or more threads2545in which a speaker diarization technique is performed to detect instances of a likely change of speaker in the speech audio. As a result, within each such thread2545, a different set of likely speaker changes may be derived (although, again, as depicted, it may be that there is just one thread2545in which a speaker diarization technique is performed, and therefore, just one set of likely speaker changes is derived). Turning toFIG.16C, within the same single node device2300, the sets of indications of likely sentence pauses from the pause sets3116may be combined in any of a variety of ways to derive a single set of likely sentence pauses. Similarly, if more than one speaker diarization technique was performed, then the resulting change sets3118of indications of likely speaker changes may be similarly combined into a single set of likely speaker changes. The single set of likely sentence pauses and the single set of likely speaker changes may then both be used to generate a single segmentation set3119of indications of the manner in which the speech data set3100is to be divided into the segments that will be used as inputs to the subsequent speech-to-text processing operations to be performed. Turning toFIG.16D, following such pre-processing operations as are described just above, the same speech data set3100representing the same speech audio may be divided, again, but now into a set of multiple speech segments that are each represented by a data segment3140. Again, unlike the division into multiple chunks of speech audio that did not in any way take into account the content of the speech audio, the division of the speech audio into multiple speech segments may be based on the indications of where sentence pauses and/or speaker changes have been deemed to be likely to be present within the speech audio, as indicated by the segmentation set3119. It may be that all data segments3140are initially provided to a single thread2545within the single node device2300for feature and grapheme detection. Alternatively, it may be that different subsets of the data segments3140are each provided to a different thread2545of multiple threads for at least partially parallel performances of feature and grapheme detection. Within each of such one or more threads2454, execution of the control routine2340may cause each such provided data segment3140to be divided into multiple data frames3141of equal length. In so doing, the speech segment represented by each of such data segments3140is divided into multiple speech frames that are each represented by one of the data frames3141. It should be noted that, since each of the data segments3140is likely to be of a different size (as a result of each of the speech segments represented thereby likely being of a different temporal length), the division of each data segment3140into multiple data frames3141may result in different quantities of data frames3141being generated from each data segment3140.
Following the division of a data segment3140into multiple data frames3141within each of such threads2454, each of those data frames3141may then be subjected to feature detection in which the speech frame represented by each data frame3141is analyzed to identify any occurrences of one or more selected acoustic features therein. For each data frame3141, a corresponding feature vector3142may be generated that includes indications of when each identified acoustic feature was found to have occurred within the corresponding speech frame. Each feature vector3142of the resulting set of feature vectors3142corresponding to the set of data frames3141of a single segment3140may then be provided as an input to an acoustic model that is caused to be implemented within the single node device2300by further execution of the control routine2340. Again, the acoustic model may map each occurrence of a particular acoustic feature, or each occurrence of a particular sequence of acoustic features, to one or more graphemes that may have been pronounced and/or to a pause that may have occurred. Again, for each feature vector3142, the acoustic model may generate one or more probability distributions of one or more graphemes (which may correspond to one or more phonemes that may be represented by corresponding text character(s)) that were pronounced, and/or one or more pauses that occurred within the corresponding speech frame. The probability distributions so derived from all of the feature vectors that correspond to a single speech segment may be assembled together in temporal order to form a single probability distribution set3143that corresponds to that single speech segment. Turning toFIG.16E, the multiple probability distribution sets3143, after being generated all within a single thread2454or across multiple threads2454within the node devices2300, may then be distributed among multiple threads2545. As previously discussed, it is the speech-to-text operations that have been found to consume the greatest amounts of processing resources, especially performances of beam searches. Thus, although the use of multiple threads2454has been discussed above as being potentially used for various pre-processing operations, it is envisioned that multiple threads2454within the single node device2300may be used primarily to enable at least beam searches to be performed at least partially in parallel to alleviate potential bottlenecks arising from the performance of this part of the speech-to-text operations. As will be explained in greater detail, a queue may be instantiated and maintained for use in distributing individual probability distribution sets3143among multiple threads in temporal order as each of those multiple threads become available to accept a probability distribution set3143as an input. Within each of those multiple threads2545, execution of the control routine2340may cause the probability distribution of graphemes and/or of pauses within the probability distribution set3143that is assigned to that thread2454to be analyzed to derive a set of up to a pre-selected quantity of candidate words that are each among the words that are each more likely to be the next word that was spoken. Each word of this set of candidate words may then be combined with up to a pre-selected quantity of earlier-identified preceding words to form a corresponding set of candidate n-grams that are to be searched for within the corpus data set3400. 
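A simplified, non-limiting Python sketch of this candidate-word and candidate-n-gram handling is shown below; the toy word probabilities, the toy 3-gram corpus and all function names are assumptions made only for this illustration, and the selection step simply keeps the candidate whose n-gram has the highest corpus probability.

# Minimal sketch of forming candidate n-grams from candidate next words plus the
# preceding words already in the transcript, looking up their probabilities, and
# keeping the most probable word. The toy corpus and probabilities are assumptions.
from typing import Dict, List, Tuple

def top_candidate_words(prob_dist: Dict[str, float], limit: int) -> List[str]:
    """Keep up to `limit` of the most probable candidate next words."""
    return [w for w, _ in sorted(prob_dist.items(), key=lambda kv: -kv[1])[:limit]]

def form_candidate_ngrams(preceding: List[str], candidates: List[str],
                          n: int) -> List[Tuple[str, ...]]:
    context = tuple(preceding[-(n - 1):])
    return [context + (word,) for word in candidates]

def select_next_word(ngram_probs: Dict[Tuple[str, ...], float],
                     candidate_ngrams: List[Tuple[str, ...]]) -> str:
    best = max(candidate_ngrams, key=lambda g: ngram_probs.get(g, 0.0))
    return best[-1]

if __name__ == "__main__":
    transcript = ["the", "quick", "brown"]
    word_probs = {"fox": 0.4, "box": 0.35, "socks": 0.25}   # from the acoustic model
    ngram_probs = {("quick", "brown", "fox"): 0.02,          # toy 3-gram corpus
                   ("quick", "brown", "box"): 0.001}
    candidates = top_candidate_words(word_probs, limit=3)
    ngrams = form_candidate_ngrams(transcript, candidates, n=3)
    transcript.append(select_next_word(ngram_probs, ngrams))
    print(transcript)   # ['the', 'quick', 'brown', 'fox']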
Beam searches may then be performed through the corpus of the corpus data set3400to retrieve a probability for each candidate n-gram to identify the candidate n-gram within the set that has the highest probability. The corresponding candidate word out of the set of candidate words is then selected as being the word that was most likely the next word spoken. That word may then be added to the transcript of the speech audio of the speech data set3100, which may be stored within the control device2500as a text data set3700. Turning toFIG.16F, following the generation of a complete transcript of what was said in the speech audio of the speech data set3100, the transcript may be stored within the one or more storage devices2100as the corresponding text data set3700. The text data set3700may include an identifier of the speech data set3100from which the transcript of the text data set3700was derived. Following the generation of the corresponding text data set3700, it may be that various post-processing analyses may be performed of the text within the transcript to identify such features as the one or more topics that were spoken about, the relative importance of each topic, indications of sentiments, etc. More specifically, using the transcript of the text data set3700as an input, one or more terms within the transcript (each including one or more words) may be identified as having one or more quantifiable characteristics (e.g., counts of occurrences of each term and/or aggregate counts of multiple terms, degree of relevance of a term within the transcript, degree of strength of positive or negative sentiment about a term, etc.), and/or relational characteristics (e.g., semantic and/or grammatical relationships among terms, whether detected sentiment about a term is positive or negative, etc.). In some embodiments, the entirety of the transcript may be provided to a single node device2300. It may be that the transcript is provided in its entirety to each of multiple threads2454to enable each one of a set of different post-processing analyses to be performed at least partially in parallel on the entirety of the transcript. As part of one or more of such analyses, sets of n-grams from the transcript may be provided to such one or more threads2454to be searched for within the corpus data set3400as part of using n-gram probabilities to identify topics, indications of sentiments about topics, etc. Regardless of the exact types of text analyses that are performed, and regardless of the exact manner in which each text analysis is performed, the various insights that may be derived from such analyses may be assembled as corresponding text metadata3779that may also be stored within the one or more storage devices2100. Again, following the derivation of the text metadata3779corresponding to the text data set3700, the text metadata3779may be analyzed to identify relationships among words, text chunks, utterances, topics, etc. that may lead to the identification of other text data sets3700generated from other speech data sets3100that may be deemed to be related. The text data set3700, the corresponding text metadata3779, and/or text metadata3779of other related speech data set(s)3100and/or text data set(s)3700may be provided to the requesting device2700.
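As a non-limiting illustration of the simplest of such quantifiable characteristics, the Python sketch below derives term counts and a crude sentiment tally from a transcript and assembles them into a small metadata dictionary; the sentiment lexicon and every name in the sketch are assumptions of this illustration and do not correspond to the text analytics actually employed.

# Minimal sketch of deriving simple quantifiable characteristics from a transcript
# (term counts and a crude sentiment tally) and assembling them as metadata.
# The term list and sentiment lexicon are illustrative assumptions only.
from collections import Counter
from typing import Dict

POSITIVE = {"great", "good", "helpful"}
NEGATIVE = {"bad", "slow", "problem"}

def analyze_transcript(transcript: str) -> Dict[str, object]:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(words)
    sentiment = sum(1 for w in words if w in POSITIVE) - sum(1 for w in words if w in NEGATIVE)
    return {
        "term_counts": dict(counts.most_common(5)),
        "sentiment_score": sentiment,
        "word_total": len(words),
    }

if __name__ == "__main__":
    print(analyze_transcript("The support call was great but the response was slow."))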
Again, in executing the control routine2740, images of the transcript of the text data set3700, various visualizations of aspects of the contents thereof indicated in the corresponding text metadata3779, and/or visualizations of identified relationships to other transcripts of other speech audio may be presented to an operator of the requesting device2700. FIGS.17A,17B and17C, taken together, illustrate an example of use of an adaptive peak amplitude (APA) pause detection technique as part of performing pre-processing operations to derive a manner of dividing the speech audio of a speech data set3100into segments (each represented in storage by a data segment3140).FIG.17Aillustrates the initial division of the speech data set3100into data chunks3110athat each represent a chunk of the speech audio of the speech data set3100, and the measurement of peak amplitude levels to derive a threshold amplitude2232.FIG.17Billustrates the use of the threshold amplitude2232to categorize each of the data chunks3110aas either a speech data chunk3110sor a pause data chunk3110p.FIG.17Cillustrates the identification of sets of consecutive pause data chunks3110pthat represent likely sentence pauses for inclusion in a pause set3116aof indications of likely sentence pauses within the speech audio of the speech data set3100. As previously discussed, in the distributed processing system2000depicted inFIGS.14A-C, it may be that, for each speech data set3110, each one of multiple pause detection techniques is assigned to be performed by a different one of the node devices2300. Thus, each one of such assigned node devices2300derives a different pause set3116of indications of likely sentence pauses for subsequent use as one of the inputs for deriving a segmentation set3119of indications of segments into which the speech data set3100is to be divided. Alternatively, and as also previously discussed, in the distributed processing system2000depicted inFIGS.14D-F, it may be that, for each speech data set3110, each of the multiple pause detection techniques is assigned to be performed within a separate one of multiple execution threads2454supported by processor(s)2350of a single node device2300. Thus, each of the multiple pause sets3116of indications of likely sentence pauses would be derived on a different one of those assigned threads2454within the single node device2300. However, as also discussed in reference to the distributed processing system2000ofFIGS.14D-F, it may be that, for each speech data set3110, multiple ones of the pause detection techniques are performed on a single thread2454within a single node device2300, while other operations that consume greater resources (e.g., beam searches) may be performed across multiple threads2454within the same single node device2300. Turning toFIG.17A, in executing a division component2311of the control routine2310, processor(s)2350of a node device2300aallocated for performing this APA pause detection technique, or of a node device2300on which multiple pause detection techniques are performed, may be caused to divide a speech data set3100into multiple data chunks3110a. In so doing, an indication of the length of the speech audio that is to be represented by each data chunk3110amay be retrieved from the configuration data2335in embodiments in which at least the majority of the data chunks3110aare to represent audio of equal length. 
It should be noted that, in some embodiments, the pre-processing operations may also include normalizing the digital format in which the speech audio is stored as a speech data set3100. Thus, it may be, that prior to or as part of dividing the speech audio into chunks, the digital format in which the speech audio is stored as the speech data set3100may be changed to a pre-selected format that specifies one or more of a particular sampling frequency, data width and/or type of data value per sample, a particular type of compression (or no compression), etc. It may be that such a pre-selected format is necessitated for sake of compatibility with one or more components for performing one or more of the pre-processing operations, and/or one or more of the processing operations of the speech-to-text conversion. In executing an amplitude component2312of the control routine2310, processor(s)2350may be caused to analyze each of the data chunks3110ato measure the peak amplitude of the chunk of speech audio present within each. With all of the peak amplitudes across all of the data chunks3110aso measured, a level of amplitude of a preselected percentile of all of the peak amplitudes may be derived and used as a threshold amplitude2232. In so doing, an indication of the preselected percentile may be retrieved from the configuration data2335. As previously discussed, it may be that the multiple pause detection techniques are assigned relative weighting factors that are used in combining the resulting multiple pause sets3116of likely sentence pauses to derive the segmentation set3119of indications of the manner in which the speech data set3100is to be divided to form segments, and it may be that the relative weighting factors are adjusted based on the level of audio noise that is present across the chunks of the speech audio. In such embodiments, and as depicted, it may be that execution of the amplitude component2312also causes the measurement of the level of audio noise in the chunk of speech audio within each of the data chunks3110a, and causes the derivation of an audio noise level3112that is in some way representative of the level of audio noise present within the entire speech audio. In various embodiments, the audio noise level3112may be indicative of the minimum level of audio noise measured across all of the data chunks3110a, an average thereof, and/or of any of a variety of other characteristics of audio noise. Turning toFIG.17B, in executing a categorization component2315of the control routine2310, processor(s)2350may be caused to use the threshold amplitude2232to categorize each of the data chunks3110aas either a speech data chunk3110sor a pause data chunk3110p. More specifically, all of the data chunks3110athat each represent a chunk of speech audio with a measured peak amplitude above the threshold amplitude2232are deemed to be speech data chunks3110s, while all of the data chunks3110athat each represent a chunk of the speech audio with a measured peak amplitude below the threshold amplitude2232are deemed to be pause data chunks3110p. Turning toFIG.17C, in executing a pause identification component2317of the control routine2310, processor(s)2350may be caused to adaptively identify longer pauses defined by larger quantities of consecutive pause data chunks3110pas likely sentence pauses. 
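Before turning to the window-based identification of sentence pauses, the amplitude measurement, percentile thresholding and chunk categorization ofFIGS.17A-Bmight be sketched as follows; this is a minimal, non-limiting illustration in which the percentile is an assumed configuration value.

# Minimal sketch of the amplitude-threshold step: measure each chunk's peak amplitude,
# take a preselected percentile of those peaks as the threshold, and categorize each
# chunk as speech or pause. The percentile value is an assumed configuration setting.
from typing import List, Tuple

def peak_amplitude(chunk: List[float]) -> float:
    return max(abs(s) for s in chunk)

def threshold_from_percentile(peaks: List[float], percentile: float) -> float:
    ordered = sorted(peaks)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[index]

def categorize_chunks(chunks: List[List[float]], percentile: float = 0.2) -> List[Tuple[str, float]]:
    peaks = [peak_amplitude(c) for c in chunks]
    threshold = threshold_from_percentile(peaks, percentile)
    return [("speech" if p > threshold else "pause", p) for p in peaks]

if __name__ == "__main__":
    chunks = [[0.01, -0.02], [0.4, -0.5], [0.02, 0.0], [0.6, -0.3]]
    print(categorize_chunks(chunks))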
More specifically, and starting with the data chunk3110athat represents the temporally earliest chunk of the speech audio of the speech data set3100, a window2236that covers a preselected quantity of temporally consecutive ones of the data chunks3110amay be shifted across the length of the speech audio, starting with the temporally earliest data chunk3110aand proceeding throughout all of the data chunks3110ain temporal order toward the temporally last data chunk3110a. Thus, with the window2236positioned to begin with the earliest data chunk3110a(regardless of whether it is a pause data chunk3110por a speech data chunk3110s), measurements of the lengths of each pause represented by multiple consecutive pause data chunks3110pwithin the window2236(if there are any pauses represented by multiple consecutive pause data chunks3110pwithin the window2236) may be taken to identify the longest pause thereamong. The longest pause that is so identified within the window2236(i.e., the pause represented by the greatest quantity of consecutive pause chunks3110p) may then be deemed likely to be a sentence pause. The window2236may then be shifted away from the earliest data chunk3110aand along the data chunks3110of the speech audio in temporal order so as to cause the window2236to next begin either amidst the just-identified likely sentence pause (e.g., beginning at the midpoint thereof) or just after the just-identified likely sentence pause (e.g., as depicted, immediately after the temporally last data chunk of the consecutive pause data chunks3110pthat define the just-identified likely sentence pause). With the window2236so repositioned, again, measurements of the lengths of each pause represented by multiple consecutive pause data chunks3110pwithin the window2236may be taken to again identify the longest pause thereamong. Again, the longest pause that is so identified within the window (i.e., the pause represented by the greatest quantity of consecutive pause chunks3110pwithin the window2236) may be deemed likely to be a sentence pause. As depicted, this may be repeated until the window2236has been shifted along the entirety of the length of the speech audio (i.e., from the temporally earliest data chunk3110ato the temporally latest data chunk3110a). For each of the pauses that has been deemed a likely sentence pause within the speech audio3100using the APA technique, an indication of that likely sentence pause may be generated and stored as part of the pause set3116a. More precisely, indications of where each likely sentence pause starts and ends within the speech audio may be stored within the pause set3116a, and/or indications of where the midpoint of each likely sentence pause is located within the speech audio and/or its length may be so stored. The manner in which such locations within the speech audio are described may be as amounts of time from the beginning of the speech audio represented by the speech data set3100. In so identifying likely sentence pauses through such use of the window2236, it may be that an indication of what the length of the window2236should be (i.e., how many consecutive data chunks3110ait should span) may be retrieved from the configuration data2335. The length of the window2236may be selected to ensure that there cannot be a distance between the midpoints of any adjacent pair of likely sentence pauses that is greater than a capacity limitation that may be present in subsequent processing operations of the speech-to-text conversion.
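A minimal, non-limiting Python sketch of this window-based procedure follows, operating on the speech/pause labels produced by the categorization step; the window length and labels are assumptions of this illustration, and for simplicity the window is restarted immediately after each identified pause rather than at its midpoint.

# Minimal sketch of the sliding-window step: within each window of consecutive chunks,
# find the longest run of pause chunks and record it as a likely sentence pause, then
# restart the window just after that run. Window length is an assumed configuration value.
from typing import List, Tuple

def longest_pause_run(labels: List[str], start: int, end: int) -> Tuple[int, int]:
    """Return (run_start, run_length) of the longest run of 'pause' labels in [start, end)."""
    best_start, best_len, run_start, run_len = -1, 0, None, 0
    for i in range(start, end):
        if labels[i] == "pause":
            run_start = i if run_len == 0 else run_start
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    return best_start, best_len

def likely_sentence_pauses(labels: List[str], window_len: int) -> List[Tuple[int, int]]:
    pauses, position = [], 0
    while position < len(labels):
        run_start, run_len = longest_pause_run(labels, position, min(position + window_len, len(labels)))
        if run_len == 0:                       # no pause run in this window
            position += window_len
            continue
        pauses.append((run_start, run_start + run_len))   # chunk indices of the likely pause
        position = run_start + run_len         # restart window just after the pause
    return pauses

if __name__ == "__main__":
    labels = ["speech"] * 5 + ["pause"] * 3 + ["speech"] * 6 + ["pause"] * 2 + ["speech"] * 4
    print(likely_sentence_pauses(labels, window_len=10))

The chunk indices returned could readily be converted into start and end times for storage in a pause set, given the chunk length in use.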
Alternatively or additionally, the length of the window2236may be selected to increase the likelihood that a sentence pause will be identified each time the window2236is re-positioned, based on the typical length of sentences in whichever language is used for the speech audio. Further, in some embodiments, it may be that any instances of an adjacent pair of likely sentence pauses that are closer to each other than a predetermined threshold period of time are not permitted. An indication of the length of the predetermined threshold period of time (which may also be expressed as a quantity of consecutive data chunks3110a) may also be retrieved from the configuration data2335. It may be that, wherever such a pair of likely sentence pauses might occur, that an indication of one of the two likely sentence pauses may be dropped from those that are included in the pause set3116a. The selection of which of two such likely sentence pauses is the one to be dropped may be based on which is shorter than the other, and/or may be based on a requirement that the dropping of one or the other should not be allowed to create a distance between any of two of the remaining likely sentence pauses that is greater than the length of the window2236, which may be treated as an upper limit on the distance between any two of the likely sentence pauses. FIGS.18A and18B, taken together, illustrate an example of use of a connectionist temporal classification (CTC) pause detection technique as part of performing pre-processing operations to derive a manner of dividing the same speech audio of the same speech data set3100into segments.FIG.18Aillustrates the initial division of the speech data set3100into data chunks3110cthat each represent a chunk of the speech audio of the speech data set3100, and the provision of those data chunks3110cas an input to an acoustic model neural network2234with CTC output2235.FIG.18Billustrates the use of that acoustic model neural network2234to identify likely sentence pauses for inclusion in a pause set3116cof indications of likely sentence pauses within the speech audio of the speech data set3100. Again, as previously discussed, in the distributed processing system2000depicted inFIGS.14A-C, it may be that, for each speech data set3110, each one of multiple pause detection techniques is assigned to be performed within a different one of the node devices2300. Thus, each one of such assigned node devices2300derives a different pause set3116of indications of likely sentence pauses for subsequent use as one of the inputs for deriving a segmentation set3119of indications of segments into which the speech data set3100is to be divided. Alternatively, and again, as also previously discussed, in the distributed processing system2000depicted inFIGS.14D-F, it may be that, for each speech data set3110, each of the multiple pause detection techniques is assigned to be performed within a separate one of multiple execution threads2454supported by processor(s)2350of a single node device2300. Thus, each of the multiple pause sets3116of indications of likely sentence pauses would be derived on a different one of those assigned threads2454within the single node device2300. 
However, as also discussed in reference to the distributed processing system2000ofFIGS.14D-F, it may be that, for each speech data set3110, multiple ones of the pause detection techniques are performed on a single thread2454within a single node device2300, while other operations that consume greater resources (e.g., beam searches) may be performed across multiple threads2454within the same single node device2300. Turning toFIG.18A, in executing the division component2311of the control routine2310, processor(s)2350of a node device2300callocated for performing this CTC pause detection technique, or of a node device2300on which multiple pause detection techniques are performed, may be caused to divide the same speech data set3100as was featured inFIGS.17A-Cinto multiple data chunks3110c. In so doing, an indication of the length of the speech audio that is to be represented by each data chunk3110cmay be retrieved from the configuration data2335. It should be noted that the data chunks3110cof this CTC pause detection technique may not represent the same length of the speech audio as are represented by the data chunks3110aof the APA pause detection technique ofFIGS.17A-C. Indeed, it is envisioned that the data chunks3110care each likely to represent a greater length of speech audio such that the speech audio represented by a single one of the data chunks3110cmay match the length of the speech audio represented by multiple ones of the data chunks3110a. Again, in some embodiments, the pre-processing of speech audio may include normalizing the digital format in which the speech audio is stored as a speech data set3100. Thus, it may again be that, prior to or as part of dividing the speech audio into chunks, the digital format in which the speech audio is stored may be changed to a pre-selected format that specifies one or more of a particular sampling frequency, data width and/or type of data value per sample, a particular type of compression (or no compression), etc. As will be familiar to those skilled in the art, at least some acoustic models implemented using neural networks (and/or other technologies) may accept indications of detected audio features as input, instead of accepting audio data (e.g., the data chunks3110c) more directly as input. To accommodate the use of such implementations of an acoustic model, execution of the control routine2310may entail execution of a feature detection component2313to analyze the portion of speech audio represented by each data chunk3110cto identify instances of each of a pre-selected set of acoustic features. In so doing, processor(s)2350may be caused to generate a corresponding feature vector3113from each data chunk3110cthat is analyzed. Each feature vector3113may include indications of each acoustic feature that is identified and when it occurred within the speech audio of the corresponding data chunk3110c. In executing a configuration component2314, processor(s)2350may be caused to instantiate and configure an acoustic model neural network2234to implement an acoustic model. As previously discussed, and as depicted, the acoustic model neural network2234incorporates a CTC output2235, thereby augmenting the output of text characters by the acoustic model neural network2234with the output of blank symbols. 
As also previously discussed, in embodiments in which at least a subset of the node device(s)2300include one or more neuromorphic devices2355, the acoustic model neural network2234, along with its CTC output2235, may be instantiated within one or more of the neuromorphic devices2355such that the acoustic model neural network2234may be implemented in hardware. Alternatively, in embodiments that lack the incorporation of neuromorphic devices, it may be that the acoustic model neural network2234is implemented in software. As previously discussed, an acoustic model neural network incorporating a CTC output is normally used to accept indications of acoustic features detected within speech audio, and to output indications of the probabilities of which one or more text characters are likely to correspond to those acoustic features (e.g., probability distributions for text characters). With the addition of the CTC output, the probabilistic indications of likely text characters are augmented with blank symbols that are intended to identify instances where there are likely to be consecutive occurrences of the same text character (e.g., the pair of “l” characters in the word “bell”), despite the absence of an acoustic feature that would specifically indicate such a situation (e.g., no acoustic feature in the pronunciation of the “l” sound in the word “bell” that indicates that there are two consecutive “l” characters therein). Broadly, CTC outputs have been used to aid in temporally aligning a sequence of indications of features that have been observed (e.g., acoustic features in speech sounds, or visual features in handwriting), with a sequence of labels (e.g., text characters, phonemes and/or graphemes) where there may be differences between the density of input observations over a period of time and the density of labels that are output for that same period of time. Such a CTC output has been used to generate blank symbols that may be used as a guide in performing such an alignment, including blank symbols that indicate where there may be multiple ones of the same label that are consecutively output that might otherwise be mistakenly merged into a single instance of that label (as in the above-described situation of a pair of “l” text characters that should not be merged). In this way, such multiple consecutive instances of a label (e.g., of a text character) are able to be associated with what may be a single observation, or a single set of observations, that might otherwise be associated with only one instance of that label, thereby aiding in the proper aligning of the input and output sequences. However, it has been observed (and then confirmed by experimentation) that such an acoustic model neural network with a CTC output may also be useful in identifying sentence pauses. More specifically, it has been observed that, in addition to outputting single blank symbols for such consecutive instances of a text character, such a CTC output also has a tendency to generate relatively long strings of consecutive blank symbols that correspond quite well to where sentence pauses occur. Turning toFIG.18B, in so using the acoustic model neural network2234for the detection of sentence pauses, each data chunk3110cis provided to the acoustic model neural network2234as an input.
In executing the pause identification component2316, processor(s)2350are caused to monitor the CTC output2235for occurrences of strings of consecutive blank symbols.FIG.18Bdepicts an example of three consecutive data chunks3110cthat each represent a different depicted portion of speech audio that together represent the words “Hello” and “Please leave a message” spoken as two separate sentences. Turning to the provision of the first of the three data chunks3110cthat represents the speech sounds for portions of the words “Hello” and “Please” as an input to the acoustic model neural network2234, the output thereof includes the letters therefor, accompanied by instances from the CTC output2235of the blank symbol (indicated inFIG.18Busing the “^” character) separating the corresponding characters. As shown, a single instance of the blank symbol may be output between the two consecutive instances of the “l” character of the word “Hello”, thereby exemplifying the aforedescribed function that the CTC output2235is typically relied upon to perform. However, as also shown, an instance of a relatively long string of consecutive blank symbols is also output by the CTC output2235that corresponds with the sentence pause that occurs between these two words. Turning to the provision of the second of the three data chunks3110cthat represents the speech sounds for another portion of the word “Please” and the entirety of each of the two words “leave” and “a” as input to the acoustic model neural network2234, the output thereof includes the letters therefor, also accompanied by instances from the CTC output2235of the blank symbol separating the corresponding characters. As shown, two instances of a relatively short string of consecutive blank symbols are also output by the CTC output2235that each correspond with one of the two pauses that occur between adjacent pairs of these three words. Turning to the provision of the third of the three data chunks3110cthat represents the speech sounds for just the word “message” as input to the acoustic model neural network2234, the output includes the letters therefor, also accompanied by instances from the CTC output2235of the blank symbol separating the corresponding characters. As shown, a single instance of the blank symbol may be output between the two consecutive instances of the “s” character from this word, thereby again exemplifying the aforedescribed function that the CTC output2235is typically relied upon to perform. As each of these outputs is provided by the acoustic model neural network2234, the length of each string of consecutive blank symbols that may be present therein is compared (as a result of execution of the pause identification component2316) to a threshold blank string length. Where a string of consecutive blank symbols in such an output is at least as long as the threshold blank string length (e.g., the string of blank symbols corresponding to the pause between the words “Hello” and “Please”), such a string of blank symbols may be deemed likely to correspond to a sentence pause. However, where a string of consecutive symbols in such an output is not at least as long as the threshold blank string length (e.g., the strings of blank symbols between the words “Please” and “leave”, and between the words “leave” and “a”), such a string of blank symbols may be deemed to not correspond to a sentence pause.
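The blank-run thresholding just described (and applied to this example in the next paragraph) can be sketched in a few lines of Python. The sketch assumes the CTC output has already been rendered as a flat sequence of character and blank symbols, with "^" standing in for the blank; the default run-length threshold of 8 is an arbitrary value for illustration, not the threshold blank string length of the embodiments.

```python
def ctc_blank_pauses(symbols, blank="^", min_blank_run=8):
    """Scan a CTC output symbol sequence and treat any run of consecutive
    blank symbols at least min_blank_run long as a likely sentence pause.
    Returns (start, end) index pairs into the symbol sequence."""
    pauses = []
    run_start = None
    for i, sym in enumerate(list(symbols) + [None]):  # sentinel closes a trailing run
        if sym == blank:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_blank_run:
                pauses.append((run_start, i))
            run_start = None
    return pauses

# e.g. ctc_blank_pauses(list("Hello^^^^^^^^^^Please")) -> [(5, 15)]
```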
Thus, in the example depicted inFIG.18B, the pause between the words “Hello” and “Please” may be deemed to be a likely sentence pause, and an indication thereof may be included in the pause set3116cof likely sentence pauses. In performing such comparisons of the lengths of strings of consecutive blank symbols to the threshold blank string length, an indication of the threshold blank string length may be retrieved from the configuration data2335. In some embodiments, the threshold blank string length may have been previously derived during training and/or testing of the acoustic model neural network2234to become part of configuration information stored within the configuration data2335for use in instantiating and configuring the acoustic model neural network2234with its CTC2235output. During such training, it may be that portions of speech audio that are known to include pauses between sentences may be used, and the lengths of the resulting strings of blank symbols that correspond to those sentence pauses may be measured to determine what the threshold blank string length should be to enable its use in distinguishing pauses between sentences from at least pauses between words. FIGS.19A,19B,19C and19D, taken together, illustrate an example of use of a speaker diarization technique based on the use of a speaker diarization neural network2237as part of performing pre-processing operations to derive a manner of dividing the same speech audio of the same speech data set3100into segments.FIG.19Aillustrates the initial division of the speech data set3100into data chunks3110dthat each represent a chunk of the speech audio of the speech data set3100, and the provision of those data chunks3110das an input to a speaker diarization neural network2237, and the use of that speaker diarization neural network2237to generate speaker vectors that are each indicative of characteristics of a speaker who speaks in the speech audio.FIGS.19B-C, taken together, illustrate aspects of the use of the speaker vectors as points in a performance of clustering within a multi-dimensional space to identify speakers.FIG.19Dillustrates the matching of speaker identities to speaker vectors to identify likely speaker changes for inclusion in a change set3118of indications of likely speaker changes within the speech audio of the speech data set3100. As has been discussed, unlike the aforedescribed use of multiple pause detection techniques to identify likely sentence pauses, it may be that just one speaker diarization technique (such as the particular technique that is about be described in reference toFIGS.19A-D) may be used. However, as also discussed, other embodiments are possible in which there may be multiple different speaker diarization techniques used, such that there may be multiple separate change sets3118that are separately and independently generated in a manner akin to what has been discussed above in generating multiple separate pause sets3116. Therefore, and as previously discussed, in the distributed processing system2000depicted inFIGS.14A-C, it may be that, for each speech data set3110, each speaker diarization technique of the at least one speaker diarization technique is assigned to be performed within a different one of the node devices2300. 
Thus, each one of such assigned node devices2300derives a different change set3118of indications of likely changes in speaker for subsequent use as one of the inputs for deriving a segmentation set3119of indications of segments into which the speech data set3100is to be divided. Alternatively, and as also previously discussed, in the distributed processing system2000depicted inFIGS.14D-F, it may be that, for each speech data set3110, each of the one or more speaker diarization techniques is assigned to be performed within a separate one of multiple execution threads2454supported by processor(s)2350of a single node device2300. Thus, each of the multiple change sets3118of indications of likely speaker changes would be derived on a different one of those assigned threads2454within the single node device2300. However, as also discussed in reference to the distributed processing system2000ofFIGS.14D-F, it may be that, for each speech data set3110, multiple speaker diarization techniques are performed on a single thread2454within a single node device2300, while other operations that consume greater resources (e.g., beam searches) may be performed across multiple threads2454within the same single node device2300. Turning toFIG.19A, in executing the division component2311of the control routine2310, processor(s)2350of a node device2300dallocated for performing this speaker diarization technique, or of a node device2300on which one or more speaker diarization techniques are performed, may be caused to divide the same speech data set3100as was featured inFIGS.17A-Cand18A-B into multiple data chunks3110d. In so doing, an indication of the length of the speech audio that is to be represented by each data chunk3110dmay be retrieved from the configuration data2335. It should be noted that, in a manner similar to the data chunks3110aversus the data chunks3110c, the data chunks3110dof this speaker diarization technique may not represent the same length of the speech audio as are represented by either or both of the data chunks3110aor3110c. However, unlike each of the aforedescribed uses of the division component2311to generate the chunks3110aand3110c, the execution of the division component2311in support of this speaker diarization technique may cause further subdivision of each data chunk3110dinto a set of data fragments3111d. In so doing, an indication of the length of the speech audio that is to be represented by each data fragment3111dmay also be retrieved from the configuration data2335. Additionally, beyond performing such a subdivision of each data chunk3110dinto a set of data fragments3110d, the execution of the division component2311may cause the indications of likely sentence pauses within each of the pause sets3116generated by each of the multiple pause detection techniques to be used to identify ones of the data fragments3111dthat represent portions of the speech audio that may not include speech sounds as a result of including at least a portion of a sentence pause. As those skilled in the art will readily recognize, attempting to identify a speaker in a portion of speech audio that does not actually include speech sounds may yield unpredictable results that may undesirably affect subsequent processing operations. Following the identification of such data fragments3111d, such data fragments3111dmay be removed from within the ones of the data chunks3110din which they are present. 
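The removal of fragments that overlap likely sentence pauses can be sketched as follows. This is a hedged illustration that assumes fragments and pauses are both expressed as (start, end) time intervals in seconds; the embodiments describe the same filtering in terms of data fragments3111dand the pause sets3116.

```python
def drop_pause_fragments(fragments, sentence_pauses):
    """Remove fragments whose time span overlaps any likely sentence pause,
    since fragments containing no speech sounds would yield unreliable
    speaker vectors. Both inputs are lists of (start_s, end_s) intervals."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    return [f for f in fragments
            if not any(overlaps(f, p) for p in sentence_pauses)]
```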
As a result, each of the data chunks3110dshould be at least unlikely to include data fragments3111dthat represent a portion of the speech audio that does not include any speech sounds. Again, in some embodiments, the pre-processing of speech audio may include normalizing the digital format in which the speech audio is stored as a speech data set3100. Thus, it may again be that, prior to or as part of dividing the speech audio into chunks, the digital format in which the speech audio is stored may be changed to a pre-selected format that specifies one or more of a particular sampling frequency, data width and/or type of data value per sample, a particular type of compression (or no compression), etc. As previously discussed in reference to the acoustic model neural network2234, different implementations of neural networks used in performing various functions in the processing of audio may accept indications of detected audio features as input, instead of accepting audio data (e.g., the data chunks3110d) more directly as input. Thus, it may be that the feature detection component2313is again executed to analyze the portion of speech audio represented by each data fragment3111dto identify instances of each of a pre-selected set of acoustic features. In so doing, processor(s)2350may be caused to generate a corresponding set of feature vectors3113from each data fragment chunk3111dthat is analyzed. In executing the configuration component2314, processor(s)2350may be caused to instantiate and configure a speaker diarization neural network2237. As previously discussed, in embodiments in which at least a subset of the node device(s)2300include one or more neuromorphic devices2355, the speaker diarization neural network2237may be instantiated within one or more of the neuromorphic devices2355such that the speaker diarization neural network2237may be implemented in hardware. Alternatively, in embodiments that lack the incorporation of neuromorphic devices, it may be that the speaker diarization neural network2237is implemented in software. With the speaker diarization neural network2237instantiated (regardless of whether it is implemented in hardware or software), the speaker diarization neural network2237may then be provided with the data fragments3111d, one at a time, as input (either directly or indirectly, such as in the form of the depicted sets of feature vectors3113d). For each data fragment3111d, the speaker diarization neural network2237may generate a corresponding speaker vector3117dthat is descriptive of vocal characteristics of a speaker who is speaking in the portion of speech audio that is represented by the data fragment3111d. More specifically, and as previously discussed, each speaker vector3117dmay include (or may be) a one-dimensional array of various data values (e.g., binary data values and/or other numerical data values) that are each provide an indication of a presence or absence of a vocal characteristic, a measure of a degree or level of a vocal characteristic, etc. As those skilled in the art will readily recognize, the variation in vocal characteristics across the human race has been found to be sufficiently varied that the use of vocal characteristics as a form of identification of individual persons has been accepted for some time. 
Further, it has been found to be possible to train a neural network (such as the depicted speaker diarization neural network2237) well enough to generate speaker vectors that with relatively highly consistent data values for the vocal characteristics of a particular person despite variations in the speech of that particular person that may arise under differing conditions, such as speech volume, speech speed and/or pitch associated with differing emotional states, etc. This high degree of consistency in the data values of speaker vectors associated with a particular individual more readily enables the use of such techniques as clustering to identify individual speakers. FIGS.19B and19C, taken together depict various aspects of the manner in which execution of a clustering component2318of the control routine2310by processor(s)2350may cause the identification of speakers in the chunk of speech audio represented by a data chunk3110dby using each speaker vector3117dassociated with a data fragment3111dthereof as a point in a multidimensional space2239. More specifically, each data value of each speaker vector3117dmay be treated as specifying a location along a different one of multiple axes. Thus, the set of values within each speaker vector3117d, when taken together, may specify a point. By way of example, and as depicted inFIG.19B, each one of the five depicted points a, b, c, d and e may be a point within the depicted space2239that is specified by the data values of a corresponding speaker vector3117d. It should be noted, however, that each ofFIGS.19B and19Cdepict a deliberately highly simplified two dimensional view of a deliberately simplified example of a space2239. This deliberately highly simplified example is presented herein for purposes of enabling understanding of aspects of the use of clustering to identify speakers, and should not be taken as limiting. Indeed, as those skilled in the art will readily recognize, effective identification of speakers requires the use of speaker vectors with numerous data values such that any treatment of speaker vectors as a point within a space would necessitate the use of a space having numerous dimensions, which would be quite difficult to effectively depict in a two-dimensional image. Referring toFIGS.19B and19C, as well as toFIG.19A, the clustering component2318may employ any of a wide variety of clustering algorithms. As will be familiar to those skilled in the art, regardless of the exact choice of clustering algorithm that is selected for use, broadly, such factors as distance between points2238, quantities of points2237within a preselected radius of a portion of the space2239, density of points2337within a preselected radius of a portion of the space2239, etc. may be used to identify each cluster2238of points2237that may be deemed to be associated a single speaker. Thus, depending on the algorithm that is selected, the clustering component2318may employ any of a variety of rules for determining what points2237belong together in a cluster2238. In some embodiments, the clustering component2318may employ multiple clustering algorithms at different stages of using clustering to identify speakers. By way of example, a spectral clustering algorithm may initially be used as new speakers continue to be identified as part of adding points associated with a single data chunk3110dto the space2239. This may be done as an approach to attempting to reduce the number of dimensions of the space2239. 
However, with all points associated with a single data chunk3110dadded to the space2239, a k-means clustering algorithm may be used in view of its affinity for handling what may still be a relatively large quantity of dimensions. Turning more specifically toFIG.19B, as depicted, it may be that, as each point2237that is specified by the data values of one of the speaker vectors3117of a single data chunk3110dis added to the space2239, the clustering component2318may determine whether the addition of each point2237defines a new cluster2238, again, based on such factors as quantity and/or density of points2237that are caused to be within a portion of the space2239having a preselected radius and/or other characteristics. Once a new cluster2238is determined to be present within the space2239, it may be, in some clustering algorithms, that points2237that are near to such a portion of the space2239, but not in it, may nonetheless be deemed to be part of the cluster2238. Turning more specifically toFIG.19C, as depicted, it may be that the ongoing addition of more points2237leading to the identification of another cluster2238, may then lead to a need to re-evaluate which points2237that have been plotted, so far, belong to which cluster2238. More specifically, while it may be that one or both of the depicted points e and f might have initially been deemed to belong to the single cluster1depicted inFIG.19B, the identification of another cluster2depicted inFIG.19Cmay necessitate a re-evaluation of whether one or both of the points e and f should be deemed as belonging to the newer cluster2. Thus, in at least some clustering algorithms the identification of each new cluster2238may trigger at least a partial repeat performance of clustering. However, and as will be familiar to those skilled in the art, each performance of a clustering algorithm can consume an amount of processing resources that may increase exponentially with the addition of each point. To address this, it may be that each performance and repeated performance of clustering is limited to the points2237that correspond to the data fragments3111dthat are present within a single data chunk3110d. Turning toFIG.19D, following the performance of clustering (including any repeat performances) to generate clusters that identify speakers present within the portion of speech audio represented by data chunk3110d, further execution of the clustering component2318may cause processor(s)2350to match each speaker vector3117dof a data fragment3111dof the data chunk3110dto one of the identified speakers. More specifically, a separate speaker identifier may be generated for each cluster2238that is identified (each of which is deemed to be associated with a different speaker). Following the matching of speaker vectors3117dto identified speakers, the speaker identifiers of temporally adjacent speaker vectors3117dmay be compared to identify each instance in which there is a change of speakers. For each such instance of change of speakers, an indication of a change of speakers may be added to the change set3118. 
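A compact sketch of turning per-fragment speaker vectors into a set of likely speaker changes is given below. It uses scikit-learn's k-means purely as a stand-in and assumes a fixed speaker count; the embodiments describe discovering clusters adaptively (e.g., spectral clustering followed by k-means), so this is an illustration of the label-comparison step rather than of the full clustering procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def speaker_changes(speaker_vectors, n_speakers=2):
    """Cluster per-fragment speaker vectors and report the fragment indices at
    which the cluster (speaker) label of temporally adjacent fragments
    changes; each such index corresponds to a likely change of speakers."""
    X = np.asarray(speaker_vectors, dtype=float)
    labels = KMeans(n_clusters=n_speakers, n_init=10).fit_predict(X)
    changes = [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
    return labels, changes
```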
FIGS.20A,20B,20C and20D, taken together, illustrate an example of generating the segmentation set3119of indications of segments in each of the embodiments of a distributed processing system2000ofFIGS.14A-CandFIGS.14D-F.FIG.20Aillustrates the combining of multiple pause sets3116of indications of likely sentence pauses with at least one change set3118of indications of likely speaker changes from multiple node devices2300in the embodiment ofFIGS.14A-Cto generate the segmentation set3119, andFIG.20Billustrates the use of that segmentation set3119in dividing the speech data set3100into data segments3140representing segments of the speech audio of the speech data set3100in that same embodiment.FIG.20Cillustrates the combining of multiple pause sets3116of indications of likely sentence pauses with at least one change set3118of indications of likely speaker changes from multiple threads2454in the embodiment ofFIGS.14D-Fto generate the segmentation set3119, andFIG.20Dillustrates the use of that segmentation set3119in dividing the speech data set3100into data segments3140representing segments of the speech audio of the speech data set3100in that same embodiment. Turning toFIG.20A, in executing an aggregation component2519of the control routine2510, processor(s)2550of the control device2500in the embodiment of a distributed processing system2000ofFIGS.14A-Cmay be caused to combine multiple pause sets3116(which may be received from multiple node devices2300, such as the specifically depicted pause sets3116aand3116c) into a single set of indications of likely sentence pauses. As previously discussed, a variety of different approaches may be used in performing such a combining of such multiple pause sets3116, including approaches to combining in which different pause detection techniques (and therefore, different ones of the pause sets3116) may be assigned different relative weighting factors. As depicted, and as also previously discussed, such relative weight factors may be made dynamically adjustable based on one or more characteristics of the speech audio represented by the speech data set3100. By way of example, and as previously discussed in connection with the APA pause detection technique ofFIGS.17A-C, it may be that audio noise level measurement(s) are taken along with the measurements of peak amplitude that are performed as part of the APA pause detection technique. In so doing, the audio noise level3112may be generated as an average, a peak, or other representation of the level of audio noise throughout the speech audio of the speech data set3100. Regardless of the exact manner in which the representation of the level of audio noise within the audio noise level3112is generated, the audio noise level3112may be used as an input for dynamically adjusting the relative weighting factors assigned to the different pause sets3116to take into account the relative degrees of susceptibility of each pause detection technique to being adversely affected by audio noise present in the speech audio. More specifically, it may be that the CTC pause detection technique is less susceptible to audio noise than the APA pause detection technique such that the presence of a higher level of audio noise in the speech audio (as indicated by the audio noise level3112) may cause the pause set3116cgenerated via the CTC pause detection technique to be given a greater relative weight compared to the pause set3116agenerated via the APA pause detection technique. 
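One plausible way to realize the noise-adjusted, weighted combination of pause sets described in the following paragraphs is sketched below. The technique names "apa" and "ctc", the numeric cutoffs, and the rule of keeping only pauses whose combined weight reaches a threshold are all assumptions for this illustration; likely speaker changes from the change set3118could simply be appended to the returned boundary list before segmentation.

```python
def combine_pause_sets(pause_sets, weights, noise_level,
                       noise_cutoff=0.1, merge_tol=0.25, keep_threshold=0.75):
    """Merge per-technique pause sets (dict of technique name -> list of pause
    midpoints in seconds) into one sorted list of likely segment boundaries.
    Each technique contributes its (noise-adjusted) weight to any pause it
    reports; pauses reported within merge_tol seconds of each other are
    treated as the same pause, and only pauses whose combined weight reaches
    keep_threshold are kept."""
    w = dict(weights)
    if noise_level > noise_cutoff:          # noisy audio: trust "ctc" more than "apa"
        w["apa"] = w.get("apa", 1.0) * 0.5
        w["ctc"] = w.get("ctc", 1.0) * 1.5
    scored = []                             # entries of [midpoint, combined weight]
    for name, midpoints in pause_sets.items():
        for m in midpoints:
            for entry in scored:
                if abs(entry[0] - m) <= merge_tol:
                    entry[1] += w.get(name, 1.0)
                    break
            else:
                scored.append([m, w.get(name, 1.0)])
    return sorted(m for m, score in scored if score >= keep_threshold)
```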
Also in executing the aggregation component2519of the control routine2510, processor(s)2550of the control device2500may be caused to similarly combine multiple change sets3118(which may also be received from multiple node devices2300) in embodiments in which multiple different speaker diarization techniques have been similarly performed, at least partially in parallel, to similarly generate a single combined set of indications of likely speaker changes. In so doing, there may also be the use of some form of relative weighting that may also be based on the audio noise level3112, and/or based on any of a variety of other factors. Alternatively, and as depicted, it may be that just a single speaker diarization technique was performed, resulting the generation of just a single change set3118(such as the specifically depicted change set3118d). In further executing the aggregation component2519of the control routine2510, processor(s)2550of the control device2500may be caused to then use the single set of indications of likely sentence pauses along with the single set of indications of likely speaker changes to derive a manner in which the speech audio of the speech data set3100is to be divided into segments of speech audio. In so doing, a set of indications of the manner in which to effect such segmentation may be stored as the segmentation set3119. Turning toFIG.20B, in executing a division component2541of the control routine2540, processor(s)2550of the control device2500may be caused to divide the speech data set3100into data segments3140based on the segmentation set3119. In so doing, the speech audio represented by the speech data set3100may be divided into segments where the divisions between each adjacent pair of segments is caused to occur at a location at which each likely sentence pause and/or likely speaker change was determined to have occurred. As a result, each of the segments of speech audio should be at least more likely to start and end with portions of sentence pauses, and should be at least more likely to include words spoken by the same speaker(s) throughout. This should serve to increase the likelihood that the entirety of the pronunciation of each letter, of each word, and/or of each sentence is fully contained within a single one of the segments, instead of being split across the divide between two segments, and to increase the likelihood that the manner in which such speech sounds are pronounced throughout each segment should not change. In this way, the accuracy of subsequent processing operations to detect acoustic features, to identify letters, and then to identify whole words, may be improved. Turning toFIG.20C, in executing an aggregation component2319of the control routine2310, processor(s)2350of a node device2300in the embodiment of a distributed processing system2000ofFIGS.14D-Fmay be caused to combine multiple pause sets3116(which may be received from multiple threads2454within the same node device2300, such as the specifically depicted pause sets3116aand3116c) into a single set of indications of likely sentence pauses. Again, a variety of different approaches may be used in performing such a combining of such multiple pause sets3116, including approaches to combining in which different pause detection techniques (and therefore, different ones of the pause sets3116) may be assigned different relative weighting factors. 
Again, such relative weight factors may be made dynamically adjustable based on one or more characteristics of the speech audio represented by the speech data set3100. Again, as previously discussed in connection with the APA pause detection technique ofFIGS.17A-C, it may be that audio noise level measurement(s) are taken along with the measurements of peak amplitude that are performed as part of the APA pause detection technique. In so doing, the audio noise level3112may be generated as an average, a peak, or other representation of the level of audio noise throughout the speech audio of the speech data set3100. Regardless of the exact manner in which the representation of the level of audio noise within the audio noise level3112is generated, the audio noise level3112may be used as an input for dynamically adjusting the relative weighting factors assigned to the different pause sets3116to take into account the relative degrees of susceptibility of each pause detection technique to being adversely affected by audio noise present in the speech audio. Again, it may be that the CTC pause detection technique is less susceptible to audio noise than the APA pause detection technique such that the presence of a higher level of audio noise in the speech audio (as indicated by the audio noise level3112) may cause the pause set3116cgenerated via the CTC pause detection technique to be given a greater relative weight compared to the pause set3116agenerated via the APA pause detection technique. Also in executing the aggregation component2319of the control routine2310, processor(s)2350of the node device2300may be caused to similarly combine multiple change sets3118(which may also be received from multiple threads2454within the same node device2300) in embodiments in which multiple different speaker diarization techniques have been similarly performed, at least partially in parallel, to similarly generate a single combined set of indications of likely speaker changes. Again, there may also be the use of some form of relative weighting that may also be based on the audio noise level3112, and/or based on any of a variety of other factors. Alternatively, and as depicted, it may be that just a single speaker diarization technique was performed, resulting the generation of just a single change set3118(such as the specifically depicted change set3118d). In further executing the aggregation component2319of the control routine2310, processor(s)2350of the node device2300may be caused to then use the single set of indications of likely sentence pauses along with the single set of indications of likely speaker changes to derive a manner in which the speech audio of the speech data set3100is to be divided into segments of speech audio. In so doing, a set of indications of the manner in which to effect such segmentation may be stored as the segmentation set3119. Turning toFIG.20D, in executing a division component2341of the control routine2340, processor(s)2350of the node device2300may be caused to divide the speech data set3100into data segments3140based on the segmentation set3119. Again, in so doing, the speech audio represented by the speech data set3100may be divided into segments where the divisions between each adjacent pair of segments is caused to occur at a location at which each likely sentence pause and/or likely speaker change was determined to have occurred. 
Again, as a result, each of the segments of speech audio should be at least more likely to start and end with portions of sentence pauses, and should be at least more likely to include words spoken by the same speaker(s) throughout. Again, this should serve to increase the likelihood that the entirety of the pronunciation of each letter, of each word, and/or of each sentence is fully contained within a single one of the segments, instead of being split across the divide between two segments, and to increase the likelihood that the manner in which such speech sounds are pronounced throughout each segment should not change. In this way, the accuracy of subsequent processing operations to detect acoustic features, to identify letters, and then to identify whole words, may be improved. FIGS.21A,21B,21C,21D,21E,21F,21G,21H and 21I, taken together, illustrate an example of using the data segments3140into which a speech data set3100is divided to perform speech-to-text processing operations in the embodiment ofFIGS.14A-C.FIG.21Aillustrates the use of feature detection and an acoustic model to generate sets of probability distributions that are indicative of relative probabilities of the use of various graphemes, andFIG.21Billustrates the collection of those probability distribution sets3143for use by the control device2500.FIGS.21C-D, taken together, illustrate the use of the probability distribution sets3143to generate sets of candidate words3145, and then to generate sets3146of candidate n-grams for use by a language model.FIG.21Eprovides an overview illustration of using sets of candidate words3145and candidate n-gram sets3146as input to generate a text data set3700representing transcript(s) of the words spoken in the speech data set3100.FIG.21Fillustrates the distribution of a large corpus3400representing a language model, along with individual node identifiers2331, to each one of multiple selected node devices2300in preparation for using the language model in a distributed manner.FIGS.21G-Hillustrate aspects of the performance of a distributed beam search within the corpus data set3400among the multiple selected node devices2300to derive probability sets3147indicative of relative probabilities of use of n-grams within the candidate n-gram sets3146.FIG.21Iillustrates aspects of the collection and use of probability sets3147to determine another word to add to a transcript stored as a text data set3700. As will be familiar to those skilled in the art, the use of an n-gram language model has become commonplace in speech-to-text processing. Such use of an n-gram language model is often based on an assumption that the next word in a transcript of speech audio is able to be identified with a relatively high degree of accuracy based on what word or words immediately preceded it. It has also been found that the accuracy of the identification of the next word is able to be increased by increasing the quantity of immediately preceding words that are used as the basis for that identification. Unfortunately, as will also be familiar to those skilled in the art, each increase in the quantity of immediately preceding words by a single word can result in an exponential increase in the size of the corpus of n-grams that must be used.
As a result, although there have been experimental implementations of speech-to-text processing that have used an n-gram language model supporting up to as many as 10 immediately preceding words, the amount of time, storage and processing resources required often make such an implementation impractical. Therefore, it is more commonplace to employ a quantity of 3, 4 or 5 immediately preceding words. As will shortly be explained, in the embodiment of the distributed processing system2000ofFIGS.14A-C, the processing, storage and/or other resources of multiple computing devices may be employed in a cooperative manner to make the use of a higher quantity of immediately preceding words in an n-gram language model in speech-to-text processing significantly more practical. Turning toFIG.21A, in executing a division component2341of the control routine2340, processor(s)2350of at least one node device2300may be caused to divide a data segment3140into multiple data frames3141. In embodiments of the distributed processing system2000ofFIGS.14A-C, it may be that multiple data segments3140of a speech data set3100are distributed among multiple node devices2300to enable such processing of data segments3140to be performed at least partially in parallel. In so executing the division component2341, an indication of the length of the speech audio that is to be represented by each data frame3141may be caused to be retrieved from the configuration data2335and used to control the division of each data segment3140into multiple data frames3141. Again, at least some acoustic models implemented using neural networks (and/or other technologies) may be designed to accept indications of detected audio features as input, instead of accepting audio data (e.g., the data frames3141) more directly as input. To accommodate the use of such implementations of an acoustic model, execution of the control routine2340may entail execution of a feature detection component2342to analyze the portion of speech audio represented by each data frame3141to identify instances of each of a pre-selected set of acoustic features. In so doing, processor(s)2350may be caused to generate a corresponding feature vector3142from each data frame3141that is analyzed. Each feature vector3142may include indications of each acoustic feature that is identified and when it occurred within the speech audio of the corresponding data frame3141. ComparingFIG.21AtoFIG.18A, it may be that both feature detection and use of an acoustic model may be repeated. Indeed, in comparingFIG.21AtoFIG.18A, it becomes evident that the very same acoustic model based on a neural network (e.g., the acoustic model neural network2234incorporating the CTC output2235) may be used, again, in some embodiments. However, it should be noted that other embodiments are possible in which different acoustic models based on differing types of neural network may be used, and/or in which different acoustic models based on entirely different technologies may be used. In embodiments in which neural network(s) are used, execution of a configuration component2344may cause processor(s)2350to again instantiate the same acoustic model neural network2234with the CTC output2235to implement the same acoustic model. As depicted, in some of such embodiments, it may be that one or more neuromorphic devices2355may be used to again implement the acoustic model neural network2234in hardware within each of one or more node devices2300.
Regardless of whether the acoustic models ofFIGS.18A and21Aare identical, there are significant differences in the manner in which they are used inFIGS.18A and21A. Unlike the use of an acoustic model inFIG.18Ato perform part of the aforedescribed CTC-based segmentation technique, the acoustic model inFIG.21Ais used to perform part of speech-to-text processing operations. More specifically, the acoustic model is now used to generate, from a speech segment represented by a data segment3140, a probability distribution set3143. Each of the probability distributions within the set3143specifies, for a particular time within the segment, the relative probabilities for each of a pre-selected set of graphemes. As will be familiar to those skilled in the art, over time, a number of different systems of notation have been devised for describing speech sounds for one or more languages using graphemes. In many of such notation systems, the graphemes may be text characters and/or similar visual symbols (e.g., text characters modified to include various accent markings). In different ones of such notation systems, at least some of the graphemes may each correspond to one or more phonemes, and/or at least some of the graphemes must be used in various combinations that each correspond to one or more phonemes. Thus, in specifying relative probabilities of a pre-selected set of graphemes, each probability distribution may specify the relative probabilities that each of a pre-selected set of speech sounds was uttered at a particular time within a speech segment. Turning toFIG.21B, the probability distribution sets3143associated with a single speech data set3100may be collected from the multiple node devices2300in which they were generated, and may be provided to the control device2500through the network2999. Such provision of those multiple probability distribution sets3143to the control device2500may occur as they are generated, at least partially in parallel, within the multiple node devices2300. Within the control device2500, execution of the control routine2540may cause processors2550of the control device2500to organize the probability distribution sets3143into temporal order in preparation for being used to identify words for inclusion in a transcript of the contents of the speech audio. Regardless of whether such a collection and provision of probability distribution sets3143via the network2999takes place, as also depicted, each of the node devices2300of the processing system2000(whether engaged in generating probability distribution sets3143, or not) may also provide the control device2500with indications of the availability of their processing, storage and/or other resources. Such indications may be used to augment and/or update resources data2539. Turning toFIG.21C, in executing a candidate word component2545of the control routine2540, processor(s)2550of the control device2500may be caused to generate sets of one or more candidate words3145from each probability distribution set3143. Then, in executing a candidate n-gram component2546of the control routine2540, processor(s)2550of the control device2500may be caused to generate corresponding one or more candidate n-gram sets3146from the one or more candidate words3145that are generated for each probability distribution set3143.
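To make the notion of a per-time probability distribution over graphemes concrete, the sketch below performs a simple greedy read-out of such a distribution set. The grapheme inventory and the use of a (time x grapheme) NumPy matrix are assumptions for this illustration; the candidate-word derivation described next keeps several high-probability alternatives per time step rather than only the single most probable grapheme, and genuine double letters are preserved only when a blank separates them, which is the behavior the CTC output2235provides.

```python
import numpy as np

GRAPHEMES = list("abcdefghijklmnopqrstuvwxyz' ") + ["^"]   # "^" stands in for the CTC blank

def greedy_readout(prob_dist_set):
    """Collapse a (time x grapheme) matrix of per-frame probability
    distributions into a character string by taking the most probable
    grapheme at each time step, merging repeats, and dropping blanks."""
    best = [GRAPHEMES[i] for i in np.argmax(prob_dist_set, axis=1)]
    out = []
    for prev, cur in zip(["^"] + best, best):
        if cur != prev and cur != "^":
            out.append(cur)
    return "".join(out)
```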
More specifically, as previously discussed, and turning toFIG.21D, each speech segment (each of which is represented in storage by a corresponding data segment3140) may be formed by dividing the speech audio of a speech data set3100at midpoints amidst what are determined to be likely sentence pauses and/or likely changes in speakers. As a result, each speech segment may begin with a portion of a sentence pause and/or where there is a change in speakers, and each speech segment may end with a portion of a sentence pause and/or where there is a change in speakers. Each speech segment may then be further divided into frames (each of which is represented in storage by a corresponding data frame3141), which are kept in temporal order. Thus, as depicted inFIG.21D, the speech segment (again, represented by a data segment3140) that corresponds to the depicted probability distribution set3143may begin with a first few consecutive speech frames (each of which is represented by a corresponding data frame3141) in which there may not be any speech sounds, as would be expected within a likely sentence pause. As a result, each of the corresponding first few consecutive probability distributions3144(including the earliest thereof) may indicate that a grapheme (e.g., a text character and/or a blank symbol) for an empty space has the highest probability of having occurred within the corresponding speech frame. Following such consecutive probability distributions3144associated with the likely sentence pause at the start of the speech segment, there may then be the first of multiple consecutive probability distributions3144that may be associated with the pronunciation of the letters of the first word of a sentence (the transition from probability distributions3144associated with a likely sentence pause to probability distributions3144that may be associated with pronouncing the first word is marked by vertical dashed line). In executing the candidate word component2545, processor(s)2550of the control device2500may, based on those multiple consecutive probability distributions3144, derive a pre-selected quantity of candidate words3145that are each among the most likely to be the first word that was spoken throughout the corresponding multiple consecutive speech frames. The processor(s)2550may then be caused by execution of the candidate n-gram component2546to convert the set of candidate words3145into a candidate n-gram set3146aby adding up to a pre-selected quantity of words that were previously identified as the immediately preceding words in what may be a sentence that corresponds to the probability distribution set3143. However, since each of the candidate words3145is preceded by what is deemed to be a likely sentence pause, there may be no such preceding words to be added such that the resulting candidate n-gram set3146acontains a set of uni-grams that are each just one of the candidate words3145. FIG.21Dalso depicts another example set of candidate words3145being derived from multiple consecutive probability distributions3144at a temporally later location within the same probability distribution set3143that may be associated with pronouncing another word at a later time within the same speech segment. 
Again, in executing the candidate word component2545, processor(s)2550of the control device2500may, based on those multiple consecutive probability distributions3144, derive another pre-selected quantity of candidate words3145that are each among the most likely to be the word that was spoken throughout these other corresponding multiple consecutive speech frames. The processor(s)2550may then be caused by execution of the candidate n-gram component2546to convert this other set of candidate words3145into another candidate n-gram set3146bby adding up to the pre-selected quantity of words that were previously identified as the immediately preceding words in what may be a sentence that corresponds to the probability distribution set3143. Unlike the previously discussed set of candidate words3145, there may be multiple immediately preceding words that were spoken up to the point at which one of the candidate words3145within this other set of candidate words3145was spoken. Therefore, the other candidate n-gram set3146bmay include up to the pre-selected quantity of words. Turning toFIG.21E, regardless of whether the n-grams within a candidate n-gram set3146generated within the control device2500include any immediately preceding words ahead of the candidate words3145thereof, in executing a beam search component2347of the control routine2340, processor(s)2350may be caused to perform a beam search within the corpus data set3400for one or more of the n-grams present within the candidate n-gram set3146. As will be familiar to those skilled in the art of n-gram language models, each n-gram within an n-gram corpus may be accompanied therein with an indication of the relative frequency of its occurrence and/or its relative probability of occurrence within texts of a particular language (based on the sample texts of the particular language used in generating the n-gram corpus). As each n-gram is found within the corpus data set3400, an indication of the relative probability of that n-gram occurring may be stored within a probability set3147generated for all of the candidate n-grams in the candidate n-gram set3146. Following generation of each probability set3147, execution of a transcript component2548of the control routine2540may cause processor(s)2550of the control device2500to, based on the indications of the relative probabilities in the probability set3147for each n-gram within the candidate n-gram set3146, identify a candidate word3145among the corresponding set of candidate words3145as the word that was most likely the next word to be spoken. The identified most likely spoken word may then be added to the transcript of the speech audio represented as a text data set3700. Turning toFIG.21F, it may be that execution of a coordination component2549causes processor(s)2550of the control device2500to use indications of node devices2300with sufficient available processing and/or storage resources as a basis for selecting particular ones of node devices2300that are to be employed in performing beam searches of a corpus data set3400in a distributed manner. With such selections made, unique node identifiers2331may be transmitted to each of the selected node devices2300via the network2999. The node identifiers2331may be a continuous series of positive integers of increasing value, starting with 0, and incremented by 1.
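Pulling together the candidate-word and candidate n-gram steps described over the last two paragraphs, the sketch below forms one candidate n-gram per candidate word by prefixing the words already committed to the transcript. The function name and the 5-word n-gram limit are assumptions for this illustration.

```python
def build_candidate_ngrams(candidate_words, preceding_words, max_n=5):
    """Form one candidate n-gram per candidate word by prefixing up to
    max_n - 1 of the previously transcribed words. At the start of a
    sentence there is no preceding context, so the result degenerates to a
    set of uni-grams."""
    context = tuple(preceding_words[-(max_n - 1):])
    return [context + (w,) for w in candidate_words]

# e.g. build_candidate_ngrams(["message", "massage"], ["please", "leave", "a"])
#  -> [("please", "leave", "a", "message"), ("please", "leave", "a", "massage")]
```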
The processor(s)2550of the control device2500may also be caused to cooperate with processors2350of the node devices2300to coordinate communications through the network2999to cause the provision of complete copies of the corpus data set3400for a pre-selected language from the one or more storage devices2100to each of the selected node devices2300. Turning toFIG.21G, in further executing the coordination component2549, the processor(s)2550of the control device2500may be caused to provide complete copies of each of the candidate n-gram sets3146, in temporal order, to all of the selected node devices2300. Within each of the selected node devices2300, execution of the beam search component2347of the control routine2340may cause the processor(s)2350thereof to perform a beam search within the corpus data set3400for one or more of the n-grams present within the candidate n-gram set3146. As will be familiar to those skilled in the art of n-gram language models, each n-gram within an n-gram corpus may be accompanied therein with an indication of the relative frequency of its occurrence and/or its relative probability of occurrence within texts of a particular language (based on the sample texts of the particular language used in generating the n-gram corpus). Referring toFIG.21H, in addition toFIG.21G, it should be noted that each of the selected node devices2300is caused to perform a beam search for different one(s) of the n-grams within the candidate n-gram set3146, such that no two of the selected node devices2300are caused to perform a beam search for the same n-gram. In some embodiments, this may be effected through the use of modulo calculations in which, within each of the selected node devices2300, the numerical designation of the position occupied by each n-gram within the candidate n-gram set3146is divided by the quantity of the selected node devices2300to derive a modulo value for each n-gram within the candidate n-gram set3146. The modulo value calculated for each n-gram is then compared to the unique node identifier2331that was earlier assigned to the selected node device2300. The n-gram(s) that are searched for within each of the selected node devices2300are the one(s) for which the modulo value matches the unique node identifier2331for that node device2300. Thus, as depicted (in the deliberately simplified example inFIG.21Hin which there are only three selected node devices2300), within the selected node device2300that has been assigned the “0” node identifier2331, the n-grams at the “0th” and “3rd” positions within the candidate n-gram set3146are searched for within the corpus data set3400stored therein. Correspondingly, within the selected node device2300that has been assigned the “1” node identifier2331, the n-grams at the “1st” and “4th” positions within the candidate n-gram set3146are searched for within the corpus data set3400stored therein. Also correspondingly, within the selected node device2300that has been assigned the “2” node identifier2331, the n-gram at the “2nd” position within the candidate n-gram set3146is searched for within the corpus data set3400stored therein. In this way, a relatively even distribution of n-grams to be searched for within the corpus data set3400across the multiple selected node devices2300is achieved with relatively minimal communication across the network2999. 
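The modulo-based assignment just described can be expressed in a few lines; the sketch below assumes the candidate n-grams arrive as an ordered list and that node identifiers are the consecutive integers starting at 0 mentioned earlier.

```python
def ngrams_for_node(candidate_ngrams, node_id, node_count):
    """Select the n-grams this node should search for: the position of each
    n-gram within the candidate set, modulo the number of selected nodes,
    must match the node's identifier, giving an even split with no
    coordination among nodes."""
    return [(pos, ng) for pos, ng in enumerate(candidate_ngrams)
            if pos % node_count == node_id]

# With 3 nodes, node 0 gets positions 0 and 3, node 1 gets 1 and 4, and node 2
# gets 2, matching the deliberately simplified example of FIG. 21H.
```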
Also, by providing each of the selected node devices2300with a complete copy of the entire corpus data set3400, all processing operations for the beam search for each n-gram are performed entirely within a single node device2300without need for communications with any other device through the network2999. This entirely eliminates the need for network communications among the selected node devices2300to carry out any of the beam searches, thereby reducing consumption of network bandwidth and eliminating the expenditure of time that would occur while such communications take place. Further, such distribution of beam searches among multiple computing devices enables the corpus data set3400to be of considerably larger size versus the maximum size that would be practical and/or possible were just a single computing device used. As will be familiar to those skilled in the art, the ability to more efficiently perform a greater quantity of beam searches in less time, thereby enabling the use of a larger corpus, may advantageously permit a corpus to include more lower frequency n-grams (i.e., n-grams that have a relatively low probability of occurring within texts of a particular language) and/or to include n-grams with a greater quantity of words per n-gram. Focusing again more specifically onFIG.21G, within each of the selected node devices2300, as each n-gram is found within the corpus data set3400, an indication of the relative probability of that n-gram occurring may be stored within a probability set3147generated for all of the n-grams for which a beam search is performed within that selected node device2300. In some embodiments, where a particular n-gram is not found within the corpus data set3400, an indication of default value for the relative probability of the occurrence of an “unknown” n-gram may be stored within the probability set3147. Turning toFIG.21I, each of the probability sets3147may be provided to the control device2500through the network2999as they are generated, at least partially in parallel, within multiple node devices2300. Within the control device2500, execution of a transcript component2548may cause processor(s)2550of the control device2500to, based on the indications of the relative probabilities retrieved for each n-gram within the candidate n-gram set3146, identify the word that was most likely spoken. The identified most likely spoken word may then be added to the transcript of the speech audio. Upon completion of the generation of the transcript, the control device2500may provide it to the one or more storage devices2100to be persistently stored therein as a text data set3700. 
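The gathering of relative probabilities into a probability set3147, including the use of a default value for n-grams that are not found in the corpus, might be sketched as follows; the mapping interface and the particular default value are assumptions made purely for illustration.

```python
# Minimal sketch of collecting relative probabilities for candidate n-grams,
# assuming the corpus is exposed as a mapping from n-gram to probability.

UNKNOWN_PROBABILITY = 1e-9  # illustrative default for n-grams not in the corpus

def build_probability_set(candidate_ngrams, corpus_probabilities):
    """Look up each candidate n-gram; fall back to a default value when the
    n-gram is not present in the corpus."""
    return {ngram: corpus_probabilities.get(ngram, UNKNOWN_PROBABILITY)
            for ngram in candidate_ngrams}
```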
FIGS.22A,22B,22C,22D,22E and22F, taken together, illustrate an example of using the data segments3140into which a speech data set3100is divided to perform speech-to-text processing operations in the embodiment ofFIGS.14D-F.FIGS.22A-B, taken together, illustrate aspects of the manner in which an example of single-threaded pre-processing and initial speech-to-text processing operations may be combined with multi-threaded subsequent speech-to-text processing operations to efficiently utilize processing, storage and other resources within each node device2300to perform speech-to-text conversion on multiple speech data sets3100in parallel.FIG.22Cillustrates the use of feature detection and an acoustic model within a single thread2454sto generate sets of probability distributions as part of the initial single-threaded speech-to-text processing operations.FIG.22Dillustrates the use of a buffer queue2460to distribute the probability distribution sets generated inFIG.22Camong multiple threads2454pof a thread pool2450for the performances of beam searching as part of the subsequent multi-threaded speech-to-text processing operations.FIG.22Eillustrates the multi-threaded use of the probability distribution sets3143to generate sets of candidate words3145, and then to generate sets3146of candidate n-grams for use by a language model across the multiple threads2454pof the thread pool2450.FIG.22Fprovides an overview illustration of the multi-threaded use of candidate n-gram sets3146as inputs to parallel performances of beam searches, and sets of candidate words3145as additional inputs to generating a text data set3700representing a transcript of the words spoken in the corresponding speech data set3100. Again, the use of an n-gram language model has become commonplace in speech-to-text processing due to having been found to increase the accuracy of the identification of spoken words. However, again, the use of an n-gram language model has also been found to consume considerable resources, with such consumption of resources increasing exponentially as the size of the n-grams increases by even one more word. As will shortly be explained, in the embodiment of the distributed processing system2000ofFIGS.14D-F, the processing, storage and/or other resources of multiple threads within a single computing device may be employed to better enable the practical use of n-grams having larger quantities of words. Again, the operation in speech-to-text conversion at which so much of the processing, storage and/or other resources are consumed has been found to be the beam searches that are performed on an n-gram corpus that implements a language model. And again, arranging for beam searches to be performed at least partially in parallel has been found to be an efficient approach to addressing the bottleneck that often results. Turning toFIGS.22A-B, in contrast to the approach described just above of distributing parallel performances of beam searches associated with a single speech data set3100across multiple node devices2300in the distributed processing system2000ofFIGS.14A-C, what will now be described in greater detail is an approach of distributing parallel performances of beam searches associated with a speech data set3100across multiple threads2454pof a thread pool2450within a single node device2300in the distributed processing system2000ofFIGS.14D-F. 
More specifically, for a single speech data set3100, the pre-processing operations of the control routine2310, and a subset of the speech-to-text operations of the control routine2340that precede operations associated with using a language model (as implemented with the corpus data set3400) may be performed entirely within a single thread2454swithin a single node device2300. Some degree of parallel performance of the pause detection pre-processing operations within the single thread2454smay be implemented through use of the neuromorphic device(s)2355(in embodiments in which the node device2300includes the neuromorphic device(s)2355) to obviate the need to implement an acoustic model based on a neural network in software for CTC-based pause detection. However, the use of a thread pool2450of multiple threads2454pmay be reserved for speech-to-text processing operations that are associated with using a language model. In this way, most, if not all, pre-processing operations and speech-to-text processing operations for a single speech data set3100may be performed entirely within a single node device2300, thereby eliminating much of the use of network communications associated with the distributed processing system2000ofFIGS.14A-C. Thus, for each speech data set3100, the need for communications among multiple devices through the network2999is obviated as a mechanism to achieve parallel performances of beam searches of the corpus data set3400as part of generating a text data set3700representing what was said in a speech represented by a speech data set3100. Instead, as shortly will be explained in greater detail, a buffer queue2460is used to distribute individual probability distribution sets3143generated in the single thread2454sof the preceding pre-processing and processing operations among the multiple threads2454pof a thread pool2450instantiated within the same single node device2300. As each thread2454pof a thread pool2450is used in generating a portion of a text data set3700from the probability distribution set3143provided to it as input, those portions of the text data set3700are assembled in temporal order to generate the text data set3700within the same single node device2300. Also in this way, depending on the overall quantity of threads2454that are able to be supported within each node device2300of the distributed processing system2000ofFIGS.14D-F, it may be possible for at least a subset of the node devices2300to each support the performance of pre-processing and speech-to-text processing operations by which multiple text data sets3700may be generated from multiple corresponding speech data sets3100in parallel. More specifically, and referring more specifically toFIG.22B, it may be that at least the depicted node device2300xyis able to support the use of a sufficient quantity of threads2454as to enable two thread pools2450xand2450yto be instantiated that each include a sufficient quantity of threads2454pas to enable a sufficient quantity of parallel performances of beam searches of the corpus data set3400as to enable the parallel generation of both of the depicted text data sets3700xand3700yfrom the depicted speech data sets3100xand3100y, respectively. As also depicted, another node device2300zmay be able to support the use of a sufficient quantity of threads2454as to enable at least one other thread pool2450zto be instantiated to similarly enable the generation of at least one other text data set3700zfrom a corresponding at least one other speech data set3100z. 
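A minimal sketch of this pipeline, in which a single producer thread feeds probability distribution sets in temporal order through a bounded FIFO buffer queue to a pool of worker threads, is given below; the names, the pool size, the queue depth and the placeholder language-model stage are all illustrative assumptions, not elements of the described embodiments.

```python
# Minimal sketch of the single-thread to thread-pool hand-off described above.
# All names, sizes and the placeholder worker stage are illustrative only.
import queue
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 4
SENTINEL = object()                      # marks the end of one speech data set
buffer_queue = queue.Queue(maxsize=16)   # bounded FIFO of data buffers

def language_model_stage(dist_set):
    """Placeholder for the per-set candidate word / n-gram / beam-search work."""
    return dist_set                      # stand-in; the real stage yields text

def producer(distribution_sets):
    """Single-threaded stage: emits probability distribution sets in temporal order."""
    for dist_set in distribution_sets:
        buffer_queue.put(dist_set)       # blocks when all buffers are full
    for _ in range(POOL_SIZE):
        buffer_queue.put(SENTINEL)       # one sentinel per worker thread

def worker(results):
    """Pooled stage: consumes sets from the FIFO queue until a sentinel arrives."""
    while True:
        dist_set = buffer_queue.get()
        if dist_set is SENTINEL:
            break
        results.append(language_model_stage(dist_set))

def run_pipeline(distribution_sets):
    results = []
    with ThreadPoolExecutor(max_workers=POOL_SIZE + 1) as pool:
        workers = [pool.submit(worker, results) for _ in range(POOL_SIZE)]
        pool.submit(producer, distribution_sets)
        for w in workers:
            w.result()                   # propagate any worker exceptions
    return results
```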
FIG.22Cdepicts some of the speech-to-text processing operations that are performed in a single thread2454sthat precedes the parallel use of a language model in a thread pool2450of multiple threads2454p. In this single-threaded execution environment, each data segment3140of a speech data set3100is used as an input to generating a corresponding probability distribution set3143. More specifically, in executing the division component2341of the control routine2340, processor(s)2350of a single one of the node devices2300may be caused to divide each data segment3140of multiple data segments of a speech data set3100into multiple data frames3141. In so executing the division component2341, an indication of the length of the speech audio that is to be represented by each data frame3141may be caused to be retrieved from the configuration data2335and used to control the division of each data segment3140into multiple data frames3141. Again, at least some acoustic models implemented using neural networks (and/or other technologies) may be designed to accept indications of detected audio features as input, instead of accepting audio data (e.g., the data frames3141) more directly as input. To accommodate the use of such implementations of an acoustic model, execution of the control routine2340may entail execution of a feature detection component2342to analyze the portion of speech audio represented by each data frame3141to identify instances of each of a pre-selected set of acoustic features. In so doing, processor(s)2350may be caused to generate a corresponding feature vector3142from each data frame3141that is analyzed. Each feature vector3142may include indications of each acoustic feature that is identified and when it occurred within the speech audio of the corresponding data frame3141. ComparingFIG.22CtoFIG.18A, it becomes evident that the very same acoustic model based on a neural network (e.g., the acoustic model neural network2234incorporating the CTC output2235) may be used both in the CTC-based pause detection and in generating probability distribution sets3143as part of using acoustic features in beginning the identification of words spoken. However, it should again be noted that other embodiments are possible in which different acoustic models based on differing types of neural network may be used, and/or in which different acoustic models based on entirely different technologies may be used. In embodiments in which neural network(s) are used, execution of a configuration component2344may cause processor(s)2350to again instantiate the same acoustic model neural network2234with the CTC output2235to implement the same acoustic model. As depicted, in some of such embodiments, it may be that one or more neuromorphic devices2355may be used to again implement the acoustic model neural network2234in hardware within each of one or more node devices2300. FIG.22Ddepicts aspects of the manner in which a buffer queue2460is employed in distributing, among the multiple threads2454pof the depicted thread pool2450, the probability distribution sets3143that have been generated within a single thread2454s, as just described in reference toFIG.22C. The buffer queue2460may be operated as a FIFO buffer. Thus, as probability distribution sets3143are being generated as an output of the acoustic model neural network2234within the single thread2454s, each one of those probability distribution sets3143may be stored within one of the data buffers2466to become available to the threads2454pof the thread pool2450. 
As the speech-to-text processing operations using one of the probability distribution sets3143are completed within each thread2454pso as to allow that thread2454pto become available for beginning such processing with another probability distribution set3143, that thread2454pmay be provided with the next probability distribution set3143in the order in which the probability distribution sets3143were stored within the buffer queue2460. It should be noted that, in some embodiments, the generation of probability distribution sets3143from data segments3140may be done in batches as part of an approach to make better use of opportunities for parallel performances of various operations enabled by the thread pool. Thus, a batch of data segments3140may be divided into data frames3141from which corresponding feature vectors3142may be generated, which may be provided as input to the acoustic model neural network2234to generate a corresponding batch of probability distribution sets3143. In such embodiments, it may then be that a batch of multiple ones of the probability distribution sets3143corresponding to a batch of multiple ones of the data segments3140may be stored together within a single data buffer2466of the buffer queue2460, thereby resulting in the batch of probability distribution sets3143corresponding to a batch of data segments3140being provided as an input to a single one of the threads2454pof the thread pool2450, instead of a single probability distribution set3143corresponding to a single data segment3140. Alternatively, in spite of the generation of batches of probability distribution sets3143from corresponding batches of data segments3140, it may be that just a single probability distribution set3143corresponding to just a single data segment3140may be stored within each of the data buffers2466. As previously discussed, in executing the resource routine2440, processor(s)2350of the single node device2300may instantiate the buffer queue2460in addition to instantiating the single thread2454sin which the probability distribution sets3143are generated, and the thread pool2450of multiple threads2454pin which the probability distribution sets3143are used. Although not specifically depicted, in some embodiments, it may be that the resource routine2440is executed within the single thread2454ssuch that the use of processing and/or storage resources for instantiation, maintenance and/or control of at least the buffer queue2460occurs within the single thread2454s. Alternatively, it may be that the resource routine2440is executed within an entirely separate thread2454(not specifically shown) such that the use of processing and/or storage resources for instantiation, maintenance and/or control of the buffer queue2460and/or of the threads2454sand/or2454poccurs within that separate thread2454. In some embodiments, the quantity of threads2454pallocated to the thread pool2450and/or the quantity of data buffers2466that are allocated to the buffer queue2460may be predetermined and fixed quantities. Indeed, it may be that such quantities are specified in the configuration data2335, and may be retrieved therefrom as part of instantiating a thread pool2450and/or a buffer queue2460. 
In other embodiments, one or both of these quantities may be dynamically adjustable based on various factors that may be monitored over time, including and not limited to, a rate at which a text data set3700is being generated from a speech data set3100(e.g., is this rate keeping up with speech of a speech data set3100that is currently being spoken in real time), a quantity of available processing resources (e.g., a maximum quantity of threads that processor(s)2350of a node device2300are currently able to support) and/or of available storage resources (e.g., an amount of available storage space that is able to be provided to sufficiently support the various operations being performed within the threads2454sand2454pfor each speech data set3100), etc. More specifically, where the processing and/or storage resources of a node device2300are not being fully utilized, it may be that additional threads2454pmay be added to existing thread pool(s)2450and/or it may be that additional data buffers2466may be added to existing buffer queue(s)2460. Still further, the quantity of threads2454pin a thread pool2450and/or the quantity of data buffers2466in a buffer queue2460may be adjusted based on such characteristics of a particular speech data set3100as a current audio noise level3112(that may be determined as discussed in reference toFIG.17A), based on what language(s) are spoken in the speech represented by a particular speech data set3100, and/or the current quantity of speakers that are determined to have spoken within the speech represented by a particular speech data set3100. As previously discussed, each data segment3140may include an indication of a range of time associated with the speech segment that it represents within the speech that is represented by a speech data set3100. As a probability distribution set3143is generated from each data segment3140, a time stamp may be assigned to each probability distribution of the relative probabilities of various graphemes and/or phonemes that may have occurred at the time indicated by that time stamp. Thus, each probability distribution set3143may include (or be otherwise associated with) a range of time that it covers out of the larger range of time during which the speech represented by the speech data set3100was spoken. Such indications of time within (or otherwise associated with) each probability distribution set3143may be used in causing the probability distribution sets3143to be loaded into the data buffers2466of the buffer queue2460in temporal order. In this way, advantage may be taken of the FIFO manner of operation of the buffer queue2460to ensure that the probability distribution sets3143are then distributed among the threads2454pof the thread pool2450in the same temporal order. In this way, there is at least an increased likelihood that, across the threads2454pof the thread pool2450, the portions of the text data3700that are generated as outputs of the speech-to-text operations performed within each of those threads2454pwill at least have a tendency to be output in temporal order. However, with separate instances of speech-to-text processing operations being performed entirely independently of each other, and in parallel, it is entirely possible that there may be portions of the text data set3700that are generated out of temporal order. 
To address this, each of the portions of the text data3700that are so generated may include (or be otherwise associated with) time stamps providing indications of the range of time covered by each of those portions, and such time stamps may then be used to ensure that those portions of the text data set3700are assembled in temporal order to correctly form the transcript within the text data set3700. FIG.22Edepicts some of the speech-to-text processing operations associated with using a language model, and that are performed as multiple instances thereof across the multiple threads2454pof the thread pool2450. Within each of the threads2454pof this multi-threaded execution environment, each probability distribution set3143generated from a data segment3140of a speech data set3100is used as an input to identifying the words that were spoken. More specifically, within each of the threads2454p, in executing a candidate word component2345of the control routine2340, processor(s)2350of the node device2300may be caused to generate sets of one or more candidate words3145from a probability distribution set3143. Then, in executing a candidate n-gram component2346of the control routine2340, processor(s)2350of the node device2300may be caused to generate corresponding one or more candidate n-gram sets3146from the one or more candidate words3145that are generated for the probability distribution set3143. Turning toFIG.22F, in preparation for the parallel performances of beam searches, each of the threads2454pmay be provided with a copy of the corpus data set3400, as depicted inFIG.22A. Alternatively, each of the node devices2300may be provided with a copy of the corpus data set3400to which access may be shared among the multiple threads2454pof a single thread pool2450, or to which access may be shared among the multiple threads2454pof more than one thread pool2450. Again, the corpus data set3400may implement a language model as a corpus of n-grams. Within each thread2454p, in executing a beam search component2347of the control routine2340, processor(s)2350of the node device2300may be caused to perform a beam search within the corpus data set3400for one or more of the n-grams present within the candidate n-gram set3146. Again, as will be familiar to those skilled in the art of n-gram language models, each n-gram within an n-gram corpus may be accompanied therein with an indication of the relative frequency of its occurrence and/or its relative probability of occurrence within texts of a particular language. As each n-gram is found within the corpus data set3400, an indication of the relative probability of that n-gram occurring may be stored within a probability set3147generated for all of the candidate n-grams in the candidate n-gram set3146earlier generated from a single probability distribution set3143. Following generation of each probability set3147, execution of a transcript component2348of the control routine2340may cause processor(s)2350of the node device2300to, based on the indications of the relative probabilities in the probability set3147for each n-gram within the candidate n-gram set3146, identify a candidate word3145among each corresponding set of candidate words3145as a next word most likely spoken. The identified most likely spoken words associated with the range of time covered by the candidate n-gram set3146(which corresponds to one of the probability distribution sets3143) may then be added to the transcript of the speech audio represented as a text data set3700. 
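The reassembly of independently generated transcript portions into temporal order may be sketched as follows; representing each portion as a (start time, text) pair is an assumption made for illustration.

```python
# Minimal sketch of reassembling the transcript: each worker emits a text
# portion tagged with the start time of the range it covers, and the portions
# are sorted by that time stamp before being joined (names illustrative).

def assemble_transcript(portions):
    """portions: iterable of (start_time_seconds, text) pairs produced out of
    order by the pooled threads; returns the transcript in temporal order."""
    return " ".join(text for _, text in sorted(portions, key=lambda p: p[0]))

portions = [(12.4, "over the lazy dog"), (0.0, "the quick brown fox"),
            (6.1, "jumps")]
print(assemble_transcript(portions))
# the quick brown fox jumps over the lazy dog
```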
FIGS.23A,23B and23Cillustrate examples of additional improvements that may be incorporated into the performance of various ones of the speech-to-text operations described above.FIG.23Aillustrates aspects of using the same acoustic model in the aforedescribed CTC segmentation technique and in the aforedescribed initial speech-to-text processing operations.FIG.23Billustrates aspects of the addition of dynamic per-word assignment of relative weighting to the use of an acoustic model or a language model in identifying spoken words.FIG.23Cillustrates aspects of selective concatenation of segments of audio speech to effect the formation of longer transcripts to improve the results of subsequent post-processing text analysis operations. Turning toFIG.23A, as previously discussed, due to the use of an acoustic model in the aforedescribed CTC segmentation technique ofFIGS.18A-B, and due to use of an acoustic model in the aforedescribed initial speech-to-text processing operations ofFIGS.21A-D, it may be that, in some embodiments, the very same acoustic model is used in both of these pre-processing and speech-to-text processing operations. In such embodiments, and where the processing system2000includes multiple node devices2300in which the single acoustic model may be used to perform both of those functions, it may be that the single acoustic model is instantiated within those multiple node devices2300in preparation for performing the CTC segmentation technique, and then allowed to remain instantiated so as to already be in place within the storage of those multiple node devices2300for subsequent use in the aforedescribed initial speech-to-text processing operations. In this way, advantage may be taken of an opportunity to avoid the consumption of time, network resources and/or processing resources to instantiate the same acoustic model twice. Thus, by way of example, and as specifically depicted inFIG.23A, in such embodiments where the acoustic model neural network2234may be implemented using the neuromorphic device(s)2355incorporated into each of such node devices2300, it may be that execution of the configuration component2314(as described earlier in connection withFIG.18A) to cause instantiation of the neuromorphic device(s)2355to implement the acoustic model neural network2234enables the avoidance of subsequent execution of the configuration component2344(as described earlier in connection withFIG.21A) to do so again. Turning toFIG.23B, as previously discussed, it has become commonplace to employ a two-stage combination of an acoustic model and a language model in which the acoustic model is typically relied upon to perform a first pass at identifying words that are likely to be the ones that were spoken, and the language model is typically relied upon to perform the next and final pass by refining the identification of such spoken words such that the words identified by the language model are the ones from which a transcript is generated. However, and as also previously discussed, the reduced error rate achieved by such a two-stage combination is still widely seen as being too high. Again, a possible reason for the error rate still being too high is that a good language model tends to resist identifying words that are actually spoken where those spoken words include mistakes in vocabulary and/or syntax. 
To improve upon the error rate of such a typical two-stage use of a combination of an acoustic model and a language model, in some embodiments, the transcript component2548may incorporate additional functionality to dynamically vary the relative weighting assigned to each of the acoustic model and the language model for each word to be identified based on the degree of uncertainty in the per-grapheme probability distributions output by the acoustic model for each word. Thus, in addition to being provided with the probability set3147and corresponding candidate words3145associated with a segment of speech audio as inputs, the transcript component2548may additionally receive the corresponding probability distribution set3143that includes the corresponding probability distributions for graphemes associated with the same segment of speech audio. In executing the transcript component2548, core(s)2551of processor(s)2550of the control device2500may be caused to use the probability distributions of graphemes that are output by the acoustic model for the pronunciation of a single word spoken within the segment to derive a measure of the degree of uncertainty for each of those probability distributions. Such a degree of uncertainty may be based on degree of a perplexity, degree of entropy, or other statistical measures of those probability distributions. Again, such a degree of uncertainty may serve as an indication of the degree to which a probability distribution for a grapheme presents an indefinite indication of which speech sound was uttered during a corresponding portion of the segment of speech audio. A probability distribution for graphemes that provides an uncertain indication of what speech sound was uttered may be one in which the degree of probability for the grapheme indicated as being the most probable is not significantly higher than the degree of probability for the grapheme indicated as being the second most probable. More specifically, where the difference between these two degrees of probability is less than a pre-determined threshold difference in probabilities, the probability distribution may be deemed to provide an indication that the second most probable grapheme is almost as likely to describe a speech sound that was uttered as the speech sound described by the most probable grapheme such that it is deemed to be uncertain as to which of these two speech sounds is the one that was uttered. In this way, the probability distribution may be said to provide an ambiguous indication of what speech sound was uttered. In some embodiments, the degree of uncertainty used to control which model is to be relied upon to identify a single word may be derived from measures of such a difference in probabilities associated with the most probable grapheme and the second most probable grapheme within each probability distribution associated with the single word. These differences in probabilities may be averaged or otherwise aggregated to derive a single value indicative of the degree of uncertainty, which may then be compared to a threshold degree of uncertainty specified in the configuration data2335. Where the degree of uncertainty is less than the threshold, greater weight may be assigned to the identification of the single word using the acoustic model, and where the degree of uncertainty is greater than the threshold, greater weight may be assigned to the identification of the single word using the language model. 
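A minimal sketch of this form of the per-word weighting decision is given below; the exact way the top-two probability margins are aggregated, the threshold value and the example distributions are illustrative assumptions rather than a definitive implementation.

```python
# Minimal sketch of deriving a per-word degree of uncertainty from how close
# the second most probable grapheme comes to the most probable one in each
# distribution, and of using a threshold to pick which model to favor.

def uncertainty_from_margins(distributions):
    """Average, over the distributions for one word, of how close the second
    most probable grapheme comes to the most probable one (0 = certain)."""
    margins = []
    for dist in distributions:
        top_two = sorted(dist, reverse=True)[:2]
        margins.append(1.0 - (top_two[0] - top_two[1]))
    return sum(margins) / len(margins)

def choose_model(distributions, threshold=0.6):
    """Favor the acoustic model when uncertainty is low, the language model
    when it is high (threshold is an illustrative configuration value)."""
    if uncertainty_from_margins(distributions) > threshold:
        return "language"
    return "acoustic"

clear = [[0.85, 0.05, 0.05, 0.05], [0.90, 0.04, 0.03, 0.03]]
noisy = [[0.35, 0.33, 0.20, 0.12], [0.40, 0.38, 0.12, 0.10]]
print(choose_model(clear), choose_model(noisy))   # acoustic language
```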
In other embodiments, the degree of uncertainty used to control which model is to be relied upon to identify a single word may be derived as an aggregate degree of perplexity or entropy. Stated differently, the degree of entropy or the degree of perplexity (which may be derived from a degree of entropy) of each probability distribution associated with the single word may be calculated and aggregated to derive the degree of uncertainty. In such embodiments, the aggregated degree of uncertainty may be compared to a threshold degree of uncertainty specified in the configuration data2335. Again, where the degree of uncertainty is less than the threshold, greater weight may be assigned to the identification of the single word using the acoustic model, and where the degree of uncertainty is greater than the threshold, greater weight may be assigned to the identification of the single word using the language model. As previously discussed, in some embodiments, both of the acoustic model and the language model may always be utilized in combination for each spoken word, regardless of whether the dynamic per-word determination is made to give greater weight to relying more on the acoustic model or the language model to identify a word. Thus, the beam searches associated with the execution of the beam search component2347to use the language model (where the language model is based on an n-gram corpus) may always be performed regardless of such dynamic per-word assignment of relative weighting. This may be the case where an output of the language model is employed as an input to the dynamic per-word relative weighting assigned to the acoustic and language models in addition to the degree of uncertainty for the probability distributions for the corresponding graphemes. Alternatively, in other embodiments, it may be that the language model is not used to provide any input to the dynamic per-word relative weighting. In such other embodiments, such a situation may provide the opportunity to entirely refrain from consuming processing and/or storage resources to perform beam searches associated with using the language model to identify a particular word if the results of the dynamic per-word relative weighting are such that the identification of the word that would be provided by the language model will not be used. In this way, use of the language model may be made contingent on such dynamic per-word relative weighting. As will be familiar to those skilled in the art, speech recognition in the human brain involves using a combination of detecting and recognizing speech sounds as received by the ears, and recognizing portions of language based on language rules. It has been observed that, where speech sounds are able to be clearly heard, speech recognition in the human brain tends to rely more heavily on those sounds to determine what was said. However, such reliance on speech sounds as received by the human ears may become insufficient where acoustic conditions are such that some speech sounds are masked enough to not be heard such that there are noticeable gaps in the speech sounds as received. It has been observed that, where at least some speech sounds are less clearly heard, speech recognition in the human brain tends to rely more heavily on language rules to determine what was said, thereby effectively “filling in the gaps” among the speech sounds that were able to be heard. 
To put this more simply, it has been observed that the human brain will take advantage of opportunities to not expend the resources needed to use language rules for such purposes when it is not necessary. The use of degrees of uncertainty to select between the acoustic and language models in identifying each word, as just described, effectively achieves a similar result. Where acoustic conditions are sufficiently good as to enable spoken words to be captured clearly, the probability distributions output by the acoustic model are more likely to demonstrate greater certainty in being able to identify words through use of the acoustic model, alone. However, where acoustic conditions are sufficiently poor as to degrade the ability to capture spoken words clearly, the probability distributions output by the acoustic model are more likely to demonstrate greater uncertainty in being able to identify words through use of the acoustic model alone, thereby inviting the use of the language model to identify words. Thus, such an evaluation of at least the degree of uncertainty of the probability distributions output by the acoustic model provides an indirect path for taking acoustic conditions into account in dynamically determining how each spoken word is ultimately identified. However, as also depicted inFIG.23B, alternative embodiments are possible in which the acoustic conditions under which speech sounds are captured may be more directly taken into account. Specifically, it may be that the indications of audio noise level3112that are determined and stored as part of performing the APA segmentation technique (as described earlier in connection withFIG.17A) may be used as another input to the transcript component2548in determining whether to use the acoustic model or the language model in selecting each word for inclusion in a transcript. By way of example, while it may be that the degree of uncertainty demonstrated in the probability distributions from the acoustic model may be a primary factor in making such selections, an indication in the audio noise level3112of there being audio noise at a level exceeding a pre-determined upper limit may trigger the use of the language model, regardless of the degree of uncertainty demonstrated in the probability distributions from the acoustic model. Turning toFIG.23C, from experimentation and observation, it has been found that, generally, many forms of automated text analyses are able to be more successfully used with longer transcripts. Again, it has been found that shorter transcripts tend to cause an overemphasis on words with greater frequencies of use in a language, with the result that analyses to derive topics and/or other insights concerning the text of a transcript tend to produce less useful results. As an approach to counteracting this effect, in some embodiments, all of the text derived from a single piece of speech audio may be maintained and treated (at least for purposes of performing text analyses) as a single transcript. More specifically, the text generated from speech-to-text processing of a single speech data set3100may be organized within the text data set3700as a single transcript. However, as also previously discussed, a single transcript encompassing speech audio that is especially long and/or that includes multiple conversations and/or verbal presentations may also beget less useful results when text analyses are performed thereon. 
Thus, in some embodiments, rules concerning lengths of transcripts, frequencies of words, and/or acoustic features such as relatively lengthy pauses may be used to bring about the generation of lengths and/or quantities of transcripts for each piece of speech audio that are more amenable to providing useful results from automated text analyses. More specifically, a set of such rules may be used to cause the selective concatenation of the text of consecutive sets of segments of speech audio stored as a single speech data set3100to form multiple transcripts that may be stored together as a set of transcripts within a single corresponding text data set3700(or as a set of transcripts that are each stored as a separate text data set3700). Such a text data set3700(or such a multitude of text data sets3700) may include indications of the relative temporal order of the multiple transcripts to preserve at least that contextual aspect. Indications of such rules and/or thresholds therefore may be maintained as part of the configuration data2335. Among such thresholds may be a minimum and/or maximum threshold for the size of a transcript, which may be expressed in terms of quantities of words and/or lengths of time periods. In some of such embodiments, it may be that text associated with segments of speech audio may be automatically combined to form transcripts that have a length that meets such word count and/or time thresholds. Alternatively or additionally, the configuration data2335may specify a minimum threshold quantity of words in a transcript that are required to have a frequency of occurrence in a language that falls below a specified maximum threshold. In some of such embodiments, it may be that text associated with segments of speech audio may be combined to form transcripts in which the combination of words includes such a requisite quantity of such lower frequency words. In so doing, the storage, within a corpus data set3400, of uni-grams that are each correlated to an indication of frequency of use may be relied upon as a source of such indications of frequency. Also alternatively or additionally, the configuration data2335may specify a minimum threshold length of time for a pause between speech sounds that may be greater than the minimum threshold length for a likely sentence pause such that it may be deemed a likely pause between conversations and/or verbal presentations where a change of subject may be more likely to occur. In some of such embodiments, occurrences of such longer pauses may be used as breakpoints at which text may be divided to define multiple transcripts. There may still be an enforcement of minimum and/or maximum thresholds as a default to address situations in which too few or too many of such longer pauses are found to occur. FIGS.24A,24B,24C,24D,24E,24F and24G, taken together, illustrate, in greater detail, aspects of the generation and/or augmentation of an n-gram corpus implementing an n-gram language model. 
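A minimal sketch of such rule-based formation of transcripts from consecutive segments is given below; the specific thresholds, the representation of each segment as a (preceding pause, text) pair and the omission of the low-frequency-word rule are all simplifying assumptions.

```python
# Minimal sketch of selectively concatenating the text of consecutive speech
# segments into transcripts: long pauses act as breakpoints, subject to a
# minimum and a maximum transcript length in words (thresholds illustrative).

def form_transcripts(segments, long_pause_s=5.0, min_words=50, max_words=400):
    """segments: list of (pause_before_seconds, text) pairs in temporal order.
    Returns a list of transcripts, each a concatenation of segment texts."""
    transcripts, current, count = [], [], 0
    for pause_before, text in segments:
        words = text.split()
        break_here = (pause_before >= long_pause_s and count >= min_words) \
            or (count + len(words) > max_words and count >= min_words)
        if break_here and current:
            transcripts.append(" ".join(current))
            current, count = [], 0
        current.append(text)
        count += len(words)
    if current:
        transcripts.append(" ".join(current))
    return transcripts
```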
More specifically,FIGS.24A-Gpresent, in greater detail, aspects of the generation and/or augmentation of a corpus data set3400based on the contents of a text data set3700.FIG.24Aillustrates aspects of the distribution of portions of a selected text data set3700among multiple node devices2300in preparation for the generation of n-grams therefrom.FIG.24Billustrates aspects of the generation of a portion of an n-gram corpus from each of the portions of the selected text data set3700.FIGS.24C-Dillustrate aspects of the collection and combining of the generated portions of n-gram corpus to either form an entirely new corpus data set3400, or augment an existing corpus data set3400.FIG.24Eillustrates aspects of the distribution of portions of the new or augmented corpus data set3400among multiple node devices2300in preparation for the deduplication of n-grams therein.FIGS.24F-Gillustrate aspects of the collection and re-combining of the deduplicated portions of the corpus data set3400, and the calculation and/or re-calculation of relative frequencies and/or probabilities of occurrence of each of the n-grams therein. Turning toFIG.24A, within the control device2500, execution of the control routine2510may cause processor(s)2550thereof to select particular ones of the node devices2300for use in performing operations to generate or augment an n-gram corpus from a selected text data set3700. The text data set3700may have been previously generated as a transcript from speech audio, and/or the text data set3700may have been generated from any of a variety of other sources. Following the selection of node devices2300, in executing a coordination component2519of the control routine2510, processor(s)2550of the control device2500may be caused to cooperate with processors2350of the node devices2300to coordinate communications through the network2999to cause the provision of a different portion3710of the text data set3700to each of the selected node devices2300. In this way the selected node devices2300are prepared for use in generating n-grams from the selected text data set3700in a distributed manner. Turning toFIG.24B, in some embodiments, the processor(s)2350of one or more of the selected node devices2300may be capable of supporting multiple execution threads2352by which multiple different executable routines and/or multiple instances of an executable routine may be executed at least partially in parallel. Within each of such selected node devices2300, the received text data portion3710may be divided into multiple text data sub-portions3711that are distributed among multiple execution threads2352therein. Within each such execution thread2352, execution of an n-gram component2317of an instance of the control routine2310may cause a core of a processor2350to parse through the text within the corresponding text data sub-portion3711to generate n-grams therefrom. In so doing, within each execution thread2352, it may be that an n-gram buffer2237is instantiated to temporarily assemble and store sets of the generated n-grams until the n-gram buffer2237has been filled to at least a predetermined degree, whereupon the contents of the n-gram buffer2237may be added to a corresponding corpus data sub-portion3411. In some embodiments, the n-gram buffer2237may be implemented as a hash map in which a two-dimensional (2D) array is defined wherein each row thereof is to store an n-gram generated from the corresponding text-data sub-portion3711, along with a count of instances of that n-gram that have been generated. 
As each n-gram is generated from the text of the text data sub-portion3711, a hash value may be taken of that n-gram, and that hash value may become the index value used to specify which row within the n-gram buffer2237is the row in which that n-gram is to be stored, and in which the count for that n-gram is to be incremented to reflect the generation of an instance thereof. Each time the contents of the n-gram buffer2237are added to the corresponding corpus data sub-portion3411, the counts for all of the rows therein may be reset to indicate a quantity of 0 instances. Such use of an n-gram buffer2237implemented as such a hash map may aid in reducing data storage requirements for each execution thread2352and/or for each corpus data sub-portion by enabling some degree of deduplication of n-grams to be performed. More specifically, such use of hash values as index values for rows within such an implementation of a hash table enables multiple instances of the same n-gram to be relatively quickly and efficiently identified so that just a single row of storage space within the n-gram buffer2237is occupied for those multiple instances, instead of allowing each of those instances to occupy a separate storage location within a data structure, even temporarily. Such use of distributed processing across multiple node devices2300and/or across multiple execution threads2352within each node device2300, and such use of hash maps in performing at least an initial deduplication of n-grams, may serve to enable relatively large n-gram corpuses to be generated and used in the performance of speech-to-text processing. As a result, supporting a larger than commonplace n-gram corpus that includes larger n-grams that include relatively large quantities of words (e.g., greater than the more commonplace quantities of 5 words or less) becomes practical. Alternatively or additionally, supporting a larger than commonplace n-gram corpus that includes highly infrequently used n-grams (e.g., n-grams that include names of specific people and/or places such that they may be found in just one of thousands of text documents) also becomes practical. As those skilled in the art will readily recognize, it is commonplace practice to allow only n-grams that occur in texts with a frequency above a predetermined minimum threshold frequency to be included in an n-gram corpus in an effort to limit the overall size thereof. The ability to support a larger n-gram corpus may render such a restriction unnecessary, thereby increasing the accuracy that is able to be achieved in performing speech-to-text processing. Within each of the selected node devices2300, following the use of the entirety of the text data sub-portion3711in generating n-grams, the multiple execution threads2352may be caused to cooperate to assemble the multiple corpus data sub-portions3411therein to form a single corresponding corpus data portion3410. Turning toFIG.24C, within the control device2500, further execution of the coordination component2519may cause processor(s)2550of the control device2500to cooperate with processors2350of the node devices2300to coordinate communications through the network2999to cause the corpus data portions3410generated within each of the selected node devices to be provided to the one or more storage devices2100. In so doing, the multiple corpus data portions3410may be combined to form a new corpus data set3400, or may be combined and added to an existing corpus data set3400. 
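The per-thread generation of n-grams and their accumulation in a hash-map style buffer that is periodically flushed to a corpus data sub-portion might be sketched as follows; a Python Counter stands in for the fixed-size, hash-indexed 2D array described above, and the n-gram size limit and flush level are illustrative choices.

```python
# Minimal sketch of the hash-map style n-gram buffer: multiple instances of
# the same n-gram occupy one entry whose count is incremented, and the buffer
# is flushed to the growing sub-portion once it reaches a preset fill level.
from collections import Counter

def generate_ngrams(words, max_n=3):
    """Yield every n-gram of 1..max_n consecutive words from the token list."""
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield tuple(words[i:i + n])

def build_sub_portion(text, max_n=3, flush_at=10000):
    """Accumulate (n-gram, count) pairs, flushing the in-memory buffer to the
    corpus data sub-portion whenever it holds flush_at distinct n-grams."""
    sub_portion, buffer = [], Counter()
    for ngram in generate_ngrams(text.split(), max_n):
        buffer[ngram] += 1
        if len(buffer) >= flush_at:
            sub_portion.extend(buffer.items())
            buffer.clear()               # counts reset, as described above
    sub_portion.extend(buffer.items())
    return sub_portion
```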
Turning toFIG.24D, as depicted, each of the corpus data sets3400stored within the one or more storage devices2100may employ a 2D array data structure of rows3421and columns3422. As also depicted, while each n-gram may occupy a single row3421, each word within an n-gram occupies a separate column3422such that the number of columns occupied by each n-gram is based on the quantity of words that it includes. It should be noted thatFIG.24Ddepicts a deliberately highly simplified example of a very small n-gram corpus that includes relatively few uni-grams3431and relatively few bi-grams3432. As depicted, the single word within each of the uni-grams3431occupies just column3422a, while the pair of words within each of the bi-grams3432occupies both columns3422aand3422b. As will be familiar to those skilled in the art, the currently widely used standard format for organizing n-gram corpuses to implement a language model is the “ARPA” text format originally introduced by Doug B. Paul of the Massachusetts Institute of Technology. The ARPA format is generally implemented as an ASCII text file in which each n-gram is stored within a separate line of text separated by carriage returns. Although this format is widely accepted, it suffers various disadvantages, including slower access due to requiring a text parser to interpret the contents of each line (not all of which include n-grams). Another limitation of the ARPA format is the imposition of a requirement that all n-grams having the same quantity of words must be grouped together, and must be provided with a textual label indicating the quantity of words therein. In contrast, the 2D array format depicted inFIG.24Ddoes not require a text parser for such purposes as it relies on the row-column organization of the array structure to enable speedier addressability and access to each word of n-gram. Also, as depicted, there may be no need to group the uni-grams3431together and separately from the bi-grams3432, or to provide distinct labels or other form of identification for each group. Instead, it may simply be the quantity of columns3422occupied by each n-gram that determines the quantity of words therein. Again, the single word of each uni-gram3431occupies the single column3422a, while the pair of words of each bi-gram3432occupies the pair of columns3422aand3422b, and so on. However, it should be noted that such a 2D array format enables relatively easy importation of the n-grams and related information from the ASCII text file structure of the ARPA format. Specifically, a text parser may be used just once to parse such a text file structure to identify n-grams and related information with which to fill the rows of the 2D array format. As a result of using such a 2D array format, the combining of the corpus data portions3410to form a new corpus data set3400, or to add to an existing corpus data set3400, becomes a relatively simple matter of combining rows3421. In this way, the need for a text parser, as well as text file editing functionality, is eliminated. Turning toFIG.24E, following such combining of rows3421as part of combining corpus data portions3410containing newly generated n-grams, as just discussed, processor(s)2550of the control device2500may be caused to cooperate with the one or more storage devices2100to re-distribute the newly formed or newly augmented corpus data set3400among multiple node devices2300in preparation for being refined. 
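A minimal sketch of this row/column layout is given below; the tiny example corpus and the empty-string padding of unused columns are assumptions made purely to illustrate how the quantity of occupied columns conveys the size of each n-gram without any text parsing or per-group labels.

```python
# Minimal sketch of the 2D array corpus layout: one n-gram per row, one word
# per column, with the count of occupied columns giving the n-gram size.

MAX_N = 3
rows = [
    ["the",   "",      ""     ],   # uni-gram
    ["quick", "",      ""     ],   # uni-gram
    ["the",   "quick", ""     ],   # bi-gram
    ["the",   "quick", "brown"],   # tri-gram
]

def ngram_size(row):
    """Quantity of words in the n-gram stored in this row."""
    return sum(1 for cell in row if cell)

for row in rows:
    print(ngram_size(row), [cell for cell in row if cell])
```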
More specifically, although the newly formed or newly augmented corpus data set3400may contain a relatively large quantity of newly generated n-grams, there may remain duplications of n-grams therein, at least as a result of having been generated in a distributed manner across multiple node devices2300. Also, to fully enable the use of the corpus data set3400as a language model, relative frequencies and/or probabilities of occurrence for each n-gram must be calculated, or re-calculated. Unlike the relatively simple division of the text data set3700into text data portions3710earlier discussed in reference toFIG.24A, inFIG.24E, the rows3421of n-grams within the corpus data set3400may be reorganized into groups based on hash values taken of each n-gram. More precisely, a hash value may be taken of each n-gram, and then the n-grams may be reorganized within the corpus data set3400based on an ascending or descending order of their hash values. This advantageously has the result of causing the rows3421of duplicate n-grams to become adjacent rows3421. With the rows3421of n-grams so reorganized, sub-ranges of hash values within the full range of hash values may be derived as a mechanism for dividing the corpus data set3400into multiple corpus data groups3415that contain relatively similar quantities of rows3421for distribution among the multiple node devices2300. In this way, each set of adjacent rows3421of duplicate n-gram is kept together and provided together to a single node device2300for deduplication. As previously discussed, in some embodiments, it may be that processor(s) of the one or more storage devices2100are capable of performing at least a limited range of processing operations needed to maintain local and/or distributed file systems as part of storing data sets of widely varying sizes within either a single storage device2100or across multiple storage devices2100. In such embodiments, the processor(s) of the one or more storage devices2100may be capable of performing at least a limited range of data reorganization functions, including the grouping of rows within array-type data structures based on a variety of organizing criteria, including hash values. Thus, in such embodiments, it may be that processor(s)2550of the control device are caused, by execution of the coordinating component2519, to transmit a command to the one or more storage devices2100to cause such a reorganization of the rows3421within the corpus data set3400, prior to the division of the corpus data set3400into the multiple corpus data groups3415by sub-ranges of those very same hash values. Turning toFIG.24F, within each of the multiple node devices2300, execution of a compacting component2318may cause processor(s)2350thereof to iterate through the rows3421of n-grams within its corresponding corpus data group3415to identify instances of two or more rows3421containing duplicate n-grams. For each such instance of duplicate n-grams, the two or more rows3421containing duplicates of an n-gram may be reduced to a single row3421containing just a single copy of that n-gram, and an indication of at least the quantity of duplicates identified may be stored within the single row3421. As such deduplication of n-grams within each corpus data group3415is completed, the corpus data groups3415may be provided to the control device2500, where they may be re-combined to recreate the corpus data set3400. 
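The hash-ordered regrouping and subsequent deduplication might be sketched as follows; the particular hash function, the fixed 64-bit hash range and the representation of each row as an (n-gram, count) pair are illustrative assumptions.

```python
# Minimal sketch of hash-ordered regrouping and deduplication: rows are sorted
# by a hash of their n-gram so duplicates become adjacent, split into groups by
# hash sub-range, and each group is compacted independently.
import hashlib
from itertools import groupby

def ngram_hash(ngram):
    """64-bit hash of an n-gram (illustrative choice of hash function)."""
    digest = hashlib.md5(" ".join(ngram).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def partition_rows(rows, group_count):
    """rows: list of (ngram, count). Returns group_count lists of rows, with
    all duplicates of any n-gram landing in the same group."""
    ordered = sorted(rows, key=lambda r: ngram_hash(r[0]))
    span = (2 ** 64) // group_count + 1
    groups = [[] for _ in range(group_count)]
    for ngram, count in ordered:
        groups[ngram_hash(ngram) // span].append((ngram, count))
    return groups

def deduplicate(group):
    """Collapse adjacent duplicate n-grams into one row with summed counts."""
    return [(ngram, sum(c for _, c in items))
            for ngram, items in groupby(group, key=lambda r: r[0])]
```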
In so doing, execution of a probability component2511of the control routine2510may cause processor(s)2550of the control device2500to calculate values for the frequency and/or probability of occurrence for each n-gram, and to augment each row3421with those value(s). More specifically, and as depicted inFIG.24G, one or more columns3422that were previously unoccupied across all of the rows3421may be caused to store such frequency and/or probability values. Returning toFIG.24F, as will be familiar to those skilled in the art, there may arise situations in which the n-grams within the corpus data set3400do not cover all possible combinations of the words that are present within the corpus data set3400. This may result in a default assignment of a zero probability value to such combinations of words as if such combinations could never occur, and this may adversely affect the accuracy of the resulting language model in speech-to-text operations. To at least mitigate this adverse effect, the processor(s)2550of the control device2500may be caused to provide one of a variety of types of “smoothing” of values indicative of probability of occurrence for at least a subset of the n-grams within the corpus data set3400. More specifically, for at least some n-grams with a higher probability of occurring, their probability values may be reduced by a relatively small degree (thereby indicating a slightly reduced probability of occurring), and the probability value assigned for the occurrence of n-grams not included within the corpus data set3400may be increased to a non-zero value. Among the widely accepted techniques for smoothing are various “backoff” calculations that may be used to derive a backoff value by which the probability values of at least a subset of the n-grams may be multiplied to reduce those values by a relatively small degree. As those skilled in the art will readily recognize, one widely used technique for calculating the backoff value is the Katz back-off model introduced by Slava M. Katz, but this technique becomes less effective as the size of the n-gram corpus increases. Another widely known technique is the “Stupid Backoff” introduced by Google, Inc. in 2007, but this technique is based on the use of a fixed value which, despite being capable of at least somewhat better results than the Katz back-off model, can also yield increasingly less effective results as the size of the n-gram corpus increases. To better handle the potentially larger than commonplace size of the n-gram corpus within the corpus data set3400, the probability component2511may employ an entirely new calculation:

Backoff(n)=|Set(n-gram)|/|Set((n−1)-gram)|

In this new calculation, the backoff value for an n-gram corpus of up to n words per n-gram may be derived by dividing the quantity of n-grams that include n words by the quantity of n-grams that include n−1 words. This backoff value is able to be quickly and simply calculated once, and then the values for the probability of occurrence of all of the n-grams may be multiplied by this backoff value. Since this backoff value is calculated based on the n-grams actually present within the corpus data set3400, instead of being based on an arbitrary fixed value, the resulting n-gram perplexity is not rendered artificially smaller than it should be, thereby enabling better accuracy in the use of the corpus data set3400as a language model for speech-to-text processing operations. 
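A minimal sketch of this backoff calculation and its application is given below; representing the corpus as a list of (n-gram, probability) pairs is an assumption for illustration, and the sketch applies the scaling to every stored probability as described above.

```python
# Minimal sketch of the backoff value described above: the count of n-grams
# with n words divided by the count of n-grams with n-1 words, calculated once
# and then applied as a multiplier to the stored probabilities of occurrence.

def backoff_value(corpus_rows, n):
    """corpus_rows: list of (ngram_tuple, probability) pairs."""
    count_n = sum(1 for ngram, _ in corpus_rows if len(ngram) == n)
    count_n_minus_1 = sum(1 for ngram, _ in corpus_rows if len(ngram) == n - 1)
    return count_n / count_n_minus_1

def smooth(corpus_rows, n):
    """Scale every probability by the backoff value computed from the corpus."""
    b = backoff_value(corpus_rows, n)
    return [(ngram, probability * b) for ngram, probability in corpus_rows]
```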
The logic flow4100may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow4100may illustrate operations performed by core(s)2351and/or2551of the processor(s)2350and/or2550of the node devices2300and/or of the control device2500, respectively, in executing various ones of the control routines2310,2340,2510and2540. Starting atFIG.25A, at4110, processor(s) of a control device of a processing system (e.g., the processor(s)2550of the control device2500of the processing system2000ofFIGS.14A-C) may receive a request from a requesting device via a network (e.g., the requesting device2700via the network2999) to perform speech-to-text conversion of speech audio represented by a specified speech data set (e.g., one of the speech data sets3100). At4112, pre-processing of the speech audio represented by the specified speech data set may begin with either a processor of the control device or processor(s) of one or more node devices of the processing system (e.g., one or more of the node devices2300) dividing the speech data set into data chunks that each represent a chunk of the speech audio. As has been discussed, the pre-processing may entail the performances of multiple pause detection techniques (e.g., the combination of at least the APA pause detection technique ofFIGS.17A-C, and the CTC pause detection technique ofFIGS.18A-B) at least partially in parallel. As also discussed, where the processing system does include multiple node devices (e.g., the multiple node devices2300), it may be that each pause detection technique is assigned to be performed by a different one of the node devices. Alternatively, where the processing system does not so include such a multitude of node devices, it may be that each pause detection technique is assigned to be performed by a different core and/or a different processor of the control device. It should again be noted that the chunks of the speech audio used by different ones of the pause detection techniques may not be of the same size, or more precisely, may not represent chunks of the speech audio that are of the same length (e.g., as previously discussed, the chunks of speech audio generated for the APA pause detection technique may be shorter than those generated for the CTC pause detection technique). Therefore, it may be that multiple different sets of chunks of the speech audio are generated at4112. More precisely, where each pause detection technique is assigned to a different node device or to a different thread of execution, it may be that the division of the speech audio into chunks is among the operations that are also so assigned such that separate node devices or separate cores are used to separately generate chunks of speech audio that are of appropriate length for their corresponding one of the pause detection techniques. Regardless of the exact manner in which chunks of speech audio are generated at4112, as depicted, multiple portions of pre-processing may be performed at least partially in parallel acrossFIGS.25B-25D, including the APA and CTC pause detection techniques. Turning toFIG.25B, and following the generation of APA data chunks at4112that are of appropriate size for use as inputs to the APA pause detection technique (e.g., the data chunks3110a), at4120, core(s) of a processor of either a node device or of the control device may analyze the chunk of speech audio represented by each APA data chunk to identify and measure the peak amplitude present therein. 
At4122, with the peak amplitudes of each of the APA data chunks so measured, a pre-selected percentile amplitude may be derived from across all of the measured peak amplitudes from across all of the APA data chunks, and may be designated to serve as a threshold amplitude (e.g., the threshold amplitude2232). At4124, the peak amplitude measured within each of the APA data chunks may be compared to the threshold amplitude. At4126, each APA data chunk representing a chunk of speech audio having a peak amplitude greater than the threshold amplitude may be designated as a speech data chunk (e.g., a speech data chunk3110s), and each APA data chunk representing a chunk of speech audio having a peak amplitude less than the threshold amplitude may be designated as a pause data chunk (e.g., a pause data chunk3110p). Again, in various differing embodiments, each APA data chunk representing a chunk of speech audio having a peak amplitude equal to the threshold amplitude may be designated as either a speech data chunk or a pause data chunk. At4130, a first set of temporally consecutive APA data chunks of a pre-selected quantity, starting with the temporally earliest one of the APA data chunks, may be selected and analyzed to identify the longest consecutive subset of the APA data chunks therein that have been designated as pause data chunks, thereby corresponding to the longest pause present across all of the corresponding consecutive chunks of speech audio represented by the set of APA data chunks. The identified longest pause may be designated a likely sentence pause. At4132, an indication of the just-designated likely sentence pause may then be noted within an APA pause set of indications of likely sentence pauses (e.g., the APA pause set3116aof likely sentence pauses). As previously discussed, such an indication of a likely sentence pause within the APA pause set may include an indication of the temporal location of the likely sentence pause within the entirety of the speech audio. At4134, a check may be made of whether there are any more APA data chunks beyond (i.e., temporally later than) the set of APA data chunks just analyzed. If so, then at4136, another set of temporally consecutive APA data chunks of a pre-selected quantity may be selected, where the newly selected set may start either 1) with the APA chunk that temporally follows the subset of APA data chunks that make up the longest pause of the last set, or 2) amidst the subset of APA data chunks that make up the longest pause of the last set (e.g., with the APA chunk at the midpoint of that longest pause). The newly selected set of APA data chunks may then be analyzed to identify the longest consecutive subset of the APA data chunks within the new set that have been designated as pause data chunks, thereby corresponding to the longest pause present across all of the corresponding consecutive chunks of speech audio represented by the set of APA data chunks. The identified longest pause may be designated a likely sentence pause. Again, at4132, an indication of the just-designated likely sentence pause may then be noted within the APA pause set of likely sentence pauses. However, if at4134, there are no more APA data chunks beyond the set of APA data chunks just analyzed, then preparations are made to perform a speaker diarization technique, starting at4160inFIG.25E.
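A minimal Python sketch of the APA pause detection steps 4120-4136 described above (assuming fixed-length, non-overlapping chunks of audio samples; the percentile, window size, and the rule for restarting at the midpoint of the last pause are illustrative parameter choices):

```python
import numpy as np

def apa_likely_sentence_pauses(samples, chunk_len, percentile=20.0, window=50):
    """Split audio samples into fixed-length chunks, label each chunk as speech
    or pause by comparing its peak amplitude to a percentile-derived threshold,
    then scan windows of consecutive chunks and record the longest run of pause
    chunks in each window as a likely sentence pause (start index, run length)."""
    chunks = [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]
    peaks = np.array([np.max(np.abs(c)) for c in chunks])
    threshold = np.percentile(peaks, percentile)
    is_pause = peaks < threshold

    pauses = []
    start = 0
    while start < len(is_pause):
        flags = is_pause[start:start + window]
        best_len, best_start, run_len, run_start = 0, None, 0, None
        for i, flag in enumerate(flags):
            if flag:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len > best_len:
                    best_len, best_start = run_len, run_start
            else:
                run_len = 0
        if best_len > 0:
            pause_start = start + best_start
            pauses.append((pause_start, best_len))
            # Start the next window near the midpoint of the pause just found.
            start = pause_start + best_len // 2 + 1
        else:
            start += window
    return pauses
```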
Turning toFIG.25C, and following the generation of APA data chunks at4112that are of appropriate size for use as inputs to the APA pause detection technique (e.g., the data chunks3110a), at4114, core(s) of a processor of either a node device or of the control device may analyze the chunk of speech audio represented by each APA data chunk to identify and measure an amplitude of audio noise present therein. As previously discussed in reference toFIG.17A, it may be that such measurements of a level of audio noise may be taken coincident with the taking of measurements of peak amplitude of each of the APA data chunks. However, it should be noted that other embodiments are possible in which measurements of a level of audio noise may be taken of other chunks generated for another of the multiple pause detection techniques, or measurement(s) may be taken of a level of audio noise in the speech audio at a time and/or in a manner that may be entirely unconnected with any of the pause detection techniques. At4116, with the audio noise levels of each of the APA data chunks so measured, at least one indication of the audio noise level within the speech audio (e.g., the audio noise level3112) may be derived in any of a variety of ways. By way of example, and as previously discussed, such an indicated audio noise level may be based on average noise levels, lowest noise levels, and/or highest noise levels across all of the APA data chunks. Following the derivation of the indicated audio noise level, preparations are made to perform a speaker diarization technique, starting at4160inFIG.25E. Turning toFIG.25D, and following the generation of CTC data chunks at4112that are of appropriate size for use as inputs to the CTC pause detection technique (e.g., the data chunks3110c), at4140, core(s) of a processor of either a node device or of the control device may instantiate and/or configure an acoustic model neural network within the node device or within the control device (e.g., the acoustic model neural network2234). As has been discussed, the acoustic model neural network that is so configured may incorporate a CTC output (e.g., the CTC output2235) that would normally be used to output a blank symbol that provides an indication of there being consecutive instances of a character that are not to be merged. At4142, the temporally earliest one of the CTC data chunks may be provided to the acoustic model neural network as an input. At4144, if there are no strings of consecutive blank symbols output by the CTC output of the acoustic model neural network, then a check may be made at4154of whether there are any more CTC data chunks remaining to be provided to the acoustic model neural network as input. If there is at least one more of such CTC data chunks remaining, then the temporally next CTC data chunk (i.e., the next CTC data chunk in order from the temporally earliest to the temporally latest) may be provided to the acoustic model neural network as input at4156. However, if at4144, there are one or more strings of consecutive blank symbols output by the CTC output of the acoustic model neural network in response to the provision thereto of a CTC data chunk as input, then at4146, the length of each of those one or more strings may be compared to a pre-determined threshold blank string length. At4148, if there is any string of consecutive blank symbols that is at least as long as the threshold blank string length, then each such string may be designated as a likely sentence pause.
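The comparison of blank-symbol run lengths against the threshold at 4144-4148 can be illustrated with a short Python sketch (the blank symbol and threshold are placeholders; in the described embodiment the symbols would come from the CTC output 2235 of the acoustic model neural network 2234):

```python
def ctc_blank_run_pauses(ctc_symbols, blank_symbol, min_run_length):
    """Scan the symbol sequence emitted for one CTC data chunk and return each
    run of consecutive blank symbols at least min_run_length long, as
    (start_index, run_length) pairs; such runs are treated as likely pauses."""
    pauses = []
    run_start = None
    sentinel = object()                      # guarantees the final run is closed
    for i, sym in enumerate(list(ctc_symbols) + [sentinel]):
        if sym == blank_symbol:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run_length:
                pauses.append((run_start, i - run_start))
            run_start = None
    return pauses

# Example: with "_" as the blank symbol and a threshold of 3, only the run of
# four blanks counts as a likely sentence pause.
print(ctc_blank_run_pauses(list("ab__cd____e"), "_", 3))   # [(6, 4)]
```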
If, at4150, there are no strings of consecutive blank symbols in the output of the neural network that have been so designated as likely sentence pauses, then the check of whether there are any more CTC data chunks remaining may be made at4154. However, if at4150, there are one or more strings of consecutive blank symbols that have been so designated as likely sentence pauses, then for each such string, an indication of a likely sentence pause may then be added to the CTC pause set of indications of likely sentence pauses at4152, and then the check may be made at4154for more CTC data chunks. However, if at4154, there are no more CTC data chunks, then preparations are made to perform a speaker diarization technique, starting at4160inFIG.25E. Turning toFIG.25E, at4160, core(s) of a processor of either a node device or of the control device may continue the pre-processing of the speech audio of the speech data set by again dividing the speech data set into data chunks that each represent a chunk of the speech audio, this time for use in speaker diarization (e.g., the speaker diarization data chunks3110d). As has been discussed, the pre-processing may entail the performance of at least one speaker diarization technique (e.g., the speaker diarization technique ofFIGS.19A-D). As also discussed, where more than one speaker diarization technique is to be performed, and where the processing system does include multiple node devices (e.g., the multiple node devices2300), it may be that each speaker diarization technique is assigned to be performed by a different one of the node devices. Alternatively, where the processing system does not so include such a multitude of node devices, it may be that each speaker diarization technique is assigned to be performed by a different core and/or a different processor of the control device. However, in the example performance of pre-processing and processing operations performed in this logic flow4100, it is assumed that just a single speaker diarization technique is performed. In addition to dividing the speech audio of the speech data set into speaker diarization data chunks, each of the speaker diarization data chunks may be further subdivided into data fragments. Further, at4162, the indications of likely sentence pauses from each of the pause sets generated by the multiple pause detection techniques may be used to filter out (or otherwise remove) each data fragment that represents a portion of speech audio in which even a portion of a sentence pause is likely to have occurred. In this way, it becomes more likely that all of the data fragments that are present within each speaker diarization data chunk will include speech sounds. At4164, core(s) of a processor of either a node device or of the control device may instantiate and/or configure a speaker diarization neural network within the node device or within the control device (e.g., the speaker diarization neural network2237). At4166, the temporally earliest one of the speaker diarization data chunks may be provided to the speaker diarization neural network as an input. More precisely, each of the data fragments of that speaker diarization data chunk may be provided to the speaker diarization neural network as an input to cause the generation of a corresponding speaker vector. As previously discussed, each speaker vector includes a set of binary values and/or other numeric values that are descriptive of various vocal characteristics of a speaker.
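The filtering of data fragments at 4162 can be sketched as a simple interval-overlap test (fragments and likely pauses are represented here as (start, end) times in seconds; names are hypothetical):

```python
def filter_fragments(fragments, likely_pauses):
    """Drop every data fragment whose time span overlaps any part of a likely
    sentence pause, so the fragments passed to the speaker diarization neural
    network are more likely to contain only speech sounds."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    return [f for f in fragments
            if not any(overlaps(f, p) for p in likely_pauses)]

# Example: the fragment spanning 2.4-2.6 s overlaps the pause at 2.5-3.1 s and
# is removed; the other two fragments are kept.
print(filter_fragments([(2.0, 2.2), (2.4, 2.6), (3.2, 3.4)], [(2.5, 3.1)]))
```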
At4168, clustering of the speaker vectors generated from the data fragments of the temporally earliest speaker diarization data chunks may be performed to identify the speakers who spoke within the chunk of speech audio represented by the speaker diarization data chunk. As previously discussed, such clustering may include one or more repetitions of performances of clustering of the speaker vectors of the speaker diarization data chunk each time a new speaker is identified. At4170, each speaker vector is matched to one of the speakers identified through the performance of clustering for the speaker diarization data chunk. At4172, the identities of the speakers assigned to each pair of temporally consecutive speaker vectors are compared to identify each instance of a likely change of speakers within the speaker diarization data chunk. At4174, indications of any of such identified likely changes in speaker are stored within a change set of indications of likely speaker changes. At4176, a check may be made as to whether there are any speaker diarization data chunks remaining that have not been put through the speaker diarization technique just described. If so at4176, then the temporally next speaker diarization data chunk may be provided to the speaker diarization neural network as an input at4178. More precisely, each of the data fragments of that speaker diarization data chunk may be provided to the speaker diarization neural network as an input to cause the generation of a corresponding speaker vector. However, if at4176, there are no more speaker diarization data chunks, then segmentation may be performed at4180inFIG.25F in preparation for performing speech-to-text processing. Turning toFIG.25F, at4180, core(s) of a processor of either a node device or of the control device may assign relative weighting factors to each of the pause detection techniques by which a pause set of likely sentence pauses has been generated. As has been discussed, such weighting factors may be made dynamically adjustable based on the earlier derived indication of audio noise level, and this may be done in recognition of the differing degrees to which each of the pause detection techniques is susceptible to the presence of audio noise within speech audio. At4182, the assigned relative weighting factors may be used in the combining of the multiple pause sets of likely sentence pauses to generate a single set of indications of likely sentence pauses. At4184, core(s) of a processor of each of one or more node devices, and/or core(s) of a processor of the control device may then use the single set of indications of likely sentence pauses together with the change set of indications of likely speaker changes from the performance of the speaker diarization technique to generate a segmentation set of indications of the manner in which the speech data set is to be divided into data segments that each represent a segment of the speech audio of the speech data set. At4186, core(s) of a processor of each of one or more node devices, and/or core(s) of a processor of the control device may re-divide the speech data set into data segments that each represent a segment of the speech audio based on the segmentation set. With the provision of segments of the speech audio to use as an input, the processing operations to perform the requested speech-to-text conversion may begin.
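A minimal Python sketch of the noise-adjusted weighting and combining at 4180-4182 (the weights, noise threshold, merge tolerance, and score cutoff are illustrative assumptions; likely pauses are represented by their temporal midpoints in seconds):

```python
def combine_pause_sets(apa_pauses, ctc_pauses, noise_level,
                       noise_threshold=0.1, tolerance=0.2, cutoff=0.5):
    """Weight each pause detection technique according to the measured audio
    noise level, score every candidate pause by the weights of the technique(s)
    that reported it (candidates within `tolerance` seconds are treated as the
    same pause), and keep the candidates whose combined score reaches the cutoff."""
    # Assumed ordering of susceptibility to noise: give the CTC technique more
    # weight when the audio is noisy, and the APA technique more weight otherwise.
    if noise_level > noise_threshold:
        w_apa, w_ctc = 0.3, 0.7
    else:
        w_apa, w_ctc = 0.6, 0.4

    candidates = [(t, w_apa) for t in apa_pauses] + [(t, w_ctc) for t in ctc_pauses]
    candidates.sort()
    combined = []          # list of [representative_time, accumulated_score]
    for t, w in candidates:
        if combined and abs(t - combined[-1][0]) <= tolerance:
            combined[-1][1] += w
        else:
            combined.append([t, w])
    return [t for t, score in combined if score >= cutoff]

# Example: in quiet audio the pause reported by both techniques scores 1.0 and
# is kept; the CTC-only candidate at 14.0 s scores 0.4 and is dropped.
print(combine_pause_sets([5.1, 9.8], [5.2, 14.0], noise_level=0.05))   # [5.1, 9.8]
```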
As has been discussed, due to the performance of the pre-processing operations, each point at which the speech audio is divided to form segments is at least likely to be a midpoint of a sentence pause and/or of a speaker change, thereby making it more likely that each segment will fully contain the complete pronunciations of phonemes, words and/or entire sentences by an individual speaker. At4190, feature detection is performed on each segment to detect instances of a pre-selected set of acoustic features that are to be provided as an input to an acoustic model for purposes of identifying likely text characters. At4192, within each node device and/or within the control device, core(s) of a processor may again instantiate an acoustic model neural network with CTC output, but this time for purposes of identifying characters. Again, the same type of acoustic model neural network with CTC output that was used for the CTC pause detection technique may be used again for character identification. At4194, each data segment is provided to the acoustic model neural network as input for the identification of likely text characters (along with blank symbols used to identify instances of identical consecutive text characters). At4196, such identified text characters are provided to implementation(s) of a language model as input for the identification of words. At4198, a processor of a node device or a processor of the control device may assemble the identified words, in temporal order, to form text data that represents the text into which the speech audio of the speech data set has been converted (e.g., the text data2519). As previously discussed, such text data may then be transmitted back to the device from which the request was received to perform the speech-to-text conversion. FIG.26illustrates an example embodiment of another logic flow4200. The logic flow4200may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow4200may illustrate operations performed by core(s)2351and/or2551of the processor(s)2350and/or2550of the node devices2300and/or of the control device2500, respectively, in executing various ones of the control routines2340and2540. At4210, core(s) of processor(s) of a node device of a processing system (e.g., the core(s)2351of the processor(s)2350of one of the node devices2300of the processing system2000ofFIGS.14A-C), or core(s) of processor(s) of a control device of the processing system (e.g., the core(s)2551of the processor(s)2550of the control device2500of the processing system2000ofFIGS.14A-C) may perform feature detection on one or more consecutive frames of a segment of speech audio covering a period of time during which a next word was spoken. As has been discussed, the output of the performance of feature detection may be data structures (e.g., the feature vectors3142) that provide indications of detected instances of various acoustic features, along with indications of when those instances occurred. At4212, such feature vectors generated from the performance of feature detection may be provided as input to an acoustic model. As has been discussed, the acoustic model may be implemented using a neural network (e.g., the neural network2355or2555, which may include a CTC output2356or2556, respectively), or using any of a variety of other technologies.
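The role of the blank symbols mentioned at 4194, namely preserving genuine double letters while merging repeated CTC outputs, can be shown with a short sketch (the underscore stands in for whatever blank symbol the CTC output actually emits):

```python
def ctc_collapse(symbols, blank="_"):
    """Merge runs of identical consecutive symbols, keep double letters that
    are separated by a blank, and drop the blank symbols themselves."""
    out, prev = [], None
    for s in symbols:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return "".join(out)

# Example: the blank between the two "l" runs preserves the double letter.
print(ctc_collapse("hh_e_ll_ll_oo"))   # "hello"
```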
At4214, the core(s) of the processor(s) of either the node device or the control device may be caused to use the acoustic model with the feature vectors as input to generate corresponding probability distributions of graphemes. As has been discussed, each grapheme may be correlated, either individually or in various combinations, to one or more speech sounds. As a result, each of the probability distributions provides an indication of relative probabilities of various different speech sounds having been uttered at a particular time. At4216, from multiple probability distributions that are associated with the pronunciation of the next single word that was spoken and that is to be identified for addition to a transcript, a set of a pre-determined quantity of candidate words (e.g., the candidate words3145) may be generated, where each of the candidate words is among those that are most likely to be the next spoken word. At4220, for each candidate word in the set of candidate words, a corresponding candidate n-gram may be generated that is to become part of a corresponding set of candidate n-grams (e.g., the set3146of candidate n-grams). At4222, the core(s) of the processor(s) of either the node device or the control device may be caused to use the language model with the set of candidate n-grams as input to generate a corresponding set of probabilities (e.g., one of the probability sets3147). As has been discussed, where the language model is based on an n-gram corpus (e.g., one of the corpus data sets3400), beam searches may be used to retrieve the per-n-gram probabilities stored as part of the n-gram corpus. As a result, each of the probability sets provides the relative probabilities of the set of n-grams, thereby enabling the most probable candidate n-gram of that set to be determined, and in so doing, enabling the most probable corresponding candidate word to be identified as the next most likely word to be spoken, according to the language model. At4230, each of the probability distributions for graphemes associated with the next word may be analyzed to derive an aggregate degree of uncertainty for those probability distributions. If, at4232, the resulting degree of uncertainty is greater than a pre-determined threshold level, then at4234, greater weighting may be given to relying on the language model to identify the next word most likely to have been spoken. However, if at4232, the resulting degree of uncertainty is less than the pre-determined threshold level, then at4236, greater weighting may be given to relying on the acoustic model to identify the next word most likely to have been spoken. In various embodiments, each of the processors2350,2550and2750may include any of a wide variety of commercially available processors. Further, one or more of these processors may include multiple processors, a multi-threaded processor, a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multi-processor architecture of some other variety by which multiple physically separate processors are linked. However, in a specific embodiment, the processor(s)2350of each of the one or more node devices2300may be selected to efficiently perform the analysis of multiple instances of pre-processing, processing and/or post-processing operations at least partially in parallel. 
By way of example, the processors2350may incorporate a single-instruction multiple-data (SIMD) architecture, may incorporate multiple processing pipelines, and/or may incorporate the ability to support multiple simultaneous threads of execution per processing pipeline. Alternatively or additionally by way of example, the processor1550may incorporate multi-threaded capabilities and/or multiple processor cores to enable parallel performances of the tasks of more than job flow. In various embodiments, each of the control routines2310,2340,2370,2510,2540,2570and2740, including the components of which each is composed, may be selected to be operative on whatever type of processor or processors that are selected to implement applicable ones of the processors2350,2550and/or2750within each one of the devices2300,2500and/or2700, respectively. In various embodiments, each of these routines may include one or more of an operating system, device drivers and/or application-level routines (e.g., so-called “software suites” provided on disc media, “applets” obtained from a remote server, etc.). Where an operating system is included, the operating system may be any of a variety of available operating systems appropriate for the processors2350,2550and/or2750. Where one or more device drivers are included, those device drivers may provide support for any of a variety of other components, whether hardware or software components, of the devices2300,2500and/or2700. In various embodiments, each of the storages2360,2560and2760may be based on any of a wide variety of information storage technologies, including volatile technologies requiring the uninterrupted provision of electric power, and/or including technologies entailing the use of machine-readable storage media that may or may not be removable. Thus, each of these storages may include any of a wide variety of types (or combination of types) of storage device, including without limitation, read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDR-DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, one or more individual ferromagnetic disk drives, non-volatile storage class memory, or a plurality of storage devices organized into one or more arrays (e.g., multiple ferromagnetic disk drives organized into a Redundant Array of Independent Disks array, or RAID array). It should be noted that although each of these storages is depicted as a single block, one or more of these may include multiple storage devices that may be based on differing storage technologies. Thus, for example, one or more of each of these depicted storages may represent a combination of an optical drive or flash memory card reader by which programs and/or data may be stored and conveyed on some form of machine-readable storage media, a ferromagnetic disk drive to store programs and/or data locally for a relatively extended period, and one or more volatile solid state memory devices enabling relatively quick access to programs and/or data (e.g., SRAM or DRAM). 
It should also be noted that each of these storages may be made up of multiple storage components based on identical storage technology, but which may be maintained separately as a result of specialization in use (e.g., some DRAM devices employed as a main storage while other DRAM devices employed as a distinct frame buffer of a graphics controller). However, in a specific embodiment, the storage2560in embodiments in which the one or more of the federated devices2500provide federated spaces2566, or the storage devices2600in embodiments in which the one or more storage devices2600provide federated spaces2566, may be implemented with a redundant array of independent discs (RAID) of a RAID level selected to provide fault tolerance to objects stored within the federated spaces2566. In various embodiments, the input device2720may be any of a variety of types of input device that may each employ any of a wide variety of input detection and/or reception technologies. Examples of such input devices include, and are not limited to, microphones, remote controls, stylus pens, card readers, finger print readers, virtual reality interaction gloves, graphical input tablets, joysticks, keyboards, retina scanners, the touch input components of touch screens, trackballs, environmental sensors, and/or either cameras or camera arrays to monitor movement of persons to accept commands and/or data provided by those persons via gestures and/or facial expressions. In various embodiments, the display2780may be any of a variety of types of display device that may each employ any of a wide variety of visual presentation technologies. Examples of such a display device includes, and is not limited to, a cathode-ray tube (CRT), an electroluminescent (EL) panel, a liquid crystal display (LCD), a gas plasma display, etc. In some embodiments, the display2780may be a touchscreen display such that the input device2720may be incorporated therein as touch-sensitive components thereof. In various embodiments, each of the network interfaces2390,2590and2790may employ any of a wide variety of communications technologies enabling these devices to be coupled to other devices as has been described. Each of these interfaces includes circuitry providing at least some of the requisite functionality to enable such coupling. However, each of these interfaces may also be at least partially implemented with sequences of instructions executed by corresponding ones of the processors (e.g., to implement a protocol stack or other features). Where electrically and/or optically conductive cabling is employed, these interfaces may employ timings and/or protocols conforming to any of a variety of industry standards, including without limitation, RS-232C, RS-422, USB, Ethernet (IEEE-802.3) or IEEE-1394. Where the use of wireless transmissions is entailed, these interfaces may employ timings and/or protocols conforming to any of a variety of industry standards, including without limitation, IEEE 802.11a, 802.11ad, 802.11ah, 802.11ax, 802.11b, 802.11g, 802.16, 802.20 (commonly referred to as “Mobile Broadband Wireless Access”); Bluetooth; ZigBee; or a cellular radiotelephone service such as GSM with General Packet Radio Service (GSM/GPRS), CDMA/1×RTT, Enhanced Data Rates for Global Evolution (EDGE), Evolution Data Only/Optimized (EV-DO), Evolution For Data and Voice (EV-DV), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), 4G LTE, 5G, etc. 
However, in a specific embodiment, one or more of the network interfaces2390and/or2590may be implemented with multiple copper-based or fiber-optic based network interface ports to provide redundant and/or parallel pathways in exchanging at least the speech data sets2130. In various embodiments, the division of processing and/or storage resources among the federated devices1500, and/or the API architectures employed to support communications between the federated devices and other devices may be configured to and/or selected to conform to any of a variety of standards for distributed processing, including without limitation, IEEE P2413, AllJoyn, IoTivity, etc. By way of example, a subset of API and/or other architectural features of one or more of such standards may be employed to implement the relatively minimal degree of coordination described herein to provide greater efficiency in parallelizing processing of data, while minimizing exchanges of coordinating information that may lead to undesired instances of serialization among processes. However, it should be noted that the parallelization of storage, retrieval and/or processing of portions of the speech data sets2130are not dependent on, nor constrained by, existing API architectures and/or supporting communications protocols. More broadly, there is nothing in the manner in which the speech data sets2130may be organized in storage, transmission and/or distribution via the network2999that is bound to existing API architectures or protocols. Some systems may use Hadoop®, an open-source framework for storing and analyzing big data in a distributed computing environment. Some systems may use cloud computing, which can enable ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Some grid systems may be implemented as a multi-node Hadoop® cluster, as understood by a person of skill in the art. Apache™ Hadoop® is an open-source software framework for distributed computing.
11862172
DETAILED DESCRIPTION Disclosed is an approach for providing a pervasive user experience capable of effectively integrating robo-advising with as-needed human advising. Example systems and methods may include a proactive listening bot and/or other consumer computing devices configured to actively detect conversations and determine that a financial issue is being discussed. Based on the financial discussions, a financial strategy may be developed. As used herein, the term “financial strategy” may be used to refer to a strategy generated to meet a financial goal. A financial strategy may include a financial plan, budget, investment strategy, or combination thereof. The system may include one or more consumer computing devices in communication with a computing system of a provider, which may be a financial institution. A consumer computing device may be structured to detect a voice input, and the consumer computing device and/or the provider computing system may determine that a financial goal (e.g., a major expenditure, credit repair, transaction, or purchase such as a vacation, new home, expensive jewelry, or any other purchase requiring substantial funding) was or is being discussed. The consumer computing devices may communicate or otherwise present (via, e.g., an application that generates a virtual dashboard or other user interface) a financial strategy for meeting the financial goal in response to the detection of the voice input and identification of the financial goal. The connected computing device and/or provider computing system may advise a customer to connect with an advisor computing device of an advisor (who need not be associated with the provider) based on, for example, the customer's financial goals. The system may match the customer with a suitable advisor, schedule a meeting, and facilitate a discussion via, for example, an application running on the consumer computing device that connects the consumer computing device with the advisor computing device. The user computing device, advisor computing device, and/or provider computing device may update the financial goals and/or financial strategy (e.g., by extracting relevant information exchanged or discussed in the meeting), and continue advising the user as before, informed by the information exchanged in the meeting, until another issue warranting connection with an advisor computing device is identified and the user wishes to connect with the same (or another) advisor computing device. Embodiments and implementations of the systems and methods disclosed herein improve current computing systems by providing proactive and pervasive user experiences involving seamless (or otherwise substantially enhanced) transitions between robo-advising and human advising. In some implementations, financial goals affecting multiple users may be identified based on, for example, already-known associations of computing devices of existing customers with a provider computing system. The system may include mechanisms (e.g., digital voice assistants, biometric scanners, and so on) for authenticating users to enable simultaneous financial advising for multiple users. Identities may be verified in various ways to prevent fraudulent activity and to ensure that each person who interacts with the proactive listening bot operates under the proper security roles and permissions.
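As one minimal illustration of enforcing security roles and permissions once a speaker has been authenticated (the roles, actions, and mapping below are purely hypothetical, not part of the described embodiment):

```python
# Hypothetical role-to-permission mapping for people interacting with the
# proactive listening bot; a deployment would load this from the provider.
ROLE_PERMISSIONS = {
    "primary_account_holder": {"view_balance", "transfer_funds", "update_goals"},
    "joint_account_holder": {"view_balance", "update_goals"},
    "guest": {"view_public_info"},
}

def is_action_permitted(speaker_role, action):
    """Allow an action only if the authenticated speaker's role grants it."""
    return action in ROLE_PERMISSIONS.get(speaker_role, set())

# Example: a guest asking the bot to transfer funds is refused.
print(is_action_permitted("guest", "transfer_funds"))   # False
```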
A “ubiquitous” proactive listening bot (i.e., a bot that may be configured to detect signals using multiple or all computing devices of one or more customers at all times or until turned off or otherwise deactivated) can be structured to identify financial goals and needs that users may not be able to identify for themselves due to a lack of information or expertise. Users who may not be aware of a potential strategy for improving their financial health need not manually enter a large quantity of information that may be irrelevant (by, e.g., answering a large number of questions that are intended to reveal (“fish” for) financial issues that may or may not exist). Without such requirements, the computing resources needed (e.g., processing time, programmatic instructions, memory utilization, etc.) are reduced. In some situations, advice from a professional may be needed. However, even after the right advisor is found, connecting with the advisor and providing needed information is a time-consuming and inefficient process. For example, professional advisors tend to use their own devices and are generally part of separate computing environments. By matching a user with the right advisor based on information acquired proactively (by, e.g., listening to the user and without requiring separate user entry), and by allowing calendar sharing and syncing, the user is able to easily find an advisor and schedule meetings in less time and with reduced demand for computing resources. Moreover, conventionally, to provide an advisor with financial information about him/herself (and others affected by the user's financial health), the user could share his or her login credentials to allow the advisor to access the user's financial accounts to retrieve the information needed. However, this is a great security risk, is likely to share too much personal information, and can be over-inclusive (requiring the advisor to spend additional time extracting relevant information from a large amount of data). And after each interaction with the advisor, the customer conventionally must manually update his or her financial records. By interfacing with the advisor's system, security risks are reduced, as are the time and processing resources required to keep financial records updated. The facilitated transitions between robo-advising and human advising disclosed herein involve an unconventional solution to a technological problem. Further, the disclosed approach improves computing systems by using one or more computing devices to interact with a user (e.g., a customer) via voice recognition and analytics that pervasively and interactively provide financial planning advice to users. Rather than requiring a user to dedicate time and computing resources to determining one's financial needs and goals and researching available options (e.g., by filling out a questionnaire intended to identify issues/needs/goals and seeking sources of information from various databases), user devices can acquire the information without requiring the user to dedicate time or otherwise change daily activities. User computing devices are not limited to single, one-time statements in determining customer goals and needs, but can obtain the needed information over the course of a day, a week, a month, or longer, based on multiple conversations with family and friends, consultations with advisors, and/or other activities.
This saves a computing device from having to either remain silent because not enough is known to provide a relevant or useful recommendation, or provide recommendations that are likely to be irrelevant or unhelpful because they are based on tidbits of information or on conjecture. Systems, methods, and computer implementations disclosed herein improve the functioning of such systems and information management by providing unconventional, inventive functionalities that are novel and non-obvious improvements over current systems. Referring toFIG.1, a block diagram of a proactive advising system100is shown according to one or more example embodiments. As described herein, the proactive advising system100enables the implementation of pervasive user experiences involving facilitated transitions between robo-advising and human advising. As used herein, robo-advising, bot advising, robot advising, and like terms refer to advising that does not involve interaction with, or intervention by, a person. Robo-advising may be implemented using one or more mobile or non-mobile computing devices capable of acquiring inputs from a user (e.g., a user's communications) and automatically performing actions, or providing recommendations for future actions by the user, that affect the user's circumstances. The robo-advising may be accomplished using, for example, artificial intelligence tools, intelligent agents, machine learning, or other logic and algorithms capable of extracting relevant information from input streams that include both relevant and non-relevant information (e.g., conversations that may span multiple days and cover related and unrelated topics). The proactive advising system100includes one or more provider computing devices110(of one or more service providers), one or more consumer computing devices120(of one or more users receiving one or more financial or other services from the service provider), one or more advisor computing devices130(of one or more persons who advise users, and who may or may not be associated with the service provider), and one or more third-party computing devices140(of entities that are separate from the service provider). Each provider computing device110, consumer computing device120, advisor computing device130, and third-party computing device140may include, for example, one or more mobile computing devices (e.g., smartphones, tablets, laptops, smart devices such as home smart speakers and watches, etc.), non-mobile computing devices (such as desktop computers, workstations, servers, etc.), or a combination thereof. Provider computing devices110, consumer computing devices120, advisor computing devices130, and third-party computing devices140may be communicably coupled to each other over a network150, which may be any type of communications network. The network150may involve communications using wireless network interfaces (e.g., 802.11X, ZigBee, Bluetooth, near-field communication (NFC), etc.), wired network interfaces (e.g., Ethernet, USB, Thunderbolt, etc.), or any combination thereof. Communications between devices may be direct (e.g., directly between two devices using wired and/or wireless communications protocols, such as Bluetooth, WiFi, NFC, etc.), and/or indirect (e.g., via another computing device using wired and/or wireless communications protocols, such as via the Internet). 
The network150is structured to permit the exchange of data, values, instructions, messages, and the like between and among the provider computing devices110, the consumer computing devices120, the advisor computing devices130, and the third-party computing devices140via such connections. Referring toFIG.2, computing device200is representative of example computing devices that may be used to implement proactive advising system100, such as one or more provider computing devices110, consumer computing devices120, advisor computing devices130, and/or third-party computing devices140. Not every provider computing device110, consumer computing device120, advisor computing device130, and third-party computing device140necessarily requires or includes all of the example device components depicted inFIG.2as being part of computing device200. Multiple computing devices200(each with a potentially different set of components, modules, and/or functions) may be used by one service provider (e.g., a financial institution providing financial and other services), one user (e.g., a customer receiving financial advice), one advisor (e.g., a professional who provides financial advice suited to a customer's circumstances), or one third party (e.g., a credit agency, government agency, merchant, or other source of information or provider of services). Similarly, one computing device200may be used by multiple service providers, multiple users, multiple advisors, or multiple third-parties. Each computing device200may include a processor205, memory210, and communications interface215. Each processor205may be implemented as a general-purpose processor, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a digital signal processor (DSP), a group of processing components, or other suitable electronic processing components structured to control the operation of the computing device200. The memory210(e.g., RAM, ROM, NVRAM, Flash Memory, hard disk storage) may store data and/or computer code for facilitating at least some of the various processes described herein. In this regard, the memory210may store programming logic that, when executed by the processor205, controls the operation of the computing system200. Memory210may also serve as one or more data repositories (which may include, e.g., database records such as user and account data and data acquired from various sources). The communications interface215may be structured to allow the computing device200to transmit data to and receive data from other mobile and non-mobile computing devices (e.g., via network150) directly or indirectly. Each computing device200may include one or more other components (generally involving additional hardware, circuitry, and/or code) depending on the functionality of the computing device200. User interfaces220include any input devices (e.g., keyboard, mouse, touchscreen, microphone for voice prompts, buttons, switches, etc.) and output devices (e.g., display screens, speakers for sound emission, notification LEDs, etc.) deemed suitable for operation of the computing device200. Computing device200may also include one or more biometric scanners225, such as fingerprint scanners, cameras for facial, retinal, or other scans, microphones for voice signatures, etc. 
In conjunction with, or separate from, the biometric scanners225, each computing device200may include authentication circuitry230to allow the computing device200to engage in, for example, financial transactions (such as mobile payment and digital wallet services) in a more secure manner. Various computing devices200may include one or more location sensors235to enable computing device200to determine its location relative to, for example, other physical objects or relative to geographic locations. Example location sensors235include global positioning system (GPS) devices and other navigation and geolocation devices, digital compasses, gyroscopes and other orientation sensors, as well as proximity sensors or other sensors that allow the computing device200to detect the presence and relative distance of nearby objects and devices. Computing device200may also include ambient sensors240that allow for the detection of sound and imagery, such as cameras (e.g., visible, infrared, etc.) and microphones, in the surroundings of computing device200. A computing device's microphone may be considered an ambient sensor that could also be used as a biometric scanner if it is involved in capturing the voice of a user for authentication purposes, and/or a user interface if the microphone is involved in receiving information, commands, or other inputs from, for example, speaking users. Each computing device200may include one or more applications250(“apps”) that aid the computing device200in its operations and/or aid users of the computing device200in performing various functions with the computing device200. In some implementations, applications250may be stored in memory210and executed using processor205, and may interact with, or otherwise use, one or more of communications interfaces215, user interfaces220, biometric sensors225, authentication circuitry230, location sensors235, and/or ambient sensors240. Not every provider computing device110, consumer computing device120, advisor computing device130, and/or third-party computing device140necessarily requires or includes all of the example application components/modules depicted inFIG.2as being part of application250. Example components of one or more applications250(running on, e.g., provider computing device110, consumer computing device120, and/or advisor computing device130) include a transition module255configured to determine whether or when it is advisable to transition a user between robo-advising and human advising based on one or more transition triggers (which are further discussed below). For example, the transition module255(running on provider computing device110or consumer computing device120) may use inputs to determine that it is appropriate to transition a user computing device120from robo-advising to human advising based on one or more human advising triggers, and from human advising to robo-advising based on one or more robo-advising triggers. Such “go-human” triggers may indicate that a need or goal of a user is sufficiently complex, variable, unpredictable, or significant so as to warrant input from or review by a human advisor. For example, human advising triggers may indicate that two or more options are available for a user, with the options sufficiently divergent (i.e., having substantially different consequences depending on factors beyond the purview of the robo-advisor, and/or requiring subjective evaluation of a user's circumstances) to warrant human intervention. 
Example go-human triggers may include: a transaction exceeding a threshold value (e.g., investing a large sum of money); a conversation determined to indicate that a situation is very emotionally charged (based on, e.g., above-average volume for the voice of the speakers, detection of tension in voices, and/or identification of a major life event); extensive communications about a topic, suggesting that the user is weighing many factors because a financial issue is significantly nuanced or particularly personal; use of predetermined keywords or phrases associated with topics outside the purview of the robo-advisor; expression of a desire to speak with a professional advisor; etc. Go-human triggers may be identified in, for example, conversations or other communications of the customer with other users and/or with a chatbot. Similarly, the transition module255(running on, e.g., provider computing device110, user computing device120, and/or advisor computing device130) may determine, during a communications session between a customer and an advisor, that the customer may have reached a point that no longer requires human intervention, or that a return to robo-advising may otherwise be a viable option, based on one or more triggers for robo-advising. Such “back to bot” triggers may, for example, indicate that the motivation for transitioning to human advising may no longer be relevant (e.g., an issue has been resolved or otherwise sufficiently addressed, one or more accounts have been set up and/or restructured, etc.), that the topics being discussed are all in the purview of the robo-advisor, and/or that the conversation has become non-financial in nature (e.g., the user and advisor have concluded a discussion of life events or financial situations and are only discussing news or sports). In some implementations, if the topics being discussed during a human-advising session have no go-human triggers (such that if the discussion had been detected outside of the session with the advisor, the robo-advisor would not have determined that human intervention or review is warranted), then the transition module255may determine that a return to robo-advising is appropriate. Back-to-bot triggers may be identified in, for example, conversations or other communications of the customer with the advisor, such as entries while interacting with a user dashboard during a session with the advisor. An advisor manager260may be configured to identify one or more advisors that may be able to assist a user based on the user's needs and/or goals, and to schedule a meeting or other communications session with the advisor (by, e.g., comparing the user's and advisor's calendars to determine mutual or overlapping availability). For example, if one or more go-human triggers are detected, or it is otherwise determined that there is a financial need or goal suited for human advising, the advisor manager may access records stored at a provider computing device110, an advisor computing device130, and/or a third-party computing device140to determine which advisors may have the background and experience suited to the customer's needs and goals. The advisor manager260may also access records (e.g., transcripts) of prior sessions of an advisor (with the same or with other users) to determine whether the advisor would be a good match with the user of the consumer device120. 
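A minimal Python sketch of how a transition module might test a conversation transcript against the example go-human triggers listed above (the keyword list, phrases, and threshold amount are illustrative assumptions, not the claimed trigger set):

```python
def detect_go_human_triggers(transcript, transaction_amount=0.0,
                             amount_threshold=50_000.0):
    """Return the go-human triggers found in a transcript: a transaction above
    a threshold value, keywords tied to topics outside the robo-advisor's
    purview, or an explicit request to speak with a professional advisor."""
    triggers = []
    text = transcript.lower()
    if transaction_amount >= amount_threshold:
        triggers.append("large_transaction")
    out_of_scope = {"estate planning", "divorce settlement", "inheritance", "trust fund"}
    if any(phrase in text for phrase in out_of_scope):
        triggers.append("out_of_scope_topic")
    if "speak with an advisor" in text or "talk to a person" in text:
        triggers.append("explicit_request")
    return triggers

# Example: a discussion of an inheritance plus a request to "speak with an
# advisor" yields two triggers even without a large transaction.
print(detect_go_human_triggers("We got an inheritance; can I speak with an advisor?"))
```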
The ultimate suitability of an advisor may sometimes be based, at least in part, on whether the calendars reveal mutual/overlapping availability for the consumer and the advisor (even if otherwise matched based on needs and expertise). The advisor manager260may access one or more calendars accessible to one or more consumer devices120to determine the customer's availability. In some implementations, the advisor manager260may determine the customer's availability based on discussions of the user (e.g., detecting via a consumer device120that the customer stated “I'm available all day Friday”) or other communications. The advisor manager260may access one or more calendars accessible to provider computing device110, advisor computing device130, and/or third-party computing device140to determine the availability of one or more advisors. Computing devices with separately-maintained calendars may interface with each other using, e.g., any combination of one or more application programming interfaces (APIs), software development kits (SDKs or devkits), or other hardware/software mechanisms that facilitate data exchange or communication between and among co-located or remote computing systems with various access protocols. A location monitor265may be configured to determine the location of, for example, consumers and advisors, as well as the locations associated with customer transactions (e.g., where a transaction took place). The location monitor265may be configured to track (using, e.g., one or more location sensors235) the physical location of computing device200. The location monitor265may be configured to identify the location of the computing device200at specified points in time or when triggered by identified events, such as the location of the consumer computing device120when a purchase occurs, when a device is turned on or off, when an application is launched, etc. The location of computing device200may be presumed to correspond with the location of one or more users associated with the computing device200, and/or the location at which an event occurred. In different implementations, location may be determined without using location sensors235. For example, the location of computing device200may be determined by identifying the location of a merchant at which a purchase occurred using a payment app running on computing device200. Additionally or alternatively, location may be determined using other sensors, such as ambient sensors240used to detect sounds and videos that are recognized as indicative of a certain physical location of the computing device200(e.g., detection of spoken words or phrases from which location may be inferred, or detection of sounds from a public announcement system of a particular landmark such as a train station or airport). Also, a location of a first computing device may be determined based on (geographically-limited) communications (such as NFC, Bluetooth, WiFi) of the first computing device with a (nearby) second computing device (such as another user's smartphone, the router of a hotel or restaurant, etc.) for which location has already been determined or is known or presumed. A chatbot270may be configured to simulate a conversation between a customer and advisor. Such a conversation may be conducted by, for example, capturing a customer's spoken words (or other communications), analyzing the communication to better understand context and identify user needs, and responding to the customer or otherwise providing information determined to be relevant.
In some implementations, inputs (or a portion thereof) received via chatbot270may be fed to analytics engine275for analyses and formulation of responses. Alternatively or additionally, chatbot270may perform the analyses needed to formulate suitable responses to users. In certain implementations, certain analyses may be performed by chatbot270(e.g., determining what a user is asking and identifying when a financial issue has arisen), while other analyses (e.g., determining what recommendation would be suitable based on the financial issue and the user's circumstances, behaviors, etc.) may be performed via analytics engine275. The analytics engine275may be configured to enable artificial/machine intelligence capabilities by, for example, analyzing customer and advisor inputs (to, e.g., determine user goals and needs) and generating recommendations and proposals for presentation to the customer (to, e.g., achieve goals and/or satisfy needs). The analytics engine275may utilize, for example, artificial intelligence and machine learning tools to analyze customer conversations or other inputs and otherwise provide robo-advising without human intervention. A transaction monitor280may be configured to identify and keep track of financial or other transactions of users. A customer may engage in transactions using, e.g., mobile payment and digital wallet services, or via any app and/or device through which a user may make purchases, transfers, deposits, cash advances, etc. The transaction monitor280may access such sources as user accounts (e.g., bank accounts, brokerage accounts, credit card accounts, merchant accounts, etc.) and payment/wallet applications to acquire data on transactions. A session manager285may be configured to initiate and terminate communications sessions between consumer computing devices120and advisor computing devices130. Such advising sessions may incorporate one or more of audio, video, and text entries of users and advisors. In some implementations, advising sessions may be conducted via the same dashboard (e.g., from within the same application) through which the user is robo-advised. Advising sessions may begin at times scheduled via advisor manager260, and/or on an ad-hoc basis. A profile manager290may generate and update user and advisor profiles (further discussed below), which facilitate robo-advising and human advising and help make transitions between robo-advising and human advising smoother. An external resource module295may be configured to access data from information sources other than the provider computing device110and the consumer computing device120. In some implementations, the external resource module295may use, for example, any combination of one or more APIs, SDKs, or other hardware/software mechanisms that facilitate data exchange or communication between and among co-located or remote computing systems with various access protocols. Alternatively or additionally, the external resource module295may access publicly-available information sources. External resources may include financial product websites, merchant websites, and other sources of information on available products. In certain implementations, the external resource module295may access social networking websites for information on, for example, life events and familial or other relationships to understand (in an automated fashion) the needs, circumstances, and likely goals of a user (e.g., information on who might be affected by the financial decisions of a user, such as the user's children).
The external resource module295may similarly access other sources of information, such as credit agencies, news sources, financial institutions, governmental bodies, etc. Information from such sources may provide inputs to the analytics engine275to inform the robo-adviser in making recommendations as to, for example, financial goals and changes thereto. The information may also be made available to human advisors to assist with advising sessions. Although the above discussion identifies a set of modules that perform specified functions, in various implementations, the above (and other) functions may be performed by any module in the system100. Functions performed by the modules discussed above may be redistributed (i.e., differently apportioned or distributed) among the modules of applications running on provider computing devices110, consumer computing devices120, advisor computing devices130, and/or third-party computing devices. Similarly, the functions discussed may be consolidated into fewer modules, or expanded such that they are performed by a greater number of (separate) modules than illustrated above. For example, functions performed by the above-identified modules of one or more provider computing devices110could additionally or alternatively be performed by modules of one or more consumer computing devices120, and functions performed by the above-identified modules of one or more consumer computing devices120could additionally or alternatively be performed by modules of one or more provider computing devices110. Referring toFIG.3, in example implementations, a system300may include a virtual dashboard310(see, e.g.,FIGS.8-17discussed below) that is accessible to one or more consumer computing devices120and one or more advisor computing devices130. The dashboard310, which may be maintained and/or administered using one or more provider computing devices110of a service provider, may be “unified” in the sense that it allows consumer computing devices120and advisor computing devices130to effectively exchange information in the same virtual environment. Because customers and advisors may interact with each other via, for example, user interfaces with common elements, and both users and advisors may be able to readily access at least some (if not all) of the same information and user interface elements, advisors may more easily learn of a customer's circumstances (goals, needs, etc.) via dashboard310. This may help save consumers and advisors from needing to devote a substantial amount of resources (time, computing resources, etc.) to bring an advisor “up to speed.” Users need not spend time explaining their unique situations by sharing details that have already been entered or otherwise provided by the user or acquired from various information sources (such as third-party computing devices140). A common dashboard helps discussions by allowing customers and advisors to refer to the same user interface elements. Moreover, familiarity with the dashboard allows the customer and advisor to more readily access and provide information that is relevant to different topics being discussed or otherwise addressed. The unified dashboard310may help provide for smoother transitions between robo-advising and human advising. In certain implementations, the provider computing system110may maintain a user profile (further discussed below) that may include relevant financial information, user preferences, triggers for transitioning between robo-advising and human advising, and other data. 
The provider computing system110may use user profiles to assist with the implementation of dashboard310. Consumer computing devices120can be provided access to the dashboard310to receive recommendations, review conversations, enter additional information, monitor progress towards goals, request and schedule human advising sessions, etc. Advisor computing devices130may be used to access consumer data, schedule advising sessions with consumers, provide additional recommendations, monitor and update goals, etc. The user profile may include parameters for what information is accessible, when transitions are advisable, etc., further helping make transitions smoother. Referring toFIG.4, various versions of example process400may be implemented using, for example, a provider computing device110, a consumer computing device120, and an advisor computing device130. At410, one or more computing devices200(e.g., consumer computing devices120and/or, in some implementations, provider computing device110) may be used to capture user inputs. User inputs may include conversations (e.g., spoken conversations or discussions in electronic messages) captured via computing devices200, entries submitted via application250, or any other transfer or exchange of data from the user to the computing device200. For example, application250running on consumer computing device120may detect (using microphones of one or more consumer computing devices120) that a customer is discussing a financial matter. In some implementations, a provider computing device110may receive audio of a conversation from a consumer computing device120for analysis, and/or a consumer computing device120may itself analyze audio of conversations. In certain implementations, particular keywords or phrases may be deemed to indicate a potential financial goal or need. Examples include: “my mother had a bad fall . . . I need to manage her finances”; “my credit score is really low . . . I need to work on improving my credit score.”; “I would like to buy a car”; “I would like to go on a vacation/I need a vacation”; “Honey, we should save some money . . . We should have more of a cushion in our finances in case we have unexpected expenses”; “We're having a baby, we need to start saving for college”; etc. Additionally or alternatively, at420, one or more computing devices200may access records on financial or other transactions of the user to identify transactions indicative of a user need or goal (such as baby supply purchases indicative of a potential goal or need to save for educational expenses). In some implementations, such transactions may be detected via, for example, application250(such as a mobile wallet or electronic payment application) running on a consumer computing device120. In various implementations, such transactions may be identified by, for example, a consumer computing device120accessing user records maintained at or administered by a provider computing device110(e.g., for accounts held at a provider that is a financial institution) and/or accessing a third party computing device140. In some implementations, such transactions may be identified by a provider computing device110accessing a consumer computing device120and/or a third party computing device140. At430, one or more computing devices (e.g., provider computing device110and/or consumer computing device120) may retrieve data from third party computing devices140that may be informative of a user's circumstances.
For example, accessing a customer's credit report may indicate that a customer may need assistance with improving his or her credit score. Similarly, application250(running on, e.g., a provider computing device110and/or a consumer computing device120) may access social networking applications to identify family members, life events, travel plans, etc. A determination as to which third party data sources to access may be based at least in part on user inputs and/or transactional data. For example, application250may detect a conversation about an upcoming trip without an identification of the destination, or about an upcoming move to a college dorm without an identification of the college or dorm, and in response a provider computing device110may determine that accessing a third party computing device140of a social networking source, a college directory, a travel site with ticket purchase records, etc., may help identify the destination, college, and/or dorm. At440, the user inputs, transactional data, and/or third party data may be analyzed by one or more computing devices200(e.g., via analytics engine275of application250running on a provider computing device110and/or on a consumer computing device120) to identify one or more financial issues. For example, based on user inputs acquired via a consumer computing device120, a provider computing device110may determine that a consumer could benefit from a financial product or a certain course of action. In response, at450, the provider computing device110may present, via an application250running on a consumer computing device120, a recommendation. The recommendation may be, for example, to set up an account (e.g., a bank or credit account), divert money into one or more accounts for savings, subscribe to a service, etc. If it is determined that the financial issue warrants review or intervention by a human advisor, the recommendation of provider computing device110(presented via, e.g., application250running on a consumer computing device120) may be to engage with a human advisor (e.g., an advisor generally, an advisor by specialty or expertise, and/or an advisor by name). The advisor manager260running on, for example, a provider computing device110and/or a consumer computing device120may then help the consumer computing device120find and connect with one or more advisor computing devices130. If a customer wishes to proceed with human advising, computing device200(e.g., provider computing device110and/or consumer computing device120) may, at460, facilitate an advising session with a human advisor. This may include identifying potential advisors suitable for the financial issues relevant to the customer's situation (by, e.g., the provider computing device110and/or consumer computing device120accessing advisor biographies stored at one or more provider computing devices110, advisor computing devices130, and/or third party computing devices140). In some implementations, facilitating an advising session with a human advisor may include the computing device200(e.g., a provider computing device110) arranging a time for the customer to have a discussion with an advisor by accessing calendars on one or more consumer computing devices120and advisor computing devices130, and proposing one or more times during which the customer and the advisor are both available.
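As a non-limiting sketch of how overlapping availability might be computed once the relevant calendars have been accessed, the following assumes each calendar has already been reduced to a list of free (start, end) intervals and that a minimum session length applies; these representations and function names are illustrative assumptions only:

    # Illustrative sketch: finding mutual availability from two sets of free intervals.
    # Each interval is a (start, end) pair of datetimes; a minimum session length is assumed.
    from datetime import datetime, timedelta

    def mutual_availability(consumer_free, advisor_free, min_length=timedelta(minutes=30)):
        """Return overlapping free intervals long enough for an advising session."""
        overlaps = []
        for c_start, c_end in consumer_free:
            for a_start, a_end in advisor_free:
                start, end = max(c_start, a_start), min(c_end, a_end)
                if end - start >= min_length:
                    overlaps.append((start, end))
        return sorted(overlaps)

    if __name__ == "__main__":
        day = datetime(2024, 4, 15)
        consumer = [(day.replace(hour=9), day.replace(hour=12))]
        advisor = [(day.replace(hour=11), day.replace(hour=14))]
        print(mutual_availability(consumer, advisor))  # one slot from 11:00 to 12:00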
The provider computing device110may then instruct the consumer computing device120and/or advisor computing device130to update the calendars that are able to be accessed and changed via the consumer computing device120and/or the advisor computing device130. In some implementations, the calendar is additionally or alternatively maintained on dashboard310, which may be linked to other calendars accessible to consumer computing device120and/or advisor computing device130. In some implementations, a provider computing device110may, from within dashboard310, connect a consumer computing device120with an advisor computing device130. This may be accomplished by enabling video chat, audio chat, text chat, or other live interaction sessions. In certain implementations, the provider computing device110may monitor the communications (e.g., by listening to spoken words) or other data exchanged during live interactive sessions between customers and advisors to update customer goals and needs for subsequent use. Monitoring such data can enable the robo-advisor to seamlessly take over from advisor computing device130when the human advising session is concluded and advise or otherwise assist the customer (until human intervention is needed at a future time). In other implementations, provider computing device110does not facilitate a live session between the consumer computing device120and the advisor computing device130, and instead subsequently updates a user profile using data obtained via other channels after the session has concluded. Such data may be obtained by, for example, capturing user inputs (410) (e.g., by listening to a conversation about the session between the customer and another person), accessing transactional data (420), and/or acquiring data from third party sources (430). Referring toFIG.5, an example process500for transitioning between robo-advising mode510(on left side) and human advising mode520(on right side) is depicted. At530, provider computing device110surveils consumer computing devices120and third party computing devices140to identify financial issues and changes in/updates to a customer's circumstances. As discussed above, this may be accomplished, for example, via channels that allow for monitoring of communications (e.g., by detecting conversations via a chat bot and/or scanning electronic messages to extract relevant data). Based on the data acquired via such surveillance, at535, provider computing device110and/or consumer computing device120may determine a strategy and present (via, e.g., application250running on the consumer computing device120) one or more recommendations. Based on inputs (e.g., one or more “go-human” triggers), at540, the provider computing device110and/or consumer computing device120may determine that human advising is desirable and recommend a session with a human advisor. At545, the provider computing device110and/or the consumer computing device120may then identify suitable advisors and schedule a communications session with an advisor computing device130. The provider computing device110may then, at550, initiate a live communications session (e.g., with video, audio, and/or text chatting) between the consumer computing device120and the advisor computing device130.
Based on the communications between the consumer computing device120and the advisor computing device130, provider computing device110may, at555, update or otherwise revise the profile, financial goals, and strategies of the customer (stored at, e.g., the provider computing device110, the consumer computing device120, the advisor computing device130, and/or the third party computing device140). At565, the provider computing device110may then, in response to a command from the consumer computing device120and/or from the advisor computing device130, terminate the live human advising session and return the customer to robo-advising mode510. In some situations, the customer may receive the help that warranted a human advisor, but the human advising session is not terminated (because, e.g., topics to be discussed were added during a session, or because the topics of discussion were too broad to begin with, etc.). The advisor may then be spending time with a customer in human advising520even though the customer could be served just as well via robo-advising510. The provider computing device110and/or advisor computing device130may, in some implementations, monitor the communications between the user computing device120and the advisor computing device130for “back to bot” triggers, or to otherwise determine when the human advisor may no longer be needed, or when the customer has reached a point at which the provider computing device110may be able to assist the customer using automated tools. The provider computing device110and/or advisor computing device130may (via, e.g., dashboard310) present a virtual button, link, “pop up” notification or other message, etc. (see, e.g.,FIG.11), to inform the advisor that one or more matters suspected to be addressable via robo-advising have been identified and/or to otherwise allow the advisor to initiate a “handoff” back to the robo-advisor. In some implementations, such a selection terminates the human advising session. In other implementations, such a selection additionally or alternatively sends a message to consumer computing device120with an option to terminate the advising session and/or a list of one or more topics or selections for issues to address (e.g., enter requested information on financial accounts, income, bills, etc.) outside of the communications session (e.g., in an automated fashion). Advantageously, this can enhance efficiency and save the time of both the advisor and the consumer by using the type of interaction (robo versus human) suited to the stage of advising or the particular issues to be addressed. For example, having a human advisor waiting while the provider computing device110and/or the consumer computing device120collects information (e.g., account numbers, etc.) may not be an ideal use of the advisor's time. Similarly, having a customer waiting as the advisor computing device130retrieves information on a set of available options when the set can be generated by the robo-advisor (potentially more quickly) may not be an ideal use of the customer's time. Referring toFIG.6, illustrated is an example profile600that may, in certain implementations, be generated and/or maintained by provider computing devices110for use by provider computing devices110, consumer computing devices120, and/or advisor computing devices130. This profile may be saved in memory as database records, data packets, text files, or in other suitable formats.
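By way of illustration only, the mode transitions of process500and the advisor-facing handoff prompt described above might be sketched as follows (the trigger inputs are assumed to have been computed elsewhere, e.g., by transition module255; the mode labels, function names, and message text are hypothetical):

    # Illustrative sketch: switching between robo-advising and human advising based on
    # detected triggers, and surfacing a "handoff" prompt to the advisor when appropriate.
    ROBO, HUMAN = "robo-advising", "human-advising"

    def next_mode(current_mode, go_human_detected, back_to_bot_detected):
        """Decide the advising mode after the latest trigger evaluation."""
        if current_mode == ROBO and go_human_detected:
            return HUMAN
        if current_mode == HUMAN and back_to_bot_detected:
            return ROBO
        return current_mode

    def handoff_prompt(back_to_bot_detected):
        """Return a dashboard notification for the advisor when a handoff appears viable."""
        if back_to_bot_detected:
            return "Remaining topics appear addressable via robo-advising. Return to robo-advising?"
        return None

    if __name__ == "__main__":
        mode = ROBO
        mode = next_mode(mode, go_human_detected=True, back_to_bot_detected=False)
        print(mode)                                   # human-advising
        print(handoff_prompt(back_to_bot_detected=True))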
As discussed above, a transition module255may determine that it is appropriate to transition a user computing device120from robo-advising to human advising to better assist a customer. To facilitate such determinations, profile600may include go-human triggers605(discussed above) to assist with the identification of a situation in which a human advisor may be suitable. Go-human triggers605may, for example, be unique to the specific customer based on past behaviors (e.g., if a customer has sought human assistance when a certain issue arises, the issue/behavior may indicate a go-human trigger605). Triggers605may also include customer inaction in response to certain life events and/or in response to certain recommendations in situations (which may be unique to a customer) deemed to be significant enough to warrant action sooner rather than later (based on, e.g., certain detected inputs). Similarly, the transition module255may determine a return to robo-advising may be appropriate based on back-to-bot triggers610(discussed above). Back-to-bot triggers610may be based on, for example, certain behaviors of the customer. For example, if a customer is detected to routinely (and in a sufficiently timely manner) handle certain financial situations without advising sessions with advisor computing devices130, then identification of the financial situation may be a back-to-bot trigger that indicates it may be suitable to allow the customer to continue on a robo-advising track or otherwise without human discussion for the time being. Back-to-bot triggers may alternatively or additionally be based on a customer's savviness, expertise, or familiarity with certain situations. For example, if a customer is determined to be sophisticated with respect to certain financial situations, then identification of the corresponding financial situations may indicate that robo-advising may be suitable. In some implementations, a customer's savviness or ability to handle a situation may be determined, for example, via an evaluation (e.g., using analytics engine275running on provider computing device110, consumer computing device120, and/or advisor computing device130) of the customer's sophistication with respect to certain issues. Sophistication may be based on, for example, how advanced the language used by the customer is with respect to an issue. For example, a customer who is detected to discuss available options with respect to a certain financial situation with a family member may be deemed more sophisticated than a customer who is detected only to discuss the circumstances of the financial situation with no talk of viable options for how the customer may proceed. Sophistication (in general or specific to financial issues/situations) may be stored in one or more fields of profile600to help with advising generally and to help make transitions between robo-advising and human advising more effective. In certain implementations, fragmented issue indicators615may be used to allow provider computing device110and/or user computing device120to track and connect inputs over time (as being related or otherwise as building upon each other to form a better picture of circumstances or otherwise better inform advising). In some situations, a person's needs or goals do not become apparent in one conversation, statement, communication, transaction, or other act. 
For example, the keywords and/or phrases that indicate a user has a certain need or goal may not be detected as part of a single conversation or otherwise within a short period of time. Needs or goals may unravel over time (hours, days, weeks, months, etc.) as a consumer obtains more information and/or contemplates his or her situation based on new events and available information. And the bases for such goals and needs may go unexpressed or otherwise remain unapparent for some time. For example, a consumer device120may detect a customer explaining to a friend that his or her mother had a bad fall, and may detect, in a separate conversation with his or her sibling, the customer explaining “I need to manage her finances.” Separately, these inputs may be insufficient to identify a financial goal or need and make a good recommendation. However, when considered together, these two inputs may be deemed (by, e.g., analytics engine275) to indicate that a user may need certain financial assistance or have a certain financial goal. The consumer computing device120(and/or the provider computing device110using audio or other data received via consumer computing devices120) may (based on, e.g., detected keywords, phrases, or other signals) determine that a piece of information may potentially be relevant to whether a financial goal or need exists. If such a signal is detected, the provider computing device110and/or user computing device120may record such a signal as a fragmented issue indicator615. Then, when a second signal that is similarly determined to include a piece of information that is potentially relevant to some financial issue is detected, the provider computing device110and/or consumer computing device120may access profile600for fragmented issue indicators615that may be relevant. If such a related fragmented issue indicator615is in the user's profile600, the robo-advisor (via, e.g., the provider computing device110and/or the consumer computing device120) may determine that there is a likely need, and generate an appropriate recommendation, or determine that more information (e.g., additional signals or inputs) is needed to generate a relevant or useful recommendation. In the above example, the consumer computing device120and/or provider computing device110may identify a first signal when a phrase such as “my mother had a bad fall last night” is detected. In some implementations, application250may first process the signal to give the signal more meaning or clarity and/or to supplement the signal with additional information. For example, analytics engine275running on provider computing device110may analyze the phrase and retrieve information from various sources to determine who was involved (e.g., who is the speaker's mother based on user records or third party sources), on what date the fall occurred (e.g., what is the date of the day before the day on which the signal was detected), what can be predicted about the fall in the context of the conversation (e.g., if the speaker's voice indicated that the speaker was upset, the fall may be deemed to have been more serious or more recent than if the speaker's voice indicated the speaker was apparently nonchalant about the incident), what a “bad” fall might mean for a person of the mother's age or other known or determinable circumstances (e.g., the mother's age or whether such falls have occurred in the past), etc. 
Such information may be in the user's record or determinable from third party sources (e.g., from sources of medical information), and the fall may be deemed more serious based on certain criteria (such as the mother's age being above a certain age threshold, or the mother suffering from certain conditions associated with low bone density, etc.). In various implementations, signals (detected via, e.g., provider computing device110and/or consumer computing device120) need not be limited to expressions (e.g., spoken conversations, written discussions, or other communications). Additionally, signals may be actions taken (using, e.g., consumer computing device120), such as opening certain accounts, making certain funds transfers, making certain purchases, and/or traveling to certain locations (such as car dealerships, open houses, baby supply stores, assisted living homes, hospitals in general, specific clinics or doctors' offices with certain specialties, accountants' offices), etc. The provider computing device110and/or consumer computing device120may record a fragmented issue indicator615following the first signal in the profile600. In various implementations, fragmented issue indicator615may state, for example, a derivation of the communicated phrase (e.g., “family member had an accident,” “user's mother had a fall,” etc.), the phrase itself (i.e., “my mother had a bad fall last night”), or a supplemented or otherwise revised version of the phrase (e.g., “my mother had a bad fall [on mm/dd/yyyy],” “[user name's] ‘mother had a bad fall’ on mm/dd/yyyy,” or “[mother's name] ‘had a bad fall’ on mm/dd/yyyy”). Where the fragmented issue indicator615arises from detection of a location of the consumer computing device120, the fragmented issue indicator615may include an identification of the location visited, such as “customer visited open houses at [home 1] and [home 2]” or “customer visited assisted living home [at address].” In some implementations, the identification of the location may be accompanied by an indication of the amount of time spent at the location, such as “customer spent [amount of time] at an assisted living home.” In certain implementations, a visit to a location may not be deemed significant enough to warrant recording a fragmented issue indicator unless the consumer computing device120was detected to have remained at the location for a certain minimum amount of time. For example, a fragmented issue indicator615may not be triggered unless the consumer computing device120was detected to have remained at a relevant location a minimum of 10 minutes. In some implementations, an analytics engine275may decide whether to include a fragmented issue indicator615in profile600by balancing the likely relevance of a statement or a location visited, the amount of time spent at the location, and/or the likely impact on advising or needs and goals of the customer. In some versions, fragmented issue indicators615may be saved as a compilation of, or otherwise associated with, multiple fields. For example, there may be a “subject” or “primary” field that may be populated with a phrase or derivations thereof, identification of certain actions, or other signals. 
Additional example fields include: time and/or date an input was captured and/or added to profile600; which computing device was used to capture an input; identity of a user associated with the computing device used to capture an input; location of the computing device used to capture an input; identity of the speaker or source of the input; etc. In some implementations, these may be used to give meaning to fragmented issue indicators615or combinations thereof. In some implementations, a user's profile600includes fragmented issue indicators615associated with multiple users. The names of other users (e.g., family members, confidants, etc.) with whom a user is associated may be included in profile600(e.g., in goals and progress625), and fragmented issue indicators615may be stored in multiple profiles600such that any single profile600may include the fragmented issue indicators615of all associated users. For example, a first user's profile600may include fragmented issue indicators615of a second user (and vice versa) who is a family member, friend, or otherwise associated with the first user. Signals acquired from multiple individuals (stored in one or more profiles600) may then be used by, for example, provider computing device110and/or consumer computing device120to generate recommendations. As an illustrative example, a first signal may be based on a first input resulting from a first user (e.g., an adult child) saying “I need to manage her finances.” A second signal may be based on a second input from a second user (e.g., a parent of the adult child) saying “I had a bad fall.” A third signal may be based on detection of the consumer computing device120being located at an assisted living home for more than 30 minutes. These three inputs may be used to generate three fragmented issue indicators615that, together, identify a financial goal of a person wishing to manage another's finances based on the other's needs. Advantageously, inputs related to one user's circumstances, goals, needs, etc., may be more accurately and/or quickly identified by acquiring and considering inputs from multiple user computing devices200associated with multiple other users (who may communicate about each other even if not directly speaking or otherwise communicating with each other). The fragmented issue indicator615(as well as any of the other parameters in profile600) may also include an access permissions field that identifies which fields (if any) of the fragmented issue indicator615(or other parameter corresponding to the access field) are accessible to particular advisors or other users. In some implementations, a recommendation from the robo-advisor may be based on one or more fragmented issue indicators615. Additionally or alternatively, the provider computing device110and/or user computing device120may await a second (or third, fourth, etc.) signal that is relevant to the first signal (or one or more prior signals if more than one) and allows for a more informed or more targeted recommendation. Continuing with the above example, if the user computing device120detects “I need to manage her finances,” application250may determine there is a potential financial issue (based on, e.g., keywords such as “manage” and “finances”) but may also determine that more information is desirable for formulating a suitable recommendation. Such information may, in some implementations, be acquired via dialogue with the customer (e.g., an inquiry, conversation, or other information exchange).
For example, chatbot270of application250(running on, e.g., a consumer computing device120) may speak with the customer to ask general questions (e.g., inquiring whether the customer would like assistance with a financial issue, followed by more specific questions) and/or specific questions (e.g., inquiring whether the customer would like to manage all finances or only finances related to certain expenditures, such as health care). In certain implementations, when the second, third, or other signal is detected, the provider computing device110and/or user computing device120may access the fragmented issue indicators615for related information. Based on, for example, one or more signals (related to the mother's fall), application250may predict that the person who is to have her finances managed (corresponding to the term “her” in a statement) is the customer's mother, and the reason for the management of finances might be a “bad fall.” The robo-advisor (via, e.g., provider computing device110and/or user computing device120) may then be more informed about subsequent signals (e.g., that the fall will be subsequently discussed and additional details can be extracted from those subsequent conversations), provide more informed recommendations, or ask more informed questions as part of a dialogue with the customer. Alternatively or additionally, the second signal may be recorded as another fragmented issue indicator615for subsequent use (e.g., in combination with a third signal detected subsequently). In some implementations, the fragmented issue indicators615may be made available to an advisor computing device130prior to or during a human advising session. Such fragmented issue indicators615, or certain fields therein, may be recorded using, for example, “plain” text or other format that is readily interpretable by a financial advisor to help make the transition from robo-advisor to human advisor more efficient by helping the advisor more quickly understand the customer's circumstances (and consequent needs and goals). In some implementations, the user profile600may record encoded versions of the signals as fragmented issue indicators615, and the decoding scheme may be made accessible to specified advisor computing devices130or other devices to help control what information is shared (to save time that might otherwise be spent reviewing information that is not particularly relevant to a topic to be discussed during an advising session, to better maintain confidentiality of certain information, etc.). This approach assists with implementation of pervasive advising, as a more complete picture can be formed even though computing devices200may only detect or acquire part of the picture (e.g., aspects of a customer's circumstances) in a given time period. Multiple segments of a discussion, user entries, etc., in multiple contexts, may be needed or desired to enhance understanding of relevant financial issues and thus enhance the likely value and relevance of resulting recommendations. Notably, user computing devices120being used to detect conversations may not always detect a conversation in its entirety, and even if a whole conversation is detected, not all of the words and meanings may have been understood. For example, if the user computing device120detecting a conversation is a smartphone, and the smartphone is placed in a pocket or bag during a conversation, the voices may become muffled, and the portion of the conversation during which the smartphone is in the pocket or bag may be missed.
Similarly, if the user computing device120is a smart speaker in one room, and one or more speakers move out of the room or otherwise out of the range of the smart speaker, portions of the conversation may be missed. By combining fragmented issue indicators615, a customer's needs can be evaluated and identified over time as additional user inputs are detected. Example profiles600may also include one or more fields related to exclusions and deferments620. These fields may indicate, for example, that a customer does not desire or need assistance with certain matters (exclusion of a matter), or may not desire or need assistance for a certain specified time period (deferment of matters). In some implementations, application250may refer to exclusions and deferments620before a recommendation is formulated or given, as sketched below. For example, conversationalists (via spoken words, written communications, etc.) may make certain statements in certain contexts that are not, taken in isolation, valuable predictors of a user's goals or needs. For example, a speaker may make a statement to a friend for the purpose of making a point, in jest, sarcastically, to be agreeable, and/or to spare feelings. In a hypothetical, if a friend informs a customer that the friend has not done nearly enough to save for the friend's child's education, and, so as to be agreeable, the customer states that the customer has similarly not done nearly enough, the customer does not necessarily need help with the financial goal of saving for the customer's child's education. The customer may not be prioritizing the particular goal, or may have already established the goal and be making progress towards it (as can be confirmed by application250accessing the customer's accounts, prior advising sessions, other communications, etc.); consequently, the customer may not need to immediately address or revisit the issue. In some implementations, such a statement may be deemed to warrant an entry in exclusions and deferments620of the customer's profile to help limit or avoid recommendations on certain topics. Similarly, an exclusion and deferment620may be generated in response to a specific instruction or statement of a customer (e.g., a customer stating to a consumer computing device120directly or making a statement to another person such as “I do not want to be advised on this topic” or “that's not a priority of mine right now, I will deal with that next month/year”). In some implementations, the information on particular topics may still be saved to help form a better picture of a customer's circumstances, but recommendations may be modified to avoid or delay certain topics. Alternatively or additionally, certain statements may be analyzed to generate entries in goals and progress625of profile600. For example, continuing with the above example, the customer saying that he or she also has not done nearly enough to save for college may indicate that, for example, the customer has one or more children (if not already known or determined in another way), that the customer may be considering college savings (especially if the customer has not already been advised on this topic), and/or that the customer may deem college savings a priority or otherwise a relevant consideration in making financial decisions in the future. Such information, recorded in profile600, may then be used by the robo-advisor, and/or presented to an advisor, to better inform recommendations and proposals. Profile600may also include one or more session parameters630.
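As a non-limiting sketch of the indicator-combining and pre-recommendation exclusion check described above, the following records signals as simplified fragmented issue indicator records, looks for related prior indicators, and suppresses recommendations for excluded or deferred topics (the record fields, topic labels, and two-signal threshold are illustrative assumptions):

    # Illustrative sketch: recording fragmented issue indicators, combining related
    # signals, and consulting exclusions/deferments before recommending.
    from datetime import date

    def record_indicator(profile, text, topic, captured_on):
        """Append a simplified fragmented issue indicator record to the profile."""
        profile.setdefault("fragmented_issue_indicators", []).append(
            {"text": text, "topic": topic, "captured_on": captured_on.isoformat()}
        )

    def related_indicators(profile, topic):
        """Return prior indicators recorded under the same (assumed) topic label."""
        return [i for i in profile.get("fragmented_issue_indicators", []) if i["topic"] == topic]

    def maybe_recommend(profile, topic, recommendation):
        """Recommend only when related signals accumulate and the topic is not excluded or deferred."""
        if topic in profile.get("exclusions_and_deferments", set()):
            return None
        if len(related_indicators(profile, topic)) >= 2:  # assumed two-signal threshold
            return recommendation
        return None

    if __name__ == "__main__":
        profile = {"exclusions_and_deferments": set()}
        record_indicator(profile, "my mother had a bad fall", "manage_family_finances", date(2024, 4, 1))
        record_indicator(profile, "I need to manage her finances", "manage_family_finances", date(2024, 4, 3))
        print(maybe_recommend(profile, "manage_family_finances",
                              "Consider setting up account access for a family member."))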
Application250(via, e.g., consumer computing device120) may accept session parameters630(via, e.g., dashboard310) to determine how a human advising session should be conducted. For example, a customer may wish to have audio only, text only, or video chat. The session parameters may be used by provider computing device110, user computing device120, and/or advisor computing device130to provide the customer with human advising sessions that meet the customer's needs. Additionally, a customer may only wish to receive automated recommendations in specified ways, something that can be indicated in robo-advising parameters635of profile600. In some implementations, the consumer computing device120may be programmed to only speak or otherwise make inquiries and provide recommendations under certain conditions but not under other conditions based on robo-advising parameters635. For example, if a user is speaking with a casual friend, it may not be appropriate to converse with the user to inquire as to whether the user wishes to pursue a specified (personal/confidential) financial goal that is identified based on the conversation with the casual friend. Rather, the user may wish to receive recommendations when the user is alone, at home, with close family or friends only, during certain times and days (e.g., not during work hours, or not after dinner when the user may be winding down for sleep and not wishing to consider financial issues, or not on Sundays), and via certain channels and formats. In some implementations, robo-advising parameters635may, for example, prohibit a smart speaker or other consumer computing device120from disrupting the customer or discussing confidential topics at inappropriate times. Profile600may also include human advising parameters640. In some implementations, human advising parameters640may indicate that a customer wishes only to receive high-level advice on overall goals from human advisors (e.g., to discuss the “big picture”). Similarly, the human advising parameters640may indicate that the customer is additionally or alternatively interested in more specific advice on implementing particular goals or executing on action plans. In certain implementations, the fields/values of human advising parameters640may be used by provider computing device110and/or customer computing device120when matching a customer with a suitable human advisor. Profile600may additionally or alternatively include one or more acquisition parameters645. In one or more fields, acquisition parameters645may specify how the customer is to be surveilled (e.g., what inputs may be acquired, how various inputs are captured, etc.) and when/where the customer is not to be surveilled. In some implementations, acquisition parameter645may indicate which consumer computing devices120may be used to detect conversations. For example, a customer may wish to include/exclude detection of conversations via identified smartphones, smart speakers, smart watches, laptops, etc., to control in what circumstances the customer's words may be taken into consideration (e.g., should or should not be used as a source of data for advising purposes). Consumer computing devices120may be identified by, for example, device identification numbers and/or associated users. In various implementations, acquisition parameter645may, alternatively or additionally, identify certain locations (as determined using, e.g., location sensor235) which are “off limits” and conversations should not be surveilled. 
For example, a customer may identify a doctor's office as a location, and in response to detection that the consumer computing device120is located in, or has moved into, the identified location, the consumer computing device120may cease detection of conversations for use in advising the customer. This would allow the customer to exclude certain private conversations (with, e.g., a therapist) from consideration in advising. In some implementations, acquisition parameters645may be used to indicate that conversations with certain persons are included/excluded as advising inputs, and/or certain modes of communication are included/excluded as advising inputs. With such acquisition parameters645, a consumer computing device120may, for example, stop detecting a conversation in response to identification of a specified speaker (by, e.g., recognizing a voice signature, detecting the person's name used in a greeting, etc.), and/or may include or exclude certain electronic messages (e.g., text messages and/or e-mails) received from specified applications and/or communication channels when analyzing for inputs relevant to advising of the customer. Parameters and fields corresponding to profile600identified inFIG.6help both the robo-advisor and the human advisor provide more relevant recommendations in a personalized fashion, while more quickly focusing on the topics on which a customer wishes to receive assistance. They also help customers more seamlessly transition between robo-advising and human advising, allowing the more efficient form of advising to be used based on customers' circumstances. Referring toFIG.8, an example graphical user interface of, for example, a potential dashboard310is illustrated. The user interface, which may be viewable via consumer computing device120and/or advisor computing device130, simultaneously or at different times, provides information on financial goals. These goals and related issues may have been identified and refined via robo-advising, human advising, or both. Also identified in the example user interface are accounts held by, or otherwise accessible to or viewable by, the customer. These accounts may be used in the fulfilment of financial goals, such as by having provider computing device110and/or customer computing device120transfer funds to/from such accounts or control spending by using credit accounts with particular limits, for certain expenses, etc. The user interface may also identify advisors with whom the customer has conferred. In various implementations (not shown inFIG.8), the interface may also identify, for example, the topics discussed with each advisor, the availability of each advisor, or the recommendations of each advisor. Also identified inFIG.8are the family members of the customer. If authorization is obtained from the family members, even if they are not customers or otherwise being separately advised, conversations or other inputs of the family members may be used to better understand the goals and needs of the customer and thereby enhance the quality of recommendations and transitions between robo-advising and human advising. Some or all of the information in dashboard310may be stored or identified in profile600. For example, fragmented issue indicators615for all of the known family members may be included in profile600. In various implementations, any of the icons or screen elements in the figures can be structured to be clickable or otherwise selectable (using any input mechanism, such as a touchscreen, mouse, voice prompt, gesture, etc.)
for accessing additional information (such as details about an advisor, account, goal, etc.), for initiating communications (with, e.g., one of the advisors or family members), etc. With reference toFIG.9, which depicts an example communication between a consumer computing device120and a provider computing device110or an advisor computing device130, in some examples, a person (e.g., a customer) may have difficulty keeping track of his or her finances and managing his or her credit. The person may be expecting to expand his or her family and wish to get his or her finances under control in order to meet a financial goal of, for example, purchasing a new vehicle in anticipation of expanding the family. In some examples, based on recent transaction history indicating the possibility of a new baby and/or a transaction such as a newly established college fund, the provider computing device110may pervasively inquire, via a consumer computing device120(e.g., a proactive listening bot), whether the person would like some help with meeting a financial goal. The financial goal may include buying a new car, establishing good credit, etc. The consumer device120may listen to and interpret the voice input of the person that indicates a desire to meet a financial goal. In some examples, the provider computing device110may pervasively inquire, via a consumer computing device120, whether the person would like to set up a virtual meeting (e.g., a session or an appointment) with a banker to discuss the financial goals of the person. After the customer confirms that he or she is interested in a session with an advisor, the provider computing device110may generate a command structured to add the virtual meeting to a calendar accessible to consumer computing device120associated with the customer and/or a calendar accessible to advisor computing device130of an advisor, as shown by the calendar and/or schedule icon (“April 15”) inFIG.9. In some embodiments, the provider computing device110may be part of the computing system of a financial institution. Generally, the financial institution provides financial services (e.g., demand deposit accounts, credit accounts, etc.) to a plurality of customers. The financial institution provides banking services to the customers, for example, so that customers can deposit funds into accounts, withdraw funds from accounts, transfer funds between accounts, view account balances, and the like via one or more provider computing devices110. Returning toFIG.7, a flow diagram of a method700of providing a proactive listening bot structured to generate an expense strategy is described according to an example embodiment. The expense strategy may include a financial plan, budget, or combination thereof. In some arrangements, the expense strategy may be generated and/or provided in real-time or near real-time. In some arrangements, the expense strategy may include transaction data, account data, etc. from a plurality of accounts of a customer that are spread across multiple financial institutions that may or may not be affiliated with the financial institution. Prior to the provision or engagement of a proactive listening bot structured to generate an expense strategy, a user may be authenticated to the provider computing device110and/or consumer computing device120at705. In some examples, prior to allowing the user to engage with the proactive listening bot, the user may be authenticated as an account holder. The user may be authenticated based on the authentication credentials of that user.
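By way of illustration only, the credential check at705might resemble the following salted-hash comparison (the storage format, salt handling, and iteration count are assumptions for illustration and are not intended to describe authentication circuitry230):

    # Illustrative sketch: verifying a stored credential (e.g., a PIN or password)
    # by comparing salted hashes; the storage format and parameters are assumptions.
    import hashlib
    import hmac

    def hash_credential(credential, salt):
        return hashlib.pbkdf2_hmac("sha256", credential.encode(), salt, 100_000)

    def verify(credential, salt, stored_hash):
        """Constant-time comparison of the supplied credential against the stored hash."""
        return hmac.compare_digest(hash_credential(credential, salt), stored_hash)

    if __name__ == "__main__":
        salt = b"example-salt"  # in practice, a random per-user salt would be generated and stored
        stored = hash_credential("1234", salt)
        print(verify("1234", salt, stored))  # True
        print(verify("9999", salt, stored))  # False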
In arrangements in which the consumer computing device120includes an application250associated with the provider computing device110, the consumer computing device120may receive and transmit user authentication data (e.g., data indicative of the identity of a customer/member of the financial institution and/or a user of various systems, applications, and/or products of the financial institution) to, for example, authentication circuitry230. In such arrangements, the user can be identified and authenticated based on the application of the provider computing device110such that the provision of additional identification information or account information by the user is not required. The user authentication data may include any of a password, a PIN (personal identification number), a user ID, an answer to a verification question, a biometric, an identification of a security image, or a combination thereof. At710, the provider computing device110and/or consumer computing device120detects a voice input (e.g., a voice trigger, voice key, etc.) indicative of a financial goal. For example, a user (e.g., a customer, potential customer, other person, etc.) may be contemplating buying a new car. The provider computing device110and/or consumer computing device120may learn that the user is contemplating buying a new car by actively listening to the conversations and/or voice of the user. For example, the user may say “I want to purchase a new car,” “I want to save for a home,” etc. The provider computing device110and/or consumer computing device120may be structured to monitor user account information, user financial information, spending patterns, etc. of the user and receive, retrieve, or otherwise access transaction data (e.g., data indicative of a financial goal such as a transaction, an upcoming transaction, purchase, other financial data, etc.) based on the voice input (e.g., the conversation) of the user. The consumer computing device120may provide advice or otherwise make suggestions to the customer. In some arrangements, the consumer computing device120may utilize speech recognition and natural language processing to detect the voice input and/or to receive such transaction data. In some arrangements, the consumer computing device120may engage in conversation, discussion, or dialogue with the user to learn more about the financial goal and to generate an expense strategy that may be of interest to the user. In some examples, the consumer computing device120may be structured to ask the user questions or otherwise request feedback from the user, such as “how much do you want to pay for the new car?”, “how much would you like the monthly loan payment to be?”, etc. Responsive to the request, the user may provide a voice input (e.g., the user may answer the question provided by the consumer computing device120, provide feedback, or otherwise engage in conversation with the consumer computing device120). In some implementations, the consumer computing device120may be structured to receive a voice input from a plurality of users and distinguish the voice input associated with the user from the voice input or sound associated with another person or user. Alternatively or additionally, the provider computing device110and/or consumer computing device120may learn that the user is contemplating a financial goal (e.g., purchasing a new car) via an advisor computing device130of an advisor who may be assisting the user with financial planning, or through other suitable channels.
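As a non-limiting sketch of the voice-input handling at710, the following filters a diarized transcript to the utterances attributed to the enrolled user and flags a candidate financial goal from assumed trigger phrases (the transcript format, phrase list, and goal labels are hypothetical, and the phrase matching merely stands in for speech recognition and natural language processing):

    # Illustrative sketch: keeping only utterances attributed to the enrolled user
    # (e.g., from a diarized transcript) and flagging a candidate financial goal.
    GOAL_PHRASES = {
        "new car": "vehicle purchase",
        "save for a home": "home purchase",
        "need a vacation": "vacation savings",
    }

    def user_utterances(transcript, enrolled_user):
        """Filter a diarized transcript to the utterances attributed to the enrolled user."""
        return [entry["text"] for entry in transcript if entry.get("speaker") == enrolled_user]

    def flag_goal(transcript, enrolled_user):
        """Return a candidate goal label if the enrolled user mentions a known phrase."""
        for text in user_utterances(transcript, enrolled_user):
            lowered = text.lower()
            for phrase, goal in GOAL_PHRASES.items():
                if phrase in lowered:
                    return goal
        return None

    if __name__ == "__main__":
        transcript = [
            {"speaker": "guest", "text": "You should really get a new car."},
            {"speaker": "user_123", "text": "I want to purchase a new car."},
        ]
        print(flag_goal(transcript, "user_123"))  # vehicle purchase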
In some implementations, while the user is engaged in conversation with the consumer computing device120, the provider computing device110and/or consumer computing device120may generate an expense strategy structured to meet the financial goal. Alternatively or additionally, the provider computing device110and/or consumer computing device120may generate an expense strategy structured to meet the financial goal in response to receiving transaction data. For example, the expense strategy may be generated based on one or more user accounts (e.g., a single account or a plurality of accounts of the user) associated with the financial institution. At715, the connected device may be structured to provide an expense strategy structured to meet the financial goal in response to the detection of the voice input. For example, the consumer computing device120may output suggestions for meeting the financial goal such as, but not limited to, the creation of a savings goal, a savings plan to meet the financial goal, an investment portfolio, a savings strategy, etc. In the present example, while listening to a conversation of the user, the consumer computing device120may detect that the user is interested in the financial goal of purchasing a new car. In response, provider computing device110and/or consumer computing device120may generate a financial plan, budget, investment strategy, or combination thereof to meet the financial goal of purchasing a new car. The expense strategy may be audibly output from speakers included with or communicatively coupled to the consumer computing device120. Alternatively or additionally, the expense strategy may be displayed via a mobile application, an in-app message, a social media application, etc. The provider computing device110and/or consumer computing device120may include or may be communicatively coupled, via one or more APIs, to a third party computing device140. The third party computing device140may be structured to provide relevant data associated with the financial goal of the user. The relevant data may be utilized to generate an expense strategy comprising various options or suggestions determined to meet the financial goal of the user. In this regard, the provider computing device110and/or consumer computing device120may be communicatively coupled to a third party computing device140structured to provide such data as inventory data and costs of, for example, a car. In some examples, there may be a time period between the receipt of the voice input and the generation of an expense strategy such that transaction data, the voice input, etc. may be stored for later use and/or retrieval. Accordingly, the user may have expressed an interest in the financial goal (e.g., the purchase of a new car, home, property, etc.) minutes, hours, days, or months ago such that the voice input, transaction data, etc. may be stored in, for example, profile600. Later, the voice input, transaction data, etc., may be retrieved or otherwise accessed by the provider computing device110and/or consumer computing device120for generation of an expense strategy and/or loan (e.g., an offer to accept a loan) as described herein. For example, the user may have expressed an interest in purchasing a new car several months ago when the desire was not urgent or otherwise was not a priority.
When the consumer computing device120listens to the conversation of the user and detects that the user is now expecting to have a baby, the voice input, transaction data, etc., may be retrieved or otherwise accessed to generate a recommendation. In some arrangements, the consumer computing device120may be structured to detect the urgency of a financial need. Based on the detection of a voice input indicative of an urgent financial need (e.g., "We are going to have another child, I need a new car!"), the provider computing device110and/or consumer computing device120may generate a financial plan, budget, investment strategy, or combination thereof to meet the financial goal (e.g., the goal to purchase a new car) that is more aggressive, time restrictive, etc. than a financial goal associated with a non-urgent need. In some implementations, the urgency of a suspected need may be identified in profile600(e.g., as part of one or more urgency or timetable fields of goals and progress625) based on the voice or words of a customer. Additionally or alternatively, fragmented issue indicators615of profile600may include a field that characterizes urgency (based on statements indicating urgency, such as "we need a new car this month" or on other contextual information) and/or how much emotion was detected in a statement. The provider computing device110and/or consumer computing device120may include speech recognition and natural language processing algorithms that detect, calculate, or otherwise determine the speed, tone, aggression, etc., of user speech to detect a voice input indicative of, for example, an urgent financial need. Such indicators may also be provided to advisor computing devices130to inform, for example, how sensitive or emotionally-charged a topic might be for the customer being advised. At720, the provider computing device110and/or consumer computing device120may be structured to determine whether to connect the consumer computing device120to an advisor computing device130based on the expense strategy. In this regard, the consumer computing device120may inquire whether the user would like to set up a session or an appointment (e.g., a virtual session, appointment, meeting, etc.) with an advisor (e.g., a banker) to discuss an expense strategy and/or the financial goals of the user. For example, the consumer computing device120may ask the user if the user would like some help with obtaining credit for a new car and ask whether the user would like the consumer computing device120to connect with an advisor computing device130now or set up a session with the advisor computing device130for later. After the user confirms that he or she is interested in a session with an advisor, the consumer computing device120may initiate a virtual meeting between the user and an advisor. The consumer computing device120and/or advisor computing device130may receive and/or retrieve transaction data associated with the user from the provider computing device110and/or a third party computing device140. In turn, the consumer computing device120and/or advisor computing device130may provide the transaction data via dashboard310. FIG.10depicts an example graphical user interface of a potential dashboard310structured to provide robo or human advising according to example embodiments. The consumer computing device120may output, via the graphical user interface, a user profile1005associated with the user based on, for example, transaction or account data.
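Before turning to the profile contents shown inFIG.10, the urgency detection described above can be sketched as follows. The cue words, the speaking-rate factor, and the threshold are illustrative assumptions; the description only states generally that speed, tone, aggression, and wording may be analyzed.

    # Minimal sketch: estimating urgency from a transcript and a speaking-rate figure.
    URGENCY_CUES = ("need", "this month", "right away", "asap", "!")

    def urgency_score(transcript: str, words_per_minute: float) -> float:
        text = transcript.lower()
        cue_hits = sum(text.count(cue) for cue in URGENCY_CUES)
        rate_factor = max(words_per_minute - 150.0, 0.0) / 50.0  # faster speech reads as more urgent
        return cue_hits + rate_factor

    def is_urgent(transcript: str, words_per_minute: float, threshold: float = 2.0) -> bool:
        return urgency_score(transcript, words_per_minute) >= threshold

    print(is_urgent("We are going to have another child, I need a new car!", 190.0))  # True
    print(is_urgent("Someday it might be nice to get a new car", 130.0))              # False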
The user profile1005may identify the user and provide relevant information pertaining to the user (e.g., user name “Nancy Isau,” customer status “Customer Since 2012,” etc.). In some examples, the graphical user interface may include or otherwise display data and interactions (e.g., conversations, transactions, and/or other relevant data) as represented by icons and/or graphics1010that have been compiled by the provider computing device110and/or consumer computing device120for that user (e.g., the customer's photograph). This may allow a human advisor to seamlessly start the session with the user where the consumer computing device120and advisor computing device130ended a prior conversation/engagement. The dashboard310may also provide a “Return to Robo-Advising” selection1015to end the session and return the customer to robo-advising. In some implementations, this selection only becomes available when “back to bot” triggers are detected. FIG.11depicts an example graphical user interface of a dashboard310according to example embodiments. The provider computing device110and/or consumer computing device120may be structured to generate an expense strategy according to a time period (e.g., a timeline, one or more minutes, hours, days, years, etc.). During the session (e.g., the virtual robo-advising session with provider computing device110or human advising session with advisor computing device130), the advisor may develop an expense strategy that may be implemented over, for example, a certain period of time based on one or more financial goals. The expense strategy may include one or more icons and graphics structured to represent, for example, a “5 Year Timeline” and/or financial goals of the user. In some arrangements, the graphical user interface may include an image and/or video of an advisor, or audio of the voice of an advisor. The image, video, and/or the audio of the advisor may be provided in real-time or near real-time such that the user may view or otherwise engage with the advisor live. In various implementations, multiple advisees and/or multiple advisors may interact live via dashboard310. In some implementations, an advisor (e.g., “Advisor 2”) may be a robo-advisor helping one or more human advisors (e.g., “Advisor 1”) advise or otherwise assist one or more users (e.g., Users 1 and 2). FIG.12depicts an example graphical user interface1200of a potential dashboard310according to example embodiments. During or after a session, the robo or human advisor may educate the user on the expense strategy1210determined for that user to maintain or otherwise improve progress towards a financial goal (e.g., improve a credit score). As depicted in the graphical user interface1200by the icon1230, the consumer computing device120may speak, or output the speech, conversation, voice, etc. of the human (or robo) advisor. For example, the advisor may suggest that the customer make micro-payments on each credit card by setting up auto-pay (weekly, bi-weekly, monthly, etc.) for each credit card to increase the amount of payments that the user makes on time. In turn, the internal credit score of that user may increase more quickly. In some examples, an expense strategy1210may be displayed with a proposed change in spending, debt payoff, micropayments, etc. The expense strategy may be represented or further detailed by one or more tabs1220. The tabs1220may be structured to display the expense strategy details dynamically responsive to a user clicking or selecting a tab. 
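The micro-payment suggestion mentioned above could be realized along the lines of the following sketch, which splits a monthly credit card payment into auto-pay installments. The cadence table, the rounding, and the function names are assumptions made for illustration.

    # Minimal sketch: splitting a monthly payment into auto-pay micro-payments.
    from datetime import date, timedelta

    CADENCE_DAYS = {"weekly": 7, "bi-weekly": 14, "monthly": 30}

    def micro_payment_schedule(monthly_amount: float, cadence: str, start: date, months: int = 1):
        """Yield (due_date, amount) pairs covering the requested number of months."""
        step = CADENCE_DAYS[cadence]
        payments_per_month = max(30 // step, 1)
        amount = round(monthly_amount / payments_per_month, 2)
        due = start
        for _ in range(payments_per_month * months):
            yield due, amount
            due = due + timedelta(days=step)

    for due, amount in micro_payment_schedule(300.0, "weekly", date(2024, 5, 1)):
        print(due.isoformat(), amount)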
FIG.13depicts an example graphical user interface1300of a potential dashboard310according to example embodiments. In some examples, the consumer computing device120and/or the advisor computing device130may output the graphical user interface1300. The graphical user interface1300may represent a digital dashboard with icons, images, data, charts, other graphics, etc., that may represent a financial goal, action plan, goal progress, etc. In some arrangements, the graphical user interface1300may include an image and/or video1310representative of an advisor, or audio of the voice of an advisor (and/or a transcription of words spoken by the advisor). The image, video1310, and/or the audio of the voice of the advisor may be provided in real-time or near real-time such that the user may view or otherwise engage with the advisor live. Illustrated inFIG.14is an example graphical user interface1400of an example dashboard310according to example embodiments. In some examples, the dashboard310is structured to present an expense strategy notification (e.g., a message, notice, account update, invitation, offer, etc.). In some implementations, the provider computing device110may provide, send, or otherwise transmit an expense strategy notification to a consumer computing device120associated with the customer. The expense strategy notification may be output or otherwise displayed via a user interface200(e.g., via a display, speaker, other audio/visual components of the consumer computing device120). The expense strategy notification may indicate or otherwise inform the user of an action that affects the financial goal of the user as depicted in the graphical user interface1400. As shown inFIG.14, the expense strategy notification may output/indicate when the user took an action that affects the expense strategy. For example, the expense strategy notification may include a time stamp, date stamp ("May 3 Nancy paid . . . ," "May 15 Nancy set spending limits," etc.), or other indications of when an action occurred. The expense strategy notification may include a financial goal status based on the action or a plurality of actions taken by the user that affect the expense strategy. In some examples, the provider computing device110may transmit an expense strategy notification to the consumer computing device120and/or advisor computing device130to inform the customer and/or advisor of the amount that the customer can afford to spend or save based on the financial situation of the customer. As shown, the expense strategy notification may include actions from a single user or a plurality of users (e.g., "Bill saved $200 to New Car Goal", "Nancy set up micro-payments," etc.). Advantageously, users may find the expense strategy notification motivational and helpful to improve their financial status and to reach their financial goals (e.g., the goal to purchase a new car). FIG.15Adepicts an example graphical user interface of an example dashboard310. The illustrated exchange may be between the consumer computing device120and the advisor computing device130(as part of a human advising session), and/or the exchange may be between the consumer computing device120and the provider computing device110(as part of a robo-advising session).
According to an example embodiment, the consumer computing device120may present the expense strategy (e.g., advice, suggestions, etc.), which may have been, for example, generated by the provider computing device110and/or updated via an advisor computing device130, to the customer to help the customer review, maintain, or improve progress towards a financial goal. The graphical user interface may be displayed such that the expense strategy may include icons indicative of a goal status (e.g., checkmarks for completed or accomplished, and exclamation points for incomplete or otherwise requiring attention or response). In some implementations, icons presented along a line may indicate an order or timeline for the goals (e.g., one goal may build on or otherwise follow another goal). The icons may correspond to one or more strategies such as, but not limited to, spending limits, micropayments, car pre-payments, car purchase, etc. In some examples, the icons may indicate whether the customer is off track or on track toward reaching the financial goal. For example, one icon indicates that the customer is off-track with maintaining spending limits toward the goal of purchasing a new car. In some examples, the graphical user interface may allow the user to drill down to receive more detail. For example, a customer may click (or otherwise select) icon1505and/or provide a voice command to see more information about how the customer may get back on track toward meeting the financial goal. In some examples, if a customer selects one of the identified goals inFIG.15A, another graphical user interface, such as the one depicted inFIG.15B, may be presented. The graphical user interface may include icons/graphics that represent, for example, the spending limits of a customer. The graphical user interface may include an adjuster icon1550(e.g., a graphical dial, slider control, etc.) structured to allow the customer (or advisor) to adjust/control, via dashboard310, various values (such as spending limits) as desired by the user. For example, the icon1550may be adjusted up, down, left, right, or in any other direction/position via the consumer computing device120and/or advisor computing device130. In some examples, the icon1550may represent a spending limit that is adjustable via the provider computing device110(as part of robo-advising) or the advisor computing device130(as part of human advising). Responsive to the adjustment of the icon1550, the spending limits of the user may then represent whether the user is off-track or on-track toward reaching the financial goal. The provider computing device110, consumer computing device120, and/or advisor computing device130may update profile600(e.g., by entering, updating, or revising values in fields corresponding to the goals and progress625parameter). In some arrangements, the graphical user interface may include an image and/or video representative of an advisor (e.g., at the top right inFIG.15A) and/or audio of the voice of an advisor in real-time or near real-time such that the user may view or otherwise engage with the advisor live. The graphical user interface may include a selection (e.g., the virtual "Save Changes" button) to allow the customer or advisor to save adjustments to the expense strategy, spending limits, etc. FIG.15Cdepicts an example graphical user interface of a potential dashboard310according to example embodiments.
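The on-track and off-track determinations behind these status icons, and the effect of adjusting a spending limit via icon1550, might be realized as in the following sketch; the FIG.15Cview described next would then reflect the updated status. The data shapes and function names here are hypothetical, not part of the description above.

    # Minimal sketch: flagging a spending-limit goal as on-track or off-track
    # after the limit is adjusted via a dashboard control.
    def goal_status(monthly_spend: float, spending_limit: float) -> str:
        """Return the icon state the dashboard might show for this goal."""
        return "on-track" if monthly_spend <= spending_limit else "off-track"

    def apply_adjustment(current_limit: float, delta: float, floor: float = 0.0) -> float:
        """Apply a slider/dial adjustment and keep the limit non-negative."""
        return max(current_limit + delta, floor)

    limit = apply_adjustment(current_limit=800.0, delta=150.0)     # user raises the limit
    print(goal_status(monthly_spend=900.0, spending_limit=limit))  # on-track
    print(goal_status(monthly_spend=900.0, spending_limit=800.0))  # off-track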
In various implementations, the provider computing device110may present the graphical user interface depicted inFIG.15Cto the customer via the consumer computing device120and/or to the advisor via advisor computing device130. In some examples, the graphical user interface may be presented to the user in response to the user clicking an icon or button and/or providing a voice command as described herein. The graphical user interface may include a notification, message, and/or update that includes the current status of the user toward meeting a financial goal. As depicted inFIG.15C, the checkmark icon (formerly an exclamation point) adjacent to "Spending Limits" may indicate the customer is back on track based on, for example, the limits, transactions, adjustments made (via, e.g., the graphical user interface ofFIG.15B), or other actions of the customer. FIG.16depicts an example graphical user interface of a potential dashboard310. According to an example embodiment, the provider computing device110may present the graphical user interface to the customer and/or advisor via the consumer computing device120and/or the advisor computing device130. In some examples, the graphical user interface may represent a digital dashboard that includes icons, images, data, charts (e.g., the graphical charts/graphs), other graphics, etc., that may represent the credit score, spending trends, status of the customer toward reaching the financial goal, etc. According to the current example depicted inFIG.16, the customer has made 100% progress toward the financial goal of buying a new car. The content of the digital dashboard may be provided in real-time or near real-time by the provider computing device110. Advantageously, the customer may be informed of the current status of reaching the financial goal based on the real-time or near real-time update of the digital dashboard. FIG.17is an example graphical user interface of a potential dashboard310according to an example embodiment. In some examples, the provider computing device110is structured to generate an expense strategy notification (e.g., a message, SMS, notice, account update, invitation, offer, etc.). The provider computing device110may provide, send, or otherwise transmit the expense strategy notification to a consumer computing device120and/or advisor computing device130. The expense strategy notification may be output or otherwise presented via a display, speaker, other audio/visual components of the consumer computing device120and/or the advisor computing device130. For example, the expense strategy notification may include an offer for an auto loan transmitted to the consumer computing device120of the customer when the customer meets the financial goal as identified by the provider computing device110and/or advisor computing device130. In some implementations, an expense strategy notification that includes an offer may be transmitted to the consumer computing device120in response to the consumer computing device120transmitting an expense strategy notification (e.g., an SMS that includes information that the customer is ready to buy the car, a home, etc.) to the provider computing device110and/or the advisor computing device130. The embodiments described herein have been described with reference to drawings. The drawings illustrate certain details of specific embodiments that implement the systems, methods and programs described herein.
However, describing the embodiments with drawings should not be construed as imposing on the disclosure any limitations that may be present in the drawings. It should be understood that no claim element herein is to be construed under the provisions of 35 U.S.C. § 112(f), unless the element is expressly recited using the phrase “means for.” The various components of the computing systems and user devices (such as modules, monitors, engines, trackers, locators, circuitry, interfaces, sensors, etc.) may be implemented using any combination of hardware and software structured to execute the functions described herein. In some embodiments, each respective component may include machine-readable media for configuring the hardware to execute the functions described herein. The component may be embodied at least in part as one or more circuitry components including, but not limited to, processing circuitry, network interfaces, peripheral devices, input devices, output devices, sensors, etc. In some embodiments, a component may take the form of one or more analog circuits, electronic circuits (e.g., integrated circuits (IC), discrete circuits, system on a chip (SOC) circuits, etc.), telecommunication circuits, hybrid circuits, and any other type of circuit. In this regard, the component may include any type of element for accomplishing or facilitating achievement of the operations described herein. For example, a component as described herein may include one or more transistors, logic gates (e.g., NAND, AND, NOR, OR, XOR, NOT, XNOR, etc.), resistors, multiplexers, registers, capacitors, inductors, diodes, wiring, and so on. The component may also include one or more processors communicatively coupled to one or more memory or memory devices. In this regard, the one or more processors may execute instructions stored in the memory or may execute instructions otherwise accessible to the one or more processors. In some embodiments, the one or more processors may be embodied in various ways. The one or more processors may be constructed in a manner sufficient to perform at least the operations described herein. In some embodiments, the one or more processors may be shared by multiple circuits (e.g., circuit A and circuit B may comprise or otherwise share the same processor which, in some example embodiments, may execute instructions stored, or otherwise accessed, via different areas of memory). Alternatively or additionally, the one or more processors may be structured to perform or otherwise execute certain operations independent of one or more co-processors. In other example embodiments, two or more processors may be coupled via a bus to enable independent, parallel, pipelined, or multi-threaded instruction execution. Each processor may be implemented as one or more general-purpose processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), or other suitable electronic data processing components structured to execute instructions provided by memory. The one or more processors may take the form of a single core processor, multi-core processor (e.g., a dual core processor, triple core processor, quad core processor, etc.), microprocessor, etc. In some embodiments, the one or more processors may be external to the apparatus, for example the one or more processors may be a remote processor (e.g., a cloud based processor). Alternatively or additionally, the one or more processors may be internal and/or local to the apparatus.
In this regard, a given component or parts thereof may be disposed locally (e.g., as part of a local server, a local computing system, etc.) or remotely (e.g., as part of a remote server such as a cloud based server). To that end, a component as described herein may include elements that are distributed across one or more locations. An example system for implementing the overall system or portions of the embodiments might include general purpose computing devices in the form of computers, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. Each memory device may include non-transient volatile storage media, non-volatile storage media, non-transitory storage media (e.g., one or more volatile and/or non-volatile memories), etc. In some embodiments, the non-volatile media may take the form of ROM, flash memory (e.g., flash memory such as NAND, 3D NAND, NOR, 3D NOR, etc.), EEPROM, MRAM, magnetic storage, hard discs, optical discs, etc. In other embodiments, the volatile storage media may take the form of RAM, TRAM, ZRAM, etc. Combinations of the above are also included within the scope of machine-readable media. In this regard, machine-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. Each respective memory device may be operable to maintain or otherwise store information relating to the operations performed by one or more associated circuits, including processor instructions and related data (e.g., database components, object code components, script components, etc.), in accordance with the example embodiments described herein. Any foregoing references to currency or funds are intended to include fiat currencies, non-fiat currencies (e.g., precious metals), and math-based currencies (often referred to as cryptocurrencies). Examples of math-based currencies include Bitcoin, Litecoin, Dogecoin, and the like. It should be noted that although the diagrams herein may show a specific order and composition of method steps, it is understood that the order of these steps may differ from what is depicted. For example, two or more steps may be performed concurrently or with partial concurrence. Also, some method steps that are performed as discrete steps may be combined, steps being performed as a combined step may be separated into discrete steps, the sequence of certain processes may be reversed or otherwise varied, and the nature or number of discrete processes may be altered or varied. The order or sequence of any element or apparatus may be varied or substituted according to alternative embodiments. Accordingly, all such modifications are intended to be included within the scope of the present disclosure as defined in the appended claims. Such variations will depend on the machine-readable media and hardware systems chosen and on designer choice. It is understood that all such variations are within the scope of the disclosure. Likewise, software and web implementations of the present disclosure could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various database searching steps, correlation steps, comparison steps and decision steps. The foregoing description of embodiments has been presented for purposes of illustration and description.
It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from this disclosure. The embodiments were chosen and described in order to explain the principles of the disclosure and its practical application to enable one skilled in the art to utilize the various embodiments with various modifications as are suited to the particular use contemplated. Other substitutions, modifications, changes and omissions may be made in the design, operating conditions and arrangement of the embodiments without departing from the scope of the present disclosure as expressed in the appended claims.
11862173
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. Various units, circuits, or other components may be described as “configured to” perform a task or tasks. In such contexts, “configured to” is a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation. As such, the unit/circuit/component can be configured to perform the task even when the unit/circuit/component is not currently on. In general, the circuitry that forms the structure corresponding to “configured to” may include hardware circuits and/or memory storing program instructions executable to implement the operation. The memory can include volatile memory such as static or dynamic random access memory and/or nonvolatile memory such as optical or magnetic disk storage, flash memory, programmable read-only memories, etc. Similarly, various units/circuits/components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase “configured to.” Reciting a unit/circuit/component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph six interpretation for that unit/circuit/component. This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment, although embodiments that include any combination of the features are generally contemplated, unless expressly disclaimed herein. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure. DETAILED DESCRIPTION OF EMBODIMENTS Turning now toFIG.1, a block diagram of one embodiment of a device5is shown. In the illustrated embodiment, the device5may include an integrated circuit (IC)10, which may be an SOC in this example. The SOC10may be coupled to a memory12, an external audio coder/decoder (codec)16, and a power management unit (PMU)20. The audio codec16may be coupled to one or more audio sensors, collectively referred to as sensors26. For example, the audio codec16may be coupled to one or more microphones (mic)26A-26B and one or more speakers (spkr)26C-26D. As implied by the name, the components of the SOC10may be integrated onto a single semiconductor substrate as an integrated circuit “chip.” In some embodiments, the components may be implemented on two or more discrete chips in a system. Additionally, various components may be integrated on any integrated circuit (i.e., it need not be an SOC). 
However, the SOC10will be used as an example herein. In the illustrated embodiment, the components of the SOC10include a central processing unit (CPU) complex14, peripheral components18A-18B (more briefly, “peripherals”), a memory controller22, an audio filter circuit24, a power manager circuit (PMGR)28, and a communication fabric27. The components14,18A-18B,22,24, and28may all be coupled to the communication fabric27. The memory controller22may be coupled to the memory12during use. Similarly, the peripheral18A may be an interface unit (IFU) coupled to the audio codec16during use, which is further coupled to the audio sensors26during use. The device5may be any type of portable electronic device, such as a cell phone, a smart phone, a PDA, a laptop computer, a net top computer, a tablet device, an entertainment device, etc. In some embodiments, the device5may be a non-portable electronic device such as a desktop computer as well. Such non-portable devices may also benefit from the audio device control features described herein. During times that the device5is idle, portions of the SOC10may be powered down. Particularly, the CPU complex14, the memory controller22, the peripheral18B, the interconnect27, and a portion of the PMGR28may be powered down. If the device5is idle but not completely powered down, on the other hand, the audio filter circuit24may remain powered, as may the IFU18A. Components external to the SOC10may be powered up or down as desired when the device5is idle. Particularly, the memory12may remain powered and thus capable of retaining the data stored therein. In an embodiment in which the memory12is a DRAM of one of various types, the memory12may be placed in self-refresh mode to retain the stored data during times that the device5is idle. During the idle time, the audio filter circuit24may be configured to receive audio samples from the audio codec16, through the IFU18A and may attempt to detect a predetermined pattern in the samples (e.g., the key word/phrase to wake up the device5in order to service a command or request uttered by the user). The predetermined pattern may be programmed into the audio filter circuit24or may be hard coded in the audio filter circuit24. In an embodiment, the predetermined pattern may be captured from the user verbally uttering the key word/phrase, training the device5to the user's particular voice. In another embodiment, the predetermined pattern is a generic pattern that represents the key word/phrase as spoken with a variety of inflections, tones, etc. In response to detecting the pattern, the audio filter24may be configured to cause the memory controller to be powered up and initialized (so that the matching samples and following samples may be stored in memory) and may also be configured to cause the CPU complex14to be powered up to boot the operating system (and potentially other portions of the SOC10, depending on the implementation). In an embodiment, the memory controller22may power up relatively quickly. A phase locked loop for the memory controller22may be locked, and the memory controller22may be initialized, with a fairly predictable delay that is shorter than the booting up of the operating system. The interconnect27may be powered up as well so that the audio filter circuit24may transmit the parameters mentioned below and write memory operations to write the samples to the memory12. 
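The idle-time control flow described above can be summarized in the following minimal sketch: the filter keeps comparing incoming audio frames against a stored pattern and, on a match, asks a power manager to bring up the memory controller22and the CPU complex14. The exact-match comparison, the buffer size, and the callback name power_up_request are simplifying assumptions; real pattern matching would operate on audio features rather than raw equality.

    # Minimal sketch of the idle-time monitoring loop described above.
    from collections import deque

    class AudioFilter:
        """Sketch of the audio filter's idle-time monitoring behavior."""
        def __init__(self, pattern, power_up_request, buffer_size=64):
            self.pattern = list(pattern)              # programmed or hard-coded key pattern
            self.buffer = deque(maxlen=buffer_size)   # stand-in for the sample buffer
            self.power_up_request = power_up_request  # e.g., notify the PMU/PMGR

        def push(self, frame):
            """Receive one audio frame; request power-up when the pattern appears."""
            self.buffer.append(frame)
            window = list(self.buffer)[-len(self.pattern):]
            if window == self.pattern:
                self.power_up_request()
                return True
            return False

    events = []
    monitor = AudioFilter(pattern=[3, 1, 4], power_up_request=lambda: events.append("power up"))
    for frame in [9, 3, 1, 4, 5]:
        monitor.push(frame)
    print(events)  # ['power up']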
The audio filter circuit24may include a sample buffer30, and the audio filter circuit24may be configured to temporarily buffer samples in the sample buffer30for comparison to the predetermined pattern and, once the pattern is detected, to further buffer samples until the memory controller22is ready to receive writes to the memory12. Thus, the size of the sample buffer30may be based on the delay from detecting the pattern until the memory controller22is ready. In some embodiments, the sample buffer30may be sized to permit buffering of the samples that match the predetermined pattern, the subsequently-received samples based on the delay until the memory controller is ready, and one or more samples prior to the samples that matched the predetermined pattern (i.e., the key word/phrase/sound). The prior samples may be processed to determine the background noise being captured by the microphone, which may aid the more accurate processing of the subsequent samples. In some embodiments, the memory controller22may support advanced DRAM technologies which involve training the memory controller22and the memory12to properly sync on the links between them. The parameters of the memory controller22configuration may be programmed into the memory controller22, either directly by hardware via the training or by software (reference numeral34A). To more rapidly restore the memory controller22to operation from the audio filter circuit24, the audio filter circuit24may shadow the parameters (reference numeral34B). Alternatively, the parameters34B may be a conservative set of parameters that are known to work properly across all versions of the DRAMs and all operating conditions in the device5. The audio filter circuit24may transfer the parameters34B to the memory controller22to ensure that the memory controller is prepared to write the memory12. The CPUs may begin execution of the operating system, and may determine that the reason the SOC10is reactivating is that the audio filter24detected the key word/phrase. The CPUs may read the samples from the memory12, and may verify that the key word/phrase is indeed detected. For example, in some embodiments, the audio filter24may use a simpler and coarser-grained (less accurate) matching process than may be supported by the code executed by the CPUs. The CPUs may verify that the code is detected, and may proceed to process the rest of the received audio samples to determine the command/request that was spoken after the key word/phrase. In another embodiment, the CPU complex14may not be awakened in parallel with the memory controller22. For example, in some embodiments, the audio filter circuit24may be configured to perform the processing of the subsequent samples (but may power up the memory controller22to avail itself of the space in the memory12to store samples). In another embodiment, the audio filter circuit24may also be configured to perform other operations when the device5is idle, and the audio filter circuit24may use the memory12for storage for some of the operations. In such embodiments, the memory controller22may be powered up without powering up the CPU complex14. Powering up various components of the SOC10may include communication with the PMU20. In an embodiment, the audio filter circuit24may be configured to communicate with the PMU20to cause the power up the other SOC circuit sections. Alternatively, on chip power gating may be implemented to power up/power down various components of the SOC10. 
The internal PMGR28may be configured to implement the on-chip power gating and the audio filter circuit24may be configured to communicate with the PMGR28to cause the power up. In still other embodiments, a combination of the PMGR28and the PMU20may be used. In yet another embodiment, the PMGR28may be configured to communicate with the PMU20and the audio filter circuit24may communicate power up requests to the PMGR28, which may communicate with the PMU20as needed. Between the sample buffer30and the memory12, there may be little to no sample loss in the audio data from the microphone(s)26A-26B. Accordingly, the user may speak the key word/phrase and continue without any required hesitation to speak the request/command. In various embodiments, the audio filter circuit24may include any combination of fixed hardware and/or one or more processors that execute software. The software may be firmware included in the audio filter circuit24(e.g., stored in a non-volatile memory in the audio filter circuit24). Alternatively, the firmware may be included in other non-volatile storage in the device5to be accessible for execution. If a fixed hardware implementation is used, the sample pattern may still be programmable as an input to the fixed hardware. Such programmability may allow different key words/phrases/sounds to be used, for multiple languages to be supported, etc. Implementing a fixed hardware audio filter circuit24may provide a more power-efficient solution to monitoring the audio samples than a processor executing software. It is noted that, while the description here may refer to a key word or phrase that may be used to activate the command mode, in general any sound may be used in various embodiments (e.g., a whistle, a hand clap, a non-verbal orally-generated sound, etc.). As used herein, the term “power up” may refer to applying power to a circuit that is currently powered down (or powered off). In some embodiments, a given circuit may support more than one power state (e.g., voltage and frequency combinations). Powering up may refer to establishing any of the power states supported by the circuit. Powering up may also be referred to as powering on. The term “power down” or “power off” may refer to reducing the power supply voltage magnitude to zero volts. The audio codec16may be a general coder/decoder of audio data. The codec may include analog to digital converters configured to convert the signals received from the microphones26A-26B into digital samples that may be transmitted to the SOC10. The codec may include digital to analog converters configured to receive digital audio data from the SOC10and to convert the digital audio data to an analog signal to be played on the speakers. In an embodiment, the audio codec16may support one or more low power modes which may be used during times that the device5is idle. For example, the audio codec16may reduce the number of microphones that are open (or “on”), and may turn off the speakers. In some embodiments, the audio sample rate may be decreased in the low power mode. The CPU complex14may include one or more processors that serve as the CPU of the SOC10. The CPU of the system includes the processor(s) that execute the main control software of the system, such as an operating system. Generally, software executed by the CPU during use may control the other components of the device5/SOC10to realize the desired functionality of the device5. The CPU processors may also execute other software, such as application programs.
The application programs may provide user functionality, and may rely on the operating system for lower level device control. Accordingly, the CPU processors may also be referred to as application processors. The CPU complex may further include other hardware such as a level 2 (L2) cache and/or an interface to the other components of the system (e.g., an interface to the communication fabric27). The peripherals18A-18B may be any set of additional hardware functionality included in the SOC10. More particularly, the peripheral18A may be an interface unit configured to couple to the audio codec16. Any interface may be used (e.g., the serial peripheral interface (SPI), serial or parallel ports, a proprietary interface for the audio codec16, etc.). The peripheral18B may include video peripherals such as video encoder/decoders, scalers, rotators, blenders, graphics processing units, display controllers, etc. The peripherals may include interface controllers for various interfaces external to the SOC10including interfaces such as Universal Serial Bus (USB), peripheral component interconnect (PCI) including PCI Express (PCIe), serial and parallel ports, etc. The peripherals may include networking peripherals such as media access controllers (MACs). Any set of hardware may be included. The memory controller22may generally include the circuitry for receiving memory requests from the other components of the SOC10and for accessing the memory12to complete the memory requests. The memory controller22may be configured to access any type of memory12. For example, the memory12may be static random access memory (SRAM), dynamic RAM (DRAM) such as synchronous DRAM (SDRAM) including double data rate (DDR, DDR2, DDR3, etc.) DRAM. Low power/mobile versions of the DDR DRAM may be supported (e.g., LPDDR, mDDR, etc.). In some embodiments, the memory12may be packaged separate from the SOC10(e.g., in a single inline memory module (SIMM), a dual inline memory module (DIMM) or one or more DRAM chips mounted to a circuit board to which the SOC10is mounted). In other embodiments, the memory12may be packaged with the SOC10(e.g., in a package-on-package or chip-on-chip configuration). The communication fabric27may be any communication interconnect and protocol for communicating among the components of the SOC10. The communication fabric27may be bus-based, including shared bus configurations, cross bar configurations, and hierarchical buses with bridges. The communication fabric27may also be packet-based, and may be hierarchical with bridges, cross bar, point-to-point, or other interconnects. As mentioned above, the power manager28may manage internal power sequencing within the SOC10. The power manager28may be configured to establish various power/performance states in various components within the SOC10to balance computational demands and power consumption in the device5. The power manager28may be programmable with the desired power/performance states and may manage the power on/off and clock frequency setting of the various components based on the programmed states. The PMU20may generally be responsible for supplying power to the components of the device5, including the SOC10, the audio codec16, the peripherals26A-26D, and the memory12. The PMU20may be coupled to receive voltage magnitude requests from at least some of the components (e.g., the SOC10) and may include voltage regulators configured to supply the requested voltages. 
The SOC10may receive multiple voltages (e.g., a CPU voltage for the CPU complex14, a memory voltage for memory arrays in the SOC10such as caches, an SOC voltage or voltages for other components of the SOC, etc.). The microphones26A-26B may be any device capable of receiving sound and providing an output signal that represents the received sound. In some cases, more than one microphone may be desirable. For example, in a smart phone with video capability, it may be desirable to include a microphone near where the user's mouth would be when making a voice call, as well as one near the video camera for capturing sound from the subject being filmed. Any number of microphones may be included in various embodiments, and any number of the included microphones may be open when the device5is idle. The speakers26C-26D may be any device capable of receiving an input signal and generating sound represented by the signal. In some cases, more than one speaker may be desirable. For example, multiple speakers may permit stereo-type sound effects, and multiple speakers may permit sound production to be optimized based on the orientation of the device. Any number of speakers may be included in various embodiments. It is noted that the number of components of the SOC10(and the number of subcomponents for those shown inFIG.1, such as within the CPU complex14) may vary from embodiment to embodiment. There may be more or fewer of each component/subcomponent than the number shown inFIG.1. Similarly, the type and number of components external to the SOC10but in the device5may be varied, and other components not shown inFIG.1may be included (e.g., a display to provide a visual interface to the user, which may be a touch display, networking components, antennas, radio frequency components such as wifi or cell phone components, etc.). Turning next toFIG.2, a flowchart is shown illustrating operation of one embodiment of the audio filter circuit24and certain other parts of the device5during times that the SOC10(or at least the CPU complex14and the memory controller22) are powered down to conserve power (e.g., when the device5is idle). While the blocks are shown in a particular order for ease of understanding, other orders may be used. Blocks may be performed in parallel by combinatorial logic circuitry in the audio filter circuit24(including the blocks expressly shown in parallel inFIG.2, and possibly other blocks). Blocks, combinations of blocks, and/or the flowchart as a whole may be pipelined over multiple clock cycles. Blocks may be implemented by a processor executing software in some embodiments, or the blocks may be fixed hardware, or any combination thereof. The audio filter circuit24may be configured to implement the operation shown inFIG.2. The audio filter circuit24may receive one or more audio samples from the audio codec16into the sample buffer30(block40) and may compare the samples to the predetermined pattern that is used as the key word/phrase/sound to activate the voice command mode in the device5(block42). If there is not a match (decision block44, “no” leg), the audio filter circuit24may continue receiving samples into the sample buffer30and comparing the samples. The sample buffer30may overwrite the oldest samples with new samples once the sample buffer30is full. That is, a sample buffer30having N entries for samples (where N is a positive integer) may have the most recent N samples at any given point in time.
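Two aspects of the sample buffer30lend themselves to a short sketch: sizing the buffer from the sample rate, the key phrase length, a pre-roll for background-noise estimation, and the delay until the memory controller22is ready; and the overwrite-oldest behavior that keeps the most recent N samples. Every number and name below is an assumed example value, not a figure taken from the description.

    # Minimal sketch: sizing the sample buffer and the overwrite-oldest behavior.
    def sample_buffer_entries(sample_rate_hz: int,
                              key_phrase_s: float,
                              pre_roll_s: float,
                              controller_ready_delay_s: float) -> int:
        """Entries needed to hold the key phrase, some pre-roll, and the wait."""
        seconds_to_hold = key_phrase_s + pre_roll_s + controller_ready_delay_s
        return round(sample_rate_hz * seconds_to_hold)

    # e.g., 16 kHz audio, a 0.6 s key phrase, 0.2 s pre-roll, 0.3 s power-up delay
    print(sample_buffer_entries(16000, 0.6, 0.2, 0.3))  # 17600 entries

    class RingBuffer:
        """N-entry buffer that overwrites the oldest sample once full."""
        def __init__(self, n: int):
            self.entries = [None] * n
            self.write_index = 0
            self.count = 0

        def push(self, sample):
            self.entries[self.write_index] = sample  # overwrite oldest when full
            self.write_index = (self.write_index + 1) % len(self.entries)
            self.count = min(self.count + 1, len(self.entries))

        def most_recent(self):
            """Return the held samples in oldest-to-newest order."""
            if self.count < len(self.entries):
                return self.entries[:self.count]
            return self.entries[self.write_index:] + self.entries[:self.write_index]

    buf = RingBuffer(4)
    for sample in range(7):   # push samples 0..6 into a 4-entry buffer
        buf.push(sample)
    print(buf.most_recent())  # [3, 4, 5, 6]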
Responsive to detecting a match (decision block44, “yes” leg), the audio filter circuit24may be configured to request that the CPU complex14and the memory controller22be powered up (block46). The request may be transmitted to the PMU20, the PMGR28, or a combination of the two depending on the implementation. As mentioned previously, in other embodiments, only the memory controller22may be powered up. Alternatively, the memory controller22may be powered up first, and the CPU complex14may be powered up subsequently. Such a staggered power up may be used in cases in which powering up the memory controller22(and the fabric27) in parallel with the CPU complex14may have the potential to exceed the allowable amount of current during the power up (the so-called “inrush current”). The memory controller22may be powered up, and the memory controller parameters34B from the audio filter circuit24may be restored to the parameters34A in the memory controller22(block48). The parameters may be “restored” if the parameters34B are a shadow of the most recent parameters34A that were in use in the memory controller22(prior to powering down the memory controller22). As mentioned above, in other embodiments, the parameters34B may be a set of conservative “known good” parameters that will successfully permit access to the memory12but may not be optimized for maximum performance. In this case, “restoring” the parameters may refer to establishing the conservative parameters34B as the parameters34A. Subsequently, the memory controller22may be trained to the memory12and the parameters may be modified. The audio filter circuit24may write the matching samples and subsequent samples from the sample buffer30to the memory12through the memory controller22, and may continue writing the samples until operation is terminated by the CPU complex14, in an embodiment (block50). Additionally, the processors in the CPU complex14may boot into the operating system after being powered up and reset (block52). The operating system, executing on the CPU complex14, may process the samples stored in the memory12to verify that the key word/phrase/sound was indeed detected and to determine what the user's request is. The device5may attempt to perform the command/request (block54). Booting the operating system may include testing and programming the various components of the SOC10, and may be a time-consuming task as compared to powering up and restoring the memory controller22. The operating system may be designed to check if the reason for booting is due to detection of the key word/phrase/sound early in the process of booting, and may process at least the samples that represent the key word/phrase/sound to verify the detection. If the operating system determines that the detection by the audio filter circuit24was false, the operating system may cease the booting process and return the device5to an idle state (powering off the CPU complex14and the memory controller22). FIG.3is a flowchart illustrating operation of one embodiment of the operating system to train the memory controller22and to provide shadow memory controller parameters to the audio filter circuit24. While the blocks are shown in a particular order for ease of understanding, other orders may be used. Blocks may be implemented by a processor executing operating system software in some embodiments, as mentioned above.
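Before describing the training flow ofFIG.3, the match-handling sequence of blocks46through54can be summarized in the following sketch: power up the memory controller, restore its parameters from the shadow copy, stream the buffered samples to memory, and let the booted operating system verify the detection before acting. The class and method names are placeholders rather than hardware interfaces.

    # Minimal sketch of the post-detection sequence (blocks46-54).
    class MemoryController:
        def __init__(self):
            self.powered = False
            self.parameters = None

        def power_up_and_restore(self, shadow_parameters):
            self.powered = True
            self.parameters = dict(shadow_parameters)  # shadowed or conservative "known good" values

    class SampleStore(list):
        def write(self, samples):
            if samples:
                self.extend(samples)

    def handle_match(buffered_samples, shadow_parameters, verify_detection):
        controller = MemoryController()
        controller.power_up_and_restore(shadow_parameters)  # block48
        memory = SampleStore()
        memory.write(buffered_samples)                      # block50
        if verify_detection(memory):                        # booted OS double-checks (block54)
            return "process command"
        return "return to idle"

    result = handle_match(buffered_samples=[0.1, 0.2, 0.3],
                          shadow_parameters={"timing": "conservative"},
                          verify_detection=lambda samples: len(samples) > 0)
    print(result)  # process command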
The operating system may activate training in the memory controller22, causing the memory controller22to sync with the memory12and establish a high performance connection to the memory12(block60). Responsive to training completion, the parameters34A may represent the configuration. The operating system may copy the parameters34A to the shadow parameters34B (block62). Alternatively, the shadowing may be implemented in hardware. In yet another embodiment, a different set of parameters may be provided to the shadow parameters34B, to ensure that the memory controller22may operate properly when restored due to detection of the key word/phrase/sound. Turning now toFIG.4, a timing diagram is shown illustrating operation of one embodiment of the device5. Time increases from left to right inFIG.4. At the beginning of the timing diagram, on the left, the device5may be idle and thus the audio filter circuit24may be monitoring the audio samples. Other portions of the SOC10, such as the memory controller22and the CPU complex14, may be powered down. The sentence across the top of the timing diagram may be uttered by the user, and in this example the key phrase may be “Hey Siri.” However, any key word/phrase may be used in various embodiments. As the audio samples generated in response to the microphone are processed by the audio filter24, the audio filter24may detect the key phrase (reference numeral70). Responsive to the detection, the audio filter24may request power up of the memory controller22and the CPU complex14(reference numerals72and74). The audio filter24may restore the memory controller22from the parameters34B, so that the memory controller22may become available to accept write operations. Subsequently, the audio filter24may write the audio samples that matched the pattern, and the subsequent samples (representing “where is the closest pizza restaurant?”), to memory (reference numeral76). Meanwhile, the CPU may power up, reset, and boot the operating system (reference numerals74and78). As illustrated inFIG.4, the booting of the operating system, to the point at which the audio sample processing may begin (reference numeral80), may take longer than the restoration of the memory controller22. The samples that are received and captured by the memory controller, e.g., the word or words immediately following the key word, would not be captured if only the operating system were capturing the words after boot. Thus, continuous speaking by the user may be captured and a more natural (to the user) interface may be available. As mentioned previously, in some embodiments, the CPU may not power up in parallel with the memory controller22. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
11862174
DETAILED DESCRIPTION Automatic speech recognition (ASR) is a field of computer science, artificial intelligence, and linguistics concerned with transforming audio data representing speech into text data representative of that speech. Natural language understanding (NLU) is a field of computer science, artificial intelligence, and linguistics concerned with enabling computers to derive meaning from text data containing natural language. Text-to-speech (TTS) is a field of computer science, artificial intelligence, and linguistics concerned with enabling computers to output synthesized speech. ASR, NLU, and TTS may be used together as part of a speech processing system. Certain systems implement virtual assistants. A user may speak an input to a system and the system may perform an action. For example, the system may output music, images, video, or other content responsive to the user input; may provide an answer to a question asked by the user; may interact with third party systems to cause ride sharing trips to be booked; etc. Such systems may implement one or more speechlets (e.g., skills). Each speechlet may enable the system to perform certain functionality. For example, a weather speechlet may enable a system to provide users with weather information, a music speechlet may enable a system to output music to users, a video speechlet may enable a system to display videos to users, etc. However, conventional systems only process voice commands received by an unlocked device. For example, the device must be unlocked in order for the conventional system to process the voice command. If the device is locked, a user must unlock the device and then repeat the voice command for the conventional system to process the voice command. To improve a user experience and provide additional functionality, systems and methods are disclosed that process voice commands from a locked device. For example, a locked device may store a voice command and automatically send the voice command after the device is unlocked. When the device is locked, the system may generate a prompt requesting that the user unlock the device before the voice command will be processed. For example, the system may generate audio data requesting that the device be unlocked and may generate display data that displays a number keypad or other user interface with which the user may input login information to unlock the device. Once the device is unlocked, the device may automatically send the voice command and the system may process the voice command without the user repeating the voice command. In some examples, the system may process certain voice commands even when the device is locked. In order to identify intents that may be processed even when the device is in the locked state, the system may include a whitelist filter that compares an intent associated with the voice command to whitelisted intents from a whitelist database. For example, an intent may be compared to the whitelist database before being dispatched to (e.g., processed by) a speechlet. If the intent is included in the whitelist database, the system may process the intent as it would normally be processed if the device was unlocked. However, if the intent is not included in the whitelist database, the system may generate the prompt requesting that the user unlock the device before the voice command can be processed. 
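A minimal sketch of the store-and-forward behavior described above follows: a locked device holds a captured voice command and sends it automatically once the device is unlocked. The queue-and-flush approach and the send callback are assumptions about one way to realize the behavior, not the system's actual interface.

    # Minimal sketch: a locked device deferring a voice command until unlock.
    class DeferredCommandQueue:
        def __init__(self, send):
            self.send = send      # e.g., transmit audio data to the server(s)
            self.locked = True
            self.pending = []

        def capture(self, audio_data):
            if self.locked:
                self.pending.append(audio_data)  # hold the command while locked
            else:
                self.send(audio_data)

        def unlock(self):
            self.locked = False
            while self.pending:                  # auto-send without the user repeating it
                self.send(self.pending.pop(0))

    sent = []
    queue = DeferredCommandQueue(send=sent.append)
    queue.capture("play my playlist")   # device is locked; command is stored
    queue.unlock()                      # device unlocked; stored command is sent
    print(sent)                         # ['play my playlist']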
Once the device is unlocked, the device may automatically send the voice command and the system may process the voice command without the user repeating the voice command. Thus, the system may perform certain voice commands even while the device is in the locked state, while other voice commands may be automatically processed after the device is unlocked without the user repeating the voice command. FIG.1illustrates a system configured to process voice commands (e.g., voice inputs) using natural language understanding (NLU) processing. Although the figures and discussion of the present disclosure illustrate certain operational steps of the system in a particular order, the steps described may be performed in a different order (as well as certain steps removed or added) without departing from the intent of the disclosure. A plurality of devices may communicate across one or more networks199. For example,FIG.1illustrates an example of a device110(e.g., a tablet) local to a user5communicating with the server(s)120. The server(s)120may be configured to process voice commands (e.g., voice inputs) received from the device110. For example, the device110may capture input audio data111corresponding to a voice command from the user5(e.g., an utterance) and may send the input audio data111to the server(s)120for processing. The server(s)120may receive the input audio data111, may identify the voice command represented in the input audio data111, may determine one or more action(s) to perform, may perform at least some of the one or more action(s), and/or may send a command to the device110to perform at least some of the one or more action(s). Thus, the server(s)120may identify the voice command and may perform action(s) and/or send a command to the device110to perform action(s) corresponding to the voice command. FIG.1illustrates the server(s)120processing a voice command when an utterance is received from a device110. For example, the server(s)120may generate NLU intent data based on the input audio data111and may perform one or more action(s) based on the NLU intent data. The server(s)120may process the input audio data111and generate output audio data121as a response to the user5. For example, the input audio data111may correspond to a voice command to stream music (e.g., “Alexa, please play electronic dance music”) and the output audio data121may correspond to confirmation that the voice command was received (e.g., “Here is a playlist of electronic dance music.”). In some examples, the server(s)120may only process the NLU intent data from an unlocked device (e.g., the device110is in an unlocked state). However, the system100enables the device110to process voice commands even when the device110is locked (e.g., the device110is in a locked state). To reduce a risk of privacy issues and/or improve a customer experience, the system100may process the utterance differently when the device110is in a locked state. For example, the server(s)120may receive device context data from the device110and may generate state information data (e.g., lockscreen state information) from the device context data, indicating whether the device110is in the locked state or the unlocked state. When the server(s)120determine that the device110is in the unlocked state, the server(s)120may process the NLU intent normally and may send directive(s) to the device110. 
Thus, the server(s)120may process a voice command and determine to perform one or more action(s) and/or send a command to the device110to perform one or more actions corresponding to the voice command. When the device110is in the locked state, however, the server(s)120may generate a prompt requesting that a user unlock the device110before the server(s)120processes the voice command. For example, the server(s)120may generate TTS audio data requesting that the device110be unlocked and may generate display data that displays a number keypad or other user interface with which the user may input login information to unlock the device. Thus, the directive(s) sent to the device110include output data associated with requesting the login information before the server(s)120processes the NLU intent. In some examples, the server(s)120may process certain NLU intents even when the device110is in the locked state. For example, the server(s)120may process NLU intents associated with playing music (e.g., favorable/unfavorable feedback regarding a song, requesting an individual song be played, requesting information about a currently playing song, and/or commands associated with play, stop, pause, shuffle, mute, unmute, volume up, volume down, next, previous, fast forward, rewind, cancel, add to queue, add to playlist, create playlist, etc.), reading a book (e.g., start book, show next chapter, show next page, add bookmark, remove bookmark, rate book, remaining time in audiobook, navigate within book, change speed of audiobook, etc.), with news updates (e.g., sports updates, sports briefing, sports summary, daily briefing, read daily brief, etc.), weather updates (e.g., get weather forecast), cinema showtimes (e.g., what movies are in theaters, requesting movie times for a particular movie, requesting movie times for a particular theater, etc.), general questions (e.g., user asks a question and the server(s)120generate a response, such as “What time is it,” “What day is it,” “Did the Patriots win today,” etc.), local searches (e.g., address/phone number associated with a business, hours of the business, what time the business opens or closes, directions to the business, etc.), flight information (e.g., status, arrival time, and/or departure time of a flight), list generating (e.g., creating or browsing to-do lists), notifications (e.g., creating, browsing, modifying, and/or canceling notifications such as alarms, timers, other notifications, and/or the like), suggestions (e.g., “show me things to try,” “what can I say,” “help me,” “what are examples of . . . ,” etc.). In order to identify the certain NLU intents that may be processed even when the device110is in the locked state, the server(s)120may include a whitelist filter that compares the NLU intent to a list of whitelisted intents from a whitelist database. For example, each of the potential intents listed above may be included in the whitelist database and an incoming NLU intent may be compared to the whitelist database before being sent to one or more speechlet(s). If the NLU intent is included in the whitelist database, the server(s)120may send the NLU intent to the one or more speechlet(s) and process the NLU intent as it would normally be processed if the device110was unlocked. However, if the NLU intent is not included in the whitelist database, the server(s)120may generate the prompt requesting that a user unlock the device110before the server(s)120processes the voice command. 
Thus, the server(s)120may perform certain voice commands even while the device110is in the locked state, while other voice commands result in the server(s)120sending a prompt to unlock the device. As illustrated inFIG.1, the server(s)120may receive (130) input audio data including an utterance and may receive (132) device context data that indicates a state of the device110. For example, the device context data may indicate that the device110is in an unlocked state or a locked state. The server(s)120may perform (134) speech processing on the audio data to determine intent data. For example, the server(s)120may perform automatic speech recognition (ASR) processing on the input audio data111to generate first text data and may perform natural language understanding (NLU) processing on the first text data to determine an intent of the user5. The server(s)120may determine (136) state information data based on the device context data, and may determine (138) that the device110is locked (e.g., in a locked state) based on the state information data. The server(s)120may determine (140) that the intent data is not whitelisted (e.g., included in a whitelist database), may generate (142) output data requesting that the device110be unlocked, and may send (144) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. If the device110is in an unlocked state, the server(s)120may process the intent data as normal. Additionally or alternatively, if the intent data is included in the whitelist database, the server(s)120may process the intent data as normal. However, since the server(s)120determined that the device110is in the locked state and that the intent data is not included in the whitelist database, the server(s)120sends a prompt to the device110indicating that the device110needs to be unlocked to continue processing. As used herein, information about the user5may be stored as user profile data (e.g., user profile). For example, information such as a name, an address, a phone number, user preferences, and/or other information associated with the user5may be stored in the user profile. As used herein, the device110represents any device that is associated with the server(s)120, such as a device that uses the server(s)120to interpret voice commands, perform other functionality, and/or the like. Thus, whileFIG.1illustrates the device110as a tablet, the disclosure is not limited thereto and the device110may be a speech enabled device, a computer, a smartphone, a television, and/or any other device that is associated with the server(s)120and/or an account that is associated with the server(s)120. While not illustrated inFIG.1, there may be additional dialog between the server(s)120and the user5to clarify the voice command. For example, the server(s)120may receive additional input audio data from the device110, perform speech processing to understand the query, update information associated with the voice command (e.g., potential intents, entities, etc.), and/or generate additional output audio data to respond. Thus, whileFIG.1only illustrates a simple interaction between the user5and the server(s)120, the disclosure is not limited thereto. 
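As a rough, hedged sketch of the device-side behavior described above, the example below keeps the captured voice command when the server responds with an unlock prompt and automatically re-sends it once the user unlocks the device, so the command does not have to be repeated aloud. The class name, method names, and response fields are assumptions made for illustration, not the device110's actual software interface.

```python
class LockedDeviceClient:
    def __init__(self, send_to_server):
        self.send_to_server = send_to_server   # callable posting audio data plus device context
        self.pending_audio = None

    def handle_utterance(self, audio: bytes, locked: bool) -> dict:
        context = {"lock_state": "locked" if locked else "unlocked"}
        response = self.send_to_server(audio, context)
        if response.get("directive") == "request_unlock":
            self.pending_audio = audio          # store the voice command locally
        return response

    def on_unlocked(self):
        """Called after the user enters login information; re-send the stored
        voice command so it is processed without being repeated."""
        if self.pending_audio is not None:
            audio, self.pending_audio = self.pending_audio, None
            return self.send_to_server(audio, {"lock_state": "unlocked"})
        return None

# Toy server stub: prompts for unlock while locked, executes otherwise.
client = LockedDeviceClient(
    lambda audio, ctx: {"directive": "request_unlock" if ctx["lock_state"] == "locked" else "execute"}
)
client.handle_utterance(b"play music", locked=True)   # server asks for unlock
print(client.on_unlocked())                           # {'directive': 'execute'}
```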
Instead, the server(s)120may be configured for extended interactions with the user5, generating follow up questions and/or explanations in order to acquire and/or convey as much information as needed to process the voice command. The system may operate using various components as described inFIG.2. The various components may be located on same or different physical devices. Communication between various components may occur directly or across a network(s)199. An audio capture component(s), such as a microphone(s)114or an array of microphones of the device110, captures audio11. The device110processes audio data, representing the audio11, to determine whether speech is detected. The device110may use various techniques to determine whether audio data includes speech. Some embodiments may apply voice activity detection (VAD) techniques. Such techniques may determine whether speech is present in audio data based on various quantitative aspects of the audio data, such as the spectral slope between one or more frames of the audio data; the energy levels of the audio data in one or more spectral bands; the signal-to-noise ratios of the audio data in one or more spectral bands; or other quantitative aspects. In other examples, the device110may implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees. In still other examples, Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM) techniques may be applied to compare the audio data to one or more acoustic models in storage, which acoustic models may include models corresponding to speech, noise (e.g., environmental noise or background noise), or silence. Still other techniques may be used to determine whether speech is present in audio data. Once speech is detected in audio data representing the audio11, the device110may use a wakeword detection component220to perform wakeword detection to determine when a user intends to speak an input to the device110. This process may also be referred to as keyword detection, with a wakeword being a specific example of a keyword. An example wakeword is “Alexa.” Wakeword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, the audio data representing the audio11is analyzed to determine if specific characteristics of the audio data match preconfigured acoustic waveforms, audio signatures, or other data to determine if the audio data “matches” stored audio data corresponding to a wakeword. Thus, the wakeword detection component220may compare audio data to stored models or data to detect a wakeword. One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode audio signals, with wakeword searching conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources. Another approach for wakeword spotting builds HMMs for each wakeword and non-wakeword speech signals, respectively. The non-wakeword speech includes other spoken words, background noise, etc. There can be one or more HMMs built to model the non-wakeword speech characteristics, which are named filler models. Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on wakeword presence. 
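As a small illustration of one of the quantitative aspects mentioned above (energy levels of the audio data), the sketch below flags a frame as speech when its RMS energy exceeds a fixed threshold. Real systems may also use spectral slope, per-band signal-to-noise ratios, trained classifiers, or HMM/GMM acoustic models as described; the 16-bit PCM assumption and the threshold value are illustrative only.

```python
import array
import math
import struct

def frame_energy(frame: bytes) -> float:
    samples = array.array("h", frame)           # interpret the frame as 16-bit signed PCM
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))  # RMS energy

def is_speech(frame: bytes, threshold: float = 500.0) -> bool:
    """Flag a frame as speech when its RMS energy exceeds a fixed threshold."""
    return frame_energy(frame) > threshold

silence = struct.pack("<4h", 0, 0, 0, 0)
loud = struct.pack("<4h", 2000, -1800, 1500, -2200)
print(is_speech(silence), is_speech(loud))      # False True
```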
This approach can be extended to include discriminative information by incorporating a hybrid DNN-HMM decoding framework. In another example, the wakeword detection component220may be built on deep neural network (DNN)/recursive neural network (RNN) structures directly, without HMM being involved. Such an architecture may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for DNN, or using RNN. Follow-on posterior threshold tuning or smoothing is applied for decision making. Other techniques for wakeword detection, such as those known in the art, may also be used. Once the wakeword is detected, the device110may “wake” and begin transmitting audio data211, representing the audio11, to the server(s)120. The audio data211may include data corresponding to the wakeword, or the portion of the audio data211corresponding to the wakeword may be removed by the device110prior to sending the audio data211to the server(s)120. Upon receipt by the server(s)120, the audio data211may be sent to an orchestrator component230. The orchestrator component230may include memory and logic that enables the orchestrator component230to transmit various pieces and forms of data to various components of the system, as well as perform other operations. The orchestrator component230sends the audio data211to an automatic speech recognition (ASR) component250. The ASR component250transcribes the audio data211into text data. The text data output by the ASR component250represents one or more than one (e.g., in the form of an N-best list) hypotheses representing speech represented in the audio data211. The ASR component250interprets the speech in the audio data211based on a similarity between the audio data211and pre-established language models. For example, the ASR component250may compare the audio data211with models for sounds (e.g., subword units, such as phonemes, etc.) and sequences of sounds to identify words that match the sequence of sounds of the speech represented in the audio data211. The ASR component250sends the text data generated thereby to a natural language understanding (NLU) component260, either directly or via the orchestrator component230. The text data sent from the ASR component250to the NLU component260may include a top scoring hypothesis or may include an N-best list including multiple hypotheses. An N-best list may additionally include a respective score associated with each hypothesis represented therein. Each score may indicate a confidence of ASR processing performed to generate the hypothesis with which the score is associated. Alternatively, the device110may send text data213to the server(s)120. Upon receipt by the server(s)120, the text data213may be sent to the orchestrator component230. The orchestrator component230may send the text data213to the NLU component260. The NLU component260attempts to make a semantic interpretation of the phrases or statements represented in the text data input therein. That is, the NLU component260determines one or more meanings associated with the phrases or statements represented in the text data based on words represented in the text data. The NLU component260determines an intent representing an action that a user desires be performed as well as pieces of the input text data that allow a device (e.g., a device110, the server(s)120, etc.) to execute the intent. 
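The posterior thresholding and smoothing mentioned above for DNN/RNN-based wakeword detection can be sketched, under simplifying assumptions, as a moving average over per-frame wakeword posteriors compared against a decision threshold. The per-frame posteriors are assumed to come from some upstream acoustic model, and the window size and threshold are illustrative values only.

```python
from collections import deque

def detect_wakeword(posteriors, window: int = 30, threshold: float = 0.8) -> bool:
    """Return True if the smoothed per-frame wakeword posterior crosses the
    decision threshold at any point in the stream."""
    recent = deque(maxlen=window)
    for p in posteriors:
        recent.append(p)
        if sum(recent) / len(recent) >= threshold:
            return True
    return False

print(detect_wakeword([0.1, 0.2, 0.9, 0.95, 0.97], window=3))  # True
```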
For example, if the text data corresponds to “call John,” the NLU component260may determine an intent that the system establish a two-way communication channel between the device110originating the call and a device of the recipient “John.” The NLU component260outputs NLU results to the orchestrator component230. The NLU results may include a representation of a single intent and corresponding slotted data that may be used by a downstream component to perform the intent. Alternatively, the NLU results data may include multiple NLU hypotheses, with each NLU hypothesis representing an intent and corresponding slotted data. Each NLU hypothesis may be associated with a confidence value representing a confidence of the NLU component260in the processing performed to generate the NLU hypothesis associated with the confidence value. The orchestrator component230may send the NLU results to an associated speechlet component290. If the NLU results include multiple NLU hypotheses, the orchestrator component230may send a portion of the NLU results corresponding to the top scoring NLU hypothesis to a speechlet component290associated with the top scoring NLU hypothesis. A “speechlet” or “speechlet component” may be software running on the server(s)120that is akin to a software application running on a traditional computing device. That is, a speechlet component290may enable the server(s)120to execute specific functionality in order to perform one or more actions (e.g., provide information to a user, display content to a user, output music, or perform some other requested action). The server(s)120may be configured with more than one speechlet component290. For example, a weather speechlet component may enable the server(s)120to provide weather information, a ride sharing speechlet component may enable the server(s)120to schedule a trip with respect to a ride sharing service, a restaurant speechlet component may enable the server(s)120to order a pizza with respect to a restaurant's online ordering system, a communications speechlet component may enable the system to perform messaging or multi-endpoint communications, a device-specific speechlet may enable the system to perform one or more actions specific to the device110, etc. A speechlet component290may operate in conjunction between the server(s)120and other devices such as a device110local to a user in order to complete certain functions. Inputs to a speechlet component290may come from various interactions and input sources. The functionality described herein as a speechlet or speechlet component may be referred to using many different terms, such as an action, bot, app, or the like. A speechlet component290may include hardware, software, firmware, or the like that may be dedicated to the particular speechlet component290or shared among different speechlet components290. A speechlet component290may be part of the server(s)120(as illustrated inFIG.2) or may be located at whole (or in part) with one or more separate servers. Unless expressly stated otherwise, reference to a speechlet, speechlet device, or speechlet component may include a speechlet component operating within the server(s)120(for example as speechlet component290) and/or speechlet component operating within a server(s) separate from the server(s)120. A speechlet component290may be configured to perform one or more actions. 
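One possible way to represent the NLU results handed to the orchestrator component230, and the routing of the top-scoring hypothesis to its speechlet, is sketched below; each hypothesis carries an intent, slotted data, and a confidence value. The field names and the speechlet registry are assumptions for illustration rather than the system's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class NluHypothesis:
    intent: str
    slots: dict = field(default_factory=dict)
    confidence: float = 0.0

def route(hypotheses, speechlet_registry):
    """Send the top-scoring hypothesis to the speechlet registered for its intent."""
    top = max(hypotheses, key=lambda h: h.confidence)
    return speechlet_registry[top.intent](top.slots)

registry = {"GetWeather": lambda slots: f"Weather for {slots.get('city', 'here')}"}
hyps = [NluHypothesis("GetWeather", {"city": "Seattle"}, 0.92)]
print(route(hyps, registry))   # Weather for Seattle
```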
An ability to perform such action(s) may sometimes be referred to as a “skill.” That is, a skill may enable a speechlet component290to execute specific functionality in order to provide data or produce some other output requested by a user. A particular speechlet component290may be configured to execute more than one skill. For example, a weather skill may involve a weather speechlet component providing weather information to the server(s)120, a ride sharing skill may involve a ride sharing speechlet component scheduling a trip with respect to a ride sharing service, an order pizza skill may involve a restaurant speechlet component ordering a pizza with respect to a restaurant's online ordering system, a windows control skill may involve a device-specific speechlet component causing a vehicle to move its windows, etc. A speechlet component290may implement different types of skills. Types of skills include home automation skills (e.g., skills that enable a user to control home devices such as lights, door locks, cameras, thermostats, etc.), entertainment device skills (e.g., skills that enable a user to control entertainment devices such as smart TVs), video skills, flash briefing skills, device-specific skills, as well as custom skills that are not associated with any pre-configured type of skill. In some examples, the system may be configured with different device-specific speechlet components (illustrated as part of the speechlet components290inFIG.2). A device-specific speechlet component may be specific to a vehicle manufacturer, an appliance manufacturer, or some other device manufacturer that does not control or maintain the server(s)120. A user profile may be configured with top-level speechlets. Thus, a user may invoke a top-level speechlet without explicitly referring to the speechlet in the user input. For example, a weather speechlet may be a top-level speechlet. A user may say “Alexa, what is the weather.” In response, the system may call the weather speechlet to provide weather information, even though the user did not explicitly refer to the weather speechlet in the user input. A user profile may also be configured with non-top-level speechlets. Thus, a user may need to explicitly refer to a non-top-level speechlet in a user input in order to cause the system to call the particular non-top-level speechlet to perform an action responsive to the user input. For example, the system may be configured with a top-level weather speechlet and a non-top-level Weather Underground speechlet. To cause the non-top-level Weather Underground speechlet to be called instead of the top-level weather speechlet, a user may need to explicitly refer to the non-top-level Weather Underground speechlet in the user input, for example by saying “Alexa, ask Weather Underground what is the weather for tomorrow.” In certain instances, the server(s)120may receive or determine text data responsive to a user input, when it may be more appropriate for audio to be output to a user. The server(s)120may include a TTS component280that generates audio data (e.g., synthesized speech) from text data using one or more different methods. In one method of synthesis called unit selection, the TTS component280matches text data against a database of recorded speech. The TTS component280selects matching units of recorded speech and concatenates the units together to form audio data. 
In another method of synthesis called parametric synthesis, the TTS component280varies parameters such as frequency, volume, and noise to create audio data including an artificial speech waveform. Parametric synthesis uses a computerized voice generator, sometimes called a vocoder. The server(s)120may include profile storage270. The profile storage270may include a variety of information related to individual users, groups of users, etc. that interact with the system. The profile storage270may include one or more user profiles, with each user profile being associated with a different user identifier. Each user profile may include various user identifying information. Each user profile may also include preferences of the user. Each user profile may also include one or more device identifiers, representing one or more devices of the user. The profile storage270may include one or more group profiles. Each group profile may be associated with a different group identifier. A group profile may be an umbrella profile specific to a group of users. That is, a group profile may be associated with two or more individual user profiles. For example, a group profile may be a household profile that is associated with user profiles associated with multiple users of a single household. A group profile may include preferences shared by all the user profiles associated therewith. Each user profile associated with a single group profile may additionally include preferences specific to the user associated therewith. That is, each user profile may include preferences unique from one or more other user profiles associated with the same group profile. A user profile may be a stand-alone profile or may be associated with a group profile. A user profile may represent speechlet components enabled by the user associated with the user profile. The system may be configured such that certain speechlet components may not be invoked by a user's input unless the user has enabled the speechlet component. The system may automatically enable a device-specific speechlet component with respect to a user profile when the user associates a device, associated with the device-specific speechlet component, with the user's profile. For example, if the user associates a vehicle with their user profile, the system may enable the vehicle manufacturer's speechlet component without a particular user request to do so. The system may hide a device-specific speechlet component from a user until the user has associated a device (associated with the device-specific speechlet component) with their user profile. This is because device-specific speechlet components may be configured to only provide functionality useful to users having devices associated with the device-specific speechlet components. For example, a particular vehicle manufacturer's speechlet component may only provide functionality useful to a user having one or more of the vehicle manufacturer's vehicles. When a user associates a device with their user profile, the user may provide the system with account information (e.g., account number, username, password, etc.). The server(s)120(or components thereof) may use the account information to communicate with a device server(s) associated with the vehicle. 
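A minimal sketch of how the user and group profiles held in the profile storage270 might be modeled is shown below; the field names are assumptions made for illustration, not the actual storage schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    user_id: str
    preferences: dict = field(default_factory=dict)
    device_ids: List[str] = field(default_factory=list)
    enabled_speechlets: List[str] = field(default_factory=list)

@dataclass
class GroupProfile:
    group_id: str
    member_user_ids: List[str] = field(default_factory=list)   # two or more user profiles
    shared_preferences: dict = field(default_factory=dict)     # preferences shared by all members

household = GroupProfile("household-1", member_user_ids=["user-a", "user-b"],
                         shared_preferences={"wake_word": "Alexa"})
print(household)
```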
The server(s)120may be restricted from sending data to or receiving data from a device server(s) until the server(s)120authenticates itself with the device server(s) using the account information and/or a device identifier specific to the device newly associated with the user profile. The profile storage270, or a different storage, may store device profiles. Each device profile may be associated with a different device identifier. Each device profile may represent output capabilities (e.g., audio, video, quality of output, etc.) of the device. Each device profile may also represent a speechlet component identifier specific to a device-specific speechlet component associated with the device. For example, if the device110is a vehicle, the speechlet component identifier may represent a vehicle manufacturer speechlet component associated with the vehicle. For further example, if the device110is an appliance, the speechlet component identifier may represent an appliance manufacturer speechlet component associated with the appliance. The system may be configured to incorporate user permissions and may only perform activities disclosed herein if approved by a user. As such, the systems, devices, components, and techniques described herein would be typically configured to restrict processing where appropriate and only process user information in a manner that ensures compliance with all appropriate laws, regulations, standards, and the like. The system and techniques can be implemented on a geographic basis to ensure compliance with laws in various jurisdictions and entities in which the component(s) of the system(s) and/or user are located. The server(s)120may include a user recognition component295that recognizes one or more users associated with data input to the system. The user recognition component295may take as input the audio data211, text data213, and/or text data output by the ASR component250. The user recognition component295determines scores indicating whether user input originated from a particular user. For example, a first score may indicate a likelihood that the user input originated from a first user, a second score may indicate a likelihood that the user input originated from a second user, etc. The user recognition component295also determines an overall confidence regarding the accuracy of user recognition operations. The user recognition component295may perform user recognition by comparing audio characteristics in the audio data211to stored audio characteristics of users. The user recognition component295may also perform user recognition by comparing biometric data (e.g., fingerprint data, iris data, etc.) received by the system in correlation with the present user input to stored biometric data of users. The user recognition component295may further perform user recognition by comparing image data (e.g., including a representation of at least a feature of a user) received by the system in correlation with the present user input with stored image data including representations of features of different users. The user recognition component295may perform additional user recognition processes, including those known in the art. Output of the user recognition component295may include a single user identifier corresponding to the most likely user that originated the present input. Alternatively, output of the user recognition component295may include an N-best list of user identifiers with respective scores indicating likelihoods of respective users originating the present input. 
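As a loose illustration of the N-best output of the user recognition component295, the sketch below scores stored voice profiles against input audio features and returns user identifiers ranked by score. The cosine similarity over fixed-length feature vectors is an illustrative stand-in; the actual comparison of audio characteristics, biometric data, or image data is not specified here.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def recognize_user(input_features, stored_profiles):
    """Return user identifiers ranked by similarity to the input audio features
    (an N-best list of user identifiers with respective scores)."""
    scored = [(user_id, cosine(input_features, feats)) for user_id, feats in stored_profiles.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

profiles = {"user_a": [0.9, 0.1, 0.3], "user_b": [0.2, 0.8, 0.5]}
print(recognize_user([0.85, 0.15, 0.25], profiles))
```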
The output of the user recognition component295may be used to inform NLU processing as well as processing performed by speechlet components290. FIG.3illustrates how NLU processing is performed on text data. Generally, the NLU component260attempts to make a semantic interpretation of text data input thereto. That is, the NLU component260determines the meaning behind text data based on the individual words and/or phrases represented therein. The NLU component260interprets text data to derive an intent of the user as well as pieces of the text data that allow a device (e.g., the device110, the server(s)120, etc.) to complete that action. For example, if the NLU component260receives text data corresponding to “tell me the weather,” the NLU component260may determine that the user intends the system to output weather information. The NLU component260may process text data including several hypotheses. For example, if the ASR component250outputs text data including an N-best list of ASR hypotheses, the NLU component260may process the text data with respect to all (or a portion of) the ASR hypotheses represented therein. Even though the ASR component250may output an N-best list of ASR hypotheses, the NLU component260may be configured to only process with respect to the top scoring ASR hypothesis in the N-best list. The NLU component260may annotate text data by parsing and/or tagging the text data. For example, for the text data “tell me the weather for Seattle,” the NLU component260may tag “Seattle” as a location for the weather information. The NLU component260may include one or more recognizers363. Each recognizer363may be associated with a different speechlet component290. Each recognizer363may process with respect to text data input to the NLU component260. Each recognizer363may operate in parallel with other recognizers363of the NLU component260. Each recognizer363may include a named entity recognition (NER) component362. The NER component362attempts to identify grammars and lexical information that may be used to construe meaning with respect to text data input therein. The NER component362identifies portions of text data that correspond to a named entity that may be applicable to processing performed by a speechlet component290, associated with the recognizer363implementing the NER component362. The NER component362(or other component of the NLU component260) may also determine whether a word refers to an entity whose identity is not explicitly mentioned in the text data, for example “him,” “her,” “it” or other anaphora, exophora or the like. Each recognizer363, and more specifically each NER component362, may be associated with a particular grammar model and/or database373, a particular set of intents/actions374, and a particular personalized lexicon386. Each gazetteer384may include speechlet-indexed lexical information associated with a particular user and/or device110. For example, a Gazetteer A (384a) includes speechlet-indexed lexical information386aato386an. A user's music speechlet lexical information might include album titles, artist names, and song names, for example, whereas a user's contact list speechlet lexical information might include the names of contacts. Since every user's music collection and contact list is presumably different, this personalized information improves entity resolution. 
An NER component362applies grammar models376and lexical information386associated with the speechlet component290(associated with the recognizer363implementing the NER component362) to determine a mention of one or more entities in text data. In this manner, the NER component362identifies "slots" (corresponding to one or more particular words in text data) that may be needed for later processing. The NER component362may also label each slot with a type (e.g., noun, place, city, artist name, song name, etc.). Each grammar model376includes the names of entities (i.e., nouns) commonly found in speech about the particular speechlet component290to which the grammar model376relates, whereas the lexical information386is personalized to the user and/or the device110from which the user input originated. For example, a grammar model376associated with a shopping speechlet component may include a database of words commonly used when people discuss shopping. A downstream process called named entity resolution (discussed in detail elsewhere herein) actually links a portion of text data to an actual specific entity known to the system. To perform named entity resolution, the NLU component260may utilize gazetteer information (384a-384n) stored in an entity library storage382. The gazetteer information384may be used to match text data with different entities, such as song titles, contact names, etc. Gazetteers384may be linked to users (e.g., a particular gazetteer may be associated with a specific user's music collection), may be linked to certain speechlet components290(e.g., a shopping speechlet component, a music speechlet component, a video speechlet component, a device-specific speechlet component, etc.), or may be organized in a variety of other ways. Each recognizer363may also include an intent classification (IC) component364. An IC component364parses text data to determine an intent(s), associated with the speechlet component290(associated with the recognizer363implementing the IC component364), that potentially represents the user input. An intent represents an action a user desires be performed. An IC component364may communicate with a database374of words linked to intents. For example, a music intent database may link words and phrases such as "quiet," "volume off," and "mute" to a <Mute> intent. An IC component364identifies potential intents by comparing words and phrases in text data to the words and phrases in an intents database374, associated with the speechlet component290that is associated with the recognizer363implementing the IC component364. The intents identifiable by a specific IC component364are linked to speechlet-specific (i.e., the speechlet component290associated with the recognizer363implementing the IC component364) grammar frameworks376with "slots" to be filled. Each slot of a grammar framework376corresponds to a portion of text data that the system believes corresponds to an entity. For example, a grammar framework376corresponding to a <PlayMusic> intent may correspond to text data sentence structures such as "Play {Artist Name}," "Play {Album Name}," "Play {Song name}," "Play {Song name} by {Artist Name}," etc. However, to make resolution more flexible, grammar frameworks376may not be structured as sentences, but rather based on associating slots with grammatical tags. For example, an NER component362may parse text data to identify words as subject, object, verb, preposition, etc. based on grammar rules and/or models prior to recognizing named entities in the text data.
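As a toy illustration of intent classification against a database of words linked to intents, the sketch below matches phrases in the input text against a small phrase-to-intent table. The table contents are made-up examples, not the actual intents database374.

```python
INTENT_KEYWORDS = {
    "<Mute>": {"quiet", "volume off", "mute"},
    "<PlayMusic>": {"play", "listen to"},
}

def classify_intent(text: str) -> str:
    """Return the first intent whose linked words or phrases appear in the text."""
    lowered = text.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in lowered for phrase in phrases):
            return intent
    return "<Unknown>"

print(classify_intent("quiet please"))              # <Mute>
print(classify_intent("play the rolling stones"))   # <PlayMusic>
```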
An IC component364(implemented by the same recognizer363as the NER component362) may use the identified verb to identify an intent. The NER component362may then determine a grammar model376associated with the identified intent. For example, a grammar model376for an intent corresponding to <PlayMusic> may specify a list of slots applicable to play the identified “object” and any object modifier (e.g., a prepositional phrase), such as {Artist Name}, {Album Name}, {Song name}, etc. The NER component362may then search corresponding fields in a lexicon386(associated with the speechlet component290associated with the recognizer363implementing the NER component362), attempting to match words and phrases in text data the NER component362previously tagged as a grammatical object or object modifier with those identified in the lexicon386. An NER component362may perform semantic tagging, which is the labeling of a word or combination of words according to their type/semantic meaning. An NER component362may parse text data using heuristic grammar rules, or a model may be constructed using techniques such as hidden Markov models, maximum entropy models, log linear models, conditional random fields (CRF), and the like. For example, an NER component362implemented by a music speechlet recognizer may parse and tag text data corresponding to “play mother's little helper by the rolling stones” as {Verb}: “Play,” {Object}: “mother's little helper,” {Object Preposition}: “by,” and {Object Modifier}: “the rolling stones.” The NER component362identifies “Play” as a verb based on a word database associated with the music speechlet, which an IC component364(also implemented by the music speechlet recognizer) may determine corresponds to a <PlayMusic> intent. At this stage, no determination has been made as to the meaning of “mother's little helper” and “the rolling stones,” but based on grammar rules and models, the NER component362has determined the text of these phrases relates to the grammatical object (i.e., entity) of the user input represented in the text data. The frameworks linked to the intent are then used to determine what database fields should be searched to determine the meaning of these phrases, such as searching a user's gazetteer384for similarity with the framework slots. For example, a framework for a <PlayMusic> intent might indicate to attempt to resolve the identified object based on {Artist Name}, {Album Name}, and {Song name}, and another framework for the same intent might indicate to attempt to resolve the object modifier based on {Artist Name}, and resolve the object based on {Album Name} and {Song Name} linked to the identified {Artist Name}. If the search of the gazetteer384does not resolve a slot/field using gazetteer information, the NER component362may search a database of generic words associated with the speechlet component290(in the knowledge base372). For example, if the text data includes “play songs by the rolling stones,” after failing to determine an album name or song name called “songs” by “the rolling stones,” the NER component362may search the speechlet vocabulary for the word “songs.” In the alternative, generic words may be checked before the gazetteer information, or both may be tried, potentially producing two different results. An NER component362may tag text data to attribute meaning thereto. 
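A compact, hedged sketch of filling <PlayMusic> framework slots from text, roughly mirroring the "Play {Song name} by {Artist Name}" sentence structure discussed above, is shown below. The regular expression is an assumption standing in for the grammar models and frameworks; it is not how the NER component362 actually parses text data.

```python
import re

PLAY_MUSIC_FRAMEWORK = re.compile(r"^play (?P<song>.+?) by (?P<artist>.+)$", re.IGNORECASE)

def tag_play_music(text: str):
    """Fill SongName and ArtistName slots when the text matches the framework."""
    match = PLAY_MUSIC_FRAMEWORK.match(text.strip())
    if not match:
        return None
    return {
        "intent": "<PlayMusic>",
        "slots": {"SongName": match.group("song"), "ArtistName": match.group("artist")},
    }

print(tag_play_music("play mother's little helper by the rolling stones"))
```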
For example, an NER component362may tag "play mother's little helper by the rolling stones" as: {speechlet} Music, {intent}<PlayMusic>, {artist name} rolling stones, {media type} SONG, and {song title} mother's little helper. For further example, the NER component362may tag "play songs by the rolling stones" as: {speechlet} Music, {intent}<PlayMusic>, {artist name} rolling stones, and {media type} SONG. The NLU component260may generate cross-speechlet N-best list data440, which may include a list of NLU hypotheses output by each recognizer363(as illustrated inFIG.4). A recognizer363may output tagged text data generated by an NER component362and an IC component364operated by the recognizer363, as described above. Each NLU hypothesis including an intent indicator and text/slots called out by the NER component362may be grouped as an NLU hypothesis represented in the cross-speechlet N-best list data440. Each NLU hypothesis may also be associated with one or more respective score(s) for the NLU hypothesis. For example, the cross-speechlet N-best list data440may be represented as, with each line representing a separate NLU hypothesis:
[0.95] Intent: <PlayMusic> ArtistName: Lady Gaga SongName: Poker Face
[0.95] Intent: <PlayVideo> ArtistName: Lady Gaga VideoName: Poker Face
[0.01] Intent: <PlayMusic> ArtistName: Lady Gaga AlbumName: Poker Face
[0.01] Intent: <PlayMusic> SongName: Pokerface
The NLU component260may send the cross-speechlet N-best list data440to a pruning component450. The pruning component450may sort the NLU hypotheses represented in the cross-speechlet N-best list data440according to their respective scores. The pruning component450may then perform score thresholding with respect to the cross-speechlet N-best list data440. For example, the pruning component450may select NLU hypotheses represented in the cross-speechlet N-best list data440associated with confidence scores satisfying (e.g., meeting and/or exceeding) a threshold confidence score. The pruning component450may also or alternatively perform number of NLU hypothesis thresholding. For example, the pruning component450may select a maximum threshold number of top scoring NLU hypotheses. The pruning component450may generate cross-speechlet N-best list data460including the selected NLU hypotheses. The purpose of the pruning component450is to create a reduced list of NLU hypotheses so that downstream, more resource intensive, processes may only operate on the NLU hypotheses that most likely represent the user's intent. The NLU component260may also include a light slot filler component452. The light slot filler component452can take text data from slots represented in the NLU hypotheses output by the pruning component450and alter it to make the text data more easily processed by downstream components. The light slot filler component452may perform low latency operations that do not involve heavy operations such as reference to a knowledge base. The purpose of the light slot filler component452is to replace words with other words or values that may be more easily understood by downstream system components. For example, if an NLU hypothesis includes the word "tomorrow," the light slot filler component452may replace the word "tomorrow" with an actual date for purposes of downstream processing. Similarly, the light slot filler component452may replace the word "CD" with "album" or the words "compact disc." The replaced words are then included in the cross-speechlet N-best list data460.
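A minimal sketch of the pruning step described above, keeping hypotheses that satisfy a confidence threshold and capping the result at a maximum number of top-scoring entries, is shown below. The threshold and cap values are illustrative assumptions.

```python
def prune(hypotheses, min_score: float = 0.5, max_hypotheses: int = 5):
    """Score thresholding followed by top-N truncation of the N-best list."""
    kept = [h for h in hypotheses if h["score"] >= min_score]
    kept.sort(key=lambda h: h["score"], reverse=True)
    return kept[:max_hypotheses]

nbest = [
    {"intent": "<PlayMusic>", "score": 0.95},
    {"intent": "<PlayVideo>", "score": 0.95},
    {"intent": "<PlayMusic>", "score": 0.01},
]
print(prune(nbest))   # the two 0.95 hypotheses survive; the 0.01 hypothesis is dropped
```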
The NLU component260sends the cross-speechlet N-best list data460to an entity resolution component470. The entity resolution component470can apply rules or other instructions to standardize labels or tokens from previous stages into an intent/slot representation. The precise transformation may depend on the speechlet component290. For example, for a travel speechlet component, the entity resolution component470may transform text data corresponding to "Boston airport" to the standard BOS three-letter code referring to the airport. The entity resolution component470can refer to a knowledge base that is used to specifically identify the precise entity referred to in each slot of each NLU hypothesis represented in the cross-speechlet N-best list data460. Specific intent/slot combinations may also be tied to a particular source, which may then be used to resolve the text data. In the example "play songs by the stones," the entity resolution component470may reference a personal music catalog, Amazon Music account, user profile data, or the like. The entity resolution component470may output text data including an altered N-best list that is based on the cross-speechlet N-best list data460, and that includes more detailed information (e.g., entity IDs) about the specific entities mentioned in the slots and/or more detailed slot data that can eventually be used by a speechlet component290. The NLU component260may include multiple entity resolution components470and each entity resolution component470may be specific to one or more speechlet components290. The entity resolution component470may not be successful in resolving every entity and filling every slot represented in the cross-speechlet N-best list data460. This may result in the entity resolution component470outputting incomplete results. The NLU component260may include a ranker component490. The ranker component490may assign a particular confidence score to each NLU hypothesis input therein. The confidence score of an NLU hypothesis may represent a confidence of the system in the NLU processing performed with respect to the NLU hypothesis. The confidence score of a particular NLU hypothesis may be affected by whether the NLU hypothesis has unfilled slots. For example, if an NLU hypothesis associated with a first speechlet component includes slots that are all filled/resolved, that NLU hypothesis may be assigned a higher confidence score than another NLU hypothesis including at least some slots that are unfilled/unresolved by the entity resolution component470. The ranker component490may apply re-scoring, biasing, or other techniques to determine the top scoring NLU hypotheses. To do so, the ranker component490may consider not only the data output by the entity resolution component470, but may also consider other data491. The other data491may include a variety of information. The other data491may include speechlet component290rating or popularity data. For example, if one speechlet component290has a particularly high rating, the ranker component490may increase the score of an NLU hypothesis associated with that speechlet component290. The other data491may also include information about speechlet components290that have been enabled for the user identifier and/or device identifier associated with the current user input. For example, the ranker component490may assign higher scores to NLU hypotheses associated with enabled speechlet components290than NLU hypotheses associated with non-enabled speechlet components290.
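A hedged sketch of the ranker's re-scoring behavior described above is given below: the base NLU confidence is penalized for unfilled slots and boosted when the associated speechlet is enabled for the user. The specific penalty and boost values are assumptions made only for illustration.

```python
def rescore(hypothesis: dict, enabled_speechlets: set) -> float:
    """Adjust a hypothesis score for unfilled slots and speechlet enablement."""
    score = hypothesis["score"]
    if any(value is None for value in hypothesis["slots"].values()):
        score -= 0.25                                  # penalize unfilled/unresolved slots
    if hypothesis["speechlet"] in enabled_speechlets:
        score += 0.1                                   # favor enabled speechlets
    return score

hyp = {"speechlet": "music", "score": 0.75,
       "slots": {"SongName": "Poker Face", "ArtistName": None}}
print(round(rescore(hyp, enabled_speechlets={"music"}), 2))   # 0.6
```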
The other data491may also include data indicating user usage history, such as if the user identifier associated with the current user input is regularly associated with user input that invokes a particular speechlet component290or does so at particular times of day. The other data491may additionally include data indicating date, time, location, weather, type of device110, user identifier, device identifier, context, as well as other information. For example, the ranker component490may consider when any particular speechlet component290is currently active (e.g., music being played, a game being played, etc.) with respect to the user or device associated with the current user input. The other data491may also include device type information. For example, if the device110does not include a display, the ranker component490may decrease the score associated with NLU hypotheses that would result in displayable content being presented to a user. Following ranking by the ranker component490, the NLU component260may output NLU results data485to the orchestrator component230. The NLU results data485may include first NLU results data485aincluding tagged text data associated with a first speechlet component290a, second NLU results data485bincluding tagged text data associated with a second speechlet component290b, etc. The NLU results data485may include the top scoring NLU hypotheses (e.g., in the form of an N-best list) as determined by the ranker component490. Alternatively, the NLU results data485may include the top scoring NLU hypothesis as determined by the ranker component490. Prior to the orchestrator component230sending text data to the NLU component260, the orchestrator component230may determine whether the device110is associated with a device-specific speechlet component290. The orchestrator component230may use the device identifier, received from the device110, to determine device profile data associated with the device110. The orchestrator component230may determine the device profile data represents a speechlet component identifier unique to a device-specific speechlet component associated with the device110. Alternatively, the orchestrator component230may determine the device profile data represents a manufacturer of the device110. The orchestrator component230may then determine whether the system includes a device-specific speechlet component associated with the device manufacturer. If the orchestrator component230determines the device110is associated with a device-specific speechlet component, the orchestrator component230calls the NLU component260twice. The orchestrator component230calls the NLU component260to perform NLU processing on text data (received from the device110, or output by the ASR component250) with respect to various speechlet components of the system, as described above with respect toFIGS.3and4. The orchestrator component230also separately calls the NLU component260to perform NLU processing on the text data specifically with respect to the device-specific speechlet component. The NLU component260may perform the foregoing processing at least partially in parallel, and output NLU results of the respective processing to the orchestrator component230. The orchestrator component230may then rank the received NLU results to determine which speechlet component should be called to execute with respect to the current user input. FIG.5illustrates data stored and associated with user accounts according to embodiments of the present disclosure.
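A loose sketch of the dual NLU invocation described above, in which a general NLU pass and a device-specific NLU pass run at least partially in parallel and their results are then ranked, is shown below. The stub NLU function, its fabricated scores, and the use of a thread pool are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Optional

def nlu_stub(text: str, scope: str) -> dict:
    # Stand-in for the NLU component; scores are fabricated for illustration.
    return {"scope": scope, "intent": "<RollUpWindows>",
            "score": 0.6 if scope == "general" else 0.85}

def orchestrate(text: str, device_specific_speechlet: Optional[str]) -> dict:
    calls = [("general", text)]
    if device_specific_speechlet is not None:
        calls.append((device_specific_speechlet, text))   # second, device-specific NLU call
    with ThreadPoolExecutor() as pool:                    # at least partially in parallel
        results = list(pool.map(lambda call: nlu_stub(call[1], call[0]), calls))
    return max(results, key=lambda r: r["score"])         # rank the results and pick a winner

print(orchestrate("roll up the windows", "vehicle_manufacturer_speechlet"))
```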
The server(s)120may include or refer to data regarding user accounts502(e.g., user profile(s)), shown by the profile storage270illustrated inFIG.5. The profile storage270may be located proximate to server(s)120, or may otherwise be in communication with various components, for example over network(s)199. In an example, the profile storage270is a cloud-based storage. As discussed above, the profile storage270may include a variety of information related to individual users, households, accounts, etc. that interact with the system100. For illustration, as shown inFIG.5, each user profile502may include data such as device type information, device location information, session ID information, and processes performed with respect to each session ID. Each user profile502may also include information about previous usage history (e.g., number of times an application is used), previous commands/intents, temporal information or the like. In addition, a user profile502may store other data as well. In some examples, the profile storage270may include data regarding devices associated with particular individual user accounts502. Such data may include device identifier (ID) and internet protocol (IP) address information for different devices as well as names by which the devices may be referred to by a user. Further qualifiers describing the devices may also be listed along with a description of the type of object of the device. FIG.6illustrates an example of a text-to-speech (TTS) component280generating TTS or synthesized speech according to examples of the present disclosure. The TTS component/processor280includes a TTS front end (TTSFE)652, a speech synthesis engine654, and TTS storage670. The TTSFE652transforms input text data (for example from command processor290) into a symbolic linguistic representation for processing by the speech synthesis engine654. The TTSFE652may also process tags or other data input to the TTS component that indicate how specific words should be pronounced (e.g., an indication that a word is an interjection). The speech synthesis engine654compares the annotated phonetic units models and information stored in the TTS storage670for converting the input text into speech. The TTSFE652and speech synthesis engine654may include their own controller(s)/processor(s) and memory or they may use the controller/processor and memory of the server(s)120, device110, or other device, for example. Similarly, the instructions for operating the TTSFE652and speech synthesis engine654may be located within the TTS component280, within the memory and/or storage of the server(s)120, device110, or within an external device. Text input into a TTS component280may be sent to the TTSFE652for processing. The front-end may include components for performing text normalization, linguistic analysis, and linguistic prosody generation. During text normalization, the TTSFE processes the text input and generates standard text, converting such things as numbers, abbreviations (such as Apt., St., etc.), symbols ($, %, etc.) into the equivalent of written out words. During linguistic analysis the TTSFE652analyzes the language in the normalized text to generate a sequence of phonetic units corresponding to the input text. This process may be referred to as phonetic transcription. Phonetic units include symbolic representations of sound units to be eventually combined and output by the system as speech. Various sound units may be used for dividing text for purposes of speech synthesis. 
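As a tiny illustration of the text normalization step performed by the TTSFE652, the sketch below expands abbreviations and symbols into written-out words. The mapping tables are small, made-up examples rather than the system's actual normalization rules.

```python
ABBREVIATIONS = {"Apt.": "apartment", "St.": "street"}
SYMBOLS = {"%": " percent", "$": " dollars"}

def normalize(text: str) -> str:
    """Replace known abbreviations and symbols with their spoken-word forms."""
    for abbr, expansion in ABBREVIATIONS.items():
        text = text.replace(abbr, expansion)
    for symbol, word in SYMBOLS.items():
        text = text.replace(symbol, word)
    return text

print(normalize("Apt. 3 on Main St. is 50% off"))
# apartment 3 on Main street is 50 percent off
```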
A TTS component280may process speech based on phonemes (individual sounds), half-phonemes, di-phones (the last half of one phoneme coupled with the first half of the adjacent phoneme), bi-phones (two consecutive phonemes), syllables, words, phrases, sentences, or other units. Each word may be mapped to one or more phonetic units. Such mapping may be performed using a language dictionary stored by the system, for example in the TTS storage670. The linguistic analysis performed by the TTSFE652may also identify different grammatical components such as prefixes, suffixes, phrases, punctuation, syntactic boundaries, or the like. Such grammatical components may be used by the TTS component280to craft a natural sounding audio waveform output. The language dictionary may also include letter-to-sound rules and other tools that may be used to pronounce previously unidentified words or letter combinations that may be encountered by the TTS component280. Generally, the more information included in the language dictionary, the higher quality the speech output. Based on the linguistic analysis the TTSFE652may then perform linguistic prosody generation where the phonetic units are annotated with desired prosodic characteristics, also called acoustic features, which indicate how the desired phonetic units are to be pronounced in the eventual output speech. During this stage the TTSFE652may consider and incorporate any prosodic annotations that accompanied the text input to the TTS component280. Such acoustic features may include pitch, energy, duration, and the like. Application of acoustic features may be based on prosodic models available to the TTS component280. Such prosodic models indicate how specific phonetic units are to be pronounced in certain circumstances. A prosodic model may consider, for example, a phoneme's position in a syllable, a syllable's position in a word, a word's position in a sentence or phrase, neighboring phonetic units, etc. As with the language dictionary, prosodic model with more information may result in higher quality speech output than prosodic models with less information. The output of the TTSFE652, referred to as a symbolic linguistic representation, may include a sequence of phonetic units annotated with prosodic characteristics. This symbolic linguistic representation may be sent to a speech synthesis engine654, also known as a synthesizer, for conversion into an audio waveform of speech for output to an audio output device and eventually to a user. The speech synthesis engine654may be configured to convert the input text into high-quality natural-sounding speech in an efficient manner. Such high-quality speech may be configured to sound as much like a human speaker as possible, or may be configured to be understandable to a listener without attempts to mimic a precise human voice. A speech synthesis engine654may perform speech synthesis using one or more different methods. In one method of synthesis called unit selection, described further below, a unit selection engine656matches the symbolic linguistic representation created by the TTSFE652against a database of recorded speech, such as a database of a voice corpus. The unit selection engine656matches the symbolic linguistic representation against spoken audio units in the database. Matching units are selected and concatenated together to form a speech output. 
Each unit includes an audio waveform corresponding with a phonetic unit, such as a short .wav file of the specific sound, along with a description of the various acoustic features associated with the .wav file (such as its pitch, energy, etc.), as well as other information, such as where the phonetic unit appears in a word, sentence, or phrase, the neighboring phonetic units, etc. Using all the information in the unit database, a unit selection engine656may match units to the input text to create a natural sounding waveform. The unit database may include multiple examples of phonetic units to provide the system with many different options for concatenating units into speech. One benefit of unit selection is that, depending on the size of the database, a natural sounding speech output may be generated. As described above, the larger the unit database of the voice corpus, the more likely the system will be able to construct natural sounding speech. In another method of synthesis called parametric synthesis, parameters such as frequency, volume, and noise are varied by a parametric synthesis engine658, digital signal processor, or other audio generation device to create an artificial speech waveform output. Parametric synthesis uses a computerized voice generator, sometimes called a vocoder. Parametric synthesis may use an acoustic model and various statistical techniques to match a symbolic linguistic representation with desired output speech parameters. Parametric synthesis may include the ability to be accurate at high processing speeds, as well as the ability to process speech without large databases associated with unit selection, but also typically produces an output speech quality that may not match that of unit selection. Unit selection and parametric techniques may be performed individually or combined together and/or combined with other synthesis techniques to produce speech audio output. Parametric speech synthesis may be performed as follows. A TTS component280may include an acoustic model, or other models, which may convert a symbolic linguistic representation into a synthetic acoustic waveform of the text input based on audio signal manipulation. The acoustic model includes rules which may be used by the parametric synthesis engine658to assign specific audio waveform parameters to input phonetic units and/or prosodic annotations. The rules may be used to calculate a score representing a likelihood that a particular audio output parameter(s) (such as frequency, volume, etc.) corresponds to the portion of the input symbolic linguistic representation from the TTSFE652. The parametric synthesis engine658may use a number of techniques to match speech to be synthesized with input phonetic units and/or prosodic annotations. One common technique is using Hidden Markov Models (HMMs). HMMs may be used to determine probabilities that audio output should match textual input. HMMs may be used to translate parameters from the linguistic and acoustic space to the parameters to be used by a vocoder (the digital voice encoder) to artificially synthesize the desired speech. Using HMMs, a number of states are presented, in which the states together represent one or more potential acoustic parameters to be output to the vocoder and each state is associated with a model, such as a Gaussian mixture model. Transitions between states may also have an associated probability, representing a likelihood that a current state may be reached from a previous state.
Sounds to be output may be represented as paths between states of the HMM and multiple paths may represent multiple possible audio matches for the same input text. Each portion of text may be represented by multiple potential states corresponding to different known pronunciations of phonemes and their parts (such as the phoneme identity, stress, accent, position, etc.). An initial determination of a probability of a potential phoneme may be associated with one state. As new text is processed by the speech synthesis engine654, the state may change or stay the same, based on the processing of the new text. For example, the pronunciation of a previously processed word might change based on later processed words. A Viterbi algorithm may be used to find the most likely sequence of states based on the processed text. The HMMs may generate speech in parametrized form including parameters such as fundamental frequency (f0), noise envelope, spectral envelope, etc. that are translated by a vocoder into audio segments. The output parameters may be configured for particular vocoders such as a STRAIGHT vocoder, TANDEM-STRAIGHT vocoder, HNM (harmonic plus noise) based vocoders, CELP (code-excited linear prediction) vocoders, GlottHMM vocoders, HSM (harmonic/stochastic model) vocoders, or others. Unit selection speech synthesis may be performed as follows. Unit selection includes a two-step process. First a unit selection engine656determines what speech units to use and then it combines them so that the particular combined units match the desired phonemes and acoustic features and create the desired speech output. Units may be selected based on a cost function which represents how well particular units fit the speech segments to be synthesized. The cost function may represent a combination of different costs representing different aspects of how well a particular speech unit may work for a particular speech segment. For example, a target cost indicates how well a given speech unit matches the features of a desired speech output (e.g., pitch, prosody, etc.). A join cost represents how well a speech unit matches a consecutive speech unit for purposes of concatenating the speech units together in the eventual synthesized speech. The overall cost function is a combination of target cost, join cost, and other costs that may be determined by the unit selection engine656. As part of unit selection, the unit selection engine656chooses the speech unit with the lowest overall combined cost. For example, a speech unit with a very low target cost may not necessarily be selected if its join cost is high. The system may be configured with one or more voice corpuses for unit selection. Each voice corpus may include a speech unit database. The speech unit database may be stored in TTS storage670and/or in another storage component. For example, different unit selection databases may be stored in TTS voice unit storage672. Each speech unit database includes recorded speech utterances with the utterances' corresponding text aligned to the utterances. A speech unit database may include many hours of recorded speech (in the form of audio waveforms, feature vectors, or other formats), which may occupy a significant amount of storage. The unit samples in the speech unit database may be classified in a variety of ways including by phonetic unit (phoneme, diphone, word, etc.), linguistic prosodic label, acoustic feature sequence, speaker identity, etc. 
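The combined cost function for unit selection can be sketched as a small dynamic-programming search, shown below. The candidate units, toy target and join costs, and pitch values are illustrative assumptions; a real unit selection engine656would operate over a recorded voice corpus and a richer set of acoustic features.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Unit:
    phoneme: str
    pitch: float

def target_cost(unit: Unit, desired_phone: str, desired_pitch: float) -> float:
    # How well the candidate unit matches the desired phone and pitch.
    mismatch = 0.0 if unit.phoneme == desired_phone else 10.0
    return mismatch + abs(unit.pitch - desired_pitch) / 10.0

def join_cost(prev: Unit, nxt: Unit) -> float:
    # Penalize pitch discontinuities at the concatenation point.
    return abs(prev.pitch - nxt.pitch) / 20.0

def select_units(targets, candidates) -> List[Unit]:
    """Keep the cheapest path ending in each candidate, then return the overall best."""
    best = [(target_cost(u, *targets[0]), [u]) for u in candidates[0]]
    for t, cands in zip(targets[1:], candidates[1:]):
        new_best = []
        for u in cands:
            cost, path = min(
                ((c + join_cost(p[-1], u) + target_cost(u, *t), p) for c, p in best),
                key=lambda cp: cp[0],
            )
            new_best.append((cost, path + [u]))
        best = new_best
    return min(best, key=lambda cp: cp[0])[1]

targets = [("HH", 120.0), ("AH", 115.0)]               # desired phone and pitch per position
candidates = [[Unit("HH", 118.0), Unit("HH", 150.0)],  # candidate units per position
              [Unit("AH", 116.0), Unit("AA", 114.0)]]
print(select_units(targets, candidates))
```

The search keeps, for each candidate at the current position, the cheapest path ending in that candidate, which mirrors the selection of the speech units with the lowest overall combined cost described above.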
The sample utterances may be used to create mathematical models corresponding to desired audio output for particular speech units. When matching a symbolic linguistic representation, the speech synthesis engine654may attempt to select a unit in the speech unit database that most closely matches the input text (including both phonetic units and prosodic annotations). Generally, the larger the voice corpus/speech unit database, the better the speech synthesis that may be achieved by virtue of the greater number of unit samples that may be selected to form the precise desired speech output. Audio waveforms including the speech output from the TTS component280may be sent to an audio output component, such as a speaker for playback to a user or may be sent for transmission to another device, such as another server(s)120, for further processing or output to a user. Audio waveforms including the speech may be sent in a number of different formats such as a series of feature vectors, uncompressed audio data, or compressed audio data. For example, audio speech output may be encoded and/or compressed by an encoder/decoder (not shown) prior to transmission. The encoder/decoder may be customized for encoding and decoding speech data, such as digitized audio data, feature vectors, etc. The encoder/decoder may also encode non-TTS data of the system, for example using a general encoding scheme such as .zip, etc. A TTS component280may be configured to perform TTS processing in multiple languages. For each language, the TTS component280may include specially configured data, instructions and/or components to synthesize speech in the desired language(s). To improve performance, the TTS component280may revise/update the contents of the TTS storage670based on feedback of the results of TTS processing, thus enabling the TTS component280to improve speech synthesis. Other information may also be stored in the TTS storage670for use in speech synthesis. The contents of the TTS storage670may be prepared for general TTS use or may be customized to include sounds and words that are likely to be used in a particular application. For example, for TTS processing by a global positioning system (GPS) device, the TTS storage670may include customized speech specific to location and navigation. In certain instances the TTS storage670may be customized for an individual user based on his/her individualized desired speech output. For example a user may prefer a speech output voice to be a specific gender, have a specific accent, speak at a specific speed, have a distinct emotive quality (e.g., a happy voice), or other customizable characteristic(s) (such as speaking an interjection in an enthusiastic manner) as explained in other sections herein. The speech synthesis engine654may include specialized databases or models to account for such user preferences. For example, to create the customized speech output of the system, the system may be configured with multiple voice corpuses/unit databases678a-678n, where each unit database is configured with a different “voice” to match desired speech qualities. The TTS component280may select the voice used to synthesize the speech based on the desired speech quality. For example, one voice corpus may be stored to be used to synthesize whispered speech (or speech approximating whispered speech), another may be stored to be used to synthesize excited speech (or speech approximating excited speech), and so on.
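Selecting among the customized voice corpuses678a-678ncan be as simple as a lookup keyed by the desired speech quality, as in the following sketch; the corpus identifiers and quality labels are hypothetical.

```python
# Minimal sketch of choosing a voice corpus (unit database) by desired speech
# quality. The corpus identifiers and labels are illustrative assumptions,
# not the actual unit databases of the system.

VOICE_CORPUSES = {
    "whisper": "corpus_whisper_678a",
    "excited": "corpus_excited_678b",
    "neutral": "corpus_neutral_678c",
}

def select_voice_corpus(desired_quality: str) -> str:
    """Return the unit database matching the desired speech quality, or a default."""
    return VOICE_CORPUSES.get(desired_quality, VOICE_CORPUSES["neutral"])

print(select_voice_corpus("whisper"))
```

A default corpus is returned when no corpus matches the requested quality.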
To create the different voice corpuses, a multitude of TTS training utterances may be spoken by an individual and recorded by the system. The TTS training utterances used to train a TTS voice corpus may be different from the training utterances used to train an ASR system or the models used by the speech quality detector. The audio associated with the TTS training utterances may then be split into small audio segments and stored as part of a voice corpus. The individual speaking the TTS training utterances may speak in different voice qualities to create the customized voice corpuses, for example the individual may whisper the training utterances, say them in an excited voice, and so on. Thus the audio of each customized voice corpus may match the respective desired speech quality. The customized voice corpuses678may then be used during runtime to perform unit selection to synthesize speech having a speech quality corresponding to the input speech quality. FIG.7is a signal flow diagram illustrating an example of processing speech and generating output audio according to embodiments of the present disclosure. A device110receives (702) input audio corresponding to an utterance of a user. The device110generates input audio data corresponding to the received input audio and sends (704) the input audio data to the server(s)120for processing. When the server(s)120receives the first input audio data, the server(s)120performs (706) speech recognition on the first input audio data to generate first input text data. The server(s)120also performs (708) natural language processing on the first input text data to determine an intent of a user command represented in the utterance of the input audio. Based on the intent of the user command, the server(s)120determine (710) an action to perform and perform (712) the action. For example, the server(s)120may determine that the user wants to play music and may identify a music source available to the user from which to stream. However, the disclosure is not limited thereto and the server(s)120may perform any action known to one of skill in the art without departing from the disclosure. After performing the action in step712, the server(s)120may generate (714) output data in response to the first utterance and may perform (716) text-to-speech (TTS) processing on the output data to generate first output audio data. For example, the output data may include text data to be output to a user as synthesized speech and the server(s)120may perform TTS processing to generate the output audio data including the synthesized speech. The server(s)120may send (718) the output audio data to the user device110and the device110may output (720) audio corresponding to the output audio data. Thus, the device110may output the audio to a user5local to the device110. If the user5responds to the audio, the device110may receive second input audio corresponding to a second utterance and repeat the steps listed above. For ease of illustration,FIG.7illustrates a high level signal flow diagram encompassing the overall system for processing speech and generating output audio. However, the server(s)120may perform additional steps to determine an intent corresponding to the speech and generate output audio. In some examples, the server(s)120may determine that there is enough information to process the speech and select an action that corresponds to the speech without further input from the user5.
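The high level signal flow ofFIG.7can be summarized in code as a minimal sketch, with stub functions standing in for the ASR, NLU, speechlet, and TTS components; the stub behavior and function names are illustrative assumptions rather than the actual component interfaces.

```python
# Minimal sketch of the FIG.7 flow: receive input audio data, perform speech
# recognition and natural language processing, perform the resulting action,
# then perform TTS and return output audio data. All helpers below are stubs.

def asr(audio_data: bytes) -> str:
    return "play some music"                           # stub for speech recognition (step 706)

def nlu(text: str) -> dict:
    return {"intent": "PlayMusicIntent", "slots": {}}  # stub for NLU (step 708)

def perform_action(intent: dict) -> str:
    return "Playing music from your library."          # stub for the action (steps 710-714)

def tts(text: str) -> bytes:
    return text.encode("utf-8")                        # stub: real TTS returns synthesized audio

def handle_utterance(input_audio_data: bytes) -> bytes:
    text = asr(input_audio_data)
    intent = nlu(text)
    response_text = perform_action(intent)
    return tts(response_text)                          # step 716, sent back to the device

print(handle_utterance(b"...audio..."))
```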
For example, the server(s)120may generate one or more candidate actions and select one of the actions using the orchestrator230. The server(s)120may determine a confidence score associated with the selected action, which indicates a likelihood that the action corresponds to the speech, and if the confidence score is above a threshold value the server(s)120may dispatch the action to a speechlet290associated with the selected action. Dispatching the action refers to sending an instruction to the speechlet290to execute a command, which may be indicated by a framework having slots/fields that correspond to the selected action. In other examples, the server(s)120may determine that there is not enough information to select an action and may request additional information from the user5. The server(s)120may utilize thresholding to determine whether a specific action is being invoked by the user5or whether there is insufficient information to select an action. For example, if the server(s)120determines one or more intents that may correspond to the speech, but none of the intents are associated with a confidence value meeting or exceeding a threshold value, the server(s)120may request additional information. While the server(s)120may dispatch the selected action despite the confidence score being below the threshold value, a lower confidence score corresponds to an increased likelihood that the selected action is not what the user5intended. Thus, dispatching the selected action may result in performing a command that is different than the user5requested, resulting in a lower user satisfaction value after the command is executed. In order to increase the likelihood that the selected action corresponds to the speech, the server(s)120may generate a prompt requesting additional information and/or clarification from the user5. For example, in response to a request to “book a flight to Portland,” the server(s)120may generate a prompt that solicits the user as to whether Portland corresponds to Portland, Oregon or Portland, Maine (e.g., “Would you like to fly to Portland, Oregon, or to Portland, Maine?”). The solicitation may take the form of text output via a display of a user device or audio output by a speaker of a user device. The solicitation may be output by a device different from the device that received the speech. For example, a first device110amay generate the input audio data but a second device110bmay output the solicitation to the user without departing from the disclosure. Accordingly, if the solicitation to the user is to be audio, the TTS component280may generate output audio data based on the text data of the prompt and the device110may output audio corresponding to the output audio data. In response to the output audio, the user may provide additional information. Thus, the server(s)120may receive second input audio data and perform speech recognition processing and natural language processing on the second input audio data to determine the additional information. If the additional information clarifies the request, the server(s)120may select an action having a confidence score above the threshold value and execute a command. As described above,FIG.7illustrates a high level signal flow diagram encompassing the overall system for processing speech and generating output audio.
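The thresholding and clarification behavior described above might look like the following sketch, where the intent names, confidence scores, and threshold value are illustrative assumptions.

```python
# Minimal sketch of confidence thresholding: dispatch the selected action when
# its confidence score meets the threshold, otherwise generate a prompt that
# solicits clarification from the user. Values below are placeholders.

CONFIDENCE_THRESHOLD = 0.7

def resolve(candidates):
    """candidates: list of (intent_name, confidence_score) pairs."""
    intent, score = max(candidates, key=lambda c: c[1])
    if score >= CONFIDENCE_THRESHOLD:
        return {"dispatch": intent}
    # Not enough information: ask the user to clarify instead of guessing.
    return {"prompt": f"Did you mean '{intent}'? Please clarify your request."}

print(resolve([("BookFlightPortlandOR", 0.55), ("BookFlightPortlandME", 0.52)]))
print(resolve([("GetWeatherIntent", 0.93)]))
```

When no candidate meets the threshold, the sketch returns a prompt instead of dispatching an action, mirroring the solicitation for additional information described above.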
For example,FIG.7illustrates an example of the server(s)120receiving input audio data representing a voice command, processing the input audio data to determine an intent and a corresponding action associated with the voice command, performing the action, and then generating output audio data in response to the voice command. For ease of illustration, the following drawings may not go into detail about how the server(s)120process input audio data and generate output audio data. Instead, the following drawings may omit details in order to illustrate important concepts of the invention. In some examples, a skill may be customized to control which user profile(s) and/or account(s) are given access to the skill. For example, a skill/intent may be customized to only be accessible by friends and family of the skill creator. Additionally or alternatively, a business may customize the business enterprise skill to only be accessible by employees of the business. To restrict access, the server(s)120may perform some form of filtering to identify whether a particular user profile and/or account is permitted to access the skill. For example, the server(s)120may determine that a voice command is invoking the skill, determine that a corresponding user profile is not permitted access to the skill, and explicitly deny access to the skill. Additionally or alternatively, the server(s)120may implicitly restrict access by ignoring potential intents associated with the skill. For example, the server(s)120may determine a plurality of potential intents associated with the voice command, determine that a highest confidence score of the plurality of potential intents corresponds to a first potential intent associated with the skill, determine that the user profile does not have access to the skill, and select a second potential intent having a second confidence score lower than the first confidence score. Thus, if the user profile had access to the skill the server(s)120would select the first potential intent, but since the user profile does not have access the server(s)120may select the second potential intent instead. In some examples, instead of restricting access to the skill by filtering potential intents, the server(s)120may enable access to the skill by adding potential intents associated with the skill to a top-level domain. For example, user profile(s) and/or account(s) that are given access to the skill and/or corresponding speechlet may be configured such that the speechlet is included as a top-level speechlet. Thus, a user may invoke a top-level speechlet without explicitly referring to the speechlet. For example, a weather speechlet may be a top-level speechlet and a user may say “Alexa, what is the weather” to invoke the weather speechlet. Additionally or alternatively, the user profile(s) and/or account(s) that are given access to the skill and/or corresponding speechlet may be configured such that the speechlet is associated with the user profile and/or account but included as a non-top-level speechlet. Thus, a user may need to explicitly refer to a non-top-level speechlet in a user input in order to cause the system to call the particular non-top-level speechlet to perform an action responsive to the user input. For example, the user profile may be configured with a top-level weather speechlet and a non-top-level Weather Underground speechlet.
To cause the non-top-level Weather Underground speechlet to be called instead of the top-level weather speechlet, a user may need to explicitly refer to the non-top-level Weather Underground speechlet, for example by saying “Alexa, ask Weather Underground what is the weather for tomorrow.” When user profile(s) and/or account(s) are not given access to the skill and/or corresponding speechlet, the speechlet is not associated with the user profile(s) and/or account(s) and the server(s) do not associate potential intents corresponding to the skill with the user profile. Thus, the user cannot invoke the skill even when explicitly referring to the speechlet. Similarly, a skill and/or intent may be customized to control whether the skill/intent may be invoked when the device110is locked. For example, a skill/intent may be customized to only be accessible (e.g., invoked or processed) when the device110is in an unlocked state, thus restricting access to the skill/intent and protecting a privacy of the user profile. When the device110is locked and the skill/intent is invoked, the server(s)120may determine that a voice command is invoking the skill/intent, determine that a corresponding user profile is not permitted access to the skill/intent when the device110is locked, and may explicitly deny access to the skill/intent (e.g., send a prompt to unlock the device110). Additionally or alternatively, the server(s)120may implicitly restrict access by ignoring potential intents associated with the skill/intent when the device110is locked. For example, the server(s)120may determine a plurality of potential intents associated with the voice command, determine that a highest confidence score of the plurality of potential intents corresponds to a first potential intent associated with the skill, determine that the user profile does not have access to the skill/intent when the device110is locked, and select a second potential intent having a second confidence score lower than the first confidence score. Thus, if the user profile had access to the skill/intent the server(s)120would select the first potential intent (e.g., if the device110was unlocked the server(s)120would select the first potential intent), but since the user profile does not have access when the device110is locked, the server(s)120may select the second potential intent instead. In some examples, instead of restricting access to the skill/intent by filtering potential intents, the server(s)120may enable access to the skill/intent by adding potential intents associated with the skill/intent to a top-level domain when the device110is unlocked and adding potential intents associated with the skill/intent to a non-top-level domain when the device110is locked. For example, user profile(s) and/or account(s) that are given access to the skill and/or corresponding speechlet may be configured such that the speechlet is included as a top-level speechlet when the device110is unlocked. Thus, when the device110is unlocked, a user may invoke a top-level speechlet without explicitly referring to the speechlet. For example, a weather speechlet may be a top-level speechlet and a user may say “Alexa, what is the weather” to invoke the weather speechlet. However, when the device110is locked, the user may invoke the non-top-level speechlet by explicitly referring to the speechlet. FIGS.8A-8Eillustrate examples of processing an utterance received from a locked device according to embodiments of the present disclosure. 
As illustrated inFIG.8A, to process an utterance received from an unlocked device, the server(s)120may receive (810) the utterance (e.g., audio data including the utterance) from the device110, may determine (820) an intent of the utterance, and may send (830) the intent to one or more speechlet(s) for processing. For example, the speechlet(s) may determine an action to perform and the server(s)120may perform the action. In contrast, the server(s)120may process an utterance differently when the utterance is received from a locked device110. In one example illustrated inFIG.8B, the server(s)120may receive (810) the utterance and determine (840) whether the device110is locked. For example, the server(s)120may receive device context data from the device110and may determine state information indicating whether the device110is locked or unlocked, although the disclosure is not limited thereto. If the server(s)120determine (842) that the device110is unlocked, the server(s)120may perform the steps described above and determine (820) the intent of the utterance and send (830) the intent to one or more speechlet(s) for processing. However, if the server(s)120determine (844) that the device110is locked, the server(s)120may send (846) a request for device unlock to the device110. For example, the server(s)120may generate output data corresponding to a request to input login information to unlock the device110, as described in greater detail above. In a second example illustrated inFIG.8C, the server(s)120may receive (810) the utterance and determine (820) an intent of the utterance for every utterance, regardless of whether the device110is locked or unlocked. After determining the intent, the server(s)120may determine (840) whether the device is locked. If the server(s)120determine (842) that the device110is unlocked, the server(s)120may send (830) the previously determined intent to one or more speechlet(s) for processing. However, if the server(s)120determine (844) that the device110is locked, the server(s)120may send (846) a request for device unlock to the device110. Thus, in the example illustrated inFIG.8Bthe server(s)120determine whether the device110is locked as an initial step before determining the intent, whereas in the example illustrated inFIG.8Cthe server(s)120determine the intent as an initial step and a later processing step determines whether the device110is locked or unlocked. Additionally or alternatively, the server(s)120may process certain intents (e.g., perform certain voice commands) even while the device110is locked. For example, the server(s)120may whitelist certain intents that do not access sensitive information on the device110and/or a user profile associated with the device110, enabling a user of the device110to process certain voice commands even when the device110is locked.FIG.8Dillustrates a first example wherein this process (e.g., whitelist filtering) is performed prior to sending the intent to one or more speechlet(s), whileFIG.8Eillustrates a second example wherein the server(s)120send the intent to the one or more speechlet(s) and the speechlet(s) perform this process (e.g., whitelist filtering) prior to processing the intent (e.g., determining an action to perform and/or performing an action based on the intent). As illustrated inFIG.8D, the server(s)120may receive (810) the utterance and determine (820) an intent of the utterance. 
However, the disclosure is not limited thereto and the server(s)120may determine the intent of the utterance at a later step, as discussed above with regard toFIG.8B. After determining the intent, the server(s)120may determine (840) whether the device is locked. If the server(s)120determine (842) that the device110is unlocked, the server(s)120may send (830) the previously determined intent to one or more speechlet(s) for processing. If the server(s)120determine (844) that the device110is locked, the server(s)120may determine (850) whether the intent is whitelisted and, if so, may loop to step830and send the previously determined intent to the one or more speechlet(s) for processing. If the server(s)120determine that the device110is locked in step844and that the intent is not whitelisted in step850, the server(s)120may send (846) a request for device unlock to the device110. As illustrated inFIG.8E, the server(s)120may receive (810) the utterance and determine (820) an intent of the utterance. However, the disclosure is not limited thereto and the server(s)120may determine the intent of the utterance at a later step, as discussed above with regard toFIG.8B. WhereasFIG.8Dillustrates the server(s)120determining whether the device110is locked after determining the intent,FIG.8Eillustrates that the server(s)120may send (860) the intent to one or more speechlet(s) regardless of whether the device is locked or unlocked. Thus, the whitelist filtering is performed by the one or more speechlet(s) instead of a previous component. Using each of the one or more speechlet(s), the server(s)120may determine (862) whether the device is locked. If the server(s)120determine (864) that the device110is unlocked, the server(s)120may process (866) the intent normally using the current speechlet. If the server(s)120determine (868) that the device110is locked, the server(s)120may determine (870) whether the intent is whitelisted for the current speechlet and, if so, may loop to step866to process the intent normally using the current speechlet. If the server(s)120determine that the device110is locked in step868and that the intent is not whitelisted in step870, the server(s)120may send (846) a request for device unlock to the device110. As illustrated inFIGS.8D-8E, the server(s)120may perform whitelist filtering on a device level (e.g., determining whether the device110is locked using a component prior to sending the intent to one or more speechlet(s) for processing) or on a more granular, speechlet-specific level (e.g., sending the intent to one or more speechlet(s) and then determining whether the device110is locked and/or the intent is whitelisted by each of the one or more speechlet(s)). The example illustrated inFIG.8Dis easier to implement and can improve efficiency by reducing redundant processing, whereas the example illustrated inFIG.8Eprovides more customization as a particular intent may be processed by a first speechlet but not by a second speechlet. For example, processing the intent by a weather speechlet may not result in privacy concerns, whereas processing the intent by a banking speechlet may result in privacy concerns. Thus, by determining whether to process the intent individually for each speechlet, the example illustrated inFIG.8Eenables the server(s)120to perform additional functionality compared to the example illustrated inFIG.8Dwithout sacrificing privacy protection.
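The two placements of whitelist filtering can be contrasted in a short sketch: a single check before dispatch (as inFIG.8D) versus a per-speechlet check after dispatch (as inFIG.8E). The whitelists, intent names, and speechlet names below are hypothetical.

```python
# Minimal sketch contrasting device-level whitelist filtering (check once
# before dispatch) with speechlet-level filtering (each speechlet applies its
# own whitelist). Whitelists and names are illustrative placeholders.

GLOBAL_WHITELIST = {"GetWeatherIntent", "WhatTimeIsItIntent"}
SPEECHLET_WHITELISTS = {"weather": {"GetWeatherIntent"}, "banking": set()}

def handle_device_level(intent: str, device_locked: bool) -> str:
    # FIG.8D style: filter before sending the intent to any speechlet.
    if device_locked and intent not in GLOBAL_WHITELIST:
        return "request device unlock"
    return f"dispatch {intent} to speechlet(s)"

def handle_speechlet_level(intent: str, device_locked: bool, speechlet: str) -> str:
    # FIG.8E style: the intent is dispatched, and each speechlet checks its own list.
    if device_locked and intent not in SPEECHLET_WHITELISTS.get(speechlet, set()):
        return "request device unlock"
    return f"{speechlet} speechlet processes {intent}"

print(handle_device_level("GetBankBalanceIntent", device_locked=True))
print(handle_speechlet_level("GetWeatherIntent", device_locked=True, speechlet="weather"))
```

The per-speechlet placement lets a banking speechlet refuse an intent on a locked device even when a weather speechlet would accept the same intent.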
FIGS.9A-9Dillustrate example component diagrams for a server processing an utterance received from a locked device according to embodiments of the present disclosure. To clarify the different components/steps involved with processing an utterance received from a locked device,FIG.9Aillustrates an example component diagram for the server(s)120processing an utterance received from an unlocked device. As illustrated inFIG.9A, the device110may send an utterance (e.g., audio data corresponding to a voice command) to the server(s)120(e.g., step1). The server(s)120may receive the utterance at a gatekeeper910and the gatekeeper910may send the utterance, along with device context data, to the orchestrator230(e.g., step2). The orchestrator230may send the audio data to the automatic speech recognition (ASR) component250and may receive text data associated with the audio data from the ASR component250(e.g., step3). The orchestrator230may send the text data to the natural language understanding (NLU) component260and may receive a list of n best intents from the NLU component260(e.g., step4). The orchestrator may then send a speechlet request, which includes the NLU intent data (e.g., n best intents, top rated intent, and/or combination thereof) and the device context data, to a remote application engine (RAE)920for further processing (e.g., step5). In some examples, the orchestrator230may perform additional processing to determine the most relevant intent and therefore the NLU intent data may correspond to a single NLU intent. For example, the orchestrator230may send the list of n best intents to another component (not illustrated) that selects the most relevant intent to be included in the speechlet request. The most relevant intent may be selected using rule-based techniques (e.g., a rule may indicate that a certain keyword is associated with a certain intent, so whenever the keyword is detected the rule is applied and the intent selected), based on a confidence score (e.g., when no rule applies, the intent having the highest confidence score may be selected), and/or the like. However, the disclosure is not limited thereto and the NLU intent data may include the n best intents without departing from the disclosure. The RAE920acts as an interface between the orchestrator230and the speechlet(s)290. Thus, the RAE920may perform various functions associated with the speechlet request, such as preparing exchanges between the orchestrator230and the speechlet(s)290, modifying an envelope associated with the speechlet request, dispatching the speechlet request to one or more speechlet(s)290, and/or the like. For example, the RAE920may include a first component (e.g., Speechlet Request Envelope Handler) that formats the speechlet request (e.g., wraps the request and response exceptions to the speechlet) and a second component (e.g., Speechlet Dispatcher Handler) that may invoke the speechlets and/or perform dispatching, although the disclosure is not limited thereto. The RAE920may invoke one or more speechlet(s)290(e.g., first speechlet290a, second speechlet290b, etc.) by sending or dispatching the speechlet request to the one or more speechlet(s)290(e.g., step6). For example, the RAE920may determine a speechlet290(e.g., speechlet A290a) or a plurality of speechlets (e.g., speechlet A290a, speechlet B290b, and/or additional speechlets) that are registered to receive the NLU intent. 
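Before the speechlet(s)290execute the request, the speechlet request dispatched by the RAE920might be represented as in the following sketch; the field names, registry contents, and intent names are illustrative assumptions rather than the actual envelope format.

```python
from dataclasses import dataclass
from typing import Dict, List

# Minimal sketch of a speechlet request (NLU intent data plus device context)
# and of dispatching it to the speechlet(s) registered for the top intent.

@dataclass
class SpeechletRequest:
    nlu_intent_data: List[dict]   # n-best intents, or a single selected intent
    device_context: dict          # e.g., lockscreen state, device capabilities

INTENT_REGISTRY: Dict[str, List[str]] = {
    "GetWeatherIntent": ["speechlet_A_290a"],
    "PlayMusicIntent": ["speechlet_A_290a", "speechlet_B_290b"],
}

def dispatch(request: SpeechletRequest) -> List[str]:
    """Return the speechlets registered to receive the top-rated intent."""
    top_intent = request.nlu_intent_data[0]["intent"]
    return INTENT_REGISTRY.get(top_intent, [])

req = SpeechletRequest([{"intent": "PlayMusicIntent", "score": 0.9}], {"locked": False})
print(dispatch(req))
```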
The speechlet(s)290may be associated with skill(s)930(e.g., skills A930aassociated with speechlet A290a, Skills B930bassociated with speechlet B290b, etc.) and may execute the speechlet request (e.g., process the NLU intent data) using these skills930(e.g., step7). For example, speechlet A290amay process the NLU intent data included in the speechlet request using skills A930a, such that speechlet A290adetermines an action to perform and sends the action to interfaces940(e.g., step8). Interfaces940may include one or more components or processes that generate output data to be sent back to the device110. For example, the action received from the speechlet(s)290may indicate that the device110generate audio output including a notification of the action being performed. Thus, the action would include text data and interfaces940would generate the text-to-speech audio data (e.g., synthesized speech) based on the text data. Additionally or alternatively, the action may indicate that the device110display a graphical output on a display, such as a visual notification or other graphic, and interfaces940may generate display data corresponding to the graphical output to be displayed. Thus, interfaces940may include a speech synthesizer, graphical components, and/or other components used to interface with a user of the device110(e.g., components used to generate output data in order to convey information to the user). Interfaces940may send the output data as one or more directive(s) to the gatekeeper910(e.g., step9) and the gatekeeper910may send the one or more directive(s) to the device110(e.g., step10). FIG.9Aillustrates the server(s)120processing NLU intent data when an utterance is received from an unlocked device (e.g., the device110is in an unlocked state). However, the system100enables the device110to process voice commands (e.g., voice inputs) even when the device110is locked (e.g., the device110is in a locked state). To reduce a risk of privacy issues and/or improve a customer experience, the system100may process the utterance differently when the device110is in a locked state, as illustrated inFIG.9B. For example, the server(s)120may receive device context data from the device110and may generate state information data (e.g., lockscreen state information) from the device context data, indicating whether the device110is in the locked state or the unlocked state. When the server(s)120determine that the device110is in the unlocked state, the server(s)120may proceed as described above with regard toFIG.9Ato process the NLU intent and send directive(s) to the device110. When the device110is in the locked state, however, the server(s)120may generate a prompt requesting that a user unlock the device110. For example, interfaces940may generate TTS audio data requesting that the device110be unlocked and generate display data that displays a number keypad or other user interface with which the user may input login information to unlock the device. Thus, the directive(s) sent to the device110include output data associated with requesting the login information before proceeding with processing the NLU intent. As illustrated inFIG.9B, the server(s)120may include a lockscreen service912. The device110may send the utterance (e.g., audio data including a voice command) and device context data to the server(s)120, which may be received by the gatekeeper910(e.g., step1). 
Before processing the audio data, the gatekeeper910may send the device context data to the lockscreen service912(e.g., step2) and the lockscreen service912may determine whether the device110is in the locked state or the unlocked state based on the device context data. For example, the lockscreen service912may generate state information data, as discussed above. If the lockscreen service912determines that the device110is in the unlocked state, the lockscreen service912may send an indication of the unlocked state to the gatekeeper910(e.g., step3a) and the server(s)120may proceed with processing the audio data as described above with regard toFIG.9A. For example, the gatekeeper910may send the utterance, along with device context data, to the orchestrator230(e.g., step4). The orchestrator230may send the audio data to the automatic speech recognition (ASR) component250and may receive text data associated with the audio data from the ASR component250(e.g., step5). The orchestrator230may send the text data to the natural language understanding (NLU) component260and may receive a list of n best intents from the NLU component260(e.g., step6). The orchestrator230may then send a speechlet request, which includes the NLU intent data (e.g., n best intents, top rated intent, and/or combination thereof) and the device context data, to the RAE920for further processing (e.g., step7). The RAE920may invoke one or more speechlet(s)290by sending or dispatching the speechlet request to the one or more speechlet(s)290(e.g., step8). The speechlet(s)290may be associated with skill(s)930and may execute the speechlet request (e.g., process the NLU intent data) using these skills930(e.g., step9). For example, speechlet A290amay process the NLU intent data included in the speechlet request using skills A930a, such that speechlet A290adetermines an action to perform and sends the action to interfaces940(e.g., step10). Interfaces940may receive the action to be performed, may generate output data to be sent to the device110(e.g., TTS audio data and/or display data), and may send the output data as one or more directive(s) to the gatekeeper910(e.g., step11). The gatekeeper910may send the one or more directive(s) to the device110(e.g., step12). However, if the lockscreen service912determines that the device110is in the locked state, the lockscreen service912may send an indication of the locked state to interfaces940and interfaces940may generate directive(s) corresponding to the prompt described above. For example, interfaces940may generate output data, including display data and TTS audio data that includes synthesized speech, which prompts a user of the device110to input login information to unlock the device110. Thus, the server(s)120may not proceed with processing the utterance and instead requests that the device110be unlocked before continuing. WhileFIG.9Billustrates the lockscreen service912sending an indication of the unlocked state directly to the gatekeeper910and the gatekeeper910sending the audio data and device context data to the orchestrator230in response to receiving the unlocked state, the disclosure is not limited thereto. Instead, the gatekeeper910may send the audio data and the device context data to the orchestrator230for every utterance and the lockscreen service912may send the indication of the unlocked state to the interfaces940without departing from the disclosure. 
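The lockscreen state handling described forFIG.9Bcan be sketched as a small service that derives state information data from device context data and stores it where other components can retrieve it; the field name and store used here are assumptions.

```python
# Minimal sketch of deriving lockscreen state information from device context
# data and publishing it for retrieval by other components (e.g., the
# orchestrator, the RAE, or the speechlet(s)).

STATE_STORE = {}  # stands in for the state information held by interfaces

def update_lockscreen_state(device_id: str, device_context: dict) -> str:
    state = "LOCKED" if device_context.get("screen_locked", False) else "UNLOCKED"
    STATE_STORE[device_id] = state
    return state

def get_lockscreen_state(device_id: str) -> str:
    return STATE_STORE.get(device_id, "UNLOCKED")

update_lockscreen_state("device_110", {"screen_locked": True})
print(get_lockscreen_state("device_110"))
```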
Thus, the orchestrator230may determine whether the device110is in the locked state or the unlocked state by retrieving state information data from the interfaces940prior to sending the speechlet request to the RAE920. Additionally or alternatively, the RAE920may determine whether the device110is in the locked state or the unlocked state by retrieving state information data from the interfaces940prior to dispatching the speechlet request to the one or more speechlet(s)290. In some examples, the server(s)120may process certain NLU intents even when the device110is in the locked state. For example, the server(s)120may process NLU intents associated with playing music (e.g., favorable/unfavorable feedback regarding a song, requesting an individual song be played, requesting information about a currently playing song, and/or commands associated with play, stop, pause, shuffle, mute, unmute, volume up, volume down, next, previous, fast forward, rewind, cancel, add to queue, add to playlist, create playlist, etc.), reading a book (e.g., start book, show next chapter, show next page, add bookmark, remove bookmark, rate book, remaining time in audiobook, navigate within book, change speed of audiobook, etc.), with news updates (e.g., sports updates, sports briefing, sports summary, daily briefing, read daily brief, etc.), weather updates (e.g., get weather forecast), cinema showtimes (e.g., what movies are in theaters, requesting movie times for a particular movie, requesting movie times for a particular theater, etc.), general questions (e.g., user asks a question and the server(s)120generate a response, such as “What time is it,” “What day is it,” “Did the Patriots win today,” etc.), local searches (e.g., address/phone number associated with a business, hours of the business, what time the business opens or closes, directions to the business, etc.), flight information (e.g., status, arrival time, and/or departure time of a flight), list generating (e.g., creating or browsing to-do lists), notifications (e.g., creating, browsing, modifying, and/or canceling notifications such as alarms, timers, other notifications, and/or the like), suggestions (e.g., “show me things to try,” “what can I say,” “help me,” “what are examples of . . . ,” etc.). In addition to the lockscreen service912mentioned above with regard toFIG.9B,FIG.9Cillustrates that the server(s)120may include a whitelist filter922and a whitelist database932. WhereasFIG.9Billustrates the server(s)120determining whether to process the utterance based only on whether the device110is in the locked state or the unlocked state,FIG.9Cillustrates the server(s)120filtering by NLU intent data and determining to process a first plurality of intents when the device110is in the locked state. Thus, the server(s)120may perform certain voice commands even while the device110is in the locked state, while other voice commands result in the server(s)120sending a prompt to unlock the device. As illustrated inFIG.9C, the device110may send an utterance (e.g., audio data corresponding to a voice command) and device context data to the server(s)120(e.g., step1). The server(s)120may receive the utterance and the device context data at the gatekeeper910and the gatekeeper910may send the device context data to the lockscreen service912(e.g., step2). 
The lockscreen service912may determine whether the device110is in the locked state or the unlocked state based on the device context data and may send an indication of the lockscreen state (e.g., state information data or lockscreen state information) to interfaces940(e.g., step3). For example, the lockscreen service912may generate state information data based on the device context data as discussed above. Interfaces940may store the indication of the lockscreen state and may make this information available to other components within the server(s)120, such as the orchestrator230, the RAE920, and/or the speechlet(s)290. The gatekeeper910may send the utterance, along with device context data, to the orchestrator230(e.g., step4). The orchestrator230may send the audio data to the automatic speech recognition (ASR) component250and may receive text data associated with the audio data from the ASR component250(e.g., step5). The orchestrator230may send the text data to the natural language understanding (NLU) component260and may receive a list of n best intents from the NLU component260(e.g., step6). The orchestrator230may then send a speechlet request, which includes the NLU intent data (e.g., n best intents, top rated intent, and/or combination thereof) and the device context data, to the RAE920for further processing (e.g., step7). In some examples, the NLU intent data corresponds to a single NLU intent, although the disclosure is not limited thereto and the NLU intent data may include the n best intents without departing from the disclosure. The RAE920may perform various functions associated with the speechlet request, such as modifying an envelope and/or dispatching the speechlet request to one or more speechlet(s)290. For example, the RAE920may include a first component (e.g., Speechlet Request Envelope Handler) that wraps the request and response exceptions to the speechlet and a second component (e.g., Speechlet Dispatcher Handler) that may invoke the speechlets and/or perform dispatching, although the disclosure is not limited thereto. In addition to these other components, in some examples the RAE920may include a whitelist filter922that may filter based on the NLU intent data included in the speechlet request. For example, the RAE920may retrieve state information data from interfaces940and may determine whether the device110is in a locked state. If the RAE920determines that the device110is in an unlocked state, the RAE920may dispatch the speechlet request to the one or more speechlet(s)290regardless of the NLU intent data, as discussed below with regard to step9a. However, if the RAE920determines that the device110is in the locked state, the whitelist filter922may retrieve a list of whitelisted NLU intents from the whitelist database932and may compare the NLU intent data included in the speechlet request with the list of whitelisted NLU intents (e.g., step8). If the NLU intent data is included in the list, the RAE920may invoke one or more speechlet(s)290(e.g., first speechlet290a, second speechlet290b, etc.) by sending or dispatching the speechlet request to the one or more speechlet(s)290(e.g., step9a). For example, an NLU intent may be included in the list and the RAE920may determine a speechlet290(e.g., speechlet A290a) or a plurality of speechlets (e.g., speechlet A290a, speechlet B290b, and/or additional speechlets) that are registered to receive the NLU intent. 
The speechlet(s)290may be associated with skill(s)930and may execute the speechlet request (e.g., process the NLU intent data) using these skills930(e.g., step10). For example, speechlet A290amay process the NLU intent data included in the speechlet request using skills A930a, such that speechlet A290adetermines an action to perform and sends the action to interfaces940(e.g., step11). Interfaces940may receive the action to be performed, may generate output data to be sent to the device110(e.g., TTS audio data and/or display data), and may send the output data as one or more directive(s) to the gatekeeper910(e.g., step12). The gatekeeper910may send the one or more directive(s) to the device110(e.g., step13). If the NLU intent data is not included in the list, the RAE920may send a prompt requesting that the device110be unlocked to interfaces940(e.g., step9b). For example, interfaces940may generate TTS audio data requesting that the device110be unlocked and/or may generate display data that displays a number keypad or other user interface with which the user may input login information to unlock the device. Thus, the directive(s) sent to the device110in response to the prompt include output data indicating that the login information must be entered before the NLU intent will be processed. WhileFIG.9Cillustrates the server(s)120filtering the speechlet requests based only on NLU intent (e.g., the whitelist filter922applies a whitelist filter globally for all speechlet(s)290), the disclosure is not limited thereto and the server(s)120may filter based on NLU intent and speechlet(s)290without departing from the disclosure. For example, an NLU intent may be associated with two or more speechlet(s)290and the steps illustrated inFIG.9Cmay result in the NLU intent being whitelisted or not whitelisted for all of the two or more speechlet(s)290. To provide additional control over which voice commands to process, in some examples the server(s)120may perform the whitelist filtering using individual speechlet(s)290. For example, the NLU intent may be whitelisted for first speechlet A290abut not whitelisted for second speechlet B290b. FIG.9Dillustrates an example of filtering based on NLU intent and speechlet(s)290. As illustrated inFIG.9D, the device110may send an utterance (e.g., audio data corresponding to a voice command) and device context data to the server(s)120(e.g., step1). The server(s)120may receive the utterance and the device context data at the gatekeeper910and the gatekeeper910may send the device context data to the lockscreen service912(e.g., step2). The lockscreen service912may determine whether the device110is in the locked state or the unlocked state based on the device context data and may send an indication of the lockscreen state (e.g., state information data or lockscreen state information) to interfaces940(e.g., step3). For example, the lockscreen service912may generate state information data based on the device context data as discussed above. Interfaces940may store the indication of the lockscreen state and may make this information available to other components within the server(s)120, such as the orchestrator230, the RAE920, and/or the speechlet(s)290. The gatekeeper910may send the utterance, along with device context data, to the orchestrator230(e.g., step4). The orchestrator230may send the audio data to the automatic speech recognition (ASR) component250and may receive text data associated with the audio data from the ASR component250(e.g., step5). 
The orchestrator230may send the text data to the natural language understanding (NLU) component260and may receive a list of n best intents from the NLU component260(e.g., step6). The orchestrator230may then send a speechlet request, which includes the NLU intent data (e.g., n best intents, top rated intent, and/or combination thereof) and the device context data, to the RAE920for further processing (e.g., step7). In some examples, the NLU intent data corresponds to a single NLU intent, although the disclosure is not limited thereto and the NLU intent data may include the n best intents without departing from the disclosure. The RAE920may invoke one or more speechlet(s)290(e.g., first speechlet290a, second speechlet290b, etc.) by sending or dispatching the speechlet request to the one or more speechlet(s)290(e.g., step8). For example, an NLU intent may be included in the list and the RAE920may determine a speechlet290(e.g., speechlet A290a) or a plurality of speechlets (e.g., speechlet A290a, speechlet B290b, and/or additional speechlets) that are registered to receive the NLU intent. As illustrated inFIG.9D, the RAE920may dispatch the speechlet request to the one or more speechlet(s)290without performing whitelist filtering. Instead, each individual speechlet290may include a whitelist filter and may perform whitelist filtering based on the NLU intent data included in the speechlet request. For example, the speechlet(s)290may retrieve state information data from interfaces940and may determine whether the device110is in a locked state. If the speechlet(s)290determine that the device110is in an unlocked state, the speechlet(s)290may process the NLU intent normally, as described below with regard to step10. However, if the speechlet(s)290determine that the device110is in a locked state, the whitelist filter for each individual speechlet(s)290may retrieve a list of whitelisted NLU intents from the whitelist database932and may compare the NLU intent data included in the speechlet request with the list of whitelisted NLU intents (e.g., step9). If the device110is in an unlocked state and/or if the NLU intent data is included in the list of whitelisted NLU intents, the speechlet(s)290may execute the speechlet request (e.g., process the NLU intent data) using the skills930(e.g., step10). For example, speechlet A290amay process the NLU intent data included in the speechlet request using skills A930a, such that speechlet A290adetermines an action to perform and sends the action to interfaces940(e.g., step11). Interfaces940may receive the action to be performed, may generate output data to be sent to the device110(e.g., TTS audio data and/or display data), and may send the output data as one or more directive(s) to the gatekeeper910(e.g., step12). The gatekeeper910may send the one or more directive(s) to the device110(e.g., step13). If the device110is in a locked state and the NLU intent data is not included in the list of whitelisted NLU intents, the speechlet(s)290may send a prompt requesting that the device110be unlocked to interfaces940(e.g., step11). For example, interfaces940may generate TTS audio data requesting that the device110be unlocked and/or may generate display data that displays a number keypad or other user interface with which the user may input login information to unlock the device. Thus, the directive(s) sent to the device110in response to the prompt include output data indicating that the login information must be entered before the NLU intent will be processed. 
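A speechlet-side check corresponding toFIG.9Dmight look like the following sketch, in which a speechlet consults the lockscreen state and a list of whitelisted NLU intents before executing the request; the intent names and list contents stand in for the whitelist database932and are illustrative only.

```python
# Minimal sketch of a speechlet-side whitelist check: process the intent when
# the device is unlocked or the intent is whitelisted, otherwise return a
# prompt requesting that the device be unlocked.

WHITELISTED_INTENTS = {"GetWeatherIntent", "WhatTimeIsItIntent", "PauseMusicIntent"}

def speechlet_handle(intent: str, lockscreen_state: str) -> dict:
    if lockscreen_state == "UNLOCKED" or intent in WHITELISTED_INTENTS:
        return {"action": f"execute {intent}"}                   # process the intent normally
    return {"prompt": "Please unlock the device to continue."}   # request device unlock

print(speechlet_handle("GetWeatherIntent", "LOCKED"))
print(speechlet_handle("GetBankBalanceIntent", "LOCKED"))
```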
While the above description refers to the speechlet(s)290as a group, each speechlet may perform whitelist filtering using a specific list of whitelisted NLU intents that corresponds to the speechlet. For example, the first speechlet A290amay compare the NLU intent data to a first list, the second speechlet B290bmay compare the NLU intent data to a second list, and so on. As a result, the whitelist filtering may vary based on the speechlet. For example, the first speechlet A290amay determine that the NLU intent data is included in the first list and may process the NLU intent data normally, whereas the second speechlet B290bmay determine that the NLU intent data is not included in the second list and may send a prompt to the device110requesting that the device110be unlocked. FIGS.10A-10Dillustrate example component diagrams for a device processing a voice command while locked according to embodiments of the present disclosure. To clarify the different components/steps involved with processing an utterance according to embodiments of the present disclosure,FIG.10Aillustrates an example component diagram for the device110processing an utterance in a conventional system. As illustrated inFIG.10A, the device110may capture audio data corresponding to the utterance using a microphone array114and may send the audio data to a wakeword detection component220(e.g., step1). The wakeword detection component220may detect that the wakeword is included in the audio data and may store at least a portion of the audio data corresponding to the utterance in a cache1010(e.g., step2). In addition, the wakeword detection component220and/or the cache1010may send the audio data to the interface1020(e.g., step3) and the interface1020may send the audio data corresponding to the utterance to the server(s)120via the gatekeeper910(e.g., step4). The server(s)120may process the audio data, as described above, and may generate one or more directive(s) corresponding to action(s) that were performed by the server(s)120and/or action(s) to be performed by the device110. The gatekeeper910may send the one or more directive(s) to the interface1020(e.g., step5) and the interface1020may send the one or more directive(s) to a dialog manager1030to be executed (e.g., step6). Based on the directive(s), the dialog manager1030may send output audio data to the loudspeaker(s) (e.g., step7a), display data to a display1040(e.g., step7b), other portions of output data to other components, and/or the like. WhileFIG.10Aillustrates the directive(s) being sent to the dialog manager1030, this is intended for illustrative purposes only and the disclosure is not limited thereto. Instead, the directive(s) may be sent to any component within the device110without departing from the disclosure. Additionally or alternatively, whileFIG.10Aillustrates the dialog manager1030sending the output audio data to the loudspeaker(s)116and sending the display data to the display1040, the disclosure is not limited thereto. Instead, the dialog manager1030may only send the output audio data to the loudspeaker(s)116, may only send the display data to the display1040, and/or may send other portions of the output data to other components of the device110without departing from the disclosure. As described above,FIG.10Aillustrates an example component diagram for the device110processing an utterance in a conventional system. 
In order to distinguish between the device110being in the unlocked state and the locked state, the device110needs to send additional information (e.g., device context data) to the server(s)120to indicate the current state of the device110. Thus,FIG.10Billustrates an example component diagram for the device110processing an utterance in an unlocked state. As illustrated inFIG.10B, the device110may capture audio data corresponding to the utterance using a microphone array114and may send the audio data to a wakeword detection component220(e.g., step1). The wakeword detection component220may detect that the wakeword is included in the audio data and may store at least a portion of the audio data corresponding to the utterance in a cache1010(e.g., step2). The wakeword detection component220and/or the cache1010may send the audio data to the interface1020(e.g., step3). In addition, a lockscreen manager1050may determine device context data and may send the device context data to the interface1020(e.g., step4). Thus, the interface1020may send the device context data, along with the audio data corresponding to the utterance, to the server(s)120via the gatekeeper910(e.g., step5). As the device context data indicates that the device110is in an unlocked state, the server(s)120may process the audio data, as described above, and may generate one or more directive(s) corresponding to action(s) that were performed by the server(s)120and/or action(s) to be performed by the device110. The gatekeeper910may send the one or more directive(s) to the interface1020(e.g., step6) and the interface1020may send the one or more directive(s) to a dialog manager1030to be executed (e.g., step7). Based on the directive(s), the dialog manager1030may send output audio data to the loudspeaker(s) (e.g., step8a), display data to a display1040(e.g., step8b), other portions of output data to other components, and/or the like. WhileFIG.10Billustrates the directive(s) being sent to the dialog manager1030, this is intended for illustrative purposes only and the disclosure is not limited thereto. Instead, the directive(s) may be sent to any component within the device110without departing from the disclosure. Additionally or alternatively, whileFIG.10Billustrates the dialog manager1030sending the output audio data to the loudspeaker(s)116and sending the display data to the display1040, the disclosure is not limited thereto. Instead, the dialog manager1030may only send the output audio data to the loudspeaker(s)116, may only send the display data to the display1040, and/or may send other portions of the output data to other components of the device110without departing from the disclosure. While the description ofFIG.10Brefers to the device110being in an unlocked state, the same steps apply when the device110is in a locked state but the voice command is whitelisted. For example, while the device context data may indicate that the device110is in the locked state, the server(s)120may process the audio data to determine an NLU intent, may determine that the NLU intent data corresponds to a whitelisted intent, and may perform an action based on the NLU intent despite the device110being in the locked state. Thus, no further action is needed by the device110. In contrast,FIG.10Cillustrates an example component diagram for the device110processing an utterance in a locked state (e.g., when the voice command is not whitelisted). 
As illustrated inFIG.10C, the device110may capture audio data corresponding to the utterance using a microphone array114and may send the audio data to a wakeword detection component220(e.g., step1). The wakeword detection component220may detect that the wakeword is included in the audio data and may store at least a portion of the audio data corresponding to the utterance in a cache1010(e.g., step2). The wakeword detection component220and/or the cache1010may send the audio data to the interface1020(e.g., step3). In addition, the lockscreen manager1050may determine first device context data and may send the first device context data to the interface1020(e.g., step4). Thus, the interface1020may send the first device context data, along with the audio data corresponding to the utterance, to the server(s)120via the gatekeeper910(e.g., step5). In some examples, the server(s)120may determine, based on the first device context data, that the device110is in the locked state and may generate a prompt requesting that the device110be unlocked. In other examples, the server(s)120may determine that the device110is in the locked state, may process the audio data to determine an NLU intent, may determine that the NLU intent is not included in the list of whitelisted intents, and may generate a prompt requesting that the device110be unlocked. Thus, the server(s)120may generate one or more directive(s) corresponding to the prompt, the directive(s) including output data (e.g., output audio data, display data, and/or the like) requesting that the device110be unlocked. The gatekeeper910may send the one or more directive(s) corresponding to the prompt to the interface1020(e.g., step6) and the interface1020may send the one or more directive(s) to a dialog manager1030to be executed (e.g., step7). Based on the directive(s), the dialog manager1030may send output audio data to the loudspeaker(s) (e.g., step8a), display data to a display1040(e.g., step8b), other portions of output data to other components, and/or the like. Thus, the device110may output an audio notification and/or display a visual indication indicating that the device110needs to be unlocked to continue. Additionally or alternatively, the device110may display a user interface to input login information, such as a keypad to input a personal identification number (PIN). The device110may receive input using an input device1060(e.g., touchscreen display, physical buttons, etc.) and may send the input data to the lockscreen manager1050(e.g., step9). The lockscreen manager1050may determine that the input data corresponds to the login information required to transition to the unlocked state (e.g., login information required to unlock the device) and may send second device context data to the cache1010(e.g., step10a) and/or the interface1020(e.g., step10b). In response to the second device context data, the cache1010may send the audio data corresponding to the utterance to the interface1020(e.g., step11) and the interface1020may send the second device context data, along with the audio data corresponding to the utterance, to the server(s)120via the gatekeeper910(e.g., step12). WhileFIG.10Cillustrates the lockscreen manager1050sending the second device context data to the cache1010in step10a, the disclosure is not limited thereto. Instead, the lockscreen manager1050may send to the cache1010an indication that the device110is in an unlocked state and/or an instruction to send the audio data to the server(s)120without departing from the disclosure. 
Additionally or alternatively, the lockscreen manager1050may send the second device context data to the interface1020and/or another component and the interface1020and/or the other component may send an indication and/or instruction to the cache1010. The server(s)120may determine, based on the second device context data, that the device110is in the unlocked state and may process the audio data. Thus, the server(s)120may generate one or more directive(s) corresponding to action(s) that were performed by the server(s)120and/or action(s) to be performed by the device110. The gatekeeper910may send the one or more directive(s) to the interface1020(e.g., step13) and the interface1020may send the one or more directive(s) to a dialog manager1030to be executed (e.g., step14). Based on the directive(s), the dialog manager1030may send output audio data to the loudspeaker(s) (e.g., step15a), display data to a display1040(e.g., step15b), other portions of output data to other components, and/or the like. WhileFIG.10Cillustrates the directive(s) being sent to the dialog manager1030, this is intended for illustrative purposes only and the disclosure is not limited thereto. Instead, the directive(s) may be sent to any component within the device110without departing from the disclosure. Additionally or alternatively, whileFIG.10Cillustrates the dialog manager1030sending the output audio data to the loudspeaker(s)116and sending the display data to the display1040, the disclosure is not limited thereto. Instead, the dialog manager1030may only send the output audio data to the loudspeaker(s)116, may only send the display data to the display1040, and/or may send other portions of the output data to other components of the device110without departing from the disclosure. WhileFIG.9Cillustrates an example wherein the device110caches the audio data and resends the audio data to the server(s)120after being unlocked, the disclosure is not limited thereto. For example, instead of the device110caching the audio data, the server(s)120may cache the NLU intent and/or other information associated with the utterance (e.g., speechlet request, etc.). FIG.10Dillustrates an example component diagram for the device110processing an utterance in a locked state (e.g., when the voice command is not whitelisted) when the server(s)120cache the NLU intent. Therefore, the device110does not need to cache the audio data and can instead send an indication that the device110is in an unlocked state to the server(s)120in order for the server(s)120to proceed with processing the voice command. As illustrated inFIG.10D, the device110may capture audio data corresponding to the utterance using a microphone array114and may send the audio data to a wakeword detection component220(e.g., step1). The wakeword detection component220may detect that the wakeword is included in the audio data and may store at least a portion of the audio data corresponding to the utterance in a cache1010(e.g., step2). The wakeword detection component220and/or the cache1010may send the audio data to the interface1020(e.g., step3). In addition, the lockscreen manager1050may determine first device context data and may send the first device context data to the interface1020(e.g., step4). Thus, the interface1020may send the first device context data, along with the audio data corresponding to the utterance, to the server(s)120via the gatekeeper910(e.g., step5). 
In some examples, the server(s)120may determine, based on the first device context data, that the device110is in the locked state and may generate a prompt requesting that the device110be unlocked. In other examples, the server(s)120may determine that the device110is in the locked state, may process the audio data to determine an NLU intent, may determine that the NLU intent is not included in the list of whitelisted intents, and may generate a prompt requesting that the device110be unlocked. Thus, the server(s)120may generate one or more directive(s) corresponding to the prompt, the directive(s) including output data (e.g., output audio data, display data, and/or the like) requesting that the device110be unlocked. The gatekeeper910may send the one or more directive(s) corresponding to the prompt to the interface1020(e.g., step6) and the interface1020may send the one or more directive(s) to a dialog manager1030to be executed (e.g., step7). Based on the directive(s), the dialog manager1030may send output audio data to the loudspeaker(s) (e.g., step8a), display data to a display1040(e.g., step8b), other portions of output data to other components, and/or the like. Thus, the device110may output an audio notification and/or display a visual indication indicating that the device110needs to be unlocked to continue. Additionally or alternatively, the device110may display a user interface to input login information, such as a keypad to input a personal identification number (PIN). The device110may receive input using an input device1060(e.g., touchscreen display, physical buttons, etc.) and may send the input data to the lockscreen manager1050(e.g., step9). The lockscreen manager1050may determine that the input data corresponds to the login information required to transition to the unlocked state (e.g., login information required to unlock the device) and may send second device context data to the interface1020(e.g., step10). The interface1020may send the second device context data to the server(s)120via the gatekeeper910(e.g., step11). While the cache1010may be used to store audio data as it is being captured, in this implementation the device110does not need to send the audio data corresponding to the utterance back to the server(s)120a second time. The server(s)120may determine, based on the second device context data, that the device110is in the unlocked state and may process the audio data. Thus, the server(s)120may generate one or more directive(s) corresponding to action(s) that were performed by the server(s)120and/or action(s) to be performed by the device110. The gatekeeper910may send the one or more directive(s) to the interface1020(e.g., step12) and the interface1020may send the one or more directive(s) to a dialog manager1030to be executed (e.g., step13). Based on the directive(s), the dialog manager1030may send output audio data to the loudspeaker(s) (e.g., step14a), display data to a display1040(e.g., step14b), other portions of output data to other components, and/or the like. WhileFIG.10Dillustrates the directive(s) being sent to the dialog manager1030, this is intended for illustrative purposes only and the disclosure is not limited thereto. Instead, the directive(s) may be sent to any component within the device110without departing from the disclosure. Additionally or alternatively, whileFIG.10Dillustrates the dialog manager1030sending the output audio data to the loudspeaker(s)116and sending the display data to the display1040, the disclosure is not limited thereto. 
Instead, the dialog manager1030may only send the output audio data to the loudspeaker(s)116, may only send the display data to the display1040, and/or may send other portions of the output data to other components of the device110without departing from the disclosure. FIGS.11A-11Bare flowcharts conceptually illustrating example methods for processing an utterance received from a locked device according to embodiments of the present disclosure. As illustrated inFIG.11A, the server(s)120may receive (1110) audio data including an utterance and may receive (1112) device context data that indicates a state of the device110. The server(s)120may determine (1114) state information data based on the device context data and may determine (1116) whether the device110is locked (e.g., in a locked state) based on the state information data. If the device110is not in a locked state (e.g., in an unlocked state), the server(s)120may determine (1118) intent data based on the audio data, send (1120) the intent data to a speechlet (or two or more speechlets), determine (1122) an action to perform, and perform (1124) the action. Thus, the server(s)120may process the intent data when the device110is in an unlocked state. If the device110is in a locked state, the server(s)120may generate (1126) output data requesting that the device110be unlocked and may send (1128) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. WhileFIG.11Aillustrates the server(s)120determining whether the device110is locked prior to determining the intent data, the disclosure is not limited thereto. Instead, the server(s)120may determine the intent data prior to determining whether the device110is locked without departing from the disclosure. FIG.11Billustrates an example of determining the intent data prior to determining whether the device110is locked. As illustrated inFIG.11B, the server(s)120may receive (1110) audio data including an utterance and may receive (1112) device context data that indicates a state of the device110. The server(s)120may determine (1118) the intent data based on the audio data, may determine (1114) state information data based on the device context data, and may determine (1116) whether the device110is locked (e.g., in a locked state) based on the state information data. If the device110is not in a locked state (e.g., in an unlocked state), the server(s)120may send (1120) the intent data to a speechlet (or two or more speechlets), determine (1122) an action to perform, and perform (1124) the action. Thus, the server(s)120may process the intent data when the device110is in an unlocked state. If the device110is in a locked state, the server(s)120may generate (1126) output data requesting that the device110be unlocked and may send (1128) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. WhileFIGS.11A-11Billustrate the server(s)120not processing the intent data when the device110is in a locked state, the disclosure is not limited thereto. Instead, the server(s)120may perform whitelist filtering to determine whether the intent data is included in a whitelist database. 
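By way of a hypothetical sketch only, the server-side decision described for FIGS.11A-11B, together with the whitelist filtering just mentioned, might be organized as follows; the function names, the stubbed NLU step, the example whitelist contents, and the prompt text are assumptions for illustration:

    # Hypothetical sketch of the server-side flow of FIGS. 11A-11B with optional
    # whitelist filtering; ASR/NLU is stubbed out and all names are illustrative.
    WHITELISTED_INTENTS = {"SetNotificationIntent", "PlayMusicIntent"}  # assumed example list

    def determine_intent(audio_data: bytes) -> str:
        # Stand-in for the ASR/NLU processing described earlier in the disclosure.
        return "PlayMusicIntent"

    def handle_utterance(audio_data: bytes, device_context: dict) -> dict:
        intent = determine_intent(audio_data)               # step 1118 / 1214
        locked = bool(device_context.get("locked", False))  # steps 1114-1116 / 1216-1218
        if not locked or intent in WHITELISTED_INTENTS:     # unlocked, or whitelisted while locked
            return {"directive": "perform_action", "intent": intent}  # steps 1120-1124
        # Locked and not whitelisted: prompt the device to be unlocked.
        return {"directive": "prompt_unlock",
                "output_audio": "Please unlock your device to continue."}  # steps 1126-1128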
When the intent data is included in the whitelist database, the server(s)120may process the intent data even when the device110is in the locked state. FIGS.12A-12Bare flowcharts conceptually illustrating example methods for processing an utterance received from a locked device using whitelist filtering according to embodiments of the present disclosure. As illustrated inFIG.12A, the server(s)120may receive (1210) audio data including an utterance and may receive (1212) device context data that indicates a state of the device110. The server(s)120may determine (1214) intent data based on the audio data, may determine (1216) state information data based on the device context data, and may determine (1218) whether the device110is locked (e.g., in a locked state) based on the state information data. If the device110is not in a locked state (e.g., in an unlocked state), the server(s)120may send (1220) the intent data to a speechlet (or two or more speechlets), determine (1222) an action to perform, and perform (1224) the action. Thus, the server(s)120may process the intent data when the device110is in an unlocked state. If the device110is in a locked state, the server(s)120may determine (1226) whether the intent data is whitelisted (e.g., included in a whitelist database). If the intent data is included in the whitelist database, the server(s)120may loop to step1220and perform steps1220-1224for the intent data. However, if the intent data is not included in the whitelist database, the server(s)120may generate (1228) output data requesting that the device110be unlocked and may send (1230) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. WhileFIG.12Aillustrates an example in which the server(s)120determine whether the intent data is whitelisted for all speechlet(s)290, the disclosure is not limited thereto. Instead, the server(s)120may perform whitelisting individually for each of the speechlet(s)290without departing from the disclosure, as shown inFIG.12B. As illustrated inFIG.12B, the server(s)120may receive (1210) audio data including an utterance and may receive (1212) device context data that indicates a state of the device110. The server(s)120may determine (1214) intent data based on the audio data, may determine (1216) state information data based on the device context data, and may send (1250) the intent data to the speechlet (or two or more speechlets). The server(s)120(e.g., using the speechlet) may determine (1252) whether the device110is locked (e.g., in a locked state) based on the state information data. For example, each individual speechlet that receives the intent data may retrieve the state information data from interfaces940. If the device110is not in a locked state (e.g., in an unlocked state), the server(s)120(e.g., using the speechlet) may determine (1222) an action to perform and perform (1224) the action. Thus, the server(s)120may process the intent data when the device110is in an unlocked state. If the device110is in a locked state, the server(s)120(e.g., using the speechlet) may determine (1254) whether the intent data is whitelisted (e.g., included in a whitelist database). For example, each individual speechlet that receives the intent data may retrieve whitelist data (e.g., a list of whitelisted intents) from the whitelist database932and compare the intent data to the whitelist data. 
If the intent data is included in the whitelist data, the server(s)120may loop to step1222and perform steps1222-1224to determine an action to perform and perform the action. However, if the intent data is not included in the whitelist data, the server(s)120may generate (1228) output data requesting that the device110be unlocked and may send (1230) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. FIGS.13A-13Bare flowcharts conceptually illustrating example methods for caching an intent while processing an utterance received from a locked device according to embodiments of the present disclosure. As illustrated inFIG.13A, the server(s)120may receive (1310) audio data including an utterance and may receive (1312) device context data that indicates a state of the device110. The server(s)120may determine (1314) intent data based on the audio data, may determine (1316) state information data based on the device context data, and may determine (1318) whether the device110is locked (e.g., in a locked state) based on the state information data. If the device110is not in a locked state (e.g., in an unlocked state), the server(s)120may send (1320) the intent data to a speechlet (or two or more speechlets), determine (1322) an action to perform, and perform (1324) the action. Thus, the server(s)120may process the intent data when the device110is in an unlocked state. However, if the device110is in a locked state, the server(s)120may store (1326) the intent data (e.g., in a cache), may generate (1328) output data requesting that the device110be unlocked and may send (1330) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. The server(s)120may determine (1332) whether the device110is unlocked within a desired period of time (e.g., 5 seconds, 10 seconds, etc.). For example, the server(s)120may receive a notification from the device110indicating that the device110is unlocked (e.g., receive device context data indicating that the device110is in an unlocked state) within the period of time. If the server(s)120determine that the device110is unlocked within the period of time, the server(s)120may retrieve (1334) the intent data, loop to step1320, and perform steps1320-1324to process the retrieved intent data. If the server(s)120determine that the device110is not unlocked within the period of time, the server(s)120may delete (1336) the stored intent data and end processing. WhileFIG.13Aillustrates the server(s)120sending a prompt to unlock the device110when the device110is in a locked state,FIG.13Billustrates an example of caching the intent data while performing whitelist filtering to process certain intents even when the device110is in a locked state. As illustrated inFIG.13B, the server(s)120may receive (1310) audio data including an utterance and may receive (1312) device context data that indicates a state of the device110. The server(s)120may determine (1314) intent data based on the audio data, may determine (1316) state information data based on the device context data, and may determine (1318) whether the device110is locked (e.g., in a locked state) based on the state information data. 
If the device110is not in a locked state (e.g., in an unlocked state), the server(s)120may send (1320) the intent data to a speechlet (or two or more speechlets), determine (1322) an action to perform, and perform (1324) the action. Thus, the server(s)120may process the intent data when the device110is in an unlocked state. If the device110is in a locked state, the server(s)120may determine (1350) whether the intent data is whitelisted (e.g., included in a whitelist database). If the intent data is included in the whitelist database, the server(s)120may loop to step1320and perform steps1320-1324for the intent data. However, if the intent data is not included in the whitelist database, the server(s)120may store (1326) the intent data (e.g., in a cache), may generate (1328) output data requesting that the device110be unlocked and may send (1330) the output data to the device110. For example, the output data may include audio data (e.g., synthesized speech) and/or display data indicating that the device110must be unlocked to proceed with the voice command. The server(s)120may determine (1332) whether the device110is unlocked within a desired period of time (e.g., 5 seconds, 10 seconds, etc.). For example, the server(s)120may receive a notification from the device110indicating that the device110is unlocked (e.g., receive device context data indicating that the device110is in an unlocked state) within the period of time. If the server(s)120determine that the device110is unlocked within the period of time, the server(s)120may retrieve (1334) the intent data, loop to step1320, and perform steps1320-1324to process the retrieved intent data. If the server(s)120determine that the device110is not unlocked within the period of time, the server(s)120may delete (1336) the stored intent data and end processing. FIGS.14A-14Care flowcharts conceptually illustrating example methods for unlocking a device to process a voice command according to embodiments of the present disclosure. As illustrated inFIG.14A, the device110may capture (1410) audio data, detect (1412) that a wakeword is represented in the audio data, and may optionally store (1414) the audio data in a cache. The device110may determine (1416) device context data corresponding to a current state of the device110, may send (1418) the audio data to the server(s)120and may send (1420) the device context data to the server(s)120. After the server(s)120processes the audio data, the device110may receive (1422) from the server(s)120a command to perform one or more action(s), may optionally perform (1424) the action(s) (e.g., if an action is local to the device110), and may optionally generate (1426) output audio and/or output display based on the command. For example, the command may correspond to one or more directive(s) received from the server(s)120and the directive(s) may include output audio data (e.g., synthesized speech) and/or display data that indicates the action that was performed. In some examples, the server(s)120may only instruct the device110to generate the output audio and/or generate the output display. Thus, the action(s) to be performed are to generate the output audio based on output audio data and/or to generate the output display based on display data, and the device110does not need to perform step1424as there are no additional action(s) to perform. 
For example, the voice command may correspond to an action performed by the server(s)120, such as getting information about music that is currently playing or streaming music from a new music station, and the device110may generate output audio including a notification of the action that was performed (e.g., “Playing music from custom playlist”). However, the disclosure is not limited thereto and in other examples, the server(s)120may instruct the device110to perform an action without generating output audio and/or output display. For example, the voice command may correspond to increasing or decreasing a volume of music being streamed by the device110, and the device110may increase or decrease the volume (e.g., perform the action in step1424) without an explicit notification of the action that was performed. Additionally or alternatively, the server(s)120may instruct the device110to perform an action as well as generate output audio and/or generate an output display. For example, the voice command may correspond to restarting a song that is currently playing, and the device110may restart the song (e.g., perform the action in step1424) while also generating output audio and/or an output display including a notification of the action that was performed (e.g., “Playing Bohemian Rhapsody from the beginning”). FIG.14Aillustrates an example in which the server(s)120processes the voice command and sends a command to the device110to perform one or more action(s). This may occur when the device110is in an unlocked state and/or when the voice command corresponds to a whitelisted intent, as discussed above. However, in some examples the server(s)120may not process the voice command and may instead send a prompt to the device110to request login information to enter an unlocked state before the voice command may be processed by the server(s)120.FIG.14Billustrates an example in which the device110stores the audio data in a cache and retrieves the audio data after the device110is unlocked. As illustrated inFIG.14B, the device110may capture (1410) audio data, detect (1412) that a wakeword is represented in the audio data, and may optionally store (1414) the audio data in a cache. In the example illustrated inFIG.14B, the device110must store the audio data in the cache in order to later retrieve the audio data in step1462. The device110may determine (1416) device context data corresponding to a current state of the device110, may send (1418) the audio data to the server(s)120and may send (1420) the device context data to the server(s)120. The server(s)120may determine that the device110is in a locked state and/or that an intent associated with the audio data is not included in a whitelist database. Therefore, the server(s)120may send a prompt to the device110to request login information, such as requesting a personal identification number (PIN) or other information that enables the device110to enter the unlocked state. The device110may receive (1450) from the server(s)120a command to request login information and may generate (1452) output audio and/or an output display requesting the login information from a user. For example, the command may correspond to one or more directive(s) received from the server(s)120and the directive(s) may include output audio data (e.g., synthesized speech) and/or display data that indicates that the device110must be unlocked in order for the voice command to be processed. 
The display data may correspond to a user interface that enables the user to input the login information, such as a number keypad (e.g., to enter a PIN) or the like. The device110may determine (1454) whether input data is received within a desired timeframe (e.g., desired period of time) and, if not, may delete (1456) the cached audio data from the cache. If the device110determines that input data is received within the desired time frame, the device110may determine (1458) if the device110is unlocked (e.g., the login information is correct and the device110entered an unlocked state). If the device110determines that it is not unlocked within a desired period of time, the device110may loop to step1456and delete the cached audio data from the cache. If the device110determines that it is unlocked within the desired period of time, the device110may determine (1460) second device context data indicating that the device110is in the unlocked state (e.g., the login attempt was successful), may retrieve (1462) the audio data from the cache, may send (1464) the audio data to the server(s)120again, and may send (1466) the second device context data to the server(s)120to indicate that the device110is in the unlocked state. After the server(s)120processes the audio data a second time, the device110may receive (1422) from the server(s)120a command to perform one or more action(s), may optionally perform (1424) the action(s) (e.g., if an action is local to the device110), and may optionally generate (1426) output audio and/or output display based on the command. For example, the command may correspond to one or more directive(s) received from the server(s)120and the directive(s) may include output audio data (e.g., synthesized speech) and/or display data that indicates the action that was performed. In some examples, the server(s)120may only instruct the device110to generate the output audio and/or generate the output display. Thus, the action(s) to be performed are to generate the output audio based on output audio data and/or to generate the output display based on display data, and the device110does not need to perform step1424as there are no additional action(s) to perform. For example, the voice command may correspond to an action performed by the server(s)120, such as getting information about music that is currently playing or streaming music from a new music station, and the device110may generate output audio including a notification of the action that was performed (e.g., “Playing music from custom playlist”). However, the disclosure is not limited thereto and in other examples, the server(s)120may instruct the device110to perform an action without generating output audio and/or output display. For example, the voice command may correspond to increasing or decreasing a volume of music being streamed by the device110, and the device110may increase or decrease the volume (e.g., perform the action in step1424) without an explicit notification of the action that was performed. Additionally or alternatively, the server(s)120may instruct the device110to perform an action as well as generate output audio and/or generate an output display. For example, the voice command may correspond to restarting a song that is currently playing, and the device110may restart the song (e.g., perform the action in step1424) while also generating output audio and/or an output display including a notification of the action that was performed (e.g., “Playing Bohemian Rhapsody from the beginning”). 
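A simplified, hypothetical sketch of the device-side behavior of FIG.14B (cache the audio data, prompt for login when requested, and resend the cached audio data with updated device context data once unlocked) is given below; the callback names, timeout value, and transport details are assumptions rather than the actual device110implementation:

    # Hypothetical device-side sketch of FIG. 14B; send_to_server and prompt_for_pin
    # stand in for the interface/gatekeeper path and the lockscreen manager.
    import time

    class LockedVoiceDevice:
        def __init__(self, unlock_timeout_s: float = 10.0):
            self.cached_audio = None
            self.unlock_timeout_s = unlock_timeout_s
            self.locked = True

        def on_wakeword(self, audio_data: bytes, send_to_server) -> None:
            self.cached_audio = audio_data                        # step 1414
            send_to_server(audio_data, {"locked": self.locked})   # steps 1418-1420

        def on_unlock_prompt(self, send_to_server, prompt_for_pin) -> None:
            deadline = time.monotonic() + self.unlock_timeout_s   # desired timeframe (step 1454)
            while time.monotonic() < deadline:
                if prompt_for_pin():                              # blocks for login input; True if correct
                    self.locked = False                           # step 1458
                    send_to_server(self.cached_audio, {"locked": False})  # steps 1460-1466
                    return
            self.cached_audio = None                              # step 1456: discard on timeout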
WhileFIG.14Billustrates an example of the device110storing the audio data in a cache on the device110and sending the audio data to the server(s)120a second time after the device110is unlocked, the disclosure is not limited thereto. As described above with regard toFIG.10D, the server(s)120may instead store intent data corresponding to the audio data in a cache on the server(s)120and the device110may only send an indication that the device110is unlocked for the server(s)120to process the intent data. As illustrated inFIG.14C, the steps performed by the device110are identical to those described above with regard toFIG.14B, with the exception that the device110does not need to perform step1414(e.g., storing the audio data in a cache, although the device110may store the audio data in the cache for other reasons), step1456(e.g., delete the stored audio data from the cache), step1462(e.g., retrieve the audio data from the cache), and/or step1464(e.g., send the audio data to the server(s)120again). Thus, the device110may determine (1458) that the login information is correct and that the device110entered the unlocked state, may determine (1460) second device context data indicating that the device110is in the unlocked state (e.g., the login attempt was successful), and may send (1466) the second device context data to the server(s)120to indicate that the device110is in the unlocked state. After the server(s)120processes the audio data a second time, the device110may receive (1422) from the server(s)120a command to perform one or more action(s), may optionally perform (1424) the action(s) (e.g., if an action is local to the device110), and may optionally generate (1426) output audio and/or output display based on the command. For example, the command may correspond to one or more directive(s) received from the server(s)120and the directive(s) may include output audio data (e.g., synthesized speech) and/or display data that indicates the action that was performed. FIGS.15A-15Dillustrate examples of whitelist databases according to embodiments of the present disclosure. As discussed above, a whitelist database includes a list of a plurality of intents that may be processed while the device110is in the locked state.FIG.15Aillustrates an example of a whitelist database1510that provides contextual information for each of the intents. For example, the whitelist database1510includes a column corresponding to a domain, an intent, an action, and example utterance(s). The domain corresponds to a general category associated with a plurality of intents, enabling the system100to group similar intents with a particular category. For example, the whitelist database1510includes a Notification domain (e.g., intents associated with alarms, timers, notifications, etc.), a ToDos domain (e.g., intents associated with creating and modifying lists of things to do or the like), a Music domain (e.g., intents associated with music playback, such as selecting a song/station, pausing or resuming a song, increasing or decreasing volume, skipping to a next or previous song, etc.), a LocalSearch domain (e.g., intents associated with finding information about local businesses, such as hours, phone numbers, addresses, directions, services, etc.), a Global domain (e.g., intents that are general, such as a current time or day, etc.). However, the disclosure is not limited thereto and any number of domains may be included in the whitelist database1510. 
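A whitelist database along the lines of FIG.15A could, for example, be represented as a simple mapping from intent to domain, action, and example utterances; the entries and helper below are hypothetical illustrations, not the actual contents of whitelist database1510:

    # Hypothetical representation of a FIG. 15A-style whitelist database; the
    # specific domains, intents, and utterances are illustrative only.
    from typing import Optional

    WHITELIST_DB = {
        "SetNotificationIntent": {"domain": "Notification", "action": "set alarm",
                                  "examples": ["Set an alarm for 6 PM"]},
        "PlayMusicIntent":       {"domain": "Music", "action": "play station",
                                  "examples": ["Play my custom playlist"]},
    }

    def is_whitelisted(intent: str, domain: Optional[str] = None) -> bool:
        # FIG. 15B-style check (intent only) or FIG. 15C-style check (intent plus domain).
        entry = WHITELIST_DB.get(intent)
        if entry is None:
            return False
        return domain is None or entry["domain"] == domain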
In some examples, the domain may correspond to a particular speechlet290or process running on the server(s)120. For example, a first domain may correspond to a first speechlet290a, such that all intents associated with the first domain are processed by the first speechlet290a. However, the disclosure is not limited thereto and the first domain may correspond to multiple speechlets290(e.g., first speechlet290aand second speechlet290b) without departing from the disclosure. For example, a first intent (e.g., PlayMusicIntent) may be associated with multiple different speechlets290, such that the first intent may be processed using two or more speechlets290(e.g., PlayMusicIntent can play music using either a first music service or a second music service). While not illustrated in the whitelist database1510, some domains may be top-level domains whereas other domains may be non-top-level domains. For example, a top-level domain may be invoked by a voice command without specifying a particular domain, speechlet, process, etc. (e.g., “What is the weather” invokes a top-level weather domain) and/or may be invoked even when the device110is in the locked state. In contrast, a non-top-level domain may be invoked by a voice command that specifies the domain/speechlet/process (e.g., “What is the weather using WeatherApp” invokes a non-top-level domain named WeatherApp) and/or may not be invoked when the device110is in the locked state. The intent column of the whitelist database1510indicates specific intents that are whitelisted (e.g., can be processed while the device110is in the locked state). For example, the whitelist database1510illustrates a list of intents corresponding to each of the domains (e.g., SetNotificationIntent, SilenceNotificationIntent, BrowseNotificationIntent, etc.). However, the disclosure is not limited thereto, and any intent known to one of skill in the art may be included in the whitelist database1510. Additionally or alternatively, while the whitelist database1510illustrates a single intent associated with each entry, the disclosure is not limited thereto and multiple intents may be listed in a single entry. For ease of illustration, the whitelist database1510includes a column indicating action(s) that correspond to the intent as well as example utterance(s) that invoke the intent and/or action(s). For example, a first intent (e.g., SetNotificationIntent) may correspond to a first action (e.g., set an alarm) and may be invoked by a first utterance (e.g., Set an alarm for 6 PM). While the whitelist database1510only illustrates a single example utterance for each intent, the disclosure is not limited thereto and the first intent may be invoked using any number of utterances without departing from the disclosure. For example, the user may say “set an alarm for six tomorrow night,” “set an alarm for six PM,” “set a timer for twenty minutes,” “set five minute timer,” etc. without departing from the disclosure. Additionally or alternatively, while the whitelist database1510illustrates a single action (e.g., set alarm) corresponding to the first intent, the disclosure is not limited thereto and additional actions (e.g., set timer, set notification, etc.) may correspond to the first intent without departing from the disclosure. While the whitelist database1510illustrated inFIG.15Aincludes contextual information associated with the intents, the disclosure is not limited thereto. 
As illustrated inFIG.15B, whitelist database1520only includes a list of a plurality of intents that may be processed when the device110is in the locked state. Thus, the server(s)120may determine whether a specific intent is included in the whitelist database1520and, if so, may dispatch the specific intent to one or more speechlet(s) or other processes even when the device110is in the locked state. However, the server(s)120cannot differentiate between different speechlet(s) or other processes and any whitelisted intent is processed by any corresponding speechlet. In some examples, the whitelist database may include additional contextual information to differentiate between speechlets, processes, and/or the like (e.g., perform whitelist filtering differently based on the speechlet). As illustrated inFIG.15C, whitelist database1530may associate the intent with a corresponding domain, enabling the server(s)120to determine if a particular intent is whitelisted for a specific domain (e.g., category, speechlet, process, etc.). The domain indicated in the whitelist database1530may correspond to a general category (e.g., Music domain corresponds to multiple music services available to the system100, such that the intent may be processed by multiple speechlets) and/or a specific service (e.g., StreamingMusic corresponds to a specific streaming music service, such that intents are processed only by a single speechlet). In some examples, the whitelist database may include a list of a plurality of intents that may be processed and/or actions that may be performed. While a single intent may correspond to multiple actions that may be performed, the whitelist database may include a list of whitelisted actions (e.g., actions that may be performed while the device110is in the locked state) and the system100may perform whitelist filtering based on the action to be performed instead of the intent to be processed. As illustrated inFIG.15D, whitelist database1540may include domain(s), intent(s) and/or action(s) without departing from the disclosure. WhileFIG.15Dillustrates the whitelist database1540including the domains corresponding to the intents and action(s), the disclosure is not limited thereto and the intents/actions may not be associated with any domains. Additionally or alternatively, whileFIG.15Dillustrates the whitelist database1540including both the intents and corresponding action, the disclosure is not limited thereto and the whitelist database1540may only include a list of actions that can be performed while the device110is in the locked state without departing from the disclosure. Thus, the system100may perform whitelist filtering using the actions instead of the intents. FIG.16is a block diagram conceptually illustrating a device110that may be used with the system.FIG.17is a block diagram conceptually illustrating example components of a remote device, such as the server(s)120, which may assist with ASR processing, NLU processing, etc. The term “server” as used herein may refer to a traditional server as understood in a server/client computing structure but may also refer to a number of different computing components that may assist with the operations discussed herein. For example, a server may include one or more physical computing components (such as a rack server) that are connected to other devices/components either physically and/or over a network and is capable of performing computing operations. 
A server may also include one or more virtual machines that emulate a computer system and are run on one device or across multiple devices. A server may also include other combinations of hardware, software, firmware, or the like to perform operations discussed herein. The server(s) may be configured to operate using one or more of a client-server model, a computer bureau model, grid computing techniques, fog computing techniques, mainframe techniques, utility computing techniques, a peer-to-peer model, sandbox techniques, or other computing techniques. Multiple servers120may be included in the system, such as one or more servers120for performing ASR processing, one or more servers120for performing NLU processing, etc. In operation, each of these devices (or groups of devices) may include computer-readable and computer-executable instructions that reside on the respective device (110/120), as will be discussed further below. Each of these devices (110/120) may include one or more controllers/processors (1604/1704), which may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory (1606/1706) for storing data and instructions of the respective device. The memories (1606/1706) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive memory (MRAM), and/or other types of memory. Each device (110/120) may also include a data storage component (1608/1708) for storing data and controller/processor-executable instructions. Each data storage component (1608/1708) may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each device (110/120) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces (1602/1702). Computer instructions for operating each device (110/120) and its various components may be executed by the respective device's controller(s)/processor(s) (1604/1704), using the memory (1606/1706) as temporary “working” storage at runtime. A device's computer instructions may be stored in a non-transitory manner in non-volatile memory (1606/1706), storage (1608/1708), or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software. Each device (110/120) includes input/output device interfaces (1602/1702). A variety of components may be connected through the input/output device interfaces (1602/1702), as will be discussed further below. Additionally, each device (110/120) may include an address/data bus (1624/1724) for conveying data among components of the respective device. Each component within a device (110/120) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus (1624/1724). Referring toFIG.16, the device110may include input/output device interfaces1602that connect to a variety of components such as an audio output component such as a loudspeaker(s)116, a wired headset or a wireless headset (not illustrated), or other component capable of outputting audio. The device110may also include an audio capture component. 
The audio capture component may be, for example, microphone(s)114or array of microphones, a wired headset or a wireless headset (not illustrated), etc. If an array of microphones is included, approximate distance to a sound's point of origin may be determined by acoustic localization based on time and amplitude differences between sounds captured by different microphones of the array. The device110may additionally include a display1616for displaying content. The device110may further include a camera1618. Via antenna(s)1614, the input/output device interfaces1602may connect to one or more networks199via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, 4G network, 5G network, etc. A wired connection such as Ethernet may also be supported. Through the network(s)199, the system may be distributed across a networked environment. The I/O device interface (1602/1702) may also include communication components that allow data to be exchanged between devices such as different physical servers in a collection of servers or other components. The components of the device(s)110and the server(s)120may include their own dedicated processors, memory, and/or storage. Alternatively, one or more of the components of the device(s)110and the server(s)120may utilize the I/O interfaces (1602/1702), processor(s) (1604/1704), memory (1606/1706), and/or storage (1608/1708) of the device(s)110and server(s)120, respectively. Thus, the ASR component250may have its own I/O interface(s), processor(s), memory, and/or storage; the NLU component260may have its own I/O interface(s), processor(s), memory, and/or storage; and so forth for the various components discussed herein. As noted above, multiple devices may be employed in a single system. In such a multi-device system, each of the devices may include different components for performing different aspects of the system's processing. The multiple devices may include overlapping components. The components of the device110and the server(s)120, as described herein, are illustrative, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system. As illustrated inFIG.18, multiple devices (110a-110g,120) may contain components of the system and the devices may be connected over a network(s)199. The network(s)199may include a local or private network or may include a wide network such as the Internet. Devices may be connected to the network(s)199through either wired or wireless connections. For example, a speech-detection device110a, a smart phone110b, a smart watch110c, a tablet computer110d, a vehicle110e, a display device110f, and/or smart television110gmay be connected to the network(s)199through a wireless service provider, over a WiFi or cellular network connection, via an adapter from a public switched telephone network (PSTN), and/or the like. Other devices are included as network-connected support devices, such as the server(s)120, and/or others. The support devices may connect to the network(s)199through a wired connection or wireless connection. 
Networked devices may capture audio using one or more built-in or connected microphones or other audio capture devices, with processing performed by ASR components, NLU components, or other components of the same device or another device connected via the network(s)199, such as the ASR component250, the NLU component260, etc. of one or more servers120. The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, and distributed computing environments. The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers and speech processing should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art that the disclosure may be practiced without some or all of the specific details and steps disclosed herein. Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, components of the system may be implemented in firmware or hardware, such as an acoustic front end (AFE), which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)). Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. 
Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
176,476
11862175
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. This description is not intended as an extensive or detailed discussion of known concepts. Details that are well known may have been omitted, or may be handled in summary fashion. The following subject matter may be embodied in a variety of different forms, such as methods, devices, components, and/or systems. Accordingly, this subject matter is not intended to be construed as limited to any example embodiments set forth herein. Rather, example embodiments are provided merely to be illustrative. Such embodiments may, for example, take the form of hardware, software, firmware or any combination thereof. The following provides a discussion of some types of computing scenarios in which the disclosed subject matter may be utilized and/or implemented. One or more systems and/or techniques for user identification and authentication are provided. A network may comprise various devices and/or services configured to provide functionality to users and client devices. For example, an authentication service may authenticate users connecting to the network, a messaging service may provide notifications and messages to client devices such as a push notification to a mobile device, an email service may provide users with access to email accounts, etc. In order to provide services and functionality to a large and/or ever growing user base of client devices, the network would scale out with additional computing resources and bandwidth in order to reduce latency otherwise experienced by client devices. However, scaling out the network with additional computing resources and bandwidth can be costly and/or impractical for a service provider. Accordingly, as provided herein, a user identification and authentication framework is provided for offloading user identification and authentication functionality to client devices connected to the network in order to scale out the network in a secure and efficient manner. The user identification and authentication framework may allow users to choose what client devices to utilize for authentication, such as a digital assistant, a camera, or any other type of client device, and to seamlessly switch between different client devices for authentication. The user identification and authentication framework allows users to opt-in their client devices for performing user identification and authentication functionality. Offloading such processing from devices and/or services of the network to client devices provides the ability to scale out the network to support more users and client devices. This is because computing and processing resources of the devices and/or services of the network are no longer being consumed for performing user identification and authentication, and instead can be used for providing other services and functionality to users. Having additional computing and processing resources for providing services and functionality to users will reduce latency and/or processing overhead. Various types of client devices may be utilized by the identification and authentication framework for providing user identification functionality and/or authentication functionality. In some examples, a voice command device may be utilized by the identification and authentication framework for providing user identification functionality. 
The voice command device may comprise a device that operates on voice commands, and is capable of identifying users based upon the voice commands. For example, the voice command device may create and maintain unique voice profiles for users based upon voice characteristics of the users, such as phonation, pitch, loudness, texture, tone, speech rate, etc. In some examples, the voice command device may comprise a smart speaker, a watch, a mobile device, a thermostat, a camera, a television, a videogame console, or any other client device capable of identifying users. In some examples, the voice command device may be incorporated into and/or is part of a communication device capable of connecting to the network, or may be a separate device from the communication device. It may be appreciated that a wide variety of other client devices beyond voice command devices may be utilized for user identification. Such client devices leveraged for user identification may include devices capable of identifying users based upon biometric data (e.g., facial recognition, fingerprinting, etc.), devices capable of identifying users based upon user authentication information (e.g., a username and password), videogame consoles, televisions, mobile devices, smart home devices, enterprise devices, multi-access edge computing (MEC) client devices, a device capable of implementing multiple techniques for authentication (e.g., a device that implements both voice print authentication and fingerprint authentication), etc. In this way, any type of device capable of identifying users (e.g., utilizing fingerprint profiles of user fingerprints, facial recognition profiles of user faces, a username/password repository of user accounts of users, etc.) may be utilized by the identification and authentication framework for providing user identification functionality. These client devices may communicate with other devices and/or services of the network in order to communicate user identification information, account information, authentication information, and/or service requests to perform actions. For example, once a user has been identified and has registered an account linked to a profile of the user, the user may issue commands to the client device for routing to services within the network for performing actions (e.g., pay a bill, schedule a service appointment, purchase a product, etc.). In some examples, a device, such as a client device, may be utilized by the identification and authentication framework for providing user authentication functionality. The client device may comprise any type of device capable of authenticating a user. For example, the client device may comprise a smartphone, a smart watch, a computer, a laptop, a videogame system, a tablet, a vehicle computer, or any other type of client device capable of authenticating a user. The client device may utilize various types of authentication mechanisms for authenticating the user, which may be utilized in conjunction with an authentication service (e.g., an identity online server). For example, the client device may utilize biometric verifications such as facial recognition, a fingerprint, etc., a password or PIN, or any other type of authentication mechanism in order to authenticate the user. The client device may operate in conjunction with other devices and/or services of the network in order to facilitate the authentication of the user, such as the authentication service for authentication and a messaging service for communication. 
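As a hypothetical illustration only, the authentication round-trip between such a client device, the authentication service, and the messaging service might be reduced to something like the following; the callable names stand in for the push-notification and on-device verification mechanisms and are assumptions, not part of the disclosed framework:

    # Hypothetical sketch of the authentication round-trip; send_push and
    # verify_on_device stand in for the messaging service and the client
    # device's biometric/PIN check.
    def authenticate_user(device_identifier: str, send_push, verify_on_device) -> bool:
        challenge = send_push(device_identifier, "Approve this request on your device")
        approved = verify_on_device(challenge)  # e.g., facial recognition, fingerprint, or PIN
        return bool(approved)                   # result reported back to the authentication service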
In some examples, the client device may be a different client device than the client device used to identify the user (e.g., the voice command device, a camera device, a device with fingerprint recognition functionality, etc.) or the same client device may be utilized for both user identification and user authentication. In some examples, a communication device may be utilized by the identification and authentication framework for providing communication between the client device used for user identification such as the voice command device, the client device used for user authentication, and other devices and/or services of the network such as the authentication service and the messaging service. It may be appreciated that in some instances, the communication device refers to an identity service hosted on a device with communication capabilities and/or the client device may refer to an authentication service hosted on a client on-prem device, for example, and thus the communication device and/or client device may correspond to a service hosted by a device. The communication device may be used to facilitate the communication of user identification information, authentication information, account creation information, and/or service requests to perform actions. In some examples, the communication device may reside within a same network as the client device used for user identification (e.g., the communication device and the voice command device may be connected to a same home network) in order to enhance security. In some examples, the device used for user identification and the communication device may comprise the same device, such as where voice command functionality is integrated into the communication device. In some examples, the device used for user identification and the communication device are separate devices. The communication device is configured to provide communication functionality over the network to other devices and/or services hosted within the network. In some examples, the communication device comprises a router, a server, a thin multi-access edge computing (MEC) device, or any other device capable of connecting to the network. Because the device used for user identification (e.g., the voice command device, a camera device, a device with fingerprint recognition functionality, etc.), the device used for user authentication (e.g., a smartphone, a smart watch, etc.), and/or the communication device (e.g., a router) may comprise client devices, computational processing can be offloaded from other devices and/or services of the network to these client devices. In this way, resources of the network can be freed for other purposes such as providing services to users and client devices at lower latencies and/or for scaling out to provide services to a larger user and client device base. A user may utilize the identification and authentication framework in order to register accounts with services that can perform actions on behalf of the user, such as paying bills, purchasing products, scheduling service, etc. For example, the user may speak a voice command “schedule a service appointment to upgrade my internet connection,” which is detected by a voice command device. The voice command device may identify a voice profile associated with the user based upon voice characteristics of the voice command. 
If the voice command device does not have the capability to identify a voice profile associated with the user, then the communication device may be capable of identifying the voice profile associated with the user. If the voice profile is not linked to an account associated with the action (e.g., the voice profile is not linked to an internet service provider account of the user with an internet service provider), then the user may be prompted for an identifier of a device such as a phone number of a smart phone. The voice command device and/or the communication device may be utilized to communicate with other services, such as an authentication service and a messaging service, in order to provide a push notification to the smart phone using the phone number so that the user can authenticate and create the account with the internet service provider through the smart phone. Once an account is created, the voice profile of the user is linked to the account so that subsequent voice commands associated with the account can be processed without further registration or account creation. For example, the user may subsequently speak a voice command “pay my internet bill,” which is detected by the voice command device. The voice command device may identify the voice profile associated with the user based upon voice characteristics of the voice command. Because the voice profile is linked to the account with the internet service provider associated with the bill the user wants to pay, the user is authenticated with an authentication service and a perform action command is transmitted by the communication device to the internet service provider to perform an action to pay the internet bill of the user. FIG.1illustrates a system100for user identification and authentication. The system100may employ a user identification and authentication framework in order to utilize various client devices for performing user identification and authentication functionality. A voice command device102may be capable of receiving, processing, and/or implementing voice commands from users utilizing speech recognition functionality. In some embodiments, the voice command device102may be a standalone device or may be part of another device (e.g., a smart speaker, a smart watch, a smart phone, a smart television, a videogame console, a smart thermostat, etc.). The voice command device102may host a digital assistant configured to process voice commands utilizing the speech recognition functionality. The voice command device102may be configured to create and maintain voice profiles for users, which can be used to identify a particular user that spoke a voice command. A voice profile for a user may comprise voice characteristics of the user, which can be compared to voice characteristics of a voice command to determine whether the user spoke the voice command or whether another user spoke the voice command. Accordingly, the voice command device102may be utilized by the user identification and authentication framework for user identification based upon the voice profiles. 
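As a purely illustrative sketch of the control flow just described (the helper names handle_voice_command and infer_service, and the callables passed in, are hypothetical and not part of the disclosure), the registration path and the action path for an identified speaker might be separated as follows in Python:

def handle_voice_command(command_text, voice_profile, account_links,
                         prompt_for_device_id, start_registration, perform_action):
    """Decide between account registration and action execution for an identified user.

    account_links maps (user_id, service_name) -> account link record; the three
    callables stand in for device prompts and network calls.
    """
    service_name = infer_service(command_text)
    link = account_links.get((voice_profile["user_id"], service_name))
    if link is None:
        # No account link yet: ask for a device identifier (e.g., a phone number) and
        # start push-notification based registration through that device.
        device_id = prompt_for_device_id()
        start_registration(voice_profile, service_name, device_id)
        return "registration_started"
    # An account link exists: authenticate and forward the request to the service.
    perform_action(link, command_text)
    return "action_requested"

def infer_service(command_text: str) -> str:
    # Placeholder intent mapping; a deployed system would use proper natural language understanding.
    return "internet_service_provider" if "internet" in command_text.lower() else "bill_payment"

Subsequent voice commands from the same speaker take the second branch, mirroring the “pay my internet bill” example above.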
It may be appreciated that other types of devices may be utilized by the user identification and authentication framework for user identification, such as a camera capable of identifying users based upon facial recognition functionality, a device capable of identifying users based upon biometric data, a device capable of identifying users based upon user input (e.g., a password or pin), a videogame console, a smart television, a smart home device, an enterprise device, a multi-access edge computing (MEC) device, etc. The user identification and authentication framework may utilize a communication device104in order to communicate with the voice command device102. In some examples, the communication device104and the voice command device102are connected over a network112, such as a home network or an enterprise network. The communication device104may be utilized to facilitate communication between the voice command device102and other devices and/or services, such as an authentication service106, a messaging service108, services capable of performing actions on behalf of users, etc. In some examples, the voice command device102may be incorporated into the communication device104or may be a separate device. The communication device104may comprise any device capable of connecting to the other devices and services over a network such as a service provider network. For example, the communication device104may comprise a router, a server, or other client device. In this way, the communication device104provides a communication bridge between the voice command device102and other devices and services, such as the authentication service106and the messaging service108. The authentication service106may be configured to perform authentication and registration of a user, a client device of the user, and/or an account of the user. For example, the voice command device102may identify a user that spoke a voice command to perform an action. A determination may be made that a voice profile of the user is not linked to an account associated with the performance of the action. Accordingly, the authentication service106is invoked to authenticate the user for creating and registering the account. The authentication service106may be provided with an identifier of a device110of the user such as a phone number of a mobile device through which the user can authenticate. The authentication service106may utilize the messaging service108to send a push notification to the device110to guide the user through authenticating through the device110using an authentication user interface of the authentication service. In some examples, the device110such as a smart phone may perform facial recognition, voice recognition, request a password/pin, perform biometric authentication, verify a fingerprint, or perform other authentication for authenticating the user through the authentication user interface. Once the user has performed authentication through the device110, the authentication service106may store authentication data (e.g., an authentication key such as a private key and/or public key, a global unique identifier such as a fast identity online global unique identifier (e.g., a random Global Unique Identifier) generated as a result of the device110authenticating the user in association with the authentication service106, the identifier of the device110, account information of the account being created, etc.) for subsequently authenticating actions requested by the user to perform for the account. 
In this way, the authentication service106may perform an initial authentication of the user through the device110for creating and registering the account. Once the authentication service106has authenticated the user and registered the account, the communication device104or other device may link the voice profile of the user to the account to create an account link. For example, an authentication service hosted by a client on-prem device, such as the communication device104or other device, and/or a service provider cloud may link the voice profile to create the account link. The voice profile may be linked to the account by storing the global unique identifier and the authentication key such as the public key within the account link. When the voice command device102receives a subsequent voice command from the user to perform an action associated with the account, the communication device104and/or the voice command device102may identify and utilize the account link for facilitating the performance of the action by a service associated with the account, as opposed to initiating registration again. In this way, various client devices may be leveraged for providing user identification and authentication. An embodiment of a voice command device402, ofFIG.4, performing user identification using a voice profile is illustrated by an exemplary method200ofFIG.2and an embodiment of a communication device404facilitating communication with an authentication service406for creating and registering an account in order to link the account to the voice profile is illustrated by an exemplary method300ofFIG.3, which are further described in conjunction with system400ofFIG.4. During operation202of method200, the voice command device402may detect a first voice command412to perform an action. For example, a smart speaker within a home of a user may detect the first voice command of “please pay my phone bill with Phone Company A” spoken by the user. The voice command device402may utilize speech recognition functionality to determine that the user is requesting the performance of an action to pay a phone bill associated with a bill payment service for Phone Company A. The voice command device402may also utilize the speech recognition functionality to determine first voice characteristics of the first voice command. The first voice characteristics may correspond to phonation, pitch, loudness, texture, tone, speech rate, etc. The voice command device402may compare the first voice characteristics to voice profiles of users, which are maintained by the voice command device402. A voice profile of a user may comprise voice characteristics of that user, which can be matched with voice characteristics of voice commands, such as the first voice characteristics of the first voice command. If the voice command device402does not identify a voice profile matching the first voice characteristics of the first voice command, then the voice command device402may generate a voice profile for the user. During operation204of method200, the voice command device402identifies a voice profile associated with the user based upon the first voice characteristics of the first voice command. For example, the first voice characteristics are used to query voice profiles maintained by the voice command device402to identify the voice profile as having voice characteristics that are similar to the first voice characteristics. 
Once the voice profile is identified by the voice command device402, the voice command device402and/or the communication device404determine414whether the voice profile is linked to an account associated with the action (e.g., an account with the bill payment service for the Phone Company A). In some examples, account links, linking voice profiles to accounts, are maintained by the voice command device402, and thus the voice command device402may determine whether the voice profile is linked to the account. In some examples, the communication device404may maintain the account links, and thus the voice command device402may coordinate with the communication device404to determine whether the voice profile is linked to the account associated with the action. Accordingly, during operation302of method300, the communication device404receives voice profile information of the voice profile for the user and a description of the first voice command to perform the action (e.g., a text translation or other descriptive information of the first voice command as opposed to the actual audio of the voice command) from the voice command device402. The communication device404may determine whether an account link, linking the voice profile to the account, exists. During operation304of method300, the communication device404transmits a request to the voice command device402to obtain an identifier associated with a device410of the user for user authentication and for creating the account through the device410. The communication device404transmits the request to the voice command device402in response to the communication device404determining that the voice profile is not linked to the account associated with the action. Accordingly, account creation will be facilitated for creating and registering the account in order for the action to be performed. During operation206of method200, the voice command device402prompts the user to provide the identifier associated with device410for creating the account through the device410. In some examples, the voice command device402may provide an audible message to the user in order to request an identifier of a device through which the user would be capable of authenticating and creating the account. Various types of identifiers may be requested, such as a phone number of a smart phone through which the user could authenticate using authentication functionality of the smart phone, an email address, a social network profile, or a variety of other identifiers that could be used to transmit information to the device410for authenticating and creating the account. During operation208of method200, in response to the voice command device402receiving the identifier from the user (e.g., the user may provide a new voice command with the phone number of the device410), the identifier is utilized by the voice command device402to facilitate the creation of the account through the device410. For example, the voice command device402transmits a registration request416to the communication device404. The registration request416may comprise the voice profile information of the voice profile and/or the identifier of the device410. The voice command device402may transmit the registration request416to the communication device404for routing to an authentication service406(e.g., an identity online server) for facilitating the creation of the account through the device410. In some examples, the voice command device402may associate the identifier with the voice profile of the user. 
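A minimal sketch of operations 302, 304, 206, and 208, assuming dictionary-based messages (all field names below are illustrative rather than prescribed by the disclosure), might look like this:

def process_identification(account_links, voice_profile_info, command_description):
    """Communication-device side of operations 302/304: decide whether registration is needed."""
    key = (voice_profile_info["user_id"], command_description["service"])
    if key in account_links:
        return {"type": "account_link_found", "account_link": account_links[key]}
    # No account link: ask the voice command device to prompt the user for a device identifier.
    return {"type": "identifier_request", "reason": "no_account_link"}

def build_registration_request(voice_profile_info, device_identifier):
    """Voice-command-device side of operation 208: package the registration request (416)."""
    return {
        "voice_profile": voice_profile_info,      # e.g., {"user_id": ...}
        "device_identifier": device_identifier,   # e.g., a phone number provided by the user
    }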
During operation306of method300, the communication device404utilizes the identifier from the registration request416to facilitate creation of the account through the device410. The communication device404may transmit a registration request418to the authentication service406for facilitating the creation of the account through the device410. In some examples, the registration request418may comprise the identifier of the device410. In some examples, the registration request418may comprise a public key so that user authentication and account creation can be performed in a secure manner. In response to receiving the registration request418, the authentication service406may transmit a trigger push notification command420to a messaging service408. The trigger push notification command420may comprise the identifier of the device410and a message for the messaging service408to transmit to the device410as a push notification422. For example, the identifier may comprise the phone number of the device410. The messaging service408may utilize the phone number to transmit the message to the device410for display on the device410. The message may request that the user authenticate through an authentication service user interface utilizing authentication functionality of the device410. For example, the device410may perform facial recognition, speech recognition, fingerprint recognition, request a password or pin, or perform other types of authentication in order to authenticate the user through the authentication service user interface associated with the authentication service406. In this way, the user may authenticate for creation and registration of the account (authentication/registration424) through the device410. Various account registration information426may be generated based upon the user performing the authentication/registration424through the device410to create the account with the bill payment service. In some examples, a session identifier may be generated based upon a session associated with the user interacting with the authentication service user interface of the authentication service406. The account registration information426may comprise the session identifier, an authentication key (e.g., the public key), and/or a global unique identifier generated or associated with the user successfully authenticating through the device410with the authentication service user interface of the authentication service406. The authentication service406may store the authentication key (e.g., the public key) within a repository for utilization by an authenticator for subsequent authentication of attempts to access the account with the service. The authentication service406may transmit a notification428to the communication device404that the user created the account through the device410. In some examples, the notification428may comprise the account registration information426, which may be stored by the communication device404within an account link that links the voice profile to the account. For example, the global unique identifier and one or more authentication keys (e.g., the public key and a corresponding private key for the authenticator for subsequent authentication attempts to access the account with the service) may be stored within the account link. The communication device404may transmit the notification428to the voice command device402. 
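One way to hold the account registration information 426 in an account link, sketched here with hypothetical names and fields (the disclosure does not prescribe this structure), is a small keyed record of the global unique identifier and authentication key:

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class AccountLink:
    """Record linking a voice profile to an account, created when notification 428 arrives."""
    user_id: str
    service: str
    global_unique_id: str   # identifier generated when the user authenticated through the device
    public_key: str         # authentication key retained for later authentication attempts

AccountLinkStore = Dict[Tuple[str, str], AccountLink]

def store_account_link(store: AccountLinkStore, user_id: str, service: str,
                       registration_info: dict) -> AccountLink:
    """Create and persist an account link from account registration information (426)."""
    link = AccountLink(
        user_id=user_id,
        service=service,
        global_unique_id=registration_info["global_unique_id"],
        public_key=registration_info["public_key"],
    )
    store[(user_id, service)] = link
    return link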
In some examples, if the voice command device402(e.g., as opposed to the communication device404) maintains account links, then the voice command device402may create the account link based upon the account registration information426within the notification428. The voice command device402may provide a notification to the user that the account link has been created and that the user can now provide voice commands in order to perform actions relating to the account with the bill payment service for the Phone Company A. In some examples, the communication device404may be configured to generate rules and/or levels of trust that may be used for registration and/or subsequently performing actions on behalf of users after accounts have been created and registered. For example, the communication device404may be configured to determine constraints for the rules and levels of trust regarding allowing the device410to be used as an authenticator for authenticating the user and/or how much the device410can be trusted. The rules and/or levels of trust may be generated based upon machine learning and/or other components that take into consideration network information, device information, distance information (e.g., a greater distance between the voice command device402and the device410may indicate a lower level of trust than if both devices were located near one another), and/or network hop information. The rules, levels of trust, and/or other information (e.g., attestation information from third party devices, network information, opt-in signatures, device identifiers, etc.) may be utilized after registration to create authenticator type and association policies. In some examples, the machine learning may be used to detect if the device410is a new device type or has any known vulnerabilities associated with the device410, to check a current software version of the device410, to prompt for an alert to upgrade the current software version of the device410, and/or to perform other actions based upon the constraints within the rules and/or levels of trust. In some embodiments, a policy broker associated with the authentication service406may link the voice profile of the user to multiple identifying factors, such as the device410used as an authenticator, GPS device/information, an eSIM, a device ID of the device410, etc. The policy broker may allow access to particular resources of devices (e.g., a multi-access edge computing (MEC) client such as the communication device404, the device410, the voice command device402, or any other device such as a videogame console, a television, a tablet, etc.). The policy broker may utilize various policies in order to provide the user with access to particular MEC resources and/or may redirect users to a particular device. For example, the user may be redirected to a device with a threshold amount of performance capabilities (e.g., a device with better performance than other available devices), a device with a particular MEC configuration, a device that can satisfy certain OEM requirements, a 5G slice, a device with latency below a threshold (e.g., a videogame console with low latency enabled), a mobile device with a SIM that can utilize a higher bandwidth slice than other devices, etc. An embodiment of a voice command device602and a communication device604facilitating the performance of an action by a service608is illustrated by an exemplary method500ofFIG.5, which is further described in conjunction with system600ofFIG.6. 
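To make the idea of rules and levels of trust concrete, here is a toy scoring function; the particular signals, weights, and cutoffs are invented for illustration, and a deployed framework might instead learn them (e.g., via machine learning) from network, device, and distance information:

def trust_level(distance_m, network_hops, same_network, known_vulnerabilities):
    """Combine a few signals into a coarse trust score between 0.0 and 1.0."""
    score = 1.0
    score -= min(distance_m / 1000.0, 0.4)        # devices far apart are trusted less
    score -= min(network_hops * 0.05, 0.3)        # more network hops between devices lowers trust
    score -= 0.2 * len(known_vulnerabilities)     # each known vulnerability costs trust
    if same_network:
        score += 0.1                              # a shared home/enterprise network is a positive signal
    return max(0.0, min(1.0, score))

# Example: a nearby authenticator on the same home network with no known issues scores highly.
assert trust_level(distance_m=5, network_hops=1, same_network=True, known_vulnerabilities=[]) > 0.9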
During operation502of method500, the voice command device602may detect612a second voice command to perform an action associated with an account. For example, a user may say “please upgrade my internet connection with Phone Company A to the next fastest speed,” which may be detected612by the voice command device602. The voice command device602may utilize speech recognition functionality to evaluate the second voice command in order to determine that the user is requesting the performance of an action to upgrade an internet connection to a next fastest speed through a bill payment service for the Phone Company A. The voice command device602may also utilize the speech recognition functionality to evaluate the second voice command in order to determine second voice characteristics of the second voice command. The second voice characteristics may correspond to phonation, pitch, loudness, texture, tone, speech rate, etc. The voice command device602may compare the second voice characteristics to voice profiles of users that are maintained by the voice command device602. A voice profile of a user may comprise voice characteristics that can be matched with voice characteristics of voice commands, such as the second voice characteristics of the second voice command. During operation504of method500, the voice command device602identifies a voice profile associated with the user based upon the second voice characteristics of the second voice command. For example, the second voice characteristics are used to query voice profiles maintained by the voice command device602to identify the voice profile as having voice characteristics that are similar to the second voice characteristics. Once the voice profile is identified by the voice command device602, the voice command device602and/or the communication device604determine614whether the voice profile is linked to an account associated with the action (e.g., an account with the bill payment service for the Phone Company A). In some examples, account links, linking voice profiles to accounts, are maintained by the voice command device602, and thus the voice command device602may determine whether the voice profile is linked to the account. If the voice command device602identifies the account link, then the voice command device602may transmit an action command to the communication device604, which may be used to facilitate the performance of the action. The action command may comprise voice profile information associated with the voice profile and/or a description of the second voice command. In some examples, the communication device604may maintain the account links, and thus the voice command device602may coordinate with the communication device604to determine whether the voice profile is linked to the account associated with the action. For example, the communication device604receives the voice profile information and/or the description of the second voice command from the voice command device602, and utilizes the voice profile information to determine whether the account link exists. During operation506of method500ofFIG.5, the account link, linking the voice profile to an account with a service capable of performing the action (e.g., the bill payment service capable of upgrading the user’s internet, or any other type of service linked to a user account), is identified and used to facilitate the performance of the action. 
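Sketching operations 502 through 506 with plain dictionaries (the message and field names below are hypothetical, and the account link is represented here as a standalone dict for simplicity), the handling of a subsequent voice command reduces to an account-link lookup followed by construction of an action command:

def handle_subsequent_command(account_links, voice_profile_info, command_description):
    """Operations 502-506: reuse an existing account link instead of re-registering."""
    key = (voice_profile_info["user_id"], command_description["service"])
    link = account_links.get(key)
    if link is None:
        # No account link exists, so fall back to the registration path described earlier.
        return {"type": "registration_required"}
    # Action command forwarded toward the communication device for authentication and execution.
    return {
        "type": "action_command",
        "voice_profile": voice_profile_info,
        "command_description": command_description,  # e.g., a text translation of the voice command
        "account_link": {"global_unique_id": link["global_unique_id"],
                         "public_key": link["public_key"]},
    }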
In some embodiments, in response to identifying the account link, an authentication request616is transmitted to the authentication service606in order to authenticate the user for performing the action. The authentication request616may comprise a global unique identifier and/or an authentication key (e.g., a public key) associated with the account link. In response to the communication device604determining that the authentication service606successfully authenticated the user (e.g., authenticating the global unique identifier and/or the public key), the communication device604may transmit a perform action command620to the service608, such as the bill payment service for the Phone Company A. The perform action command620may comprise the voice profile information and/or the description of the second voice command (e.g., a message that the user wants to perform the action through the service608to upgrade the internet to the next fastest speed). In response to the communication device604and/or the voice command device602receiving a success notification generated from the service608that the service608upgraded the internet of the user to the next fastest speed, the success notification may be provided to the user, such as through an audible message provided by the voice command device602. Once the service608completes the action (or attempts to complete the action, but there is a failure or additional information is required), an action result622is provided from the service608to the communication device604. The communication device604may forward the action result622to the voice command device602that will provide a notification to the user based upon the action result622(e.g., a success notification, a failure notification, a request for additional information or details required to perform the action, etc.). According to some embodiments, a method is provided. The method includes detecting a first voice command to perform an action; identifying a voice profile associated with a user based upon first voice characteristics of the first voice command; in response to determining that the voice profile is not linked to an account associated with the action, prompting the user for an identifier associated with a device for creating the account through the device; and in response to receiving the identifier from the user, utilizing the identifier to facilitate creation of the account through the device. According to some embodiments, the method includes transmitting a registration request associated with the identifier to a communication device for routing to an authentication service for facilitating the creation of the account through the device. According to some embodiments, the registration request comprises voice profile information associated with the voice profile and the identifier. According to some embodiments, the method includes receiving a notification that the account was created through the device in response to a push notification provided to the device by a messaging interface utilized by the authentication service for facilitating the creation of the account through the device. According to some embodiments, the method includes in response to receiving account registration information associated with the account created through the device, utilizing the account registration information to create an account link linking the voice profile with the account. 
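The communication-device side of this action path can be summarized by the following hedged sketch; authenticate, send_perform_action, notify_user, and the service_account_id field are stand-ins for the authentication service 606, the service 608, the voice command device 602, and whatever account reference the service expects, none of which are prescribed by the disclosure:

def execute_linked_action(account_link, command_description,
                          authenticate, send_perform_action, notify_user):
    """Authenticate against the account link, forward the perform action command, relay the result."""
    authenticated = authenticate(account_link["global_unique_id"], account_link["public_key"])
    if not authenticated:
        notify_user("Authentication failed; the requested action was not performed.")
        return None

    # Perform action command (620): the service receives the profile-linked request.
    action_result = send_perform_action({
        "command_description": command_description,
        "account": account_link.get("service_account_id"),  # invented field for the service-side account
    })

    # Action result (622) is relayed back, e.g., as an audible success or failure notification.
    notify_user(action_result.get("message", "The requested action has been processed."))
    return action_result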
According to some embodiments, the method includes storing a global unique identifier and an authentication key comprised within the account registration information into the account link. According to some embodiments, the method includes in response to receiving the identifier from the user, associating the identifier with the voice profile. According to some embodiments, the method includes detecting a second voice command to perform the action associated with the account; identifying the voice profile associated with the user based upon second voice characteristics of the second voice command; and determining whether an account link, linking the voice profile to the account, exists. According to some embodiments, the method includes in response to determining that the account link exists, utilizing the account link to facilitate performance of the action. According to some embodiments, the method includes transmitting an action command comprising voice profile information associated with the voice profile and the second voice command to a communication device, wherein the action command triggers the communication device to utilize an authentication service to authenticate the user based upon a global unique identifier and an authentication key associated with the account link. According to some embodiments, the action command triggers the communication device to transmit a perform action command to a service to perform the action. According to some embodiments, the perform action command comprises the voice profile information and the second voice command. According to some embodiments, a non-transitory computer-readable medium storing instructions that when executed facilitate performance of operations, is provided. The operations include receiving voice profile information of a voice profile associated with a user and a first voice command by the user to perform an action; in response to determining that the voice profile is not linked to an account associated with the action, transmitting a request to obtain an identifier associated with a device of the user for creating the account through the device; and in response to receiving the identifier, utilizing the identifier to facilitate creation of the account through the device. According to some embodiments, the operations include transmitting a registration request to an authentication service for facilitating the creation of the account through the device. According to some embodiments, the operations include receiving a notification that the account was created through the device in response to a push notification provided to the device by a messaging interface utilized by the authentication service for facilitating the creation of the account through the device. According to some embodiments, the operations include in response to receiving account registration information associated with the account created through the device, utilizing the account registration information to create an account link linking the voice profile with the account. According to some embodiments, the operations include receiving the voice profile information and a second voice command by the user to perform the action; and in response to determining that the account link exists, utilizing the account link to facilitate the performance of the action. 
According to some embodiments, the operations include utilizing an authentication service to authenticate the user based upon a global unique identifier and an authentication key associated with the account link; and transmitting a perform action command to a service to perform the action in response to successful authentication of the user by the authentication service. According to some embodiments, a system is provided. The system comprises a processor coupled to memory, the processor configured to execute instructions to perform operations. The operations include detecting a voice command to perform an action associated with an account; identifying a voice profile associated with a user based upon voice characteristics of the voice command; and in response to determining that an account link, linking the voice profile to the account, exists, utilizing the account link to facilitate performance of the action. According to some embodiments, the operations include utilizing an authentication service to authenticate the user based upon a global unique identifier and an authentication key associated with the account link; and transmitting a perform action command to a service to perform the action in response to successful authentication of the user by the authentication service. FIG.7is an interaction diagram of a scenario700illustrating a service702provided by a set of computers704to a set of client devices710via various types of transmission mediums. The computers704and/or client devices710may be capable of transmitting, receiving, processing, and/or storing many types of signals, such as in memory as physical memory states. The computers704of the service702may be communicatively coupled together, such as for exchange of communications using a transmission medium706. The transmission medium706may be organized according to one or more network architectures, such as computer/client, peer-to-peer, and/or mesh architectures, and/or a variety of roles, such as administrative computers, authentication computers, security monitor computers, data stores for objects such as files and databases, business logic computers, time synchronization computers, and/or front-end computers providing a user-facing interface for the service702. Likewise, the transmission medium706may comprise one or more sub-networks, such as may employ different architectures, may be compliant or compatible with differing protocols and/or may interoperate within the transmission medium706. Additionally, various types of transmission medium706may be interconnected (e.g., a router may provide a link between otherwise separate and independent transmission medium706). In scenario700ofFIG.7, the transmission medium706of the service702is connected to a transmission medium708that allows the service702to exchange data with other services702and/or client devices710. The transmission medium708may encompass various combinations of devices with varying levels of distribution and exposure, such as a public wide-area network and/or a private network (e.g., a virtual private network (VPN) of a distributed enterprise). In the scenario700ofFIG.7, the service702may be accessed via the transmission medium708by a user712of one or more client devices710, such as a portable media player (e.g., an electronic text reader, an audio device, or a portable gaming, exercise, or navigation device); a portable communication device (e.g., a camera, a phone, a wearable or a text chatting device); a workstation; and/or a laptop form factor computer. 
The respective client devices710may communicate with the service702via various communicative couplings to the transmission medium708. As a first such example, one or more client devices710may comprise a cellular communicator and may communicate with the service702by connecting to the transmission medium708via a transmission medium707provided by a cellular provider. As a second such example, one or more client devices710may communicate with the service702by connecting to the transmission medium708via a transmission medium709provided by a location such as the user’s home or workplace (e.g., a WiFi (Institute of Electrical and Electronics Engineers (IEEE) Standard 802.11) network or a Bluetooth (IEEE Standard 802.15.1) personal area network). In this manner, the computers704and the client devices710may communicate over various types of transmission mediums. FIG.8presents a schematic architecture diagram800of a computer704that may utilize at least a portion of the techniques provided herein. Such a computer704may vary widely in configuration or capabilities, alone or in conjunction with other computers, in order to provide a service such as the service702. The computer704may comprise one or more processors810that process instructions. The one or more processors810may optionally include a plurality of cores; one or more coprocessors, such as a mathematics coprocessor or an integrated graphical processing unit (GPU); and/or one or more layers of local cache memory. The computer704may comprise memory802storing various forms of applications, such as an operating system804; one or more computer applications806; and/or various forms of data, such as a database808or a file system. The computer704may comprise a variety of peripheral components, such as a wired and/or wireless network adapter814connectible to a local area network and/or wide area network; one or more storage components816, such as a hard disk drive, a solid-state storage device (SSD), a flash memory device, and/or a magnetic and/or optical disk reader. The computer704may comprise a mainboard featuring one or more communication buses812that interconnect the processor810, the memory802, and various peripherals, using a variety of bus technologies, such as a variant of a serial or parallel AT Attachment (ATA) bus protocol; a Universal Serial Bus (USB) protocol; and/or a Small Computer System Interface (SCSI) bus protocol. In a multi-bus scenario, a communication bus812may interconnect the computer704with at least one other computer. Other components that may optionally be included with the computer704(though not shown in the schematic architecture diagram800ofFIG.8) include a display; a display adapter, such as a graphical processing unit (GPU); input peripherals, such as a keyboard and/or mouse; and a flash memory device that may store a basic input/output system (BIOS) routine that facilitates booting the computer704to a state of readiness. The computer704may operate in various physical enclosures, such as a desktop or tower, and/or may be integrated with a display as an “all-in-one” device. The computer704may be mounted horizontally and/or in a cabinet or rack, and/or may simply comprise an interconnected set of components. The computer704may comprise a dedicated and/or shared power supply818that supplies and/or regulates power for the other components. The computer704may provide power to and/or receive power from another computer and/or other devices. 
The computer704may comprise a shared and/or dedicated climate control unit820that regulates climate properties, such as temperature, humidity, and/or airflow. Many such computers704may be configured and/or adapted to utilize at least a portion of the techniques presented herein. FIG.9presents a schematic architecture diagram900of a client device710whereupon at least a portion of the techniques presented herein may be implemented. Such a client device710may vary widely in configuration or capabilities, in order to provide a variety of functionality to a user such as the user712. The client device710may be provided in a variety of form factors, such as a desktop or tower workstation; an “all-in-one” device integrated with a display908; a laptop, tablet, convertible tablet, or palmtop device; a wearable device mountable in a headset, eyeglass, earpiece, and/or wristwatch, and/or integrated with an article of clothing; and/or a component of a piece of furniture, such as a tabletop, and/or of another device, such as a vehicle or residence. The client device710may serve the user in a variety of roles, such as a workstation, kiosk, media player, gaming device, and/or appliance. The client device710may comprise one or more processors910that process instructions. The one or more processors910may optionally include a plurality of cores; one or more coprocessors, such as a mathematics coprocessor or an integrated graphical processing unit (GPU); and/or one or more layers of local cache memory. The client device710may comprise memory901storing various forms of applications, such as an operating system903; one or more user applications902, such as document applications, media applications, file and/or data access applications, communication applications such as web browsers and/or email clients, utilities, and/or games; and/or drivers for various peripherals. The client device710may comprise a variety of peripheral components, such as a wired and/or wireless network adapter906connectible to a local area network and/or wide area network; one or more output components, such as a display908coupled with a display adapter (optionally including a graphical processing unit (GPU)), a sound adapter coupled with a speaker, and/or a printer; input devices for receiving input from the user, such as a keyboard911, a mouse, a microphone, a camera, and/or a touch-sensitive component of the display908; and/or environmental sensors, such as a global positioning system (GPS) receiver919that detects the location, velocity, and/or acceleration of the client device710, a compass, accelerometer, and/or gyroscope that detects a physical orientation of the client device710. Other components that may optionally be included with the client device710(though not shown in the schematic architecture diagram900ofFIG.9) include one or more storage components, such as a hard disk drive, a solid-state storage device (SSD), a flash memory device, and/or a magnetic and/or optical disk reader; and/or a flash memory device that may store a basic input/output system (BIOS) routine that facilitates booting the client device710to a state of readiness; and a climate control unit that regulates climate properties, such as temperature, humidity, and airflow. 
The client device710may comprise a mainboard featuring one or more communication buses912that interconnect the processor910, the memory901, and various peripherals, using a variety of bus technologies, such as a variant of a serial or parallel AT Attachment (ATA) bus protocol; the Universal Serial Bus (USB) protocol; and/or the Small Computer System Interface (SCSI) bus protocol. The client device710may comprise a dedicated and/or shared power supply918that supplies and/or regulates power for other components, and/or a battery904that stores power for use while the client device710is not connected to a power source via the power supply918. The client device710may provide power to and/or receive power from other client devices. FIG.10illustrates an example environment1000, in which one or more embodiments may be implemented. In some embodiments, environment1000may correspond to a Fifth Generation (“5G”) network, and/or may include elements of a 5G network. In some embodiments, environment1000may correspond to a 5G Non-Standalone (“NSA”) architecture, in which a 5G radio access technology (“RAT”) may be used in conjunction with one or more other RATs (e.g., a Long-Term Evolution (“LTE”) RAT), and/or in which elements of a 5G core network may be implemented by, may be communicatively coupled with, and/or may include elements of another type of core network (e.g., an evolved packet core (“EPC”)). As shown, environment1000may include UE1003, RAN1010(which may include one or more Next Generation Node Bs (“gNBs”)1011), RAN1012(which may include one or more evolved Node Bs (“eNBs”)1013), and various network functions such as Access and Mobility Management Function (“AMF”)1015, Mobility Management Entity (“MME”)1016, Serving Gateway (“SGW”)1017, Session Management Function (“SMF”)/Packet Data Network (“PDN”) Gateway (“PGW”)-Control plane function (“PGW-C”)1020, Policy Control Function (“PCF”)/Policy Charging and Rules Function (“PCRF”)1025, Application Function (“AF”)1030, User Plane Function (“UPF”)/PGW-User plane function (“PGW-U”)1035, Home Subscriber Server (“HSS”)/Unified Data Management (“UDM”)1040, and Authentication Server Function (“AUSF”)1045. Environment1000may also include one or more networks, such as Data Network (“DN”)1050. Environment1000may include one or more additional devices or systems communicatively coupled to one or more networks (e.g., DN1050), such as device1051corresponding to a voice command device, a communication device, an authentication service, a messaging service, a service, a client device capable of identifying users, a client device capable of authenticating users, etc. The example shown inFIG.10illustrates one instance of each network component or function (e.g., one instance of SMF/PGW-C1020, PCF/PCRF1025, UPF/PGW-U1035, HSS/UDM1040, and/or AUSF1045). In practice, environment1000may include multiple instances of such components or functions. For example, in some embodiments, environment1000may include multiple “slices” of a core network, where each slice includes a discrete set of network functions (e.g., one slice may include a first instance of SMF/PGW-C1020, PCF/PCRF1025, UPF/PGW-U1035, HSS/UDM1040, and/or AUSF1045, while another slice may include a second instance of SMF/PGW-C1020, PCF/PCRF1025, UPF/PGW-U1035, HSS/UDM1040, and/or AUSF1045). The different slices may provide differentiated levels of service, such as service in accordance with different Quality of Service (“QoS”) parameters. 
The quantity of devices and/or networks, illustrated inFIG.10, is provided for explanatory purposes only. In practice, environment1000may include additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than illustrated inFIG.10. For example, while not shown, environment1000may include devices that facilitate or enable communication between various components shown in environment1000, such as routers, modems, gateways, switches, hubs, etc. Alternatively and/or additionally, one or more of the devices of environment1000may perform one or more network functions described as being performed by another one or more of the devices of environment1000. Devices of environment1000may interconnect with each other and/or other devices via wired connections, wireless connections, or a combination of wired and wireless connections. In some implementations, one or more devices of environment1000may be physically integrated in, and/or may be physically attached to, one or more other devices of environment1000. UE1003may include a computation and communication device, such as a wireless mobile communication device that is capable of communicating with RAN1010, RAN1012, and/or DN1050. UE1003may be, or may include, a radiotelephone, a personal communications system (“PCS”) terminal (e.g., a device that combines a cellular radiotelephone with data processing and data communications capabilities), a personal digital assistant (“PDA”) (e.g., a device that may include a radiotelephone, a pager, Internet/intranet access, etc.), a smart phone, a laptop computer, a tablet computer, a camera, a personal gaming system, an Internet of Things (“IoT”) device (e.g., a sensor, a smart home appliance, or the like), a wearable device, a Machine-to-Machine (“M2M”) device, or another type of mobile computation and communication device. UE1003may send traffic to and/or receive traffic (e.g., user plane traffic) from DN1050via RAN1010, RAN1012, and/or UPF/PGW-U1035. RAN1010may be, or may include, a 5G RAN that includes one or more base stations (e.g., one or more gNBs1011), via which UE1003may communicate with one or more other elements of environment1000. UE1003may communicate with RAN1010via an air interface (e.g., as provided by gNB1011). For instance, RAN1010may receive traffic (e.g., voice call traffic, data traffic, messaging traffic, signaling traffic, etc.) from UE1003via the air interface, and may communicate the traffic to UPF/PGW-U1035, and/or one or more other devices or networks. Similarly, RAN1010may receive traffic intended for UE1003(e.g., from UPF/PGW-U1035, AMF1015, and/or one or more other devices or networks) and may communicate the traffic to UE1003via the air interface. RAN1012may be, or may include, an LTE RAN that includes one or more base stations (e.g., one or more eNBs1013), via which UE1003may communicate with one or more other elements of environment1000. UE1003may communicate with RAN1012via an air interface (e.g., as provided by eNB1013). For instance, RAN1012may receive traffic (e.g., voice call traffic, data traffic, messaging traffic, signaling traffic, etc.) from UE1003via the air interface, and may communicate the traffic to UPF/PGW-U1035, and/or one or more other devices or networks. Similarly, RAN1012may receive traffic intended for UE1003(e.g., from UPF/PGW-U1035, SGW1017, and/or one or more other devices or networks) and may communicate the traffic to UE1003via the air interface. 
AMF1015may include one or more devices, systems, Virtualized Network Functions (“VNFs”), etc., that perform operations to register UE1003with the 5G network, to establish bearer channels associated with a session with UE1003, to hand off UE1003from the 5G network to another network, to hand off UE1003from the other network to the 5G network, manage mobility of UE1003between RANs1010and/or gNBs1011, and/or to perform other operations. In some embodiments, the 5G network may include multiple AMFs1015, which communicate with each other via the N14 interface (denoted inFIG.10by the line marked “N14” originating and terminating at AMF1015). MME1016may include one or more devices, systems, VNFs, etc., that perform operations to register UE1003with the EPC, to establish bearer channels associated with a session with UE1003, to hand off UE1003from the EPC to another network, to hand off UE1003from another network to the EPC, manage mobility of UE1003between RANs1012and/or eNBs1013, and/or to perform other operations. SGW1017may include one or more devices, systems, VNFs, etc., that aggregate traffic received from one or more eNBs1013and send the aggregated traffic to an external network or device via UPF/PGW-U1035. Additionally, SGW1017may aggregate traffic received from one or more UPF/PGW-Us1035and may send the aggregated traffic to one or more eNBs1013. SGW1017may operate as an anchor for the user plane during inter-eNB handovers and as an anchor for mobility between different telecommunication networks or RANs (e.g., RANs1010and1012). SMF/PGW-C1020may include one or more devices, systems, VNFs, etc., that gather, process, store, and/or provide information in a manner described herein. SMF/PGW-C1020may, for example, facilitate in the establishment of communication sessions on behalf of UE1003. In some embodiments, the establishment of communications sessions may be performed in accordance with one or more policies provided by PCF/PCRF1025. PCF/PCRF1025may include one or more devices, systems, VNFs, etc., that aggregate information to and from the 5G network and/or other sources. PCF/PCRF1025may receive information regarding policies and/or subscriptions from one or more sources, such as subscriber databases and/or from one or more users (such as, for example, an administrator associated with PCF/PCRF1025). AF1030may include one or more devices, systems, VNFs, etc., that receive, store, and/or provide information that may be used in determining parameters (e.g., quality of service parameters, charging parameters, or the like) for certain applications. UPF/PGW-U1035may include one or more devices, systems, VNFs, etc., that receive, store, and/or provide data (e.g., user plane data). For example, UPF/PGW-U1035may receive user plane data (e.g., voice call traffic, data traffic, etc.), destined for UE1003, from DN1050, and may forward the user plane data toward UE1003(e.g., via RAN1010, SMF/PGW-C1020, and/or one or more other devices). In some embodiments, multiple UPFs1035may be deployed (e.g., in different geographical locations), and the delivery of content to UE1003may be coordinated via the N9 interface (e.g., as denoted inFIG.10by the line marked “N9” originating and terminating at UPF/PGW-U1035). Similarly, UPF/PGW-U1035may receive traffic from UE1003(e.g., via RAN1010, SMF/PGW-C1020, and/or one or more other devices), and may forward the traffic toward DN1050. 
In some embodiments, UPF/PGW-U1035may communicate (e.g., via the N4 interface) with SMF/PGW-C1020, regarding user plane data processed by UPF/PGW-U1035. HSS/UDM1040and AUSF1045may include one or more devices, systems, VNFs, etc., that manage, update, and/or store, in one or more memory devices associated with AUSF1045and/or HSS/UDM1040, profile information associated with a subscriber. AUSF1045and/or HSS/UDM1040may perform authentication, authorization, and/or accounting operations associated with the subscriber and/or a communication session with UE1003. DN1050may include one or more wired and/or wireless networks. For example, DN1050may include an Internet Protocol (“IP”)-based PDN, a wide area network (“WAN”) such as the Internet, a private enterprise network, and/or one or more other networks. UE1003may communicate, through DN1050, with data servers, other UEs1003, and/or other servers or applications that are coupled to DN1050. DN1050may be connected to one or more other networks, such as a public switched telephone network (“PSTN”), a public land mobile network (“PLMN”), and/or another network. DN1050may be connected to one or more devices, such as content providers, applications, web servers, and/or other devices, with which UE1003may communicate. The device1051may include one or more devices, systems, VNFs, etc., that perform one or more operations described herein. For example, the device1051may detect voice commands, facilitate the creation of accounts, and/or perform actions associated with the accounts. FIG.11illustrates an example Distributed Unit (“DU”) network1100, which may be included in and/or implemented by one or more RANs (e.g., RAN1010, RAN1012, or some other RAN). In some embodiments, a particular RAN may include one DU network1100. In some embodiments, a particular RAN may include multiple DU networks1100. In some embodiments, DU network1100may correspond to a particular gNB1011of a 5G RAN (e.g., RAN1010). In some embodiments, DU network1100may correspond to multiple gNBs1011. In some embodiments, DU network1100may correspond to one or more other types of base stations of one or more other types of RANs. As shown, DU network1100may include Central Unit (“CU”)1105, one or more Distributed Units (“DUs”)1103-1through1103-N (referred to individually as “DU1103,” or collectively as “DUs1103”), and one or more Radio Units (“RUs”)1101-1through1101-M (referred to individually as “RU1101,” or collectively as “RUs1101”). CU1105may communicate with a core of a wireless network (e.g., may communicate with one or more of the devices or systems described above with respect toFIG.10, such as AMF1015and/or UPF/PGW-U1035). In the uplink direction (e.g., for traffic from UEs1003to a core network), CU1105may aggregate traffic from DUs1103, and forward the aggregated traffic to the core network. In some embodiments, CU1105may receive traffic according to a given protocol (e.g., Radio Link Control (“RLC”)) from DUs1103, and may perform higher-layer processing (e.g., may aggregate/process RLC packets and generate Packet Data Convergence Protocol (“PDCP”) packets based on the RLC packets) on the traffic received from DUs1103. In accordance with some embodiments, CU1105may receive downlink traffic (e.g., traffic from the core network) for a particular UE1003, and may determine which DU(s)1103should receive the downlink traffic. DU1103may include one or more devices that transmit traffic between a core network (e.g., via CU1105) and UE1003(e.g., via a respective RU1101). 
DU1103may, for example, receive traffic from RU1101at a first layer (e.g., physical (“PHY”) layer traffic, or lower PHY layer traffic), and may process/aggregate the traffic to a second layer (e.g., upper PHY and/or RLC). DU1103may receive traffic from CU1105at the second layer, may process the traffic to the first layer, and provide the processed traffic to a respective RU1101for transmission to UE1003. RU1101may include hardware circuitry (e.g., one or more RF transceivers, antennas, radios, and/or other suitable hardware) to communicate wirelessly (e.g., via an RF interface) with one or more UEs UE1003, one or more other DUs1103(e.g., via RUs1101associated with DUs1103), and/or any other suitable type of device. In the uplink direction, RU1101may receive traffic from UE1003and/or another DU1103via the RF interface and may provide the traffic to DU1103. In the downlink direction, RU1101may receive traffic from DU1103, and may provide the traffic to UE1003and/or another DU1103. RUs1101may, in some embodiments, be communicatively coupled to one or more Multi-Access/Mobile Edge Computing (“MEC”) devices, referred to sometimes herein simply as (“MECs”)1107. For example, RU1101-1may be communicatively coupled to MEC1107-1, RU1101-M may be communicatively coupled to MEC1107-M, DU1103-1may be communicatively coupled to MEC1107-2, DU1103-N may be communicatively coupled to MEC1107-N, CU1105may be communicatively coupled to MEC1107-3, and so on. MECs1107may include hardware resources (e.g., configurable or provisionable hardware resources) that may be configured to provide services and/or otherwise process traffic to and/or from UE1003, via a respective RU1101. For example, RU1101-1may route some traffic, from UE1003, to MEC1107-1instead of to a core network (e.g., via DU1103and CU1105). MEC1107-1may process the traffic, perform one or more computations based on the received traffic, and may provide traffic to UE1003via RU1101-1. In this manner, ultra-low latency services may be provided to UE1003, as traffic does not need to traverse DU1103, CU1105, and an intervening backhaul network between DU network1100and the core network. In some embodiments, MEC1107may include, and/or may implement some or all of the functionality described above with respect to the device1051, such as a voice command device, a communication device, an authentication service, a messaging service, a service, and/or a user device. FIG.12is an illustration of a scenario1200involving an example non-transitory machine readable medium1202. The non-transitory machine readable medium1202may comprise processor-executable instructions1212that when executed by a processor1216cause performance (e.g., by the processor1216) of at least some of the provisions herein. The non-transitory machine readable medium1202may comprise a memory semiconductor (e.g., a semiconductor utilizing static random access memory (SRAM), dynamic random access memory (DRAM), and/or synchronous dynamic random access memory (SDRAM) technologies), a platter of a hard disk drive, a flash memory device, or a magnetic or optical disc (such as a compact disk (CD), a digital versatile disk (DVD), or floppy disk). The example non-transitory machine readable medium1202stores computer-readable data1204that, when subjected to reading1206by a reader1210of a device1208(e.g., a read head of a hard disk drive, or a read operation invoked on a solid-state storage device), express the processor-executable instructions1212. 
In some embodiments, the processor-executable instructions1212, when executed, cause performance of operations, such as at least some of the example method200ofFIG.2, example method300ofFIG.3, and/or the example method500ofFIG.5, for example. In some embodiments, the processor-executable instructions1212are configured to cause implementation of a system, such as at least some of the example system100ofFIG.1, for example. As used in this application, “component,” “module,” “system,” “interface,” and/or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Unless specified otherwise, “first,” “second,” and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object. Moreover, “example” is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used herein, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims. Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Various operations of embodiments are provided herein. 
In an embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering may be implemented without departing from the scope of the disclosure. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments. Also, although the disclosure has been shown and described with respect to one or more implementations, alterations and modifications may be made thereto and additional embodiments may be implemented based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications, alterations and additional embodiments and is limited only by the scope of the following claims. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense. To the extent the aforementioned implementations collect, store, or employ personal information of individuals, groups or other entities, it should be understood that such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information can be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information. Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various access control, encryption and anonymization techniques for particularly sensitive information.
75,525
11862176
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent in light of this disclosure. DETAILED DESCRIPTION Generally, this disclosure provides techniques for speaker recognition from captured audio, regardless of whether the audio is captured in the near-field or the far-field of a microphone. When audio is captured in the far-field, typically greater than about three feet from the microphone, various environmental effects, including reflections from walls and object surfaces, can distort the audio. This distortion, which is also referred to as reverberation, can vary in character from one room or environment to the next, and can also vary with changing distances between the user and the microphone in any given environment. In order for speaker recognition systems to be effective, they must generally be trained for each user and the characteristics of the training and authentication audio signals should match. This is not the case, however, in the typical usage scenario where training is performed in the near-field and authentication is performed in the far-field. The disclosed techniques improve speaker recognition performance through the use of reverberation compensation to simulate and adjust for mismatches between training and authentication signals due to far-field environmental effects resulting, for example, from varying distances between the user and the microphone, whether in the near-field or the far-field. Capabilities are provided to train and operate a reverberation compensated speaker recognition system and to configure a reverberation simulator for use in such a system, tailored to a particular environment. In accordance with an embodiment, the disclosed techniques can be implemented, for example, in a computing system or an audio processing system, or a software product executable or otherwise controllable by such systems. The system or product is configured to receive an audio signal associated with speech of a user, extract features from that audio signal, and score the results of application of one or more speaker models to the extracted features. Each of the speaker models is trained based on a training audio signal from a known user, typically captured in a near-field of a microphone, and processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with that speaker model (and corresponding user). The system is further configured to select one of the speaker models, based on the score, and map the selected speaker model to a known speaker identification or label that is associated with the user. Thus, the models effectively allow a far-field speaking person in a given room or environment to be identified, because the model effectively represents what that person's voice sounds like when it is encumbered by reverberation. Said differently, by matching a known user model to a sample of far-field speaker utterances, that far-field speaker can be assumed to be the person for which that model was made. The techniques described herein may allow for improved speaker recognition, compared to existing methods that fail to account for far-field environmental effects that can distort the captured audio, according to an embodiment. 
Additionally, these disclosed techniques do not require pre-processing of the captured audio to eliminate reverberation, which can also remove useful information in the speech signal. The disclosed techniques can be implemented on a broad range of computing and communication platforms, including mobile devices, since the techniques do not require expensive far-field microphones or specialized microphone configurations. These techniques may further be implemented in hardware or software or a combination thereof. FIG.1is a top-level diagram100of an implementation of a system for speaker recognition with reverberation compensation, configured in accordance with certain embodiments of the present disclosure. A speaker recognition system106is shown to be located in an acoustic environment120, such as a conference room, office, home living room, etc. The recognition system106is configured to perform reverberation compensation for improved recognition performance, in accordance with an embodiment of the disclosed techniques. A speaker or user102of the system produces speech, for example, in the form of utterances of words, which are captured by microphone104. The user102may be relatively close to the microphone, for example in the near-field130, or may be relatively far from the microphone, at any distance in the far-field132. In some embodiments, the distance threshold separating near-field from far-field may be approximately three feet. The captured audio110is provided from the microphone104to the speaker recognition system106, which generates a speaker ID or other label that identifies the user102as one of a number of known speakers that the system was trained to recognize. FIG.2is a block diagram of a speaker recognition system106with reverberation compensation, configured in accordance with certain embodiments of the present disclosure. The speaker recognition system106is shown to include a recognition circuit204, a training circuit202, a reverberation simulator circuit208, and a reverberation simulator configuration circuit206, the operations of which will be explained in greater detail below in connection with the following figures. At a high level, however, the reverberation simulator circuit208is configured to generate a number of processed audio signals, based on the captured audio signal110, to which varying types of reverberation have been applied. Each reverberation processed audio signal is intended to model different far-field effects of a particular acoustic environment120. The training circuit202is configured to generate a number of speaker recognition models, for example during a training or user enrollment mode of the system. One of the models is based on a training audio signal, typically, but not necessarily, captured in the near-field. The other models are based on the reverberation processed versions of the training signal, which simulate far-field effects applied to the training signal. The recognition circuit204is configured to recognize the speaker's voice, during an operational (also referred to as authentication) mode of the system, and identify the speaker using the speaker recognition models provided by the training circuit, thus enabling recognition of speech from either near-field or far-field. The reverberation simulator configuration circuit206generates configuration parameters to control the reverberation characteristics that will be applied to the training signal to improve the simulation of far-field effects. 
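As a concrete illustration of the role of reverberation simulator circuit208, the following sketch applies a simple Schroeder-style reverberator (parallel feedback comb filters followed by series allpass filters), one of the options the disclosure contemplates. The delay times, gains, and the way "room size," "damping," and "effect mix" map onto them are assumptions made only for this example, and a stereo width control is omitted.

    # Minimal Schroeder-style reverberation sketch (illustrative assumptions only).
    import numpy as np

    def _feedback_comb(x, delay, feedback):
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
        return y

    def _allpass(x, delay, gain):
        y = np.zeros_like(x)
        for n in range(len(x)):
            x_d = x[n - delay] if n >= delay else 0.0
            y_d = y[n - delay] if n >= delay else 0.0
            y[n] = -gain * x[n] + x_d + gain * y_d
        return y

    def schroeder_reverb(x, sample_rate=16000, room_size=0.5, damping=0.5, mix=0.3):
        """Parallel combs followed by allpasses; parameter mappings are assumed."""
        base_delays = np.array([0.030, 0.034, 0.039, 0.045])              # seconds
        comb_delays = (base_delays * (0.5 + room_size) * sample_rate).astype(int)
        feedback = 0.8 * (1.0 - 0.5 * damping)                            # assumed mapping
        wet = sum(_feedback_comb(x, d, feedback) for d in comb_delays) / len(comb_delays)
        for d, g in [(int(0.005 * sample_rate), 0.7), (int(0.0017 * sample_rate), 0.7)]:
            wet = _allpass(wet, d, g)
        return (1.0 - mix) * x + mix * wet                                # dry/wet effect mix

    if __name__ == "__main__":
        near_field = np.random.randn(16000)          # stand-in for captured audio 110
        far_field_like = schroeder_reverb(near_field, room_size=0.8, mix=0.5)
        print(far_field_like.shape)

Such a simulator can be run several times with different parameter settings to produce the multiple reverberation processed versions of a single captured signal described above.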
FIG.3is a more detailed block diagram of a training circuit202, configured in accordance with certain embodiments of the present disclosure. The training circuit202is shown to include the reverberation simulator circuit208, a feature extraction circuit302, a speaker model generation circuit306, and speaker model storage308. Users of the speaker recognition system106(e.g., people who will later be identified by their speech) are enrolled in the system through a process that trains speaker models to their voice. During the training process, each user speaks a few words or phrases, referred to as training audio110a, into a microphone. The training audio is spoken at a relatively close range to the microphone, typically such that the audio is captured within the near-field of the microphone, although this is not required so long as the training audio is captured at a closer distance than the subsequent authentication audio. The reverberation simulator circuit208is configured to generate one or more (N) processed training audio signals320by applying reverberation effects to the captured training audio signal110a. The reverberation effects that are applied to each of the processed training audio signals are generated to simulate far-field environmental acoustic effects. For example, each of the N processed training audio signals may comprise a unique acoustic effect that is associated with a particular spatial relationship between the speaker and the microphone and the characteristics of the room. In other words, each of the N processed signals320simulates enrollment conditions as though the speaker were located at a greater distance from the microphone. In some embodiments, the reverberation simulator circuit208may be a Schroeder reverberator and may be configured by adjusting any number of reverberation parameters, as will be described in greater detail below. In some embodiments, the reverberation simulator circuit208may use other known techniques in light of the present disclosure. The feature extraction circuit302is configured to generate a set of extracted features for the captured training audio signal110aand for each of the processed training audio signals320. So, for example, there can be N+1 sets of extracted audio features. The features may be any types of acoustic features of speech that can be used to distinguish between speakers. Such features may include, for example, pitch, spectral-based features, linear prediction based features, cepstral coefficients, and other behavioral and anatomical based features. The speaker model generation circuit306is configured to generate a number of speaker models, each model associated, for example, with the speaker ID for the user being enrolled. Each of the speaker models may be based on one of the (N+1) feature sets. Speaker model generation may be performed using known techniques in light of the present disclosure. Assuming there are K known and enrolled speakers, there may thus be K*(N+1) generated speaker models. The enrollment or training process for each speaker may be performed separately, and typically, though not necessarily, at different times. Speaker model storage308is configured to store these K*(N+1) speaker models for subsequent use, for example, by recognition circuit204, as will be described below. FIG.4is a more detailed block diagram of a recognition circuit204, configured in accordance with certain embodiments of the present disclosure. 
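Before turning to the recognition circuit of FIG.4, the enrollment flow just described can be summarized in a short sketch. The feature extractor and the diagonal-Gaussian "model" below are deliberately crude stand-ins chosen only so the example runs end to end; the disclosure leaves the concrete feature types and modeling technique open, and every name in the sketch is an assumption.

    # Sketch of enrollment in training circuit 202: one clean model plus N
    # reverberation-compensated models per speaker (N+1 total per speaker).
    import numpy as np

    def extract_features(audio, frame=400, hop=160):
        """Crude per-frame log-spectral features (assumed placeholder)."""
        frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, hop)]
        spectra = [np.log(np.abs(np.fft.rfft(f * np.hanning(frame))) + 1e-8) for f in frames]
        return np.array(spectra)

    def train_model(features):
        """Model a speaker as a diagonal Gaussian over frame features (assumption)."""
        return {"mean": features.mean(axis=0), "var": features.var(axis=0) + 1e-6}

    def enroll(speaker_id, training_audio, reverb_settings, reverb_fn):
        """Return the (N+1) (speaker_id, model) pairs for one enrolled speaker."""
        signals = [training_audio] + [reverb_fn(training_audio, **p) for p in reverb_settings]
        return [(speaker_id, train_model(extract_features(s))) for s in signals]

    # Hypothetical usage, assuming a reverb function such as the earlier sketch:
    # model_store = []                                   # plays the role of storage 308
    # settings = [{"room_size": 0.3, "mix": 0.3}, {"room_size": 0.8, "mix": 0.5}]
    # model_store += enroll("alice", alice_audio, settings, schroeder_reverb)

Enrolling K speakers this way yields the K*(N+1) stored models referenced above.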
The recognition circuit204is shown to include the feature extraction circuit302, a speaker model scoring circuit402, a speaker model selection circuit404, a model to speaker ID mapping circuit406, and speaker model storage308. During authentication mode, as opposed to the training mode described above, the system106attempts to identify an unknown user based on a sample of their speech. The speech sample, captured audio110b, may be captured from either the near-field or the far-field of the microphone104. The user may or may not be enrolled in the system. If the user is enrolled, the system may identify the user by the speaker ID provided during training. In some embodiments, if the user is not enrolled, the system may indicate that the user is not identified. The feature extraction circuit302is configured to extract features from the captured audio110b, also referred to as an authentication audio signal, associated with speech of a user to be identified. In some embodiments, feature extraction circuit302may be shared with the feature extraction circuit302used in the training circuit202. In some embodiments, feature extraction circuit302may be implemented as a separate circuit or module. In either case feature extraction circuit302is configured to extract acoustic features of speech that can be used to distinguish between speakers. The speaker model scoring circuit402is configured to apply one or more of the speaker models, for example from speaker model storage308, to the extracted features, and to score the results for each application. Speaker models that were trained on audio, which more closely simulates the environment in which the authentication audio110bwas captured, can be expected to produce higher scores. The speaker model selection circuit404is configured to select one of the speaker models based on the scores. For example, in some embodiments, the speaker model that is associated with the highest score is selected. As will be appreciated, however, other embodiments may employ some other selection criterion that is statistically relevant for a given application, such as the model associated with the penultimate score or the model associated with the score within a certain established range (and not necessarily the highest score). The selected model is passed to the speaker ID mapping circuit406, which is configured to map the selected speaker model (e.g., the one with the highest score) to the known speaker ID associated with that model. Once the speaker ID is known, the actual speaker can thus be identified. FIG.5is a more detailed block diagram of a reverberation simulator configuration circuit206, configured in accordance with certain embodiments of the present disclosure. The reverberation simulator configuration circuit206is shown to include, a reverberation parameter selection circuit502, the reverberation simulator circuit208, the feature extraction circuit302, the speaker model generation circuit306, speaker model storage308, the speaker model scoring circuit402, a score summation circuit504, a reverberation model selection circuit506, and a parameter optimization circuit508. The reverberation simulator configuration circuit206is configured to select one or more sets of reverberation parameters such that the reverberation simulator circuit generates reverberation that most closely simulates a variety of far-field environmental acoustic effects that are associated with the room or environment in which speaker recognition is to be performed. 
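Before detailing FIG.5, the recognition flow just described can likewise be sketched. The log-likelihood scorer below matches the diagonal-Gaussian stand-ins of the enrollment sketch, and the optional rejection threshold for un-enrolled users is an assumption; the disclosure only requires that some statistically relevant selection criterion be applied to the scores.

    # Sketch of recognition circuit 204: score every stored model, select the best,
    # and map it back to its known speaker ID (names and scorer are assumptions).
    import numpy as np

    def score(model, features):
        """Mean per-frame log-likelihood under a diagonal Gaussian."""
        diff = features - model["mean"]
        ll = -0.5 * (np.log(2 * np.pi * model["var"]) + diff ** 2 / model["var"])
        return ll.sum(axis=1).mean()

    def identify(auth_features, model_store, reject_threshold=None):
        """Return the speaker ID of the best-scoring model (or None if rejected)."""
        best_id, best_score = None, -np.inf
        for speaker_id, model in model_store:       # model_store holds (ID, model) pairs
            s = score(model, auth_features)
            if s > best_score:
                best_id, best_score = speaker_id, s
        if reject_threshold is not None and best_score < reject_threshold:
            return None                             # treat as an un-enrolled user
        return best_id

Because some of the stored models were trained on reverberation-processed audio, far-field authentication utterances tend to score highest against those models, while still mapping to the correct speaker ID.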
The reverberation parameter selection circuit502is configured to select an initial trial set of reverberation parameters from a population of trial parameter sets. A parameter set may include, for example, an effect mix parameter, a room size parameter, a damping parameter, and a stereo width parameter. Other known reverberation parameters, in light of the present disclosure, may also be included. The population of trial parameter sets may encompass a range of possible values for reverberation parameters of interest. As a simplified example, if there are four parameters of interest, and each parameter can be represented by a value in a continuous range between 0 and 1, then the population may include 4-valued tuples with each parameter chosen at a fixed increment over the possible range of values. If the increment is chosen as 0.5, then the values would be 0, 0.5, and 1 for each parameter, and there would be 81 possible tuples (3×3×3×3). A first audio signal110c, that includes user speech, is obtained. This may be the training audio data110athat was captured and stored at an earlier time, for example, during the enrollment process. The reverberation simulator circuit208is configured to add reverberation to that signal, based on the current trial parameter set, to generate a processed audio signal that simulates a far-field environmental effect. The feature extraction circuit302is configured to extract features from the processed audio signal, and the speaker model generation circuit306is configured to generate a reverberation compensated speaker model based on the extracted features, as previously described for the training and recognition modes. In some embodiments, the speaker model may be stored in speaker model storage308for the duration of the reverb simulator configuration. One or more additional audio signals110dare captured from a variety of locations, all in the far-field of the microphone, or at least at a greater distance from the microphone than from where the first audio signal110cwas captured. These additional audio signals include speech from the same user that produced the first audio signal110c. In some embodiments, facial detection or other identity verification techniques may be employed to ensure that the same speaker is providing audio signals110cand110d. The feature extraction circuit302is further configured to extract features from each of these additional audio signals110d. The speaker model scoring circuit402is configured to score results of application of the generated speaker model to the extracted features of each of the additional audio signals110d. In some embodiments, this process may be repeated for multiple users. For example, near-field audio110cand far-field audio110dmay be captured from a second user, a third user, etc. For each user, a reverberation compensated speaker model is generated and its performance against the far-field audio110dis scored. The score summation circuit504is configured to associate a summation of the scores (possibly from multiple users) with the current trial set of reverberation parameters. The score summation may indicate the effectiveness of the trial set of parameters at modelling the far-field effects captured in the additional audio signals110d. In some embodiments, other score based statistics, besides summation, may be used as an indication of parameter quality. 
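The population of trial parameter sets and the scoring loop over it can be illustrated together in one sketch. The grid below reproduces the 81-tuple example above (three values per parameter, four parameters); the helper callables are passed in so the fragment stays self-contained, the optimization-guided search of parameter optimization circuit508is not shown, and all names are assumptions.

    # Sketch of the simulator-configuration search (circuits 502, 504, and 506).
    from itertools import product

    PARAM_NAMES = ("effect_mix", "room_size", "damping", "stereo_width")
    GRID_VALUES = (0.0, 0.5, 1.0)

    # Every combination of the four parameters at the three grid values: 3**4 = 81 tuples.
    trial_sets = [dict(zip(PARAM_NAMES, values))
                  for values in product(GRID_VALUES, repeat=len(PARAM_NAMES))]

    def configure_reverb(near_audio, far_audios, trial_sets, reverb_fn,
                         feature_fn, train_fn, score_fn, top_m=3):
        """Score every trial parameter set and return the top_m as operational sets."""
        scored = []
        for params in trial_sets:
            # reverb_fn must accept the parameter names used in trial_sets.
            processed = reverb_fn(near_audio, **params)       # simulate far-field capture
            model = train_fn(feature_fn(processed))           # reverberation-compensated model
            total = sum(score_fn(model, feature_fn(fa)) for fa in far_audios)
            scored.append((total, params))                    # score summation (circuit 504)
        scored.sort(key=lambda item: item[0], reverse=True)   # rank the trial sets
        return [params for _, params in scored[:top_m]]       # operational sets (circuit 506)

    print(len(trial_sets))   # 81

This brute-force grid search makes the roles of the circuits concrete; in practice, the optimization circuit would propose updated trial sets rather than exhaustively scoring every tuple.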
The reverberation parameter selection circuit502may then select the next trial set of parameters from the population, and the process described above is repeated to generate another scoring statistic (e.g., summation of scores) for that parameter set. The process continues until all desired parameter sets have been similarly scored. The reverberation model selection circuit506is configured to generate operational reverberation models, each model employing a trial set of reverberation parameters selected based on the scoring statistic. For example, the top M scoring trial parameter sets may be assigned as operational parameter sets for M reverberation simulators. In some embodiments, a parameter optimization circuit508is configured to generate an updated trial set of reverberation parameters for the reverberation simulator using an optimization algorithm based on the scoring statistics calculated over the employed trial sets of reverberation parameters. In this way, the reverberation parameter selection circuit may be guided in the choice of trial parameter sets, rather than sequentially searching through every possible set. In some embodiments, the optimization algorithm may be a genetic algorithm or a gradient descent algorithm, although other known optimization techniques, in light of the present disclosure, may be employed. Methodology FIG.6is a flowchart illustrating an example method600for speaker recognition with reverberation compensation, in accordance with certain embodiments of the present disclosure. As can be seen, example method600includes a number of phases and sub-processes, the sequence of which may vary from one embodiment to another. However, when considered in the aggregate, these phases and sub-processes form a process for speaker recognition in accordance with certain of the embodiments disclosed herein. These embodiments can be implemented, for example, using the system architecture illustrated inFIGS.3and4as described above. However, other system architectures can be used in other embodiments, as will be apparent in light of this disclosure. To this end, the correlation of the various functions shown inFIG.6to the specific components illustrated in the other figures is not intended to imply any structural and/or use limitations. Rather, other embodiments may include, for example, varying degrees of integration wherein multiple functionalities are effectively performed by one system. For example, in an alternative embodiment, a single module can be used to perform all of the functions of method600. Thus, other embodiments may have fewer or more modules and/or sub-modules depending on the granularity of implementation. In still other embodiments, the methodology depicted can be implemented as a computer program product including one or more non-transitory machine readable mediums that when executed by one or more processors cause the methodology to be carried out. Numerous variations and alternative configurations will be apparent in light of this disclosure. As illustrated inFIG.6, in one embodiment, method600for speaker recognition with reverberation compensation commences by receiving, at operation610, an authentication audio signal associated with speech of a user to be identified. The authentication audio signal may include any sort of utterance by the user and may be captured by a microphone in either the near-field or the far-field of the microphone. Next, at operation620, features are extracted from the authentication audio signal. 
The features may be any acoustic features of speech that may be used to distinguish between speakers. At operation630, one or more speaker models are applied to the extracted features and the results are scored. The speaker models are trained on training audio signals, from a number of known and identified users, which are processed by a reverberation simulator to simulate a variety of far-field environmental effects to be associated with each speaker model. In some embodiments, the training audio signals from the various known users are captured within the environment in which the speaker recognition system is to be deployed. In this way, the models can more precisely represent the reverberation effect of the environment on the utterances of the known users. In any case, the models allow the correct speaker to be identified, and the robustness of the models can vary from one embodiment to the next. At operation640, one of the speaker models is selected based on the score. For example, the speaker model that results in the highest score may be selected. At operation650, the selected speaker model is mapped to a known speaker ID that is to be associated with the now recognized user. FIG.7is a flowchart illustrating a methodology for configuration of a reverberation simulator, in accordance with certain embodiments of the present disclosure. As can be seen, example method700includes a number of phases and sub-processes, the sequence of which may vary from one embodiment to another. However, when considered in the aggregate, these phases and sub-processes form a process for configuration of a reverberation simulator in accordance with certain of the embodiments disclosed herein. These embodiments can be implemented, for example using the system architecture illustrated inFIG.5as described above. However other system architectures can be used in other embodiments, as will be apparent in light of this disclosure. To this end, the correlation of the various functions shown inFIG.7to the specific components illustrated in the other figures is not intended to imply any structural and/or use limitations. Rather, other embodiments may include, for example, varying degrees of integration wherein multiple functionalities are effectively performed by one system. For example, in an alternative embodiment a single module can be used to perform all of the functions of method700. Thus other embodiments may have fewer or more modules and/or sub-modules depending on the granularity of implementation. In still other embodiments, the methodology depicted can be implemented as a computer program product including one or more non-transitory machine readable mediums that when executed by one or more processors cause the methodology to be carried out. Numerous variations and alternative configurations will be apparent in light of this disclosure. As illustrated inFIG.7, in one embodiment, method700for configuration of a reverberation simulator commences by receiving, at operation710, a first audio signal associated with speech of a user. In some embodiments, the first audio signal is the enrollment audio signal used in training. In some embodiments, the first audio signal may be captured in a near-field of the microphone. Next, at operation720, a trial set of parameters for the reverberation simulator is selected. At operation730, the reverberation simulator is applied to the first audio signal, using the trial parameters, and features are extracted from the resulting signal. 
A speaker model is then generated based on those extracted features. At operation740, one or more additional audio signals, associated with speech of the same user as in operation710above, are received. These additional audio signals are captured in a far-field of the microphone and/or at a distance greater than the distance at which the first audio signal was captured. At operation750, the speaker model is applied to extracted features of each of these additional audio signals and a score is generated for the results of each application. At operation760, a summation of the scores is associated with the trial set of parameters. Of course, in some embodiments, additional operations may be performed, as previously described in connection with the system. For example, the trial set of parameters may be selected as an operational set of parameters for the reverberation simulator based on the summation of scores associated with the trial set of parameters. Further additional operations may include generating an updated trial set of parameters for the reverberation simulator using an optimization algorithm based on the summation of scores. In some embodiments, the optimization algorithm may be a genetic algorithm or a gradient descent algorithm. Example System FIG.8illustrates an example system800to perform speaker recognition with reverberation compensation, configured in accordance with certain embodiments of the present disclosure. In some embodiments, system800comprises a platform810which may host, or otherwise be incorporated into, a personal computer, workstation, laptop computer, ultra-laptop computer, tablet, touchpad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone and PDA, smart device (for example, smartphone or smart tablet), mobile internet device (MID), messaging device, data communication device, and so forth. Any combination of different devices may be used in certain embodiments. In some embodiments, platform810may comprise any combination of a processor820, a memory830, speaker recognition system106, a network interface840, an input/output (I/O) system850, a microphone104, a user interface860, and a storage system870. As can be further seen, a bus and/or interconnect892is also provided to allow for communication between the various components listed above and/or other components not shown. Platform810can be coupled to a network894through network interface840to allow for communications with other computing devices, platforms or resources. Other componentry and functionality not reflected in the block diagram ofFIG.8will be apparent in light of this disclosure, and it will be appreciated that other embodiments are not limited to any particular hardware configuration. Processor820can be any suitable processor, and may include one or more coprocessors or controllers, such as an audio processor or a graphics processing unit, to assist in control and processing operations associated with system800. In some embodiments, the processor820may be implemented as any number of processor cores. The processor (or processor cores) may be any type of processor, such as, for example, a micro-processor, an embedded processor, a digital signal processor (DSP), a graphics processor (GPU), a network processor, a field programmable gate array or other device configured to execute code. 
The processors may be multithreaded cores in that they may include more than one hardware thread context (or “logical processor”) per core. Processor820may be implemented as a complex instruction set computer (CISC) or a reduced instruction set computer (RISC) processor. In some embodiments, processor820may be configured as an x86 instruction set compatible processor. Memory830can be implemented using any suitable type of digital storage including, for example, flash memory and/or random access memory (RAM). In some embodiments, the memory830may include various layers of memory hierarchy and/or memory caches as are known to those of skill in the art. Memory830may be implemented as a volatile memory device such as, but not limited to, a RAM, dynamic RAM (DRAM), or static RAM (SRAM) device. Storage system870may be implemented as a non-volatile storage device such as, but not limited to, one or more of a hard disk drive (HDD), a solid state drive (SSD), a universal serial bus (USB) drive, an optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network accessible storage device. In some embodiments, storage870may comprise technology to provide increased storage performance and enhanced protection for valuable digital media when multiple hard drives are included. Processor820may be configured to execute an Operating System (OS)880which may comprise any suitable operating system, such as Google Android (Google Inc., Mountain View, CA), Microsoft Windows (Microsoft Corp., Redmond, WA), or Apple OS X (Apple Inc., Cupertino, CA). As will be appreciated in light of this disclosure, the techniques provided herein can be implemented without regard to the particular operating system provided in conjunction with system800, and therefore may also be implemented using any suitable existing or subsequently-developed platform. Network interface circuit840can be any appropriate network chip or chipset which allows for wired and/or wireless connection between other components of computer system800and/or network894, thereby enabling system800to communicate with other local and/or remote computing systems, servers, and/or resources. Wired communication may conform to existing (or yet to be developed) standards, such as, for example, Ethernet. Wireless communication may conform to existing (or yet to be developed) standards, such as, for example, cellular communications including LTE (Long Term Evolution), Wireless Fidelity (Wi-Fi), Bluetooth, and/or Near Field Communication (NFC). Exemplary wireless networks include, but are not limited to, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, cellular networks, and satellite networks. I/O system850may be configured to interface between various I/O devices and other components of computer system800. I/O devices may include, but not be limited to, a microphone104, a user interface860, and other devices not shown, such as a keyboard, mouse, speaker, etc. It will be appreciated that in some embodiments, the various components of the system800may be combined or integrated in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software. 
Speaker recognition system106is configured to perform recognition of the identity of a speaker based on captured audio from either the near-field or the far-field of a microphone. The recognition is based on speaker models trained from audio samples, typically, but not necessarily, captured in the near-field of the microphone, and processed by a reverberation simulator to simulate selected far-field environmental effects. Speaker models trained in this manner, to include reverberation compensation, provide more accurate recognition performance over a greater range of environmental conditions and range of distances between the speaker and the microphone. Speaker recognition system106may include any or all of the components illustrated inFIGS.1-5, as described above. Speaker recognition system106can be implemented or otherwise used in conjunction with a variety of suitable software and/or hardware that is coupled to or that otherwise forms a part of platform810. Speaker recognition system106can additionally or alternatively be implemented or otherwise used in conjunction with user I/O devices that are capable of providing information to, and receiving information and commands from, a user. These I/O devices may include microphone104, and other devices collectively referred to as user interface860. In some embodiments, user interface860may include a textual input device such as a keyboard, and a pointer-based input device such as a mouse. Other input/output devices that may be used in other embodiments include a display element, touchscreen, a touchpad, and/or a speaker. Still other input/output devices can be used in other embodiments. In some embodiments, speaker recognition system106may be installed local to system800, as shown in the example embodiment ofFIG.8. Alternatively, system800can be implemented in a client-server arrangement wherein at least some functionality associated with these circuits is provided to system800using an applet, such as a JavaScript applet, or other downloadable module. Such a remotely accessible module or sub-module can be provisioned in real-time, in response to a request from a client computing system for access to a given server having resources that are of interest to the user of the client computing system. In such embodiments the server can be local to network894or remotely coupled to network894by one or more other networks and/or communication channels. In some cases access to resources on a given network or computing system may require credentials such as usernames, passwords, and/or compliance with any other suitable security mechanism. In various embodiments, system800may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system800may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennae, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the radio frequency spectrum and so forth. When implemented as a wired system, system800may include components and interfaces suitable for communicating over wired communications media, such as input/output adapters, physical connectors to connect the input/output adaptor with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and so forth. 
Examples of wired communications media may include a wire, cable metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted pair wire, coaxial cable, fiber optics, and so forth. Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (for example, transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices, digital signal processors, FPGAs, logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power level, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints. Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. The various embodiments disclosed herein can be implemented in various forms of hardware, software, firmware, and/or special purpose processors. For example, in one embodiment at least one non-transitory computer readable storage medium has instructions encoded thereon that, when executed by one or more processors, cause one or more of the speaker recognition methodologies disclosed herein to be implemented. The instructions can be encoded using a suitable programming language, such as C, C++, object oriented C, Java, JavaScript, Visual Basic .NET, Beginner's All-Purpose Symbolic Instruction Code (BASIC), or alternatively, using custom or proprietary instruction sets. The instructions can be provided in the form of one or more computer software applications and/or applets that are tangibly embodied on a memory device, and that can be executed by a computer having any suitable architecture. In one embodiment, the system can be hosted on a given website and implemented, for example, using JavaScript or another suitable browser-based technology. For instance, in certain embodiments, the system may leverage processing resources provided by a remote computer system accessible via network894. In other embodiments, the functionalities disclosed herein can be incorporated into other software applications, such as speech recognition applications, security and user identification applications, and/or other audio processing applications. 
The computer software applications disclosed herein may include any number of different modules, sub-modules, or other components of distinct functionality, and can provide information to, or receive information from, still other components. These modules can be used, for example, to communicate with input and/or output devices such as a display screen, a touch sensitive surface, a printer, and/or any other suitable device. Other componentry and functionality not reflected in the illustrations will be apparent in light of this disclosure, and it will be appreciated that other embodiments are not limited to any particular hardware or software configuration. Thus in other embodiments system800may comprise additional, fewer, or alternative subcomponents as compared to those included in the example embodiment ofFIG.8. The aforementioned non-transitory computer readable medium may be any suitable medium for storing digital information, such as a hard drive, a server, a flash memory, and/or random access memory (RAM), or a combination of memories. In alternative embodiments, the components and/or modules disclosed herein can be implemented with hardware, including gate level logic such as a field-programmable gate array (FPGA), or alternatively, a purpose-built semiconductor such as an application-specific integrated circuit (ASIC). Still other embodiments may be implemented with a microcontroller having a number of input/output ports for receiving and outputting data, and a number of embedded routines for carrying out the various functionalities disclosed herein. It will be apparent that any suitable combination of hardware, software, and firmware can be used, and that other embodiments are not limited to any particular system architecture. Some embodiments may be implemented, for example, using a machine readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, process, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and/or storage unit, such as memory, removable or non-removable media, erasable or non-erasable media, writeable or rewriteable media, digital or analog media, hard disk, floppy disk, compact disk read only memory (CD-ROM), compact disk recordable (CD-R) memory, compact disk rewriteable (CR-RW) memory, optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high level, low level, object oriented, visual, compiled, and/or interpreted programming language. 
Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like refer to the action and/or process of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (for example, electronic) within the registers and/or memory units of the computer system into other data similarly represented as physical quantities within the registers, memory units, or other such information storage transmission or displays of the computer system. The embodiments are not limited in this context. The terms “circuit” or “circuitry,” as used in any embodiment herein, are functional and may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The circuitry may include a processor and/or controller configured to execute one or more instructions to perform one or more operations described herein. The instructions may be embodied as, for example, an application, software, firmware, etc. configured to cause the circuitry to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a computer-readable storage device. Software may be embodied or implemented to include any number of processes, and processes, in turn, may be embodied or implemented to include any number of threads, etc., in a hierarchical fashion. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc. Other embodiments may be implemented as software executed by a programmable control device. In such cases, the terms “circuit” or “circuitry” are intended to include a combination of software and hardware such as a programmable control device or a processor capable of executing the software. As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. It will be understood by an ordinarily-skilled artisan, however, that the embodiments may be practiced without these specific details. In other instances, well known operations, components and circuits have not been described in detail so as not to obscure the embodiments. 
It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments. In addition, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described herein. Rather, the specific features and acts described herein are disclosed as example forms of implementing the claims. Further Example Embodiments The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent. Example 1 is a method for speaker recognition. The method comprises: receiving an authentication audio signal associated with speech of a user; extracting features from the authentication audio signal; scoring results of application of one or more speaker models to the extracted features, wherein each of the speaker models is trained based on a training audio signal, the training audio signal processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with the speaker model; selecting one of the speaker models based on the score; and mapping the selected speaker model to a known speaker identification (ID) associated with the user. Example 2 includes the subject matter of Example 1, wherein the training of the speaker models further comprises: capturing a plurality of the training audio signals from a plurality of users; receiving a known speaker ID for each of the users; and processing each of the plurality of training audio signals by the reverberation simulator to generate a plurality of reverberation processed training audio signals for each of the training audio signals, wherein each of the reverberation processed training audio signals is associated with a unique far-field environmental effect. Example 3 includes the subject matter of Examples 1 or 2, wherein the training of the speaker models further comprises: generating feature sets of extracted features from each of the training audio signals and from each of the reverberation processed training audio signals; generating speaker models based on each feature set; and assigning the associated known speaker ID with the generated speaker model. Example 4 includes the subject matter of any of Examples 1-3, wherein the authentication audio signal is captured in a far-field of the microphone and the training audio signal is captured in a near-field of the microphone. Example 5 includes the subject matter of any of Examples 1-4, wherein the far-field is a distance greater than three feet from the microphone and the near-field is a distance closer than three feet from the microphone. Example 6 is a method for configuring a reverberation simulator for speaker recognition. 
The method comprises: receiving a first audio signal associated with speech of a user, the first audio signal captured at a first distance from a microphone; selecting a trial set of parameters for a reverberation simulator; generating a speaker model based on extracted features of an application of the reverberation simulator to the first audio signal; receiving one or more additional audio signals associated with speech of the user, the additional audio signals captured at a second distance from the microphone, the second distance greater than the first distance; scoring results of application of the speaker model to extracted features of each of the additional audio signals; and associating a summation of the scores with the trial set of parameters. Example 7 includes the subject matter of Example 6, further comprising selecting the trial set of parameters as an operational set of parameters based on the summation of scores associated with the trial set of parameters. Example 8 includes the subject matter of Examples 6 or 7, further comprising generating an updated trial set of parameters for the reverberation simulator using an optimization algorithm based on the summation of scores. Example 9 includes the subject matter of any of Examples 6-8, wherein the optimization algorithm is one of a genetic algorithm or a gradient descent algorithm. Example 10 includes the subject matter of any of Examples 6-9, wherein the reverberation simulator is a Schroeder reverberator and the reverberation parameters comprise one or more of an effect mix parameter, a room size parameter, a damping parameter, and a stereo width parameter. Example 11 includes the subject matter of any of Examples 6-10, wherein the second distance is in the far-field of the microphone and the first distance is in the near-field of the microphone. Example 12 is at least one non-transitory computer readable storage medium having instructions encoded thereon that, when executed by one or more processors, result in the following operations for speaker recognition. The operations comprise: receiving an authentication audio signal associated with speech of a user; extracting features from the authentication audio signal; scoring results of application of one or more speaker models to the extracted features, wherein each of the speaker models is trained based on a training audio signal, the training audio signal processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with the speaker model; selecting one of the speaker models based on the score; and mapping the selected speaker model to a known speaker identification (ID) associated with the user. Example 13 includes the subject matter of Example 12, wherein the training of the speaker models further comprises the operations: capturing a plurality of the training audio signals from a plurality of users; receiving a known speaker ID for each of the users; and processing each of the plurality of training audio signals by the reverberation simulator to generate a plurality of reverberation processed training audio signals for each of the training audio signals, wherein each of the reverberation processed training audio signals is associated with a unique far-field environmental effect. 
Example 14 includes the subject matter of Examples 12 or 13, wherein the training of the speaker models further comprises the operations: generating feature sets of extracted features from each of the training audio signals and from each of the reverberation processed training audio signals; generating speaker models based on each feature set; and assigning the associated known speaker ID with the generated speaker model. Example 15 includes the subject matter of any of Examples 12-14, wherein the authentication audio signal is captured in a far-field of the microphone and the training audio signal is captured in a near-field of the microphone. Example 16 includes the subject matter of any of Examples 12-15, wherein the far-field is a distance greater than three feet from the microphone and the near-field is a distance closer than three feet from the microphone. Example 17 is at least one non-transitory computer readable storage medium having instructions encoded thereon that, when executed by one or more processors, result in the following operations for configuring a reverberation simulator for speaker recognition. The operations comprise: receiving a first audio signal associated with speech of a user, the first audio signal captured at a first distance from a microphone; selecting a trial set of parameters for a reverberation simulator; generating a speaker model based on extracted features of an application of the reverberation simulator to the first audio signal; receiving one or more additional audio signals associated with speech of the user, the additional audio signals captured at a second distance from the microphone, the second distance greater than the first distance; scoring results of application of the speaker model to extracted features of each of the additional audio signals; and associating a summation of the scores with the trial set of parameters. Example 18 includes the subject matter of Example 17, the operations further comprising selecting the trial set of parameters as an operational set of parameters based on the summation of scores associated with the trial set of parameters. Example 19 includes the subject matter of Examples 17 or 18, the operations further comprising generating an updated trial set of parameters for the reverberation simulator using an optimization algorithm based on the summation of scores. Example 20 includes the subject matter of any of Examples 17-19, wherein the optimization algorithm is one of a genetic algorithm or a gradient descent algorithm. Example 21 includes the subject matter of any of Examples 17-20, wherein the reverberation simulator is a Schroeder reverberator and the reverberation parameters comprise one or more of an effect mix parameter, a room size parameter, a damping parameter, and a stereo width parameter. Example 22 includes the subject matter of any of Examples 17-21, wherein the second distance is in the far-field of the microphone and the first distance is in the near-field of the microphone. Example 23 is a system for speaker recognition. 
The system comprises: a feature extraction circuit to extract features from a received authentication audio signal associated with speech of a user; a speaker model scoring circuit to score results of application of one or more speaker models to the extracted features, wherein each of the speaker models is trained based on a training audio signal, the training audio signal processed to simulate selected far-field environmental effects to be associated with the speaker model; a speaker model selection circuit to select one of the speaker models based on the score; and a mapping circuit to map the selected speaker model to a known speaker identification (ID) associated with the user. Example 24 includes the subject matter of Example 23, further comprising a speaker model training circuit, the training circuit comprising: a reverberation simulator circuit to generate a plurality of processed training audio signals based on the captured training audio signal, each processed training audio signal to simulate a unique far-field environmental effect; the feature extraction circuit further to generate a feature set of extracted features for the captured training audio signal and each of the processed training audio signals; and a speaker model generation circuit to generate a plurality of speaker models associated with the speaker ID, each of the speaker models based on one of the feature sets. Example 25 includes the subject matter of Examples 23 or 24, wherein the speaker model training circuit is further to process training audio signals from a plurality of users and to generate a plurality of speaker models for each of the users. Example 26 includes the subject matter of any of Examples 23-25, wherein the authentication audio signal is captured in a far-field of the microphone and the training audio signal is captured in a near-field of the microphone. Example 27 includes the subject matter of any of Examples 23-26, wherein the far-field is a distance greater than three feet from the microphone and the near-field is a distance closer than three feet from the microphone. Example 28 is a system for configuring a reverberation simulator for speaker recognition. The system comprises: a reverberation simulator circuit to add reverberation to a user provided first audio signal, captured at a first distance from a microphone, to generate a processed audio signal that simulates a far-field environmental effect, the reverberation based on a trial set of reverberation parameters; a feature extraction circuit to extract features from the processed audio signal; a speaker model generation circuit to generate a speaker model based on the extracted features; the feature extraction circuit further to extract features from one or more additional audio signals associated with speech of the user, the additional audio signals captured at a second distance from the microphone, the second distance greater than the first distance; a speaker model scoring circuit to score results of application of the speaker model to the extracted features of each of the additional audio signals; and a score summation circuit to associate a summation of the scores with the trial set of reverberation parameters. Example 29 includes the subject matter of Example 28, further comprising a reverberation model selection circuit to assign the trial set of reverberation parameters as an operational set of reverberation parameters based on the summation of scores associated with the trial set of reverberation parameters. 
Example 30 includes the subject matter of Examples 28 or 29, further comprising a parameter optimization circuit to generate an updated trial set of reverberation parameters for the reverberation simulator using an optimization algorithm based on the summation of scores. Example 31 includes the subject matter of any of Examples 28-30, wherein the optimization algorithm is one of a genetic algorithm or a gradient descent algorithm. Example 32 includes the subject matter of any of Examples 28-31, wherein the reverberation simulator circuit is a Schroeder reverberator and the reverberation parameters comprise one or more of an effect mix parameter, a room size parameter, a damping parameter, and a stereo width parameter. Example 33 includes the subject matter of any of Examples 28-32, wherein the second distance is in the far-field of the microphone and the first distance is in the near-field of the microphone. The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future filed applications claiming priority to this application may claim the disclosed subject matter in a different manner, and may generally include any set of one or more elements as variously disclosed or otherwise demonstrated herein.
11862177
DETAILED DESCRIPTION Reference will now be made to the illustrative embodiments illustrated in the drawings, and specific language will be used here to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Alterations and further modifications of the inventive features illustrated here, and additional applications of the principles of the inventions as illustrated here, which would occur to a person skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the invention. Voice biometrics for speaker recognition and other operations (e.g., authentication) typically rely upon models or vectors generated from a universe of speaker samples and samples of a particular speaker. As an example, during a training phase (or re-training phase), a server or other computing device executes a speech recognition engine (e.g., artificial intelligence and/or machine-learning programmatic software) that is trained to recognize and distinguish instances of speech using a plurality of training audio signals. The neural network architecture outputs certain results according to corresponding inputs and evaluates the results according to a loss function by comparing the expected output against the observed output. The training operations then tailor the weighted values of the neural network architecture (sometimes called hyper-parameters) and reapply the neural network architecture to the inputs until the expected outputs and observed outputs converge. The server then fixes the hyper-parameters and, in some cases, disables one or more layers of the neural network architecture used for training. The server can further train the speaker recognition engine to recognize a particular speaker during an enrollment phase for the particular enrollee-speaker. The speech recognition engine can generate an enrollee voice feature vector (sometimes called a “voiceprint”) using enrollee audio signals having speech segments involving the enrollee. During later inbound phone calls, the server refers to the voiceprints in order to confirm whether later audio signals involve the enrollee based upon matching a feature vector extracted from the later inbound call against the enrollee's voiceprint. These approaches are generally successful and adequate for detecting the enrollee in the inbound call. A concern, however, is that powerful voice biometric spoofing tools (e.g., deepfake technologies) might eventually use enrollee voice samples to generate a flexible deepfake voice synthesizer tailored to the enrollee, where the enrollee synthesizer would be capable of fooling the recognition engine by conveying features closely matching the enrollee's voiceprint. A problem with current spoofing detection systems is generalization ability. Traditionally, signal processing researchers tried to overcome this problem by introducing different ways of processing the input audio files. Prior approaches for detecting synthetic speech spoofing employed, for example, high-frequency cepstrum coefficients (HFCC), constant-Q cepstral coefficients (CQCC), a cosine normalized phase, and a modified-group delay (MGD) operation. Although such approaches confirmed the effectiveness of various audio processing techniques in detecting synthetic speech, they were unable to address the problem of generalization ability.
This shortcoming prevents prior approaches from, for example, generalizing adequately to unknown spoofing technologies, and thus from sufficiently detecting spoofing produced by unknown spoofing techniques. As described herein, the system could generate another enrollee feature vector for detecting spoofed instances of the enrollee's voice (sometimes called a “spoofprint”). The spoofprint test evaluates the likelihood that the inbound speaker's voice is a spoofed or genuine instance of the enrollee's voice. A speech synthesizer could satisfy a voiceprint test by conveying synthetic speech with voice-related features that are sufficiently similar to the voice-related features of an enrollee to satisfy the similarity requirements of the voiceprint test. The speech synthesizer, however, would fail the spoofprint test, because the synthetic speech would not contain the speaking behavior and/or spoofing artifacts sufficiently similar to the corresponding features expected from the enrollee. The embodiments described herein extract a set of features from audio signals for spoofprints that are (at least in part) different from the set of features extracted for voiceprints. The low-level features extracted from an audio signal may include mel frequency cepstral coefficients (MFCCs), HFCCs, CQCCs, and other features related to the speaker's voice characteristics, and spoofing artifacts of the speaker (e.g., speaker speech characteristics) and/or a device or network (e.g., speaker patterns, DTMF tones, background noise, codecs, packet loss). The feature vectors generated when extracting the voiceprint are based on a set of features reflecting the speaker's voice characteristics, such as the spectro-temporal features (e.g., MFCCs, HFCCs, CQCCs). The feature vectors generated when extracting the spoofprint are based on a set of features including audio characteristics of the call, such as spoofing artifacts (e.g., specific aspects of how the speaker speaks), which may include the frequency that a speaker uses certain phonemes (patterns) and the speaker's natural rhythm of speech. The spoofing artifacts are often difficult for synthetic speech programs to emulate. The neural network architecture can extract embeddings that are better tailored for spoof detection than merely evaluating the embeddings extracted for voiceprint recognition. Additionally or alternatively, embodiments described herein may employ a loss function during training and/or enrollment, a large margin cosine loss (LMCL) function, adapted from its conventional use in facial recognition systems. Beneficially, the LMCL maximizes the variance between the genuine and spoofed classes and, at the same time, minimizes intra-class variance. Prior approaches failed to appreciate and employ the use of LMCL in spoof detection in audio signals because, as mentioned, such approaches focused on other areas. The embodiments described herein implement one or more neural network architectures comprising any number of layers configured to perform certain operations, such as audio data ingestion, pre-processing operations, data augmentation operations, embedding extraction, loss function operations, and classification operations, among others. To perform the various operations, the neural network architectures comprise any number of layers, such as input layers, layers of an embedding extractor, fully-connected layers, loss layers, and layers of a classifier, among others.
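For readers who want a concrete picture of the margin-based objective, the following is a minimal numpy sketch of a large margin cosine loss of the general form described above; the function name, the scale s, and the margin m are illustrative assumptions rather than values taken from this disclosure.

```python
import numpy as np

def large_margin_cosine_loss(embeddings, weights, labels, s=30.0, m=0.35):
    """Minimal LMCL (CosFace-style) sketch: push the genuine and spoofed classes
    apart in cosine space while tightening each class. s and m are illustrative."""
    # L2-normalize embeddings and class weight vectors so the logits are cosines.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                   # (batch, num_classes)
    # Subtract the margin m from the target-class cosine only.
    target = np.zeros_like(cos)
    target[np.arange(len(labels)), labels] = m
    logits = s * (cos - target)
    # Softmax cross-entropy over the margin-adjusted, scaled cosines.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Illustrative usage: 2 classes (0 = genuine, 1 = spoofed), 8-dimensional embeddings.
rng = np.random.default_rng(0)
loss = large_margin_cosine_loss(rng.normal(size=(4, 8)), rng.normal(size=(2, 8)),
                                np.array([0, 1, 0, 1]))
print(round(float(loss), 4))
```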
It should be appreciated that the layers or operations may be performed by any number of neural network architectures. Additionally or alternatively, the layers performing different operations can define different types of neural network architecture. For example, a ResNet neural network architecture could comprise layers and operations defining an embedding extractor, and another neural network architecture could comprise layers and operations defining a classifier. Moreover, certain operations, such as pre-processing operations and data augmentation operations, may be performed by a computing device separately from the neural network architecture or as layers of the neural network architecture. Non-limiting examples of in-network augmentation and pre-processing may be found in U.S. application Ser. Nos. 17/066,210 and 17/079,082, which are incorporated by reference herein. Following classification of an inbound audio signal (e.g., genuine or spoofed), the server then employs or transmits the outputted determination to one or more downstream operations. The outputs used by the downstream operation could include the classification determination, similarity scores, and/or the extracted spoofprint or voiceprint. Non-limiting examples of downstream operations and/or the potential uses of the neural network architecture described herein include voice spoof detection, speaker identification, speaker authentication, speaker verification, speech recognition, audio event detection, voice activity detection (VAD), speech activity detection (SAD), and speaker diarization, among others. Example System Components FIG.1shows components of a system100for receiving and analyzing telephone calls, according to an illustrative embodiment. The system100comprises a call analytics system101, call center systems110of customer enterprises (e.g., companies, government entities, universities), and caller devices114. The call analytics system101includes analytics servers102, analytics databases104, and admin devices103. The call center system110includes call center servers111, call center databases112, and agent devices116. Embodiments may comprise additional or alternative components or omit certain components from those ofFIG.1, and still fall within the scope of this disclosure. It may be common, for example, to include multiple call center systems110or for the call analytics system101to have multiple analytics servers102. Embodiments may include or otherwise implement any number of devices capable of performing the various features and tasks described herein. For example, FIG.1shows the analytics server102as a distinct computing device from the analytics database104. In some embodiments, the analytics database104may be integrated into the analytics server102. Various hardware and software components of one or more public or private networks may interconnect the various components of the system100. Non-limiting examples of such networks may include Local Area Network (LAN), Wireless Local Area Network (WLAN), Metropolitan Area Network (MAN), Wide Area Network (WAN), and the Internet. The communication over the network may be performed in accordance with various communication protocols, such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), and IEEE communication protocols.
Likewise, the caller devices114may communicate with callees (e.g., call center systems110) via telephony and telecommunications protocols, hardware, and software capable of hosting, transporting, and exchanging audio data associated with telephone calls. Non-limiting examples of telecommunications hardware may include switches and trunks, among other additional or alternative hardware used for hosting, routing, or managing telephone calls, circuits, and signaling. Non-limiting examples of software and protocols for telecommunications may include SS7, SIGTRAN, SCTP, ISDN, and DNIS, among other additional or alternative software and protocols used for hosting, routing, or managing telephone calls, circuits, and signaling. Components for telecommunications may be organized into or managed by various different entities, such as carriers, exchanges, and networks, among others. The caller devices114may be any communications or computing device that the caller operates to place the telephone call to the call destination (e.g., the call center system110). Non-limiting examples of caller devices114may include landline phones114aand mobile phones114b. The caller device114is not limited to telecommunications-oriented devices (e.g., telephones). As an example, the caller device114may include a caller computing device114c, which includes an electronic device comprising a processor and/or software, such as a personal computer, configured to implement voice-over-IP (VoIP) telecommunications. As another example, the caller computing device114cmay be an electronic IoT device (e.g., voice assistant device, “smart device”) comprising a processor and/or software capable of utilizing telecommunications features of a paired or otherwise networked device, such as a mobile phone114b. The call analytics system101and the call center system110represent network infrastructures101,110comprising physically and logically related software and electronic devices managed or operated by various enterprise organizations. The devices of each network system infrastructure101,110are configured to provide the intended services of the particular enterprise organization. The analytics server102of the call analytics system101may be any computing device comprising one or more processors and software, and capable of performing the various processes and tasks described herein. The analytics server102may host or be in communication with the analytics database104, and receives and processes call data (e.g., audio recordings, metadata) received from the one or more call center systems110. AlthoughFIG.1shows only a single analytics server102, the analytics server102may include any number of computing devices. In some cases, the computing devices of the analytics server102may perform all or sub-parts of the processes and benefits of the analytics server102. The analytics server102may comprise computing devices operating in a distributed or cloud computing configuration and/or in a virtual machine configuration. It should also be appreciated that, in some embodiments, functions of the analytics server102may be partly or entirely performed by the computing devices of the call center system110(e.g., the call center server111). The analytics server102executes audio-processing software that includes a neural network that performs speaker spoof detection, among other potential operations (e.g., speaker recognition, speaker verification or authentication, speaker diarization).
The neural network architecture operates logically in several operational phases, including a training phase, an enrollment phase, and a deployment phase (sometimes referred to as a test phase or testing). The inputted audio signals processed by the analytics server102include training audio signals, enrollment audio signals, and inbound audio signals processed during the deployment phase. The analytics server102applies the neural network to each of the types of inputted audio signals during the corresponding operational phase. The analytics server102or other computing device of the system100(e.g., call center server111) can perform various pre-processing operations and/or data augmentation operations on the input audio signals. Non-limiting examples of the pre-processing operations include extracting low-level features from an audio signal, parsing and segmenting the audio signal into frames and segments and performing one or more transformation functions, such as Short-time Fourier Transform (SFT) or Fast Fourier Transform (FFT), among other potential pre-processing operations. Non-limiting examples of augmentation operations include audio clipping, noise augmentation, frequency augmentation, duration augmentation, and the like. The analytics server102may perform the pre-processing or data augmentation operations before feeding the input audio signals into input layers of the neural network architecture or the analytics server102may execute such operations as part of executing the neural network architecture, where the input layers (or other layers) of the neural network architecture perform these operations. For instance, the neural network architecture may comprise in-network data augmentation layers that perform data augmentation operations on the input audio signals fed into the neural network architecture. During training, the analytics server102receives training audio signals of various lengths and characteristics from one or more corpora, which may be stored in an analytics database104or other storage medium. The training audio signals include clean audio signals (sometimes referred to as samples) and simulated audio signals, each of which the analytics server102uses to train the neural network to recognize speech occurrences. The clean audio signals are audio samples containing speech in which the speech is identifiable by the analytics server102. Certain data augmentation operations executed by the analytics server102retrieve or generate the simulated audio signals for data augmentation purposes during training or enrollment. The data augmentation operations may generate additional versions or segments of a given training signal containing manipulated features mimicking a particular type of signal degradation or distortion. The analytics server102stores the training audio signals into the non-transitory medium of the analytics server102and/or the analytics database104for future reference or operations of the neural network architecture. During the training phase and, in some implementations, the enrollment phase, fully connected layers of the neural network architecture generate a training feature vector for each of the many training audio signals and a loss function (e.g., LMCL) determines levels of error for the plurality of training feature vectors. 
A classification layer of the neural network architecture adjusts weighted values (e.g., hyper-parameters) of the neural network architecture until the outputted training feature vectors converge with predetermined expected feature vectors. When the training phase concludes, the analytics server102stores the weighted values and neural network architecture into the non-transitory storage media (e.g., memory, disk) of the analytics server102. During the enrollment and/or the deployment phases, the analytics server102disables one or more layers of the neural network architecture (e.g., fully-connected layers, classification layer) to keep the weighted values fixed. During the enrollment operational phase, an enrollee, such as an end-consumer of the call center system110, provides several speech examples to the call analytics system101. For example, the enrollee could respond to various interactive voice response (IVR) prompts of IVR software executed by a call center server111. The call center server111then forwards the recorded responses containing bona fide enrollment audio signals to the analytics server102. The analytics server102applies the trained neural network architecture to each of the enrollee audio samples and generates corresponding enrollee feature vectors (sometimes called “enrollee embeddings”), though the analytics server102disables certain layers, such as layers employed for training the neural network architecture. The analytics server102generates an average or otherwise algorithmically combines the enrollee feature vectors and stores the enrollee feature vectors into the analytics database104or the call center database112. Layers of the neural network architecture are trained to operate as one or more embedding extractors that generate the feature vectors representing certain types of embeddings. The embedding extractors generate the enrollee embeddings during the enrollment phase, and generate inbound embeddings (sometimes called “test embeddings”) during the deployment phase. The embeddings include a spoof detection embedding (spoofprint) and a speaker recognition embedding (voiceprint). As an example, the neural network architecture generates an enrollee spoofprint and an enrollee voiceprint during the enrollment phase, and generates an inbound spoofprint and an inbound voiceprint during the deployment phase. Different embedding extractors of the neural network architecture generate the spoofprints and the voiceprints, though the same embedding extractor of the neural network architecture may be used to generate the spoofprints and the voiceprints in some embodiments. As an example, the spoofprint embedding extractor may be a neural network architecture (e.g., ResNet, SyncNet) that processes a first set of features extracted from the input audio signals, where the spoofprint extractor comprises any number of convolutional layers, statistics layers, and fully-connected layers and is trained according to the LMCL. The voiceprint embedding extractor may be another neural network architecture (e.g., ResNet, SyncNet) that processes a second set of features extracted from the input audio signals, where the voiceprint embedding extractor comprises any number of convolutional layers, statistics layers, and fully-connected layers and is trained according to a softmax function.
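The following is a compact PyTorch sketch of an embedding extractor with the general shape just described (convolutional layers, a statistics layer, and fully-connected projection to a fixed-size spoofprint or voiceprint embedding); the class name and layer sizes are illustrative assumptions and do not reproduce the ResNet or SyncNet architectures named above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbeddingExtractor(nn.Module):
    """Illustrative extractor: Conv1d front end over feature frames,
    mean+std statistics pooling over time, and a fully-connected projection
    to a fixed-size embedding (e.g., a spoofprint or a voiceprint)."""
    def __init__(self, n_feats=40, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_feats, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 2, emb_dim)  # 2x for concatenated mean and std

    def forward(self, x):                      # x: (batch, n_feats, time)
        h = self.conv(x)                       # (batch, 64, time)
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        return F.normalize(self.fc(stats), dim=1)

# Illustrative usage on a batch of two 200-frame, 40-bin feature matrices.
extractor = TinyEmbeddingExtractor()
embeddings = extractor(torch.randn(2, 40, 200))
print(embeddings.shape)                        # torch.Size([2, 128])
```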
As a part of the loss function operations, the neural network performs a Linear Discriminant Analysis (LDA) algorithm or similar operation to transform the extracted embeddings to a lower-dimensional and more discriminative subspace. The LDA minimizes the intra-class variance and maximizes the inter-class variance between genuine training audio signals and spoof training audio signals. In some implementations, the neural network architecture may further include an embedding combination layer that performs various operations to algorithmically combine the spoofprint and the voiceprint into a combined embedding (e.g., enrollee combined embedding, inbound combined embedding). The embeddings, however, need not be combined in all embodiments. The loss function operations and LDA, as well as other aspects of the neural network architecture (e.g., scoring layers), are likewise configured to evaluate the combined embeddings, in addition or as an alternative to evaluating separate spoofprint and voiceprint embeddings. The analytics server102executes certain data augmentation operations on the training audio signals and, in some implementations, on the enrollee audio signals. The analytics server102may perform different augmentation operations, or otherwise vary the augmentation operations performed, during the training phase and the enrollment phase. Additionally or alternatively, the analytics server102may perform different augmentation operations, or otherwise vary the augmentation operations performed, for training the spoofprint embedding extractor and the voiceprint embedding extractor. For example, the server may perform frequency masking (sometimes called frequency augmentation) on the training audio signals for the spoofprint embedding extractor during the training and/or enrollment phase. The server may perform noise augmentation for the voiceprint embedding extractor during the training and/or enrollment phase. During the deployment phase, the analytics server102receives the inbound audio signal of the inbound phone call, as originated from the caller device114of an inbound caller. The analytics server102applies the neural network on the inbound audio signal to extract the features from the inbound audio and determine whether the caller is an enrollee who is enrolled with the call center system110or the analytics system101. The analytics server102applies each of the layers of the neural network, including any in-network augmentation layers, but disables the classification layer. The neural network generates the inbound embeddings (e.g., spoofprint, voiceprint, combined embedding) for the caller and then determines one or more similarity scores indicating the distances between these feature vectors and the corresponding enrollee feature vectors. If, for example, the similarity score for the spoofprints satisfies a predetermined spoofprint threshold, then the analytics server102determines that the inbound phone call is likely spoofed or otherwise fraudulent. As another example, if the similarity score for the voiceprints or the combined embeddings satisfies a corresponding predetermined threshold, then the analytics server102determines that the caller and the enrollee are likely the same person or that the inbound call is genuine or spoofed (e.g., synthetic speech).
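A minimal numpy sketch of the similarity scoring just described follows; the cosine-similarity metric and the threshold values are illustrative assumptions, and the decision orientation simply mirrors the convention stated above (a spoofprint similarity score that satisfies the spoof threshold flags a likely spoofed call).

```python
import numpy as np

def cosine_score(inbound, enrolled):
    """Cosine similarity between an inbound and an enrollee embedding;
    values near 1.0 indicate a small distance (high similarity)."""
    a = inbound / np.linalg.norm(inbound)
    b = enrolled / np.linalg.norm(enrolled)
    return float(a @ b)

# Illustrative threshold values only; in practice these are tuned on held-out data.
VOICE_MATCH_THRESHOLD = 0.70
SPOOF_THRESHOLD = 0.70

def evaluate_call(inbound_voiceprint, enrollee_voiceprint,
                  inbound_spoofprint, enrollee_spoofprint):
    voice_score = cosine_score(inbound_voiceprint, enrollee_voiceprint)
    spoof_score = cosine_score(inbound_spoofprint, enrollee_spoofprint)
    return {
        # Speaker match when the voiceprint similarity satisfies its threshold.
        "speaker_match": voice_score >= VOICE_MATCH_THRESHOLD,
        # Per the convention described above, a spoofprint similarity score that
        # satisfies the spoof threshold flags the call as likely spoofed.
        "likely_spoofed": spoof_score >= SPOOF_THRESHOLD,
        "voice_score": voice_score,
        "spoof_score": spoof_score,
    }

rng = np.random.default_rng(1)
enrollee_vp, enrollee_sp = rng.normal(size=64), rng.normal(size=64)
print(evaluate_call(enrollee_vp + 0.1 * rng.normal(size=64), enrollee_vp,
                    rng.normal(size=64), enrollee_sp))
```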
Following the deployment phase, the analytics server102(or another device of the system100) may execute any number of various downstream operations (e.g., speaker authentication, speaker diarization) that employ the determinations produced by the neural network at deployment time. The analytics database104and/or the call center database112may contain any number of corpora of training audio signals that are accessible to the analytics server102via one or more networks. In some embodiments, the analytics server102employs supervised training to train the neural network, where the analytics database104includes labels associated with the training audio signals that indicate which signals contain speech portions. The analytics server102may also query an external database (not shown) to access a third-party corpus of training audio signals. An administrator may configure the analytics server102to select the speech segments to have durations that are random, random within configured limits, or predetermined at the admin device103. The duration of the speech segments varies based upon the needs of the downstream operations and/or based upon the operational phase. For example, during training or enrollment, the analytics server102will likely have access to longer speech samples compared to the speech samples available during deployment. As another example, the analytics server102will likely have access to longer speech samples during telephony operations compared to speech samples received for voice authentication. The call center server111of a call center system110executes software processes for managing a call queue and/or routing calls made to the call center system110, which may include routing calls to the appropriate call center agent devices116based on the inbound caller's comments, instructions, IVR inputs, or other inputs submitted during the inbound call. The call center server111can capture, query, or generate various types of information about the call, the caller, and/or the caller device114and forward the information to the agent device116, where a graphical user interface (GUI) of the agent device116displays the information to the call center agent. The call center server111also transmits the information about the inbound call to the call analytics system101to perform various analytics processes on the inbound audio signal and any other audio data. The call center server111may transmit the information and the audio data based upon preconfigured triggering conditions (e.g., receiving the inbound phone call), instructions or queries received from another device of the system100(e.g., agent device116, admin device103, analytics server102), or as part of a batch transmitted at a regular interval or predetermined time. The admin device103of the call analytics system101is a computing device allowing personnel of the call analytics system101to perform various administrative tasks or user-prompted analytics operations. The admin device103may be any computing device comprising a processor and software, and capable of performing the various tasks and processes described herein. Non-limiting examples of the admin device103may include a server, personal computer, laptop computer, tablet computer, or the like. In operation, the user employs the admin device103to configure the operations of the various components of the call analytics system101or call center system110and to issue queries and instructions to such components.
The agent device116of the call center system110may allow agents or other users of the call center system110to configure operations of devices of the call center system110. For calls made to the call center system110, the agent device116receives and displays some or all of the relevant information associated with the call routed from the call center server111. Example Operations FIG.2shows steps of a method200for implementing one or more neural network architectures for spoof detection and speaker recognition, according to an embodiment. Embodiments may include additional, fewer, or different operations than those described in the method200. The method200is performed by a server executing machine-readable software code of the neural network architectures, though it should be appreciated that the various operations may be performed by one or more computing devices and/or processors. Though the server is described as generating and evaluating spoofprint and voiceprint embeddings, the server need not generate and evaluate the voiceprint embedding in all embodiments to detect spoofing. The server or layers of the neural network architecture may perform various pre-processing operations on an input audio signal (e.g., training audio signal, enrollment audio signal, inbound audio signal). These pre-processing operations may include, for example, extracting low-level features from the audio signals and transforming these features from a time-domain representation into a frequency-domain representation by performing Short-time Fourier Transforms (SFT) and/or Fast Fourier Transforms (FFT). The pre-processing operations may also include parsing the audio signals into frames or sub-frames, and performing various normalization or scaling operations. Optionally, the server performs any number of pre-processing operations before feeding the audio data into the neural network. The server may perform the various pre-processing operations in one or more of the operational phases, though the particular pre-processing operations performed may vary across the operational phases. The server may perform the various pre-processing operations separately from the neural network architecture or as in-network layers of the neural network architecture. The server or layers of the neural network architecture may perform various augmentation operations on the input audio signal (e.g., training audio signal, enrollment audio signal). The augmentation operations generate various types of distortion or degradation for the input audio signal, such that the resulting audio signals are ingested by, for example, the convolutional operations that generate the feature vectors. The server may perform the various augmentation operations as separate operations from the neural network architecture or as in-network augmentation layers. The server may perform the various augmentation operations in one or more of the operational phases, though the particular augmentation operations performed may vary across the operational phases. In step202, a server places the neural network into a training operational phase. The server applies the neural network to thousands of speech samples (received as inputted audio signals) to train a classifier layer to identify, for example, speech portions of audio.
The server may select training audio signals and/or randomly generate simulated audio segments, which the fully connected layer or classification layer uses to determine the level of error of training feature vectors (sometimes referred to as “training embeddings”) produced by an embedding extractor of the neural network. The classifier layer adjusts the hyper-parameters of the neural network until the training feature vectors converge with expected feature vectors. When training is completed, the server stores the hyper-parameters into memory of the server or other memory location. The server may also disable one or more layers of the neural network in order to keep the hyper-parameters fixed. In step204, the server places the neural network into an enrollment operational phase to generate enrollee embeddings for an enrollee. The server receives enrollment speech samples for the enrollee and applies the neural network to generate enrollment feature vectors, including, for example, an enrollee spoofprint and an enrollee voiceprint. The server may enable and/or disable certain layers of the neural network architecture during the enrollment phase. For instance, the server typically enables and applies each of the layers during the enrollment phase, though the server disables the classification layer. When extracting a particular embedding (e.g., spoofprint, voiceprint) for the enrollee, the neural network architecture generates a set of enrollee feature vectors based on features related to the particular type of embedding as extracted from each enrollee audio signal. The neural network architecture then extracts the particular embedding by combining this set of enrollee feature vectors based on an average of the enrollee feature vectors or any other algorithmic technique for combining the enrollee feature vectors. The server stores each enrollee embedding into a non-transitory storage medium. In step206, the server places the neural network architecture into a deployment phase to generate inbound embeddings for an inbound speaker and detect spoofing and verify the speaker. The server may enable and/or disable certain layers of the neural network architecture during the deployment phase. For instance, the server typically enables and applies each of the layers during the deployment phase, though the server disables the classification layer. The server receives the inbound audio signal for the inbound speaker and feeds the inbound audio signal into the neural network architecture. In step208, during the deployment operational phase, the server receives the inbound audio signal for the speaker and applies the neural network to extract the inbound embeddings, including, for example, an inbound spoofprint and an inbound voiceprint. The neural network architecture then generates one or more similarity scores based on the similarities or differences between the inbound embeddings and the enrolled embeddings. For example, the neural network architecture extracts the inbound spoofprint and outputs a similarity score indicating the distance (e.g., similarities, differences) between the inbound spoofprint and the enrollee spoofprint. A larger distance may indicate a lower likelihood that the inbound audio signal is a spoof, due to lower/fewer similarities between the inbound spoofprint and the enrollee spoofprint. In this example, the server determines the speaker of the inbound audio signal is spoofing the enrollee when the similarity score satisfies a spoof threshold value. 
As another example, the neural network architecture extracts the inbound voiceprint and outputs a similarity score indicating the distance between the inbound voiceprint and the enrollee voiceprint. A larger distance may indicate a lower likelihood that the speaker of the inbound audio signal matches the enrollee. In this example, the server identifies a match (or a likely match) between the speaker and the enrollee when the similarity score satisfies a voice match threshold value. The server may evaluate the spoofprints and voiceprints simultaneously or sequentially. For example, the server may evaluate the inbound voiceprint against the enrollee voiceprint. If the server determines that the speaker of the inbound audio signal likely matches the enrollee, then the server evaluates the inbound spoofprint against the enrollee spoofprint. The server then determines whether the inbound audio signal is a spoofing attempt. As another example, the server evaluates the spoofprints and voiceprints without regard to the sequencing, yet requires the extracted inbound embeddings to satisfy corresponding thresholds. In some implementations, the server generates a combined similarity score using a voice similarity score (based on comparing the voiceprints) and a spoof likelihood or detection score (based on comparing the spoofprints). The server generates the combined similarity score by summing or otherwise algorithmically combining the voice similarity score and the spoof likelihood score. The server then determines whether the combined similarity score satisfies an authentication or verification threshold score. Following successful or failed verification of the speaker of the inbound audio signal, in step208, the server may use the determination for one or more downstream operations (e.g., speaker authentication, speaker diarization). The server may, for example, use the spoof or match determinations, the similarity scores, and/or the inbound embeddings to perform the given downstream functions. Training Operational Phases FIG.3shows steps of a method300for training operations of one or more neural network architectures for spoof detection and speaker recognition, according to an embodiment. Embodiments may include additional, fewer, or different operations than those described in the method300. The method300is performed by a server executing machine-readable software code of the neural network architectures, though it should be appreciated that the various operations may be performed by one or more computing devices and/or processors. The server or layers of the neural network architecture may perform various pre-processing operations on an input audio signal (e.g., training audio signal, enrollment audio signal, inbound audio signal). These pre-processing operations may include, for example, extracting low-level features from the audio signals and transforming these features from a time-domain representation into a frequency-domain representation by performing Short-time Fourier Transforms (SFT) and/or Fast Fourier Transforms (FFT). The pre-processing operations may also include parsing the audio signals into frames or sub-frames, and performing various normalization or scaling operations. Optionally, the server performs any number of pre-processing operations before feeding the audio data into the neural network.
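A short numpy sketch of the pre-processing just described (parsing a signal into frames, windowing, and moving to a frequency-domain representation with an FFT) follows; the frame length, hop size, and log-power representation are illustrative assumptions rather than parameters from this disclosure.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Parse a 1-D audio signal into overlapping frames
    (25 ms frames with a 10 ms hop at 16 kHz are assumed, illustrative values)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop: i * hop + frame_len] for i in range(n_frames)])

def log_power_spectrogram(signal, frame_len=400, hop=160):
    """Window each frame and apply an FFT to obtain a frequency-domain
    representation; returns log power spectra, one row per frame."""
    frames = frame_signal(signal, frame_len, hop) * np.hamming(frame_len)
    spectra = np.fft.rfft(frames, axis=1)
    power = np.abs(spectra) ** 2
    return np.log(power + 1e-10)

# Illustrative usage on one second of noise standing in for audio at 16 kHz.
rng = np.random.default_rng(3)
features = log_power_spectrogram(rng.normal(size=16000))
print(features.shape)   # (frames, frame_len // 2 + 1)
```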
The server may perform the various pre-processing operations in one or more of the operational phases, though the particular pre-processing operations performed may vary across the operational phases. The server may perform the various pre-processing operations separately from the neural network architecture or as in-network layers of the neural network architecture. The server or layers of the neural network architecture may perform various augmentation operations on the input audio signal (e.g., training audio signal, enrollment audio signal). The augmentation operations generate various types of distortion or degradation for the input audio signal, such that the resulting audio signals are ingested by, for example, the convolutional operations that generate the feature vectors. The server may perform the various augmentation operations as separate operations from the neural network architecture or as in-network augmentation layers. The server may perform the various augmentation operations in one or more of the operational phases, though the particular augmentation operations performed may vary across the operational phases. During a training phase, the server applies a neural network architecture to training audio signals (e.g., clean audio signals, simulated audio signals, previously received observed audio signals). In some instances, before applying the neural network architecture to the training audio signals, the server pre-processes the training audio signals according to various pre-processing operations described herein, such that the neural network architecture receives arrays representing portions of the training audio signals. In step302, the server obtains the training audio signals, including clean audio signals and noise samples. The server may receive or request clean audio signals from one or more speech corpora databases. The clean audio signals may include speech originating from any number of speakers, where the quality allows the server to identify the speech—i.e., the clean audio signal contains little or no degradation (e.g., additive noise, multiplicative noise). The clean audio signals may be stored in non-transitory storage media accessible to the server or received via a network or other data source. In some circumstances, the server generates a simulated clean audio signal using simulated audio signals. For example, the server may generate a simulated clean audio signal by simulating speech. In step304, the server performs one or more data augmentation operations on the clean training audio samples to generate simulated audio samples. For instance, the server generates one or more simulated audio signals by applying augmentation operations for degrading the clean audio signals. The server may, for example, generate simulated audio signals by applying additive noise and/or multiplicative noise on the clean audio signals and labeling these simulated audio signals. The additive noise applied on the clean audio signals may be simulated white Gaussian noise or other simulated noises with different spectral shapes, and/or drawn from example sources of background noise (e.g., real babble noise, real white noise, and other ambient noise). The multiplicative noise may be simulated acoustic impulse responses. The server may perform additional or alternative augmentation operations on the clean audio signals to produce simulated audio signals, thereby generating a larger set of training audio signals.
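The following numpy sketch illustrates the two augmentation families described for step304: additive noise mixed at a target signal-to-noise ratio, and multiplicative noise applied by convolving with an acoustic impulse response. The SNR value and the synthetic decaying impulse response are illustrative assumptions.

```python
import numpy as np

def add_noise(clean, noise, snr_db=10.0):
    """Additive-noise augmentation: scale the noise so it sits at the target
    SNR (in dB) relative to the clean signal, then mix the two."""
    noise = noise[: len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

def add_reverb(clean, impulse_response):
    """Multiplicative-noise augmentation: convolve the clean signal with an
    acoustic impulse response (a toy synthetic one is used below)."""
    return np.convolve(clean, impulse_response)[: len(clean)]

rng = np.random.default_rng(4)
clean = rng.normal(size=16000)                                # stand-in clean sample
babble = rng.normal(size=16000)                               # stand-in background noise
ir = np.exp(-np.arange(800) / 200.0) * rng.normal(size=800)   # toy decaying impulse response
simulated = add_reverb(add_noise(clean, babble, snr_db=10.0), ir)
print(simulated.shape)
```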
In step306, the server uses the training audio signals to train one or more neural network architectures. As discussed herein, the result of training the neural network architecture is to minimize the amount of error between a predicted output (e.g., an output of the neural network architecture indicating genuine or spoofed; extracted features; an extracted feature vector) and an expected output (e.g., a label associated with the training audio signal indicating whether the particular training signal is genuine or spoofed; a label indicating expected features or a feature vector of the particular training signal). The server feeds each training audio signal to the neural network architecture, which generates the predicted output by applying the current state of the neural network architecture to the training audio signal. In step308, the server performs a loss function (e.g., LMCL, LDA) and updates hyper-parameters (or other types of weight values) of the neural network architecture. The server determines the error by comparing the similarity or difference between the predicted output and the expected output. The server adjusts the algorithmic weights in the neural network architecture until the error between the predicted output and the expected output is small enough such that the error is within a predetermined threshold margin of error, and then stores the trained neural network architecture into memory. Enrollment and Deployment Operational Phases FIG.4shows steps of a method400for enrollment and deployment operations of one or more neural network architectures for spoof detection and speaker recognition, according to an embodiment. Embodiments may include additional, fewer, or different operations than those described in the method400. The method400is performed by a server executing machine-readable software code of the neural network architectures, though it should be appreciated that the various operations may be performed by one or more computing devices and/or processors.
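A compact PyTorch sketch of the train-score-update cycle of steps306and308follows: apply the current model to labeled training inputs, measure the error with a loss function, and adjust the weights until the error falls within a threshold. The toy linear model, the cross-entropy stand-in for the margin-based loss, and the stopping threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-ins: 256 labeled training feature vectors (0 = genuine, 1 = spoofed).
torch.manual_seed(0)
features = torch.randn(256, 40)
labels = torch.randint(0, 2, (256,))

model = nn.Sequential(nn.Linear(40, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()        # stand-in for the margin-based loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

ERROR_THRESHOLD = 0.10                 # assumed stopping margin of error
for epoch in range(200):
    optimizer.zero_grad()
    predicted = model(features)        # predicted output for the training signals
    loss = loss_fn(predicted, labels)  # error vs. the expected (labeled) output
    loss.backward()                    # back-propagate the error
    optimizer.step()                   # update the weighted values
    if loss.item() < ERROR_THRESHOLD:  # stop once the error is small enough
        break

torch.save(model.state_dict(), "trained_architecture.pt")  # store the trained state
```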
In some cases, the server extracts certain features from the enrollee audio signals. The server extracts the features based on the relevant types of enrollee embeddings. For instance, the types of features used to produce a spoofprint can be different from the types of features used to produce a voiceprint. In step404, the server applies the neural network architecture to each enrollee audio signal to extract the enrollee spoofprint. The neural network architecture generates spoofprint feature vectors for the enrollee audio signals using the relevant set of extracted features. The neural network architecture extracts the spoofprint embedding for the enrollee by combining the spoofprint feature vectors according to various statistical and/or convolutional operations. The server then stores the enrollee spoofprint embedding into non-transitory storage media. In step406, the server applies the neural network architecture to each enrollee audio signal to extract the enrollee voiceprint. The neural network architecture generates voiceprint feature vectors for the enrollee audio signals using the relevant set of extracted features, which may be the same or different types of features used to extract the spoofprint. The neural network architecture extracts the voiceprint embedding for the enrollee by combining the voiceprint feature vectors according to various statistical and/or convolutional operations. The server then stores the enrollee voiceprint embedding into non-transitory storage media. In step408, the server receives an inbound audio signal involving a speaker and extracts inbound embeddings for the speaker corresponding to the enrollee embeddings. The inbound audio signal may be received directly from a device of the speaker or a device of the third-party. The server applies the neural network architecture to the inbound audio signal to extract, for example, an inbound spoofprint and an inbound voiceprint. In step410, the server determines a similarity score based upon a distance between the inbound voiceprint and the enrollee voiceprint. The server then determines whether the similarity score satisfies a voice match threshold. In step412, the server determines a similarity score based upon the distance between the inbound spoofprint and the enrollee spoofprint. The server then determines whether the similarity score satisfies a spoof detection threshold. In some embodiments, the server performs steps410and412sequentially, whereby the server performs spoof detection (in step412) in response to the server determining that the inbound voiceprint satisfies the voice match threshold (in step410). In some embodiments, the server performs steps410and412without respect to sequence, whereby the server determines whether the inbound voiceprint satisfies the voice match threshold (in step410) and whether the inbound spoofprint satisfies the spoof detection threshold (in step412) regardless of the outcome of the counterpart evaluation. FIG.5shows steps of a method500for enrollment and deployment operations of one or more neural network architectures for spoof detection and speaker recognition, according to an embodiment. Embodiments may include additional, fewer, or different operations than those described in the method500. The method500is performed by a server executing machine-readable software code of the neural network architectures, though it should be appreciated that the various operations may be performed by one or more computing devices and/or processors.
During an enrollment phase, the server applies a neural network architecture to bona fide enrollee audio signals. In some instances, before applying the neural network architecture to the enrollee audio signals, the server pre-processes the enrollee audio signals according to various pre-processing operations described herein, such that the neural network architecture receives arrays representing portions of the enrollee audio signals. In operation, embedding extractor layers of the neural network architecture generate feature vectors based on features of the enrollee audio signals and extract enrollee embeddings, which the server later references during a deployment phase. In some embodiments, the same embedding extractor of the neural network architecture is applied for each type of embedding, and in some embodiments different embedding extractors of the neural network architecture are applied for corresponding types of embeddings. In step502, the server obtains the enrollee audio signals for the enrollee. The server may receive the enrollee audio signals directly from a device (e.g., telephone, IoT device) of the enrollee, a database, or a device of a third-party (e.g., customer call center system). In some implementations, the server may perform one or more data augmentation operations on the enrollee audio signals, which could include the same or different augmentation operations performed during a training phase. In some cases, the server extracts certain features from the enrollee audio signals. The server extracts the features based on the relevant types of enrollee embeddings. For instance, the types of features used to produce a spoofprint can be different from the types of features used to produce a voiceprint. In step504, the server applies the neural network architecture to each enrollee audio signal to extract the enrollee spoofprint. The neural network architecture generates spoofprint feature vectors for the enrollee audio signals using the relevant set of extracted features. The neural network architecture extracts the spoofprint embedding for the enrollee by combining the spoofprint feature vectors according to various statistical and/or convolutional operations. The server then stores the enrollee spoofprint embedding into non-transitory storage media. In step506, the server applies the neural network architecture to each enrollee audio signal to extract the enrollee voiceprint. The neural network architecture generates voiceprint feature vectors for the enrollee audio signals using the relevant set of extracted features, which may be the same or different types of features used to extract the spoofprint. The neural network architecture extracts the voiceprint embedding for the enrollee by combining the voiceprint feature vectors according to various statistical and/or convolutional operations. The server then stores the enrollee voiceprint embedding into non-transitory storage media. In step508, the server generates an enrollee combined embedding for the enrollee. The neural network architecture includes one or more layers for algorithmically combining the enrollee spoofprint embedding and the enrollee voiceprint embedding. The server then stores the enrollee combined embedding into non-transitory storage media.
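One simple way to picture the combining of step508is concatenation of the spoofprint and voiceprint embeddings followed by a projection. The fixed random projection below is only a stand-in for the one or more combining layers of the neural network architecture and is an assumption made for illustration.

import numpy as np

def combine_embeddings(spoofprint, voiceprint, projection=None):
    """Concatenate the two embeddings and optionally project them to a lower dimension,
    standing in for the combining layers that produce the combined embedding."""
    stacked = np.concatenate([spoofprint, voiceprint])   # e.g., (256,)
    if projection is None:
        return stacked
    return projection @ stacked                          # e.g., (128,) combined embedding

rng = np.random.default_rng(0)
projection = rng.standard_normal((128, 256)) / np.sqrt(256)   # illustrative fixed projection
enrollee_combined = combine_embeddings(rng.standard_normal(128),   # stand-in enrollee spoofprint
                                        rng.standard_normal(128),  # stand-in enrollee voiceprint
                                        projection)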
In step510, the server receives an inbound audio signal involving a speaker and extracts inbound embeddings for the speaker corresponding to the extracted enrollee embeddings, including an inbound spoofprint embedding, an inbound voiceprint embedding, and an inbound combined embedding. The inbound audio signal may be received directly from a device of the speaker or a device of the third-party. The server applies the neural network architecture to the inbound audio signal to extract the inbound spoofprint and the inbound voiceprint, and generate the inbound combined embedding by algorithmically combining the inbound spoofprint and the inbound voiceprint. In step512, the server determines a similarity score based upon a distance between the inbound combined embedding and the enrollee combined embedding. The server then determines whether the similarity score satisfies a verification threshold. The server verifies the inbound audio signal as matching the speaker's voice to the enrollee's voice and as genuine (not spoofed) when the server determines that the inbound combined embedding satisfies the corresponding verification threshold. In some configurations, the call is allowed to proceed upon the verification by the server. Example Neural Network Architecture Example of Training Phase FIG.6shows architecture components of a neural network architecture600for processing audio signals to detect spoofing attempts, according to an embodiment. The neural network600is executed by a server during a training operational phase and optional enrollment and deployment operational phases, though the neural network600may be executed by any computing device comprising a processor capable of performing the operations of the neural network600and by any number of such computing devices. The neural network600includes input layers601for ingesting audio signals602,603(e.g., training audio signals602, enrollment audio signals603) and performing various augmentation operations; layers that define one or more embedding extractors606for generating one or more feature vectors (or embeddings) and performing other operations; one or more fully-connected layers608performing various statistical and algorithmic combination operations; a loss layer610for performing one or more loss functions; and a classifier612for performing any number of scoring and classification operations based upon the embeddings. It should be appreciated that the neural network architecture600need not perform operations of an enrollment operational phase. As such, in some embodiments, the neural network architecture600includes the training and deployment operational phases. In the training phase, the server feeds the training audio signals602into the input layers601, where the training audio signals may include any number of genuine and spoofed or false audio signals. The training audio signals602may be raw audio files or pre-processed according to one or more pre-processing operations. The input layers601may perform one or more pre-processing operations on the training audio signals602. The input layers601extract certain features from the training audio signals602and perform various data augmentation operations on the training audio signals602. For instance, the input layers601may convert the training audio signals602into multi-dimensional log filter banks (LFBs).
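A hedged sketch of this LFB conversion, together with a simple frequency-masking augmentation of the kind described next, might look as follows. It assumes torchaudio, a 16 kHz recording at a hypothetical file path, and illustrative parameter choices (80 mel bands, 25 ms windows with 10 ms hops, masks of at most 10 bands); none of these values are taken from the disclosure.

import torch
import torchaudio

# Compute 80-band log filter bank (LFB) features for a waveform.
waveform, sample_rate = torchaudio.load("training_signal.wav")   # hypothetical file path
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=400, hop_length=160, n_mels=80)(waveform)
lfb = torch.log(mel + 1e-6)                                       # shape: (channels, 80, frames)

def frequency_mask(features, max_width=10):
    """Zero out a random band of filter-bank channels, negating how those
    portions of the representation factor into later operations."""
    masked = features.clone()
    width = int(torch.randint(1, max_width + 1, (1,)))
    start = int(torch.randint(0, features.size(-2) - width, (1,)))
    masked[..., start:start + width, :] = 0.0
    return masked

augmented_lfb = frequency_mask(lfb)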
The input layers601then perform, for example, a frequency masking data augmentation operation on one or more portions of the LFB representations of the training audio signals602, thereby negating or nullifying how such portions would factor into later operations. The training audio signals602are then fed into functional layers (e.g., ResNet blocks) defining the embedding extractors606. The embedding extractors606generate feature vectors based on the extracted features fed into the embedding extractors606and extract, for example, a spoof embedding, among other types of embeddings (e.g., voiceprint embeddings), based upon the feature vectors. The spoof embedding extractor606is trained using the loss layer610, which learns and tunes the spoof embeddings according to labels associated with the training audio signals602. The classifier612uses the spoof embeddings to determine whether the audio signal fed into the input layers601is "genuine" or "spoofed." The loss layer610tunes the embedding extractor606by performing the loss function (e.g., LMCL) to determine the distance (e.g., large margin cosine loss) between the determined genuine and spoof classifications, as indicated by supervised labels or previously generated clusters. A user may tune parameters of the loss layer610(e.g., adjust the m value of the LMCL function) to tune the sensitivity of the loss function. The server feeds the training audio signals602into the neural network architecture600to re-train and further tune the layers of the neural network600. The server fixes the hyper-parameters of the embedding extractor606and/or fully-connected layers608when predicted outputs (e.g., classifications, feature vectors, embeddings) converge with the expected outputs within a threshold margin of error. In some embodiments, the server may forgo the enrollment phase and proceed directly to the deployment phase. The server feeds inbound audio signals (which could include an enrollment audio signal) into the neural network architecture600. The classifier612includes one or more layers trained to determine whether the outputs (e.g., classifications, feature vectors, embeddings) of the embedding extractor606and/or fully-connected layers608are within a given distance from a threshold value established during the training phase according to the LMCL and/or LDA algorithms. By executing the classifier612, the server classifies an inbound audio signal as genuine or spoofed based on the output(s) of the neural network architecture600. In some cases, the server may authenticate the inbound audio signal according to the results of the determination by the classifier612. During the optional enrollment phase, the server feeds one or more enrollment audio signals603into the embedding extractor606to extract an enrollee spoofprint embedding for an enrollee. The enrollee spoofprint embedding is then stored into memory. In some embodiments, the enrollee spoofprint embeddings are used to train a classifier612for the enrollee, though in other embodiments the server may disable the classifier612during the enrollment phase. Example Enrollment and Deployment FIG.7shows architecture components of a neural network architecture700for processing audio signals702,712to detect spoofing attempts, according to an embodiment.
The neural network700is described as being executed by a server during enrollment and deployment operational phases for authentication, though the neural network700may be executed by any computing device comprising a processor capable of performing the operations of the neural network700and by any number of such computing devices. The neural network700includes input layers703for ingesting audio signals702,712and performing various augmentation operations; layers that define one or more embedding extractors704(e.g., spoofprint embedding extractor, voiceprint embedding extractor) for generating one or more embeddings706,714; one or more layers defining a combination operation (LDA) that algorithmically combines enrollee embeddings706; and one or more scoring layers716that perform various scoring operations, such as a distance scoring operation716, to produce a verification score718. The server feeds audio signals702,712to the input layers703to begin applying the neural network700. In some cases, the input layers703perform one or more pre-processing operations on the audio signals702,712, such as parsing the audio signals702,712into frames or segments, extracting low-level features, and transforming the audio signals702,712from a time-domain representation to a frequency-domain (or energy domain) representation, among other pre-processing operations. During the enrollment phase, the input layers703receive enrollment audio signals702for an enrollee. In some implementations, the input layers703perform data augmentation operations on the enrollment audio signals702to, for example, manipulate the audio signals within the enrollment audio signals702, manipulate the low-level features, or generate simulated enrollment audio signals702that have manipulated features or audio signal based on corresponding enrollment audio signals702. During the deployment phase, the input layers703may perform the pre-processing operations to prepare an inbound audio signal712for the embedding extractor704. The server, however, may disable the augmentation operations of the input layers703, such that the embedding extractor704evaluates the features of the inbound audio signal712as received. The embedding extractor704comprises one or more layers of the neural network700trained (during a training phase) to detect speech and generate feature vectors based on the features extracted from the audio signals702,712, which the embedding extractor704outputs as embeddings706,714. During the enrollment phase, the embedding extractor704produces enrollee embeddings706for each of the enrollment audio signals702. The neural network700then performs the combination operation708on the embeddings706to extract the enrollee spoofprint710for the enrollee. During the deployment phase, the embedding extractor704generates the feature vector for the inbound audio signal712based on the features extracted from the inbound audio signal712. The embedding extractor704outputs this feature vector as an inbound spoofprint714for the inbound audio signal712. The neural network700feeds the enrollee spoofprint710and the inbound spoofprint714to the scoring layers716to perform various scoring operations. The scoring layers716perform a distance scoring operation that determines the distance (e.g., similarities, differences) between the enrollee spoofprint710and the inbound spoofprint714, indicating the likelihood that inbound spoofprint714is a spoofing attempt. 
For instance, a lower distance score for the inbound spoofprint714indicates that the inbound spoofprint714is more likely to be a spoofing attempt. The neural network700may output a verification score718, which may be a value generated by the scoring layers716based on one or more scoring operations (e.g., distance scoring). In some implementations, the scoring layers716determine whether the distance score or other outputted values satisfy threshold values. In such implementations, the verification score718need not be a numeric output. For example, the verification score718may be a human-readable indicator (e.g., plain language, visual display) that indicates whether the neural network700has determined that the inbound audio signal712is a spoof attempt (e.g., the server has detected spoofing). Additionally or alternatively, the verification score718may include a machine-readable detection indicator or authentication instruction, which the server transmits via one or more networks to computing devices performing one or more downstream applications. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein. When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium.
A non-transitory computer-readable or processor-readable media includes both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. A non-transitory processor-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Throughout the drawings, like reference numerals will be understood to refer to like parts, components, and structures. MODE FOR CARRYING OUT THE INVENTION The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces. FIG.1is a block diagram illustrating an electronic device in a network environment according to an embodiment of the disclosure. Referring toFIG.1, an electronic device101in a network environment100may communicate with an (external) electronic device102via a first network198(e.g., a short-range wireless communication network), or at least one of an (external) electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input module150, a sound output module155, a display module160, an audio module170, a sensor module176, an interface177, a connecting (or connection) terminal178, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one of the components (e.g., the connecting terminal178) may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components (e.g., the sensor module176, the camera module180, or the antenna module197) may be implemented as a single component (e.g., the display module160). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. 
According to one embodiment, as at least part of the data processing or computation, the processor120may store a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), or an auxiliary processor123(e.g., a graphics processing unit (GPU), a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. For example, when the electronic device101includes the main processor121and the auxiliary processor123, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. The auxiliary processor123may control at least some of functions or states related to at least one component (e.g., the display module160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. According to an embodiment, the auxiliary processor123(e.g., the neural processing unit) may include a hardware structure specified for artificial intelligence model processing. An artificial intelligence model may be generated by machine learning. Such learning may be performed, e.g., by the electronic device101where the artificial intelligence is performed or via a separate server (e.g., the server108). Learning algorithms may include, but are not limited to, e.g., supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), deep Q-network or a combination of two or more thereof but is not limited thereto. The artificial intelligence model may, additionally or alternatively, include a software structure other than the hardware structure. The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. 
The input module150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input module150may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen). The sound output module155may output sound signals to the outside of the electronic device101. The sound output module155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing record. The receiver may be used for receiving incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of the speaker. The display module160may visually provide information to the outside (e.g., a user) of the electronic device101. The display module160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display module160may include a touch sensor adapted to detect a touch, or a pressure sensor adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input module150, or output the sound via the sound output module155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102). According to an embodiment, the connecting terminal178may include, for example, a HDMI connector, a USB connector, a SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. 
According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and supports a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, Wi-Fi direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multi components (e.g., multi chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The wireless communication module192may support a 5G network, after a 4G network, and next-generation communication technology, e.g., new radio (NR) access technology. The NR access technology may support enhanced mobile broadband (eMBB), massive machine type communications (mMTC), or ultra-reliable and low-latency communications (URLLC). The wireless communication module192may support a high-frequency band (e.g., the mmWave band) to achieve, e.g., a high data transmission rate. The wireless communication module192may support various technologies for securing performance on a high-frequency band, such as, e.g., beamforming, massive multiple-input and multiple-output (massive MIMO), full dimensional MIMO (FD-MIMO), array antenna, analog beam-forming, or large scale antenna. 
The wireless communication module192may support various requirements specified in the electronic device101, an external electronic device (e.g., the electronic device104), or a network system (e.g., the second network199). According to an embodiment, the wireless communication module192may support a peak data rate (e.g., 20 Gbps or more) for implementing eMBB, loss coverage (e.g., 164 dB or less) for implementing mMTC, or U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for implementing URLLC. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., a printed circuit board (PCB)). According to an embodiment, the antenna module197may include a plurality of antennas (e.g., array antennas). In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. According to various embodiments, the antenna module197may form an mmWave antenna module. According to an embodiment, the mmWave antenna module may include a printed circuit board, a RFIC disposed on a first surface (e.g., the bottom surface) of the printed circuit board, or adjacent to the first surface and capable of supporting a designated high-frequency band (e.g., the mmWave band), and a plurality of antennas (e.g., array antennas) disposed on a second surface (e.g., the top or a side surface) of the printed circuit board, or adjacent to the second surface and capable of transmitting or receiving signals of the designated high-frequency band. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102or104may be a device of a same type as, or a different type, from the electronic device101. According to an embodiment, all or some of operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. 
The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used, for example. The electronic device101may provide ultra low-latency services using, e.g., distributed computing or mobile edge computing. In another embodiment, the external electronic device104may include an internet-of-things (IoT) device. The server108may be an intelligent server using machine learning and/or a neural network. According to an embodiment, the external electronic device104or the server108may be included in the second network199. The electronic device101may be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology or IoT-related technology. FIG.2is a block diagram illustrating the audio module according to an embodiment of the disclosure. Referring toFIG.2, in a block diagram200, the audio module170may include, for example, an audio input interface210, an audio input mixer220, an analog-to-digital converter (ADC)230, an audio signal processor240, a digital-to-analog converter (DAC)250, an audio output mixer260, or an audio output interface270. The audio input interface210may receive an audio signal corresponding to a sound obtained from the outside of the electronic device101via a microphone (e.g., a dynamic microphone, a condenser microphone, or a piezo microphone) that is configured as part of the input module150or separately from the electronic device101. For example, if an audio signal is obtained from the external electronic device102(e.g., a headset or a microphone), the audio input interface210may be connected with the external electronic device102directly via the connecting terminal178, or wirelessly (e.g., Bluetooth™ communication) via the wireless communication module192to receive the audio signal. According to an embodiment, the audio input interface210may receive a control signal (e.g., a volume adjustment signal received via an input button) related to the audio signal obtained from the external electronic device102. The audio input interface210may include a plurality of audio input channels and may receive a different audio signal via a corresponding one of the plurality of audio input channels, respectively. According to an embodiment, additionally or alternatively, the audio input interface210may receive an audio signal from another component (e.g., the processor120or the memory130) of the electronic device101. The audio input mixer220may synthesize a plurality of inputted audio signals into at least one audio signal. For example, according to an embodiment, the audio input mixer220may synthesize a plurality of analog audio signals inputted via the audio input interface210into at least one analog audio signal. The ADC230may convert an analog audio signal into a digital audio signal. For example, according to an embodiment, the ADC230may convert an analog audio signal received via the audio input interface210or, additionally or alternatively, an analog audio signal synthesized via the audio input mixer220into a digital audio signal. 
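As a rough software analogy for the audio input mixer220and the ADC230(the actual mixing and conversion occur in circuitry), the following sketch mixes two simulated inputs and quantizes the result to 16-bit PCM. The tone frequencies, amplitudes, and sample rate are arbitrary values chosen only for illustration.

import numpy as np

sample_rate = 48000
t = np.arange(sample_rate) / sample_rate                  # one second of samples
mic_a = 0.4 * np.sin(2 * np.pi * 440.0 * t)               # stand-ins for two analog inputs
mic_b = 0.2 * np.sin(2 * np.pi * 880.0 * t)

# Audio input mixer: synthesize the inputted signals into at least one signal.
mixed = mic_a + mic_b

# ADC: quantize the (simulated) analog signal into a 16-bit digital audio signal.
digital = np.clip(np.round(mixed * 32767.0), -32768, 32767).astype(np.int16)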
The audio signal processor240may perform various processing on a digital audio signal received via the ADC230or a digital audio signal received from another component of the electronic device101. For example, according to an embodiment, the audio signal processor240may perform changing a sampling rate, applying one or more filters, interpolation processing, amplifying or attenuating a whole or partial frequency bandwidth, noise processing (e.g., attenuating noise or echoes), changing channels (e.g., switching between mono and stereo), mixing, or extracting a specified signal for one or more digital audio signals. According to an embodiment, one or more functions of the audio signal processor240may be implemented in the form of an equalizer. The DAC250may convert a digital audio signal into an analog audio signal. For example, according to an embodiment, the DAC250may convert a digital audio signal processed by the audio signal processor240or a digital audio signal obtained from another component (e.g., the processor (120) or the memory (130)) of the electronic device101into an analog audio signal. The audio output mixer260may synthesize a plurality of audio signals, which are to be outputted, into at least one audio signal. For example, according to an embodiment, the audio output mixer260may synthesize an analog audio signal converted by the DAC250and another analog audio signal (e.g., an analog audio signal received via the audio input interface210) into at least one analog audio signal. The audio output interface270may output an analog audio signal converted by the DAC250or, additionally or alternatively, an analog audio signal synthesized by the audio output mixer260to the outside of the electronic device101via the sound output module155. The sound output module155may include, for example, a speaker, such as a dynamic driver or a balanced armature driver, or a receiver. According to an embodiment, the sound output module155may include a plurality of speakers. In such a case, the audio output interface270may output audio signals having a plurality of different channels (e.g., stereo channels or 5.1 channels) via at least some of the plurality of speakers. According to an embodiment, the audio output interface270may be connected with the external electronic device102(e.g., an external speaker or a headset) directly via the connecting terminal178or wirelessly via the wireless communication module192to output an audio signal. According to an embodiment, the audio module170may generate, without separately including the audio input mixer220or the audio output mixer260, at least one digital audio signal by synthesizing a plurality of digital audio signals using at least one function of the audio signal processor240. According to an embodiment, the audio module170may include an audio amplifier (not shown) (e.g., a speaker amplifying circuit) that is capable of amplifying an analog audio signal inputted via the audio input interface210or an audio signal that is to be outputted via the audio output interface270. According to an embodiment, the audio amplifier may be configured as a module separate from the audio module170. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. 
According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as "A or B," "at least one of A and B," "at least one of A or B," "A, B, or C," "at least one of A, B, and C," and "at least one of A, B, or C," may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as "1st" and "2nd," or "first" and "second" may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively," as "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. As used in connection with various embodiments of the disclosure, the term "module" may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, "logic," "logic block," "part," or "circuitry." A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program140) including one or more instructions that are stored in a storage medium (e.g., internal memory136or external memory138) that is readable by a machine (e.g., the electronic device101). For example, a processor (e.g., the processor120) of the machine (e.g., the electronic device101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product.
The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. FIG.3is a block diagram illustrating an integrated intelligence system according to an embodiment of the disclosure. Referring toFIG.3, the integrated intelligence system300according to an embodiment may include a user terminal301, an intelligent (or intelligence) server302, and a service server303. According to an embodiment, the user terminal301may be a terminal device (or electronic device) that can be connected to the Internet, for example, a mobile phone, a smartphone, a personal digital assistant (PDA), a notebook computer, a TV, a domestic appliance, a wearable device, an HMD, or a smart speaker. According to an embodiment, the user terminal301(e.g., the electronic device101ofFIG.1) may include a communication interface311, a microphone312, a speaker313, a display314, a memory315, and a processor316. The listed components may be operatively or electrically connected to each other. According to an embodiment, the communication interface311may be configured to be connected to an external device to transmit and receive data. According to an embodiment, the microphone312may receive a sound (e.g., a user's utterance) and may convert the sound into an electrical signal. According to an embodiment, the speaker313may output an electrical signal as a sound (e.g., a voice). According to an embodiment, the display314may be configured to display an image or a video. According to an embodiment, the display314may display a graphic user interface (GUI) of an executed application (or application program). According to an embodiment, the memory315may store a client module317, a software development kit (SDK)318, and a plurality of applications319_1and319_2. 
The client module317and the SDK318may form a framework (or a solution program) for performing a general-purpose function. In addition, the client module317or the SDK318may form a framework for processing a voice input. According to an embodiment, the plurality of applications319_1and319_2in the memory315may be programs for performing a designated function. According to an embodiment, the plurality of applications319_1and319_2may include a first application319_1and a second application319_2. According to an embodiment, each of the plurality of applications319_1and319_2may include a plurality of operations for performing a designated function. For example, the plurality of applications319_1and319_2may include at least one of an alarm application, a message application, and a schedule application. According to an embodiment, the plurality of applications319_1and319_2may be executed by the processor316to sequentially execute at least some of the plurality of operations. According to an embodiment, the processor316may control the overall operation of the user terminal301. For example, the processor316may be electrically connected to the communication interface311, the microphone312, the speaker313, the display314, and the memory315to perform a designated operation. According to an embodiment, the processor316may also execute a program stored in the memory315to perform a designated function. For example, the processor316may execute at least one of the client module317or the SDK318to perform the following operation for processing a voice input. The processor316may control the operation of the plurality of applications319_1and319_2, for example, through the SDK318. An operation described below as an operation of the client module317or the SDK318may be an operation performed through execution by the processor316. According to an embodiment, the client module317may receive a voice input. For example, the client module317may generate a voice signal corresponding to a user's utterance detected through the microphone312. The client module317may transmit the received voice input to the intelligent server302. According to an embodiment, the client module317may transmit state information about the user terminal301, together with the received voice input, to the intelligent server302. The state information may be, for example, execution state information about an application. According to an embodiment, the client module317may receive a result corresponding to the received voice input. For example, the client module317may receive the result corresponding to the received voice input from the intelligent server302. The client module317may display the received result on the display314. According to an embodiment, the client module317may receive a plan corresponding to the received voice input. The client module317may display a result of executing a plurality of operations of an application according to the plan on the display314. For example, the client module317may sequentially display results of executing the plurality of operations on the display. In another example, the user terminal301may display only some (e.g., a result of executing the last operation) of the results of executing the plurality of operations on the display. According to an embodiment, the client module317may receive a request for obtaining information required to produce the result corresponding to the voice input from the intelligent server302.
The information required to produce the result may be, for example, state information about an electronic device101. According to an embodiment, the client module317may transmit the required information to the intelligent server302in response to the request. According to an embodiment, the client module317may transmit information about the result of executing the plurality of operations according to the plan to the intelligent server302. The intelligent server302may identify that the received voice input has been properly processed using the information about the result. According to an embodiment, the client module317may include a voice recognition module. According to an embodiment, the client module317may recognize a voice input for performing a limited function through the voice recognition module. For example, the client module317may perform an intelligent application for processing a voice input for performing an organic operation through a designated input (e.g., Wake up!). The client module317may recognize a call utterance (e.g., Hi Bixby) in an audio signal received from the microphone312and may start an AI agent service in response to the call utterance. According to an embodiment, the intelligent server302(e.g., the server108ofFIG.1) may receive information relating to a user voice input from the user terminal301through a communication network. According to an embodiment, the intelligent server302may change data relating to the received voice input into text data. According to an embodiment, the intelligent server302may generate, based on the text data, a plan for performing a task corresponding to the user voice input. According to an embodiment, the plan may be generated by an artificial intelligence (AI) system. The artificial intelligence system may be a rule-based system or a neural network-based system (e.g., a feedforward neural network (FNN)), or a recurrent neural network (RNN). Alternatively, the artificial intelligence system may be a combination of the above systems or a different artificial intelligence system. According to an embodiment, the plan may be selected from a set of predefined plans, or may be generated in real time in response to a user request. For example, the artificial intelligence system may select at least one plan from among a plurality of predefined plans. According to an embodiment, the intelligent server302may transmit a result obtained according to the generated plan to the user terminal301or may transmit the generated plan to the user terminal301. According to an embodiment, the user terminal301may display the result obtained according to the plan on the display314. According to an embodiment, the user terminal301may display a result of executing an operation according to the plan on the display. According to an embodiment, the intelligent server302may include a front end321, a natural language platform322, a capsule database (DB)323, an execution engine324, an end user interface325, a management platform326, a big data platform327, and an analytic platform328. According to an embodiment, the front end321may receive a voice input received from the user terminal301. The front end321may transmit a response corresponding to the voice input. 
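The exchange between the client module317and the front end321of the intelligent server302might be sketched as a request carrying the voice input together with the state information, with a result or plan returned in the response. The endpoint URL, field names, file path, and payload layout below are invented for illustration; the disclosure does not specify a transport or message format.

import base64
import requests  # hypothetical transport choice; not specified by the disclosure

with open("utterance.wav", "rb") as f:        # hypothetical recording captured via the microphone 312
    voice_input = base64.b64encode(f.read()).decode("ascii")

payload = {
    "voice_input": voice_input,
    "state_info": {"foreground_app": "message", "screen": "compose"},  # execution state of an application
}

# Front end 321 of the intelligent server 302 (URL is illustrative only).
response = requests.post("https://intelligent-server.example/front-end/voice",
                         json=payload, timeout=10)
result = response.json()   # e.g., a result to display, or a plan whose operations the client executes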
According to an embodiment, the natural language platform322may include an automatic speech recognition module (ASR module)322a, a natural language understanding module (NLU module)322b, a planner module322c, a natural language generator (or generation) module (NLG module)322d, and a text-to-speech module (TTS module)322e. According to an embodiment, the ASR module322amay convert a voice input received from the user terminal301into text data. According to an embodiment, the NLU module322bmay understand a user's intent using the text data of the voice input. For example, the NLU module322bmay understand the user's intent by performing a syntactic analysis or a semantic analysis. According to an embodiment, the NLU module322bmay understand the meaning of a word extracted from the voice input using a linguistic feature (e.g., a syntactic element) of a morpheme or phrase and may determine the user's intent by matching the understood meaning of the word to intent. According to an embodiment, the planner module322cmay generate a plan using the intent determined by the NLU module322band a parameter. According to an embodiment, the planner module322cmay determine a plurality of domains necessary to perform a task based on the determined intent. The planner module322cmay determine a plurality of operations respectively included in the plurality of domains determined based on the intent. According to an embodiment, the planner module322cmay determine a parameter required to execute the plurality of determined operations or a result value output by executing the plurality of operations. The parameter and the result value may be defined as a concept related to a designated format (or class). Accordingly, the plan may include the plurality of operations determined by the intent of the user and a plurality of concepts. The planner module322cmay determine a relationship between the plurality of operations and the plurality of concepts by stages (or hierarchically). For example, the planner module322cmay determine the execution order of the plurality of operations, determined based on the user's intent, based on the plurality of concepts. That is, the planner module322cmay determine the execution order of the plurality of operations, based on the parameter required to execute the plurality of operations and the result output by executing the plurality of operations. Accordingly, the planner module322cmay generate a plan including association information (e.g., ontology) between the plurality of operations and the plurality of concepts. The planner module322cmay generate a plan using information stored in a capsule DB323in which a set of relationships between concepts and operations is stored. According to an embodiment, the NLG module322dmay change designated information into a text form. The information changed into the text form may be in the form of a natural language utterance. According to an embodiment, the TTS module322emay change information in the text form into information in a voice form. According to an embodiment, the capsule DB323may store information about a relationship between a plurality of concepts and a plurality of operations corresponding to a plurality of domains. For example, the capsule DB323may store a plurality of capsules including a plurality of action objects (or pieces of action information) and a plurality of concept objects (or pieces of concept information) of a plan. 
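By way of illustration only, the following Python sketch shows one way a plan such as the one generated by the planner module 322c could associate operations with the concepts (parameters and result values) they consume and produce, and derive an execution order from those dependencies. The sketch is not part of the disclosed embodiments; the class name, the operation and concept names, and the ordering routine are hypothetical simplifications.

from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    requires: list      # concepts (parameters) needed before the operation runs
    produces: str       # concept (result value) output by the operation

def order_operations(operations, available_concepts):
    """Order operations so that every required concept is produced before the
    operation that needs it runs (a simple dependency-driven sort)."""
    ordered, available = [], set(available_concepts)
    pending = list(operations)
    while pending:
        runnable = [op for op in pending if set(op.requires) <= available]
        if not runnable:
            raise ValueError("unsatisfiable concept dependencies")
        for op in runnable:
            ordered.append(op)
            available.add(op.produces)
            pending.remove(op)
    return ordered

# Hypothetical example: a schedule request decomposed into two operations.
plan = order_operations(
    [Operation("format_answer", requires=["schedule_list"], produces="answer_text"),
     Operation("fetch_schedule", requires=["date_range"], produces="schedule_list")],
    available_concepts=["date_range"])
print([op.name for op in plan])   # ['fetch_schedule', 'format_answer']

In this simplification the execution order follows directly from which concept each operation requires and which concept it yields, mirroring the description that the planner module 322c determines the execution order based on the parameters required by the operations and the results output by executing them.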
According to an embodiment, the capsule DB323may store the plurality of capsules in the form of a concept action network (CAN). According to an embodiment, the plurality of capsules may be stored in a function registry included in the capsule DB323. According to an embodiment, the capsule DB323may include a strategy registry that stores strategy information required to determine a plan corresponding to a voice input. The strategy information may include reference information for determining one plan when there is a plurality of plans corresponding to the voice input. According to an embodiment, the capsule DB323may include a follow-up registry that stores information about a follow-up for suggesting a follow-up to the user in a specified situation. The follow-up may include, for example, a following utterance. According to an embodiment, the capsule DB323may include a layout registry that stores layout information about information output through the user terminal301. According to an embodiment, the capsule DB323may include a vocabulary registry that stores vocabulary information included in capsule information. According to an embodiment, the capsule DB323may include a dialog registry that stores information about a dialog (or interaction) with the user. According to an embodiment, the capsule DB323may update a stored object through a developer tool. The developer tool may include, for example, a function editor for updating an action object or a concept object. The developer tool may include a vocabulary editor for updating vocabulary. The developer tool may include a strategy editor for generating and registering a strategy for determining a plan. The developer tool may include a dialog editor that generates a dialog with the user. The developer tool may include a follow-up editor capable of activating a following target and editing a following utterance providing a hint. The following target may be determined based on a currently set target, user preference, or an environmental condition. According to an embodiment, the capsule DB323can also be implemented in the user terminal301. That is, the user terminal301may include the capsule DB323that stores information for determining an operation corresponding to a voice input. According to an embodiment, the execution engine324may produce a result using the generated plan. According to an embodiment, the end user interface325may transmit the produced result to the user terminal301. Accordingly, the user terminal301may receive the result and may provide the received result to the user. According to an embodiment, the management platform326may manage information used in the intelligent server302. According to an embodiment, the big data platform327may collect user data. According to an embodiment, the analytic platform328may manage the quality of service (QoS) of the intelligent server302. For example, the analytic platform328may manage a component and the processing speed (or efficiency) of the intelligent server302. According to an embodiment, the service server303may provide a designated service (e.g., a food delivery service or a hotel reservation service) to the user terminal301. According to an embodiment, the service server303may be a server operated by a third party. For example, the service server303may include a first service server331, a second service server332, and a third service server333that are operated by different third parties. 
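By way of illustration only, the following Python sketch suggests how the capsule DB 323 and the registries described above might be laid out in memory, and how strategy information could be used to determine one plan when a plurality of plans correspond to a voice input. The dictionary layout, the registry contents, and the "fewest actions" rule are hypothetical and are not taken from the disclosure.

# Hypothetical in-memory layout of a capsule DB with its registries.
capsule_db = {
    "capsules": {"schedule": {"actions": ["fetch_schedule", "format_answer"],
                              "concepts": ["date_range", "schedule_list"]}},
    "strategy_registry": {"prefer_fewest_actions": True},
    "follow_up_registry": {"schedule": "Shall I add a reminder?"},
    "layout_registry": {"schedule": "list_view"},
    "vocabulary_registry": {"schedule": ["meeting", "appointment"]},
    "dialog_registry": {"schedule": ["Which week do you mean?"]},
}

def choose_plan(candidate_plans, strategy):
    """Pick one plan among several candidates using strategy information,
    here simplified to 'prefer the plan with the fewest actions'."""
    if strategy.get("prefer_fewest_actions"):
        return min(candidate_plans, key=len)
    return candidate_plans[0]

plans = [["fetch_schedule", "format_answer"],
         ["fetch_schedule", "filter_schedule", "format_answer"]]
print(choose_plan(plans, capsule_db["strategy_registry"]))
# ['fetch_schedule', 'format_answer']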
According to an embodiment, the service server303may provide information for generating a plan corresponding to a received voice input to the intelligent server302. The provided information may be stored, for example, in the capsule DB323. In addition, the service server303may provide result information according to the plan to the intelligent server302. In the foregoing integrated intelligent system300, the user terminal301may provide various intelligent services to the user in response to a user input. The user input may include, for example, an input through a physical button, a touch input, or a voice input. In an embodiment, the user terminal301may provide a voice recognition service through an intelligent application (or voice recognition application) stored therein. In this case, for example, the user terminal301may recognize a user utterance or a voice input received through the microphone and may provide a service corresponding to the recognized voice input to the user. In an embodiment, the user terminal301may perform a designated operation alone or together with the intelligent server302and/or the service server303, based on the received voice input. For example, the user terminal301may execute an application corresponding to the received voice input and may perform the designated operation through the executed application. In an embodiment, when the user terminal301provides a service together with the intelligent server302and/or the service server303, the user terminal301may detect a user utterance using the microphone312and may generate a signal (or voice data) corresponding to the detected user speech. The user terminal301may transmit the voice data to the intelligent server302using the communication interface311. According to an embodiment, the intelligent server302may generate, as a response to voice input received from the user terminal301, a plan for performing a task corresponding to the voice input or a result of performing an operation according to the plan. The plan may include, for example, a plurality of operations for performing the task corresponding to the user's voice input and a plurality of concepts related to the plurality of operations. The concepts may define a parameter input to execute the plurality of operations or a result value output by executing the plurality of operations. The plan may include information about an association between the plurality of operations and the plurality of concepts. According to an embodiment, the user terminal301may receive the response using the communication interface311. The user terminal301may output an audio signal generated inside the user terminal301to the outside using the speaker313or may output an image generated inside the user terminal301to the outside using the display314. FIG.4illustrates a form in which information about a relationship between a concept and an action is stored in a database according to an embodiment of the disclosure. Referring toFIG.4, a capsule DB (e.g., the capsule database DB323) of the intelligent server302may store a capsule in the form of a concept action network (CAN)400. The capsule DB may store an operation of processing a task corresponding to a voice input from a user and a parameter required for the operation in the form of a concept action network (CAN). The CAN may show a systematic relationship between an action and a concept defining a parameter required to perform the action. 
The capsule DB may store a plurality of capsules (e.g., capsule A401and capsule B402) respectively corresponding to a plurality of domains (e.g., applications). According to an embodiment, one capsule (e.g., capsule A401) may correspond to one domain (e.g., application). Further, one capsule may correspond to at least one service provider (e.g., CP 1403, CP 2404, CP 3405, or CP 4406) for performing a function for a domain related to the capsule. According to an embodiment, one capsule may include at least one action410and at least one concept420for performing a specified function. According to an embodiment, the natural language platform322may generate a plan for performing a task corresponding to a received voice input using a capsule stored in the capsule DB. For example, the planner module322cof the natural language platform322may generate the plan using the capsule stored in the capsule DB. For example, the planner module322cmay generate a plan407using actions4011and4013and concepts4012and4014of capsule A401and an action4041and a concept4042of capsule B402. FIG.5illustrates a screen for a user terminal to process a received voice input through an intelligent application according to an embodiment of the disclosure. Referring toFIG.5, the user terminal301may execute an intelligent application to process a user input through the intelligent server302. According to an embodiment, when recognizing a designated voice input (e.g., Wake up!) or receiving an input via a hardware key (e.g., a dedicated hardware key), the user terminal301may execute the intelligent application for processing the voice input on screen510. For example, the user terminal301may execute the intelligent application in a state in which a schedule application is executed. According to an embodiment, the user terminal301may display an object (e.g., an icon)511corresponding to the intelligent application on the display314. According to an embodiment, the user terminal301may receive a voice input based on a user utterance. For example, the user terminal301may receive a voice input “Tell me the schedule for this week!” According to an embodiment, the user terminal301may display a user interface (UI, e.g., an input window)513of the intelligent application displaying text data of the received voice input on the display. According to an embodiment, the user terminal301may display a result corresponding to the received voice input on screen520on the display. For example, the user terminal301may receive a plan corresponding to the received user input and may display “Schedule for this week” according to the plan on the display. FIG.6is a block diagram of an electronic device configured to enable an AI agent to participate in a conversation between a user and a neighbor according to an embodiment of the disclosure. FIG.7illustrates connections between modules ofFIG.6according to an embodiment of the disclosure. 
Referring to FIGS. 6 and 7, the electronic device 600 (e.g., the electronic device 101 of FIG. 1) may include an audio input module 601, a wake-up module 602, an audio separation module 603, a user verification module 604, a voice activity detection (VAD) module 605, an ASR 606, an NLU 607, an NLG 608, a preference identification module 609, a reliability measurement module 610, a conversation participation determination module 611, a TTS 612, an audio output module 613, an emotion detection module 614, a filler detection module 615, a user verification model 616, a personal model 617, a general model 618, a filler model 619, a key utterance list 620, a memory 688, or a processor 699. The foregoing components of the electronic device 600 may be operatively or electrically connected to each other. The models 617 to 619 and the key utterance list 620 may be stored in the memory 688. According to an embodiment, the modules 601 to 615 may be operatively connected as shown in FIG. 7. The audio input module 601 may receive an audio signal. For example, the audio input module 601 may receive an audio signal from a microphone configured in the input module 150 of FIG. 1. The audio input module 601 may receive an audio signal from an external device (e.g., a headset or a microphone) connected via a cable through an audio connector configured in the connection terminal 178 of FIG. 1. The audio input module 601 may receive an audio signal from an external device wirelessly (e.g., via Bluetooth communication) connected to the electronic device 600 through a wireless communication circuit (e.g., the wireless communication module 192 of FIG. 1). The wake-up module 602 may recognize that a user 701 calls an AI agent (or voice assistant). According to an embodiment, the wake-up module 602 may receive an audio signal from the audio input module 601 and may recognize an utterance (e.g., Hi Bixby) designated to call the AI agent in the received audio signal. For example, the wake-up module 602 may detect the starting point and the end point of the user utterance in the audio signal, thereby obtaining a part including the user utterance (e.g., a first part corresponding to "Hi" and a second part corresponding to "Bixby") in the audio signal. The wake-up module 602 may compare the obtained utterance part with previously stored voice data, thereby determining whether the audio signal includes a call utterance (or a driving utterance). According to an embodiment, the wake-up module 602 may allow the user 701 to call the AI agent using a method other than a voice. For example, the wake-up module 602 may recognize two consecutive presses of a power key of the input module 150 as a call. In another example, the wake-up module 602 may recognize a touch input received from a touch circuit of the display module 160 as a call. The audio separation module 603 may separate an audio signal 710 received from the audio input module 601 through the wake-up module 602 into a user audio signal 711 including a voice of the user 701 and a neighbor audio signal 712 including a voice of a neighbor 702 having a conversation with the user 701, in response to a call from the user 701. According to an embodiment, the audio separation module 603 may obtain the user audio signal 711 and the neighbor audio signal 712 using the user verification model 616. The user verification model 616 finds, in an audio signal, a user voice to which the AI agent needs to respond and may be, for example, an artificial intelligence model trained using utterance data of the user 701 (e.g., a call utterance of the user 701 recognized in a user registration process).
For example, the audio separation module603may enter the audio signal710as an input value into the user verification model616through the user verification module604by a unit of a frame (e.g., 20 ms) and may determine whether an audio frame entered as the input value includes the voice of the user, based on a result value output from the user verification model616. For example, when the result value indicates that the audio frame includes the user voice, the audio separation module603may classify the audio frame as the user audio signal711. When the result value indicates that the audio frame does not include the voice of the user, the audio separation module603may classify the audio frame as the neighbor audio signal712. The voice activity detection (VAD) module605may recognize a speech section715in the audio signal710received from the audio input module601through the wake-up module602. For example, the audio signal710may be transmitted to the audio separation module603, and a copy thereof may be transmitted to the VAD module605to be used for voice activity detection. According to an embodiment, the VAD module605may recognize the speech section in the audio signal710using a VAD model (e.g., a convolutional neural network (CNN) model or a recurrent neural network (RNN) model) trained using an artificial intelligence algorithm. For example, the VAD module605may enter the audio signal710as an input value into the VAD model by a unit of a frame and may obtain a result value from the VAD model. For example, the result value may include a predictive value indicating whether an input audio frame is a voice frame. When an audio frame input to the VAD model is a voice frame, the VAD module605may recognize whether the voice frame is the start point of a speech section, the end point thereof, or within the speech section, based on predictive values output from the VAD model. For example, a first audio frame, a second audio frame, and a third audio frame may be sequentially input to the VAD model, and a first predictive value, a second predictive value, and a third predictive value may be sequentially output from the VAD model. When the first predictive value indicates that no voice is present in the first audio frame, the second predictive value indicates that a voice is present in the second audio frame, and the third predictive value indicates that a voice is present in the third audio frame, the VAD module605may recognize the second audio frame as the start point of a speech section and may recognize the third audio frame as a frame within the speech section. When the first predictive value indicates that a voice is present in the first audio frame and the second predictive value indicates that no voice is present in the second audio frame, the VAD module605may recognize the second audio frame as the end point of a speech section. The ASR606(e.g., the automatic speech recognition module322aof FIG.3) may convert a user voice in a speech section recognized by the VAD module605in the user audio signal711received through the audio separation module603into user text data721. The ASR606may convert a neighbor voice in a speech section recognized by the VAD module605in the neighbor audio signal712received through the audio separation module603into neighbor text data722. The NLU607(e.g., the natural language understanding module322bofFIG.3) may understand the intent of the user701using the user text data721received from the ASR606. 
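By way of illustration only, the following Python sketch shows one way the frame-by-frame separation performed with the user verification model 616 and the start-point/end-point logic of the VAD module 605 described above could be combined. The 20 ms framing is taken from the description; the callables is_user_voice and has_voice, which stand in for the trained verification and VAD models, and the dictionary frames are hypothetical.

def separate_and_segment(frames, is_user_voice, has_voice):
    """Split 20 ms audio frames into user/neighbor streams and mark speech
    boundaries. is_user_voice and has_voice stand in for the user
    verification model and the VAD model, respectively."""
    user_stream, neighbor_stream, boundaries = [], [], []
    previous_voiced = False
    for index, frame in enumerate(frames):
        # Audio separation: route each frame by the verification result.
        (user_stream if is_user_voice(frame) else neighbor_stream).append(frame)
        # VAD: compare consecutive predictions to find start/end points.
        voiced = has_voice(frame)
        if voiced and not previous_voiced:
            boundaries.append(("start", index))
        elif not voiced and previous_voiced:
            boundaries.append(("end", index))
        previous_voiced = voiced
    return user_stream, neighbor_stream, boundaries

# Toy usage with frames represented as dictionaries instead of audio samples.
frames = [{"user": False, "voice": False}, {"user": True, "voice": True},
          {"user": True, "voice": True}, {"user": False, "voice": False}]
print(separate_and_segment(frames,
                           is_user_voice=lambda f: f["user"],
                           has_voice=lambda f: f["voice"])[2])
# [('start', 1), ('end', 3)]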
For example, the NLU607may understand what information the user701has queried or requested. The NLU607may understand what function or service the user701has given a command to execute. The NLU607may understand the intent of the neighbor702using the neighbor text data722received from the ASR606. For example, the NLU607may recognize that a neighbor utterance is an answer to a query or request from the user701. According to an embodiment, a mode in which the AI agent responds to a call from the user701may include a one-time conversation mode and a continuous conversation mode. For example, the one-time conversation mode may be a mode in which the AI agent responds to a query or command of the user701after a user call and ends a conversation, or ends the conversation when there is no additional query or command within a specified time after the response. The continuous conversation mode may be a mode in which the AI agent continuously participates in a conversation between the user701and the neighbor702after a user call, and ends the participation in the conversation when an end utterance of the user701is recognized. According to an embodiment, the NLU607may determine whether a response mode is the one-time conversation mode or the continuous conversation mode, based on user intent understood from user voice data. For example, after giving a call utterance of “Hi Bixby” or entering a call key, the NLU607may understand a command of the user701as starting the continuous conversation mode from user voice data received from the ASR606, such as “Start the conversation mode” or “Join our conversation,” and may determine the response mode as the continuous conversation mode. As the response mode is determined as the continuous conversation mode, a component (e.g., the modules603to615) for supporting an AI agent service may be continuously activated. In a state in which the continuous conversation mode is maintained, the NLU607may understand a command of the user701as ending the continuous conversation mode from user voice data received from the ASR606, such as “End the conversation mode,” “Stop now,” or “Hi Bixby, stop,” and may determine to end the continuous conversation mode. As the end of the continuous conversation mode is determined, the component for supporting the AI agent service may be continuously deactivated. The NLG608(e.g., the natural language generation module322dofFIG.3) may generate an answer (i.e., an agent answer)741of the AI agent, based on the user intent731understood by the NLU607. The answer of the agent may be displayed on a display or may be converted into a voice signal751by the TTS612. According to an embodiment, the NLG608may generate an agent answer indicating that the AI agent has understood what the user command is. For example, as the NLU607understands the command of the user as starting the continuous conversation mode, the NLG608may generate an agent answer “The conversation mode is started.” As the NLU607understands the command of the user as ending the continuous conversation mode, the NLG608may generate an agent response “The conversation mode is terminated.” As the NLU607understands a user utterance (e.g., Are you listening?) as identifying that the conversation mode is continuing, the NLG608may generate an agent response “Yes, I am listening.” According to an embodiment, the NLG608may generate an agent answer to a query or request of the user701, based on a knowledge database (e.g., a database configured in the server108ofFIG.1). 
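By way of illustration only, the following Python sketch models the switching between the one-time and continuous conversation modes and the canned agent answers described above. The command phrases and responses are quoted from the description; the class name and the simple string matching, which stand in for the NLU 607 understanding the command, are hypothetical.

START_PHRASES = {"start the conversation mode", "join our conversation"}
END_PHRASES = {"end the conversation mode", "stop now", "hi bixby, stop"}

class ConversationModeController:
    """Tracks whether the agent is in the continuous conversation mode,
    based on commands understood from the user's utterances."""
    def __init__(self):
        self.continuous = False

    def on_user_utterance(self, text):
        normalized = text.lower().strip().rstrip(".!")
        if normalized in START_PHRASES:
            self.continuous = True
            return "The conversation mode is started."
        if normalized in END_PHRASES:
            self.continuous = False
            return "The conversation mode is terminated."
        if normalized == "are you listening?":
            return "Yes, I am listening." if self.continuous else None
        return None

controller = ConversationModeController()
print(controller.on_user_utterance("Start the conversation mode"))
print(controller.on_user_utterance("Are you listening?"))
print(controller.on_user_utterance("End the conversation mode"))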
As an example of such knowledge-based answering, the NLG 608 may retrieve information queried or requested by the user 701, which is understood by the NLU 607, from the knowledge database 703 and may generate the answer 741 of the agent, based on the information retrieved from the knowledge database. According to an embodiment, the NLG 608 may identify a preference 761 of the user 701 for information retrieved according to a query or request of the user 701 through the preference identification module 609 and may generate an agent answer, based on preference information. For example, the NLG 608 may select information to be provided for the user 701 among information (e.g., a list of recommended movies) retrieved from the knowledge database, based on the preference 761 identified through the preference identification module 609 and may generate the answer 741 of the agent using the selected information. According to an embodiment, the NLG 608 may determine whether a neighbor answer 742 understood by the NLU 607 is right or wrong, based on the knowledge database, and may generate the answer 741 of the AI agent, based on the determination. For example, when the neighbor 702 answers "He is the third king of the Joseon Dynasty" to a user question "What number king is King Sejong?," the NLG 608 may recognize that the neighbor answer understood by the NLU 607 includes wrong information, based on the knowledge database, and may generate an agent answer "No. King Sejong is the fourth king of the Joseon Dynasty" by correcting the answer of the neighbor. The preference identification module 609 may identify a preference of the user 701 for information obtained by the NLG 608, based on the personal model 617 and/or the general model 618. According to an embodiment, the personal model (or personal preference model) 617 may be an artificial intelligence model that is trained using an artificial intelligence algorithm and is personalized in relation to a preference of the user 701. For example, the personal model 617 may collect a user profile associated with an account used when the user 701 logs in to the electronic device 600. The collected user profile may include, for example, a name, an age, a gender, an occupation, a home address, a company address, usage records (e.g., used content, usage time, and frequency of use) of applications installed in the electronic device 600, a record of a visit to a specific place (e.g., the location of a visited place and a stay time), and an Internet usage record (e.g., information about a visited site, a visit time, and a search term). When the collected user profile is entered as an input value, the personal model 617 may output a preference indicating how much the user likes each target as a predictive value. For example, the personal model 617 may output a preference of the user 701 for each application, a preference of the user 701 for each service, or a preference of the user 701 for each piece of content (e.g., movie, music, food, and sports) as a predictive value. According to an embodiment, the general model (or general preference model) 618 may be an artificial intelligence model for outputting a common preference of a plurality of unspecified persons as a predictive value. For example, the general model 618 may predict a preference (e.g., an application preference, a service preference, and a content preference) by age group and/or gender using profiles collected from the plurality of unspecified persons and may provide the preference to the preference identification module 609.
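By way of illustration only, the following Python sketch shows how the preference identification module 609 might fall back from the personal model 617 to the general model 618 when no personalized preference is available. The nested dictionaries standing in for the two trained models, and the genre and demographic keys, are hypothetical.

# Hypothetical predictive values that the two preference models might output.
personal_model = {"movie_genre": {"action": 0.9, "romance": 0.4}}
general_model = {("20s", "female"): {"movie_genre": {"romance": 0.8, "action": 0.6}}}

def identify_preference(target, category, personal, general, age_group, gender):
    """Prefer the user's personal model; otherwise use the common preference
    predicted for the user's age group and gender by the general model."""
    if personal and target in personal.get(category, {}):
        return personal[category][target]
    return general.get((age_group, gender), {}).get(category, {}).get(target, 0.5)

print(identify_preference("action", "movie_genre", personal_model,
                          general_model, "20s", "female"))   # 0.9 (personal)
print(identify_preference("comedy", "movie_genre", personal_model,
                          general_model, "20s", "female"))   # 0.5 (no data)
print(identify_preference("romance", "movie_genre", None,
                          general_model, "20s", "female"))   # 0.8 (general)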
The NLG608may generate an agent answer, based on the preference identified through the preference identification module609. For example, when a user request understood by the NLU607is a movie recommendation, the NLG608may identify which genre of movie the user701prefers the most in the personal model617through the preference identification module609. The NLG608may identify a movie that is the most popular in the age group of the user701, a movie having a good rating, or a movie that a largest number of viewers have watched currently in the general model618through the preference identification module609. The NLG608may generate an agent answer, based on identified preference information. For example, the NLG608may generate “How about the movie OO?” as an agent answer, based on the user's personal preference information identified in the personal model617. The NLG608may generate “The movie XX is the most popular,” “The movie YY has a good rating,” or “The movie ZZ is currently ranked first” as an agent answer, based on general preference information identified in the general model618. The reliability measurement module610may measure the first reliability771of the neighbor answer742understood by the NLU607, provided to the user701in response to the user utterance, based on the preference761identified through the preference identification module609. The reliability measurement module610may measure the second reliability772of the agent answer741provided by the NLG608in response to the user utterance, based on the preference761identified through the preference identification module609. For example, a user request may be understood as a movie recommendation by the NLU607, and a genre may be understood from the title of a first movie included in a neighbor answer. The reliability measurement module610may identify how much the user701likes the genre of the first movie in the answer of the neighbor702through the preference identification module609and may give a reliability to the neighbor answer, based on identified preference (e.g., in proportion to the preference). When it is understood by the NLU607that there is no movie-related word in the neighbor answer, the reliability measurement module610may give the lowest reliability to the neighbor answer. The reliability measurement module610may identify how much the user701likes the genre of a second movie in an agent answer generated by the NLG608through the preference identification module609and may give a reliability to the agent answer, based on identified preference (e.g., in proportion to the preference). The conversation participation determination module611may determine whether the agent responds, based on the measured reliabilities771and772. For example, in response to the user701's query “What kind of movie do you want to see?” a neighbor answer may be “How about the movie OO?” and an answer provided by the AI agent may be “How about the movie XX?” In another example, in response to the user701's query “Recommend a nice restaurant nearby,” a neighbor answer may be “Only the restaurant OO comes to my mind,” and an answer provided by the AI agent may be “Recommended restaurants near Gangnam Station are aaa, bbb, and ccc.” The reliability measurement module610may give a first reliability to the neighbor answer and a second reliability to the answer of the agent, based on preference identified through the preference identification module609. 
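By way of illustration only, the following Python sketch shows one way the reliability measurement module 610 could turn an identified preference into a reliability for an answer, including the rule that an answer with no movie-related word receives the lowest reliability. The genre preferences, the title-to-genre mapping, and the numeric values are hypothetical stand-ins for the preference identification module 609 and a content catalogue.

# Hypothetical genre preferences output for the user (values in [0, 1]).
USER_GENRE_PREFERENCE = {"action": 0.9, "romance": 0.4, "horror": 0.1}

# Hypothetical catalogue mapping movie titles mentioned in answers to genres.
MOVIE_GENRES = {"movie oo": "romance", "movie xx": "action"}

def answer_reliability(answer_text):
    """Give a reliability in proportion to how much the user prefers the
    genre of the movie found in the answer; lowest reliability if the
    answer contains no known movie-related word."""
    text = answer_text.lower()
    for title, genre in MOVIE_GENRES.items():
        if title in text:
            return USER_GENRE_PREFERENCE.get(genre, 0.0)
    return 0.0   # no movie-related word, so the lowest reliability

print(answer_reliability("How about the movie OO?"))  # 0.4 (romance)
print(answer_reliability("How about the movie XX?"))  # 0.9 (action)
print(answer_reliability("I have no idea"))           # 0.0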
When the second reliability given to the agent answer is higher than the first reliability given to the neighbor answer or the two reliabilities are the same, the conversation participation determination module611may determine to provide the agent answer to the user701. When the second reliability is lower than the first reliability, the conversation participation determination module611may determine not to provide the agent answer to the user701. The conversation participation determination module611may determine to participate in the conversation between the user701and the neighbor702when there is a positive response to a recommendation of the agent in the conversation. For example, as described above, the user701may positively react, for example, “I think the restaurant bbb will be okay?” to the agent answer “Recommended restaurants near Gangnam Station are aaa, bbb, and ccc,” and the neighbor702may respond, for example, “Then let's go to the restaurant bbb,” to the positive response of the user701. The conversation participation determination module611may recognize a positive response in the conversation through NLU607and accordingly may determine to provide the agent answer741, such as “May I guide you to the restaurant bbb?” prepared by the NLG608to the user701. When a neighbor answer includes wrong information and the NLG608prepares an agent answer correcting the wrong information, the conversation participation determination module611may determine to provide the agent answer to the user701. For example, as described above, when there is wrong information in the neighbor answer “He is the third king of the Joseon Dynasty,” the conversation participation determination module611may determine to provide the agent answer “No. King Sejong is the fourth king of the Joseon Dynasty” to the user701in order to correct the neighbor answer. The conversation participation determination module611may determine a time to provide the agent answer741, based on the VAD. For example, the conversation participation determination module611may recognize that a neighbor utterance has ended in response to a user utterance through the VAD module605and may determine the time to provide the agent answer741after a time of the recognition. The TTS612(e.g., the text-to-speech module322eofFIG.3) may change information in a text form into a voice signal. For example, when participation in the conversation is determined by the conversation participation determination module611, the TTS612may change the agent answer741generated by the NLG608into a voice signal751. The audio output module613may output the voice signal751received from the TTS612. According to an embodiment, the audio output module613may output the answer (voice signal) of the agent received through the TTS612at a time determined by the conversation participation determination module611. For example, the audio output module613may output the voice signal of the agent to a speaker configured in the sound output module155ofFIG.1. The audio output module613may output the voice signal of the agent to an external device (e.g., a headset or a speaker) connected via a cable through an audio connector configured in the connection terminal178ofFIG.1. The audio output module613may output the voice signal of the agent to an external device wirelessly (e.g., via Bluetooth communication) connected to the electronic device600through a wireless communication circuit (e.g., the wireless communication module192ofFIG.1). 
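By way of illustration only, the following Python sketch condenses the decision rules of the conversation participation determination module 611 described above: the agent answers when its reliability is at least that of the neighbor answer, when the neighbor answer needs correcting, or when the user reacted positively to an earlier agent recommendation. The function signature and the boolean flags are hypothetical.

def should_agent_respond(agent_reliability, neighbor_reliability,
                         neighbor_has_wrong_info=False,
                         user_reacted_positively=False):
    """Decide whether the AI agent joins the conversation.
    The agent answers when its reliability is equal to or higher than that of
    the neighbor answer, when the neighbor answer must be corrected, or when
    the user responded positively to an earlier agent recommendation."""
    if neighbor_has_wrong_info or user_reacted_positively:
        return True
    return agent_reliability >= neighbor_reliability

# The agent stays silent when the neighbor's answer is judged more reliable.
print(should_agent_respond(0.4, 0.9))                                # False
# It speaks when reliabilities are tied or in its favor ...
print(should_agent_respond(0.9, 0.4))                                # True
# ... or when the neighbor answer contains wrong information.
print(should_agent_respond(0.2, 0.9, neighbor_has_wrong_info=True))  # True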
The emotion detection module 614 may recognize a voice signal indicating a user emotion (or user response) to the answer of the agent from the user audio signal 711 received from the audio input module 601 through the audio separation module 603. The recognized user emotion may be used as a measure of how reliable the answer of the agent is. For example, the emotion detection module 614 may recognize a voice signal indicating a positive or negative response of the user 701 to the answer of the agent, based on characteristics (e.g., strength, pitch, and tone) of the user audio signal. The emotion detection module 614 may recognize a user emotion, based on the user text data received from the ASR 606. For example, the emotion detection module 614 may recognize that a word (e.g., oh, good, or okay) expressing a positive response exists in the user text data received from the ASR 606 and accordingly may recognize that the user emotion is positive about the answer of the agent. The emotion detection module 614 may recognize that a word (e.g., umm, ah, no, or I don't know) expressing a negative response exists in the user text data received from the ASR 606, and accordingly may recognize that the user emotion is negative about the answer of the agent. According to an embodiment, the preference identification module 609 may update the personal model 617 to be adapted to the user's taste, based on the user emotion 781 recognized by the emotion detection module 614. For example, in response to the user's query "What kind of movie do you want to see?" the user 701 may show a negative response to an answer "How about the movie XX?" provided by the agent to the user, while the user 701 may show a positive response to an agent answer "How about the movie YY?" As the negative response of the user 701 to the movie XX is recognized by the emotion detection module 614, the preference identification module 609 may update the personal model 617 such that the preference of the user 701 for the genre of the movie XX is adjusted to be low. As the positive response of the user 701 to the movie YY is recognized by the emotion detection module 614, the preference identification module 609 may update the personal model 617 such that the preference of the user 701 for the genre of the movie YY is adjusted to be high. For example, the personal model 617 may be trained to be adaptive to the taste of the user 701 using emotion data of the user 701 received from the emotion detection module 614 through the preference identification module 609. The filler detection module 615 may recognize a voice signal corresponding to a filler (e.g., uh, um, or ah) in the neighbor audio signal 712 received from the audio input module 601 through the audio separation module 603. The filler detection module 615 may recognize a filler from the neighbor text data 722 received from the ASR 606. The filler may be a word recognized as indicating a situation in which the AI agent needs to participate in the conversation between the user and the neighbor. For example, the conversation participation determination module 611 may recognize that a filler is included in a neighbor answer through the filler detection module 615 and accordingly may determine to provide the answer 741 of the agent provided by the NLG 608 to the user 701.
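By way of illustration only, the following Python sketch shows how the emotion detection module 614 and the preference identification module 609 described above could cooperate: a positive or negative reaction is read from the ASR text, and the stored genre preference is nudged up or down accordingly, standing in for retraining the personal model 617. The word lists come from the examples above; the substring matching, step size, and rounding are hypothetical simplifications.

POSITIVE_WORDS = {"oh", "good", "okay"}
NEGATIVE_WORDS = {"umm", "ah", "no", "i don't know"}

def detect_emotion(user_text):
    """Classify the user's reaction to an agent answer from its ASR text.
    Simple substring matching stands in for analysis of voice characteristics."""
    text = user_text.lower()
    if any(word in text for word in NEGATIVE_WORDS):
        return "negative"
    if any(word in text for word in POSITIVE_WORDS):
        return "positive"
    return "neutral"

def update_preference(preferences, genre, emotion, step=0.1):
    """Nudge the stored genre preference up or down based on the reaction,
    standing in for adapting the personal model to the user's taste."""
    current = preferences.get(genre, 0.5)
    if emotion == "positive":
        preferences[genre] = round(min(1.0, current + step), 2)
    elif emotion == "negative":
        preferences[genre] = round(max(0.0, current - step), 2)
    return preferences

prefs = {"action": 0.9}
print(update_preference(prefs, "action", detect_emotion("No, not that one")))
# {'action': 0.8}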
For example, when the neighbor702hesitates, saying “Um” in response to a user question “What are you going to do for dinner tonight?” the conversation participation determination module611may recognize a filler “Um” through the filler detection module615and accordingly may determine to provide an agent answer “How about shrimp pasta?” prepared by the NLG608to the user701. According to an embodiment, the filler detection module615may recognize the voice signal corresponding to the filler (e.g., uh, um, or ah) in the neighbor audio signal712using the filler model619. For example, the filler model619may be an artificial intelligence model trained using filler training data. When the neighbor audio signal is entered as an input value, the filler model619may output a predictive value indicating whether a filler exists in the neighbor audio signal712. The filler detection module615may output the predictive value783to the conversation participation determination module611. The key utterance list620may include utterance data designated as a situation in which an additional remark of the AI agent is required. According to an embodiment, when the neighbor answer742understood by the NLU607includes an utterance included in the key utterance list620, the conversation participation determination module611may determine to provide the answer741of the agent prepared by the NLG608to the user701. For example, when the neighbor702answers “I don't know. Should I ask Bixby?” to a user question “Will it rain in Seoul this weekend?” the conversation participation determination module611may recognize that the neighbor answer includes utterance data “I don't know” included in the key utterance list620and accordingly may determine to provide an agent answer “It is sunny today in Seoul” prepared by the NLG608to the user701. A reference time (e.g., hangover time)785for inducing participation of the AI agent in the conversation may be defined. According to an embodiment, the conversation participation determination module611may recognize through the NLU607and the VAD module605that a user utterance is a query requesting a neighbor answer and no neighbor utterance starts within a designated reference time after the user utterance ends. Accordingly, the conversation participation determination module611may determine to provide an agent answer prepared by the NLG608to the user701. For example, an answer prepared by the agent to a query of the user701“What kind of movie do you want to see?” may be “How about the movie XX?” When the neighbor702does not answer, the AI agent may provide a prepared answer to the user701. In another example, when the neighbor702does not answer to a query of the user701“Tell me the weather in Seoul today,” the AI agent may provide an answer, for example, “It is sunny today in Seoul,” prepared based on the knowledge database to the user701. The neighbor702having the conversation with the user701may also have a designated personal model, and the personal model of the neighbor702may be shared with the NLG608through the preference identification module609. According to an embodiment, the NLG608may generate a plurality of agent answers using a plurality of personal models. 
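By way of illustration only, the following Python sketch shows a simple check corresponding to the filler detection module 615 and the key utterance list 620 described above: the agent steps in when the neighbor only hesitates or uses a designated key utterance. The word lists and the string matching, which replace the filler model 619 and the NLU 607, are hypothetical.

FILLERS = {"uh", "um", "ah"}
KEY_UTTERANCES = ["i don't know", "should i ask bixby"]

def neighbor_answer_needs_help(neighbor_text):
    """Return True when the neighbor's answer is only a hesitation (filler)
    or contains an utterance from the key utterance list, i.e., a situation
    in which the AI agent should provide its prepared answer."""
    text = neighbor_text.lower()
    words = [word.strip(",.!?") for word in text.split()]
    if words and all(word in FILLERS for word in words):
        return True                      # e.g., "Um..." on its own
    return any(key in text for key in KEY_UTTERANCES)

print(neighbor_answer_needs_help("Um..."))                              # True
print(neighbor_answer_needs_help("I don't know. Should I ask Bixby?"))  # True
print(neighbor_answer_needs_help("He is the third king."))              # False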
For example, the NLG608may select information to be provided for the user701among the information (e.g., a list of recommended movies) retrieved from the knowledge database, based on a first preference identified from a first personal model (e.g., a user personal model) through the preference identification module609and may generate a first agent answer using the selected information. The NLG608may select information to be provided for the user701, based on a second preference identified from a second personal model (e.g., a neighbor personal model) through the preference identification module609and may generate a second agent answer using the selected information. The reliability measurement module610may measure the first reliability of the first agent answer and the second reliability of the second agent answer, based on the first preference. In addition, the reliability measurement module610may measure the third reliability of a neighbor answer, based on the first preference. The conversation participation determination module611may identify an answer having the highest reliability among the measured reliabilities. As a result of identification, when the answer having the highest reliability is the first agent answer or the second agent answer, the conversation participation determination module611may determine to provide the corresponding agent answer to the user701. When there is no personal model617personalized to the user701or the personal model617does not include user information (e.g., information indicating a preference of the user701for content (e.g., movie) queried by the user701), the general model618may be used to measure reliability. According to an embodiment, the general model618may collect information indicating preferences recorded by content users for each piece of content on an Internet site. For example, the general model618may collect ratings received by restaurant users on a delivery application or ratings received by viewers for released movies. The reliability measurement module610may measure the reliability of an agent answer and the reliability of a neighbor answer, based on information identified from the general model618. For example, in response to a query of the user701“What kind of movie do you want to see?” a neighbor answer may be “How about the movie OO?” and an answer prepared by the AI agent may be “How about the movie XX?” The reliability measurement module610may identify rating information about the movie OO and the movie XX from the general model618. The reliability measurement module610may give a first reliability to the neighbor answer “How about the movie OO?” and may give a second reliability to the agent answer “How about the movie XX?” based on the identified rating information. When the first reliability is higher than the second reliability, the AI agent may provide the prepared neighbor answer to the user701. When the first reliability is lower than or equal to the second reliability, the AI agent may provide the prepared agent answer to the user701. At least one of the modules601to615may be stored as instructions in the memory688(e.g., the memory130ofFIG.1) and may be executed by the processor699(e.g., the processor120ofFIG.1). At least one of the modules601to615may be executed by a processor (e.g., the coprocessor123) specializing in processing an artificial intelligence model. At least one of the modules601to615may be omitted from the electronic device600and may instead be configured in an external device. 
For example, at least one of the modules 603 to 612, 614, and 615 may be configured in the external device (e.g., the server 108 of FIG. 1 or the intelligent server 302 of FIG. 3). For example, the NLU 607 and the NLG 608 may be configured in the external device. The processor 699 may transmit an input value (e.g., the user text data 721 and the neighbor text data 722) to be entered into the NLU 607 to the external device through the wireless communication circuit. The processor 699 may transmit an input value (e.g., the preference 761) to be entered into the NLG 608 to the external device through the wireless communication circuit. The processor 699 may receive a result value (e.g., the neighbor answer 742) output from the NLU 607 and/or a result value (e.g., the answer 741 of the AI agent) output from the NLG 608 from the external device through the wireless communication circuit. At least one of the models 617 to 619 may be omitted from the electronic device 600 and may instead be configured in the external device (e.g., the server 108 of FIG. 1 or the intelligent server 302 of FIG. 3). For example, the general model 618 may be provided in the external device. The processor 699 may transmit an input value (e.g., a user utterance) to be entered into the general model 618 to the external device through the wireless communication circuit. The processor 699 may receive a result value (e.g., general preference information) output from the general model 618 from the external device through the wireless communication circuit. FIG. 8 is a flowchart illustrating operations of a processor for an AI agent to participate in a conversation between a user and a neighbor according to an embodiment of the disclosure. Referring to FIG. 8, in operation 810, the processor 699 may recognize a speech section of the user and a speech section of the neighbor in an audio signal received from a microphone. In operation 820, the processor 699 may analyze (e.g., syntactic analysis and/or semantic analysis) a voice signal in the speech section of the user, thereby identifying a user utterance. In addition, the processor 699 may analyze a voice signal in the speech section of the neighbor, thereby identifying a neighbor utterance in response to the user. In operation 830, the processor 699 may identify general preference information associated with the user utterance using a general model (e.g., the general model 618 of FIG. 6). For example, when the user utterance is recognized as a movie recommendation, the processor 699 may identify a movie that is the most popular in the age group of the user, a movie having a good rating, or a movie that a largest number of viewers have watched currently in the general model. In operation 835, the processor 699 may generate a first answer candidate to be provided by the AI agent to the user in response to the user utterance using information retrieved from a knowledge database, based on the user utterance and the general preference information. For example, the processor 699 may generate "The movie XX is the most popular," "The movie YY has a good rating," or "The movie ZZ is currently ranked first" as the first answer candidate, based on the general preference information. In operation 840, the processor 699 may identify whether there is a personal model (e.g., the personal model 617 of FIG. 6) personalized to the user. When the personal model exists, the processor 699 may identify personal preference information associated with the user utterance using the personal model in operation 845.
For example, when the user utterance is recognized as a movie recommendation, the processor699may identify which genre of movie the user701prefers the most in the personal model617. In operation850, the processor699may generate a second answer candidate of the AI agent using information retrieved from the knowledge database, based on the user utterance and the personal preference information. In operation860, the processor699may measure the first reliability of a neighbor answer and the second reliability of an agent answer, based on preference information. The second answer candidate is a priority as a reliability measurement target, and when the second answer candidate is not generated due to no personal model, the first answer candidate may be selected as the reliability measurement target. In operation870, the processor699may provide the agent answer through a speaker as the second reliability is higher than the first reliability. When the second reliability level is lower than or equal to the first reliability level, the processor699may not respond to the user utterance. FIGS.9,10, and11illustrate user interface (UI) screens providing an agent answer during a conversation between a user and a neighbor according to various embodiments of the disclosure. Referring toFIG.9, the processor699may provide the user with a UI screen900including an indicator910that allows the user to recognize that the AI agent is performing the continuous conversation mode. The processor699may identify a user query921and a neighbor answer923in response to the user query921in an audio signal received from a microphone. The processor699may dispose the user query921and the neighbor answer923on the UI screen900in a discriminative manner. For example, the processor699may dispose the user query921on the right side of the screen and the neighbor answer923on the left side of the screen. The processor699may retrieve information to be provided for the user from a knowledge database, based on the identified user query921and personal preference information of the user (e.g., a drama genre preferred by the user) obtained from the personal model in relation to the user query921. The processor699may provide an agent answer including the retrieved information927and an agent utterance929to the user through the UI screen900. Referring toFIG.10, the processor699may identify a first user query1021and a first neighbor answer1023in response thereto in an audio signal received from a microphone. The processor699may generate a first agent answer in response to the first user query1021. The processor699may measure the first reliability of the first neighbor answer1023and the second reliability of the first agent answer, based on personal preference information of the user obtained from a personal model in relation to the first user query1021. As the second reliability is not higher than the first reliability, the processor699may not respond to the first user query1021. Subsequently, the processor699may identify a second user query1025and a second neighbor answer1027in response thereto received from the microphone. The processor699may generate a second agent answer in response to the second user query1025. The processor699may measure the third reliability of the second neighbor answer1027and the fourth reliability of the second agent answer in the same manner as when measuring the above reliabilities. 
As the fourth reliability is higher than the third reliability, the processor699may provide the second agent answer including retrieved information1031and an agent utterance1033to the user through a UI screen1000. Referring toFIG.11, the processor699may identify a user query1121and a neighbor answer1123in response thereto in an audio signal received from a microphone. The processor699may retrieve information to be provided for the user from a knowledge database, based on the identified user query1121and general preference information (e.g., a drama genre preferred by women in their 20s like the user) obtained from a general model in relation to the user query1121. The processor699may provide an agent answer including the retrieved information1125and an agent utterance1127to the user through a UI screen1100. FIG.12is a flowchart illustrating operations of a processor for an AI agent to participate in a conversation between a user and a neighbor according to an embodiment of the disclosure. Referring toFIG.12, in operation1210, the processor699may identify a speech section of the user and a speech section of the neighbor in an audio signal received from a microphone. Here, the microphone may be an internal microphone configured in the electronic device600or an external microphone connected to the electronic device600through a wireless communication circuit or an audio connector. In operation1220, the processor699may identify a user utterance in the speech section of the user and a neighbor answer in the speech section of the neighbor through semantic and/or grammatical analysis. In operation1230, the processor699may obtain preference information associated with the user utterance. For example, the processor699may obtain personal preference information associated with the user utterance using an artificial intelligence model (e.g., the personal model617ofFIG.6) personalized to the user in relation to the user's preference. When there is no artificial intelligence model personalized to the user, the processor699may identify general preference information associated with the user utterance using a generalized artificial intelligence model (e.g., the general model618ofFIG.6) in relation to a preference of a plurality of unspecified persons. In operation1240, the processor699may give a first reliability to the neighbor answer and a second reliability to an agent answer generated in response to the user utterance, based on the identified preference information (e.g., in proportion to a preference). In operation1250, the processor699may not respond to the user utterance when the second reliability is less than the first reliability, and may output the agent answer through a speaker when the second reliability is equal to or higher than the first reliability. Here, the speaker may be an internal speaker configured in the electronic device600or an external speaker connected to the electronic device600through the wireless communication circuit or the audio connector. FIG.13is a flowchart illustrating operations of a processor for an AI agent to participate in a conversation between a user and a neighbor according to an embodiment of the disclosure. Referring toFIG.13, in operation1310, the processor699may configure the AI agent in a conversation mode to participate in the conversation between the user and the neighbor. For example, after the AI agent is called, the processor699may identify a user utterance commanding the continuous conversation mode in an audio signal received from a microphone. 
Here, the microphone may be an internal microphone configured in the electronic device600or an external microphone connected to the electronic device600through a wireless communication circuit or an audio connector. During the configured conversation mode, the processor699may participate in the conversation between the user and the neighbor. In operation1320, the processor699may identify a user utterance in an audio signal received from the microphone. When no neighbor utterance is identified in an audio signal received from the microphone within a designated reference time (e.g., hangover time) from when the user utterance is identified, the processor699may output an answer of the AI agent generated in response to the user utterance through a speaker. Here, the speaker may be an internal speaker configured in the electronic device600or an external speaker connected to the electronic device600through the wireless communication circuit or the audio connector. FIG.14is a flowchart illustrating operations of a processor for an AI agent to participate in a conversation between a user and a neighbor according to an embodiment of the disclosure. Referring toFIG.14, in operation1410, the processor699may configure the AI agent in a conversation mode to participate in the conversation between the user and the neighbor. For example, after the AI agent is called, the processor699may identify a user utterance commanding the continuous conversation mode in an audio signal received from a microphone. Here, the microphone may be an internal microphone configured in the electronic device600or an external microphone connected to the electronic device600through a wireless communication circuit or an audio connector. During the configured conversation mode, the processor699may participate in the conversation between the user and the neighbor. In operation1420, the processor699may identify a speech section of the user and a speech section of the neighbor in an audio signal received from the microphone. In operation1430, the processor699may identify a user utterance in the speech section of the user and a neighbor answer in the speech section of the neighbor through semantic and/or grammatical analysis. When the neighbor answer includes designated utterance data (e.g., I don't know, um, or ah) or it is identified through a knowledge database that the neighbor answer includes wrong information, the processor699may output an answer of the AI agent generated in response to the user utterance through a speaker in operation1440. Here, the speaker may be an internal speaker configured in the electronic device600or an external speaker connected to the electronic device600through the wireless communication circuit or the audio connector. 
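By way of illustration only, the following Python sketch strings together the operations of FIGS. 12 to 14 described above into a single decision for one conversational turn: the agent answers either because the neighbor stayed silent past the reference (hangover) time, because the neighbor answer contains designated utterance data or wrong information, or because the agent answer is at least as reliable as the neighbor answer. Every helper passed into the function, the two-second hangover value, and the designated phrases are hypothetical stand-ins for the corresponding modules and data of the electronic device 600.

def handle_turn(user_utterance, neighbor_answer, seconds_waited,
                generate_answer, score, contains_wrong_info,
                hangover_time=2.0, designated=("i don't know", "um", "ah")):
    """Return the agent answer to speak for one conversational turn, or None
    to stay silent, following the decision points of FIGS. 12 to 14."""
    agent_answer = generate_answer(user_utterance)
    if neighbor_answer is None:
        # FIG. 13: nobody answered within the reference (hangover) time.
        return agent_answer if seconds_waited > hangover_time else None
    lowered = neighbor_answer.lower()
    if any(phrase in lowered for phrase in designated) or contains_wrong_info(neighbor_answer):
        # FIG. 14: designated utterance data or wrong information detected.
        return agent_answer
    # FIG. 12: compare reliabilities; equal or higher means the agent speaks.
    return agent_answer if score(agent_answer) >= score(neighbor_answer) else None

# Toy usage with trivial stand-ins for the NLG, scoring, and fact checking.
print(handle_turn("Will it rain in Seoul this weekend?", "I don't know.", 1.0,
                  generate_answer=lambda u: "It is sunny today in Seoul",
                  score=lambda a: 0.5,
                  contains_wrong_info=lambda a: False))
# It is sunny today in Seoul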
According to various embodiments, an electronic device may include: a speaker; a microphone; an audio connector; a wireless communication circuit; a processor configured to be operatively connected to the speaker, the microphone, the audio connector, and the wireless communication circuit; and a memory configured to be operatively connected to the processor, wherein the memory may store instructions that, when executed, cause the processor to identify a speech section of a user and a speech section of a neighbor in an audio signal received through the microphone, the audio connector, or the wireless communication circuit, identify a user utterance in the speech section of the user and a neighbor answer to the user utterance in the speech section of the neighbor, obtain preference information associated with the user utterance, give a first reliability to the neighbor answer and a second reliability to an agent answer of an artificial intelligence (AI) agent generated in response to the user utterance, based on the preference information, not respond to the user utterance when the second reliability is lower than the first reliability, and output the agent answer through the speaker, the audio connector, or the wireless communication circuit when the second reliability is equal to or higher than the first reliability. The instructions may cause the processor to obtain the preference information associated with the user utterance using an artificial intelligence model (e.g., the personal model617) personalized in relation to a preference of the user. The instructions may cause the processor to obtain the preference information associated with the user utterance using an artificial intelligence model (e.g., the general model618) generalized in relation to a preference of a plurality of unspecified persons when there is no artificial intelligence model personalized to the user. The instructions may cause the processor to identify a positive or negative response of the user to the output agent answer in the speech section of the user, and update the personalized model, based on the identified response. The instructions may cause the processor to configure the AI agent in a conversation mode of participating in a conversation between the user and the neighbor when a designated first utterance is identified in the speech section of the user, and terminate the conversation mode when a designated second utterance is identified in the speech section of the user. The instructions may cause the processor to output a designated agent answer through the speaker, the audio connector, or the wireless communication circuit when a designated third utterance (e.g., Are you listening?) is identified in the speech section of the user while the AI agent is configured in the conversation mode. The instructions may cause the processor to output the user utterance and the neighbor answer through a display, and output the agent answer through the display when the second reliability is equal to or higher than the first reliability. The instructions may cause the processor to identify the speech section of the user and the speech section of the neighbor in the audio signal using an artificial intelligence model (e.g., the user verification model616) trained to find a voice of the user. 
According to various embodiments, a method for operating an electronic device may include: identifying a speech section of a user and a speech section of a neighbor in an audio signal received through a microphone, an audio connector, or a wireless communication circuit provided in the electronic device; identifying a user utterance in the speech section of the user and a neighbor answer to the user utterance in the speech section of the neighbor; obtaining preference information associated with the user utterance; giving a first reliability to the neighbor answer and a second reliability to an agent answer of an artificial intelligence (AI) agent generated in response to the user utterance, based on the preference information; and outputting the agent answer through the speaker, the audio connector, or the wireless communication circuit when the second reliability is equal to or higher than the first reliability, without responding to the user utterance when the second reliability is lower than the first reliability. The obtaining of the preference information may include obtaining the preference information associated with the user utterance using an artificial intelligence model personalized in relation to a preference of the user. The obtaining of the preference information may include obtaining the preference information associated with the user utterance using an artificial intelligence model generalized in relation to a preference of a plurality of unspecified persons when there is no artificial intelligence model personalized to the user. The method may further include: identifying a positive or negative response of the user to the output agent answer in the speech section of the user; and updating the personalized model, based on the identified response. The method may further include: configuring the AI agent in a conversation mode of participating in a conversation between the user and the neighbor when a designated first utterance is identified in the speech section of the user; and terminating the conversation mode when a designated second utterance is identified in the speech section of the user. The method may further include outputting a designated agent answer through the speaker, the audio connector, or the wireless communication circuit when a designated third utterance is identified in the speech section of the user while the AI agent is configured in the conversation mode. The method may further include: outputting the user utterance and the neighbor answer through a display; and outputting the agent answer through the display when the second reliability is equal to or higher than the first reliability. The identifying of the speech section of the user and the speech section of the neighbor may include identifying the speech section of the user and the speech section of the neighbor in the audio signal using an artificial intelligence model trained to find a voice of the user. 
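The preference-model fallback and the feedback update recited above can be pictured with the minimal Python sketch below; GeneralModel, PersonalModel, and their methods are placeholder interfaces assumed for illustration and are not taken from the disclosure.

```python
# Minimal sketch, assuming placeholder model interfaces: obtain preference information
# from a personalized model when one exists, otherwise from a generalized model, and
# update the personalized model from the user's positive or negative reaction.

from typing import Optional


class GeneralModel:
    """Generalized preferences of a plurality of unspecified persons (e.g., demographic averages)."""

    def preference_for(self, utterance: str) -> dict:
        return {"genre": "drama"}


class PersonalModel(GeneralModel):
    """Model personalized to one user in relation to the user's preference."""

    def __init__(self):
        self.weights: dict = {}

    def preference_for(self, utterance: str) -> dict:
        if not self.weights:
            return {}
        return {"genre": max(self.weights, key=self.weights.get)}

    def update(self, agent_answer: str, positive: bool) -> None:
        # Reinforce or weaken whatever the output agent answer recommended.
        key = agent_answer.lower()
        self.weights[key] = self.weights.get(key, 0.0) + (1.0 if positive else -1.0)


def obtain_preference(utterance: str, personal: Optional[PersonalModel], general: GeneralModel) -> dict:
    """Use the personalized model when it exists; otherwise fall back to the general model."""
    model = personal if personal is not None else general
    return model.preference_for(utterance)
```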
According to various embodiments, an electronic device may include: a speaker; a microphone; an audio connector; a wireless communication circuit; a processor configured to be operatively connected to the speaker, the microphone, the audio connector, and the wireless communication circuit; and a memory configured to be operatively connected to the processor, wherein the memory may store instructions that, when executed, cause the processor to: configure an artificial intelligence (AI) agent in a conversation mode of participating in a conversation between a user and a neighbor after the AI agent is called; and identify an utterance of the user in an audio signal received through the microphone, the audio connector, or the wireless communication circuit, and output an answer of the AI agent generated in response to the utterance of the user through the speaker, the audio connector, or the wireless communication circuit when an utterance of the neighbor is not identified in an audio signal received through the microphone, the audio connector, or the wireless communication circuit within a designated reference time (e.g., hangover time) from when the utterance of the user is identified, while the AI agent is configured in the conversation mode. According to various embodiments, an electronic device may include: a speaker; a microphone; an audio connector; a wireless communication circuit; a processor configured to be operatively connected to the speaker, the microphone, the audio connector, and the wireless communication circuit; and a memory configured to be operatively connected to the processor, wherein the memory may store instructions that, when executed, cause the processor to configure an artificial intelligence (AI) agent in a conversation mode of participating in a conversation between a user and a neighbor after the AI agent is called; and identify a speech section of the user and a speech section of the neighbor in an audio signal received through the microphone, the audio connector, or the wireless communication circuit, identify a user utterance in the speech section of the user and a neighbor answer to the user utterance in the speech section of the neighbor, and output an answer of the AI agent generated in response to the user utterance through the speaker, the audio connector, or the wireless communication circuit when the neighbor answer includes designated utterance data (e.g., I don't know, um, or ah) or it is identified that the neighbor answer includes wrong information, while the AI agent is configured in the conversation mode. While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
DETAILED DESCRIPTION Some implementations of the disclosed technology will be described more fully with reference to the accompanying drawings. This disclosed technology may, however, be embodied in many different forms and should not be construed as limited to the implementations set forth herein. The components described hereinafter as making up various elements of the disclosed technology are intended to be illustrative and not restrictive. Many suitable components that would perform the same or similar functions as components described herein are intended to be embraced within the scope of the disclosed electronic devices and methods. Reference will now be made in detail to example embodiments of the disclosed technology that are illustrated in the accompanying drawings and disclosed herein. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIG.1is a block diagram of an example system100that may be used to detect a manipulated vocal sample by transforming a given vocal sample from a wavelength domain into a frequency domain and comparing a distribution of leading digits of amplitudes in the frequency domain to a predetermined frequency distribution. The system100may be configured to perform one or more processes that enable the detection of manipulated vocal samples, including calculating Fourier transforms of vocal samples in substantially real-time, converting the transformed vocal samples into a spectrogram representation, and comparing the leading-digit distribution of amplitudes in the frequency domain to a predetermined frequency distribution. The components and arrangements shown inFIG.1are not intended to limit the disclosed embodiments as the components used to implement the disclosed processes and features may vary. As shown, system100may interact with a user device102via a network106. In certain example implementations, the system100may include a web server110, a call center server112, a transaction server114, a local network116, a manipulation detection module120, a database118, an API server122, and an audio processing device124. In some embodiments, a user may operate the user device102. The user device102can include one or more of a mobile device, smart phone, general purpose computer, tablet computer, laptop computer, telephone, PSTN landline, smart wearable device, voice command device, other mobile computing device, or any other device capable of communicating with the network106and ultimately communicating with one or more components of the system100. In some embodiments, the user device102may include or incorporate electronic communication devices for hearing or vision impaired users. Users may include individuals such as, for example, subscribers, clients, prospective clients, or customers of an entity associated with an organization, such as individuals who have obtained, will obtain, or may obtain a product, service, or consultation from an entity associated with the system100. According to some embodiments, the user device102may include an environmental sensor for obtaining audio or visual data, such as a microphone and/or digital camera, a geographic location sensor for determining the location of the device, an input/output device such as a transceiver for sending and receiving data, a display for displaying digital images, one or more processors including a sentiment depiction processor, and a memory in communication with the one or more processors. 
The network106may be of any suitable type, including individual connections via the internet such as cellular or WiFi networks. In some embodiments, the network106may connect terminals, services, and mobile devices using direct connections such as radio-frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), WiFi™, ZigBee™, ambient backscatter communications (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connections be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore the network connections may be selected for convenience over security. The network106may include any type of computer networking arrangement used to exchange data. For example, the network106may be the Internet, a private data network, virtual private network using a public network, and/or other suitable connection(s) that enable(s) components in the system100environment to send and receive information between the components of the system100. The network106may also include a public switched telephone network (“PSTN”) and/or a wireless network. In accordance with certain example implementations, a third-party server126may be in communication with the system100via the network106. In certain implementations, the third-party server126can include a computer system associated with an entity (other than the entity associated with the system100and its customers) that performs one or more functions associated with the customers. For example, the third-party server126can include an automated teller machine (ATM) system that allows a customer to withdraw money from an account managed via an organization that controls the system100. As another example, the third-party server126may include a computer system associated with a product repair service that submits a warranty claim for a product that a customer purchased from the organization that controls the system100. The system100may be associated with and optionally controlled by an entity such as a business, corporation, individual, partnership, or any other entity that provides one or more of goods, services, and consultations to individuals such as users or customers. The system100may include one or more servers and computer systems for performing one or more functions associated with products and/or services that the organization provides. Such servers and computer systems may include, for example, the web server110, the call center server112, and/or the transaction server114, as well as any other computer systems necessary to accomplish tasks associated with the organization or the needs of users (which may be customers of the entity associated with the organization). The web server110may include a computer system configured to generate and provide one or more websites accessible to users, as well as any other individuals involved in an organization's normal operations. The web server110, for example, may include a computer system configured to receive communications from the user device102via for example, a mobile application, a chat program, an instant messaging program, a voice-to-text program, an SMS message, email, or any other type or format of written or electronic communication. The web server110may have one or more processors132and one or more web server databases134, which may be any suitable repository of website data. 
Information stored in the web server110may be accessed (e.g., retrieved, updated, and added to) via the local network116(and/or the network106) by one or more devices (e.g., manipulation detection module120and/or the audio processing device124) of the system100. In some embodiments, one or more processors132may be used to implement an automated natural language dialogue system that may interact with a user via different types of communication channels such as a website, mobile application, instant messaging application, SMS message, email, phone, or any other type of spoken or written electronic communication. When receiving an incoming message from, for example, the user device102, the web server110may be configured to determine the type of communication channel the user device102used to generate the incoming message. The call center server112may include a computer system configured to receive, process, and route telephone calls and other electronic communications between a user operating a user device102and the manipulation detection module120. The call center server112may have one or more processors142and one or more call center databases144, which may be any suitable repository of call center data. Information stored in the call center server112may be accessed (e.g., retrieved, updated, and added to) via the local network116(and/or network106) by one or more devices of the system100. In some embodiments, the call center server processor142may be used to implement an interactive voice response (IVR) system that interacts with the user over the phone. The transaction server114may include a computer system configured to process one or more transactions involving an account associated with users or customers, or a request received from users or customers. In some embodiments, transactions can include, for example, a product/service purchase, product/service return, financial transfer, financial deposit, financial withdrawal, financial credit, financial debit, dispute request, warranty coverage request, shipping information, delivery information, and any other type of transaction associated with the products and/or services that an entity associated with system100provides to individuals such as customers. The transaction server114may have one or more processors152and one or more transaction server databases154, which may be any suitable repository of transaction data. Information stored in transaction server114may be accessed (e.g., retrieved, updated, and added to) via the local network116(and/or network106) by one or more devices of the system100. In some embodiments, the transaction server114tracks and stores event data regarding interactions of a third party, such as the third-party server126, with the system100on behalf of individual users or customers. For example, the transaction server114may track third-party interactions such as purchase requests, refund requests, shipping status, shipping charges, warranty claims, account withdrawals and deposits, and any other type of interaction that the third-party server126may conduct with the system100on behalf of an individual such as a user or customer. The local network116may include any type of computer networking arrangement used to exchange data in a localized area, such as WiFi, Bluetooth™, Ethernet, and other suitable network connections that enable components of the system100to interact with one another and to connect to the network106for interacting with components in the system100environment. 
In some embodiments, the local network116may include an interface for communicating with or linking to the network106. In other embodiments, certain components of the system100may communicate via the network106, without a separate local network116. In accordance with certain example implementations of the disclosed technology, the manipulation detection module120, which is described more fully below with reference toFIG.2, may include one or more computer systems configured to compile data from a plurality of sources, such as the web server110, the call center server112, the transaction server114, and/or the database118. The manipulation detection module120may correlate compiled data, analyze the compiled data, arrange the compiled data, generate derived data based on the compiled data, and store the compiled and derived data in a database such as the database118. According to some embodiments, the database118may be a database associated with an organization and/or a related entity that stores a variety of information relating to users, customers, transactions, and business operations. The database118may also serve as a back-up storage device and may contain data and information that is also stored on, for example, databases134,144,154,164,174(and260, as will be discussed with reference toFIG.2). The database118may be accessed by the manipulation detection module120and may be used to store records of every interaction, communication, and/or transaction a particular user or customer has had with the organization108and/or its related entity in the past to enable the creation of an ever-evolving customer context that may enable the manipulation detection module120, in conjunction with the audio processing device124, to determine whether a received vocal sample has been manipulated or is associated with an authentic vocal sample that has not been manipulated or deep faked. In certain example implementations, the API server122may include one or more computer systems configured to execute one or more application program interfaces (APIs) that provide various functionalities related to the operations of the system100. In some embodiments, the API server122may include API adapters that enable the API server122to interface with and utilize enterprise APIs maintained by an organization and/or an associated entity that may be housed on other systems or devices. In some embodiments, APIs can provide functions that include, for example, retrieving user account information, modifying user account information, executing a transaction related to an account, scheduling a payment, authenticating a user, updating a user account to opt-in or opt-out of notifications, and any other such function related to management of user profiles and accounts. The API server122may include one or more processors162and one or more API databases164, which may be any suitable repository of API data. Information stored in the API server122may be accessed (e.g., retrieved, updated, and added to) via the local network116(and/or network106) by one or more devices (e.g., manipulation detection module120) of system100. In some embodiments, the API processor162may be used to implement one or more APIs that can access, modify, and retrieve user account information. In certain embodiments, real-time APIs consistent with certain disclosed embodiments may use Representational State Transfer (REST) style architecture, and in this scenario, the real time API may be called a RESTful API. 
In certain embodiments, real-time APIs consistent with the disclosed embodiments may utilize streaming APIs to provide real-time data exchange between various components of the system. While RESTful APIs may provide for a request and response model of data transfer, a streaming API may open a persistent connection between components of the system, and provide data in real-time whenever a state change occurs on a component of the system (e.g., API server122) to another component of the system (e.g., audio processing device124, manipulation detection module120, transaction server114, call center server112, and/or web server110). In certain embodiments, a real-time API may include a set of Hypertext Transfer Protocol (HTTP) request messages and a definition of the structure of response messages. In certain aspects, the API may allow a software application, which is written against the API and installed on a client (such as, for example, the transaction server114) to exchange data with a server that implements the API (such as, for example, the API server122), in a request-response pattern. In certain embodiments, the request-response pattern defined by the API may be configured in a synchronous fashion and may require that the response be provided in real-time. In some embodiments, a response message from the server to the client through the API consistent with the disclosed embodiments may be in formats including, for example, Extensible Markup Language (XML), JavaScript Object Notation (JSON), and/or the like. In some embodiments, the API design may also designate specific request methods for a client to access the server. For example, the client may send GET and POST requests with parameters URL-encoded (GET) in the query string or form-encoded (POST) in the body (e.g., a form submission). In certain example implementations, the client may send GET and POST requests with JSON serialized parameters in the body. Preferably, the requests with JSON serialized parameters use “application/json” content-type. In another aspect, an API design may also require the server implementing the API return messages in JSON format in response to the request calls from the client. In accordance with certain example implementations of the disclosed technology, the audio processing device124may include a computer system configured to receive and process incoming vocal/audio samples and determine a meaning of the incoming message. In some embodiments, the audio processing device may be further configured to process received audio samples (e.g., vocal commands or requests received from user device102). For example, audio processing device124may be configured to transform the received vocal sample from a wavelength domain into a frequency domain. Audio processing device124may achieve the audio transformation by using a Fourier transformation, a short time Fourier transformation, a discrete cosine transformation, or any other suitable method for converting an audio sample from a wavelength domain into a frequency domain. Audio processing device124may be configured to receive commands or requests from a user (e.g., from user device102). 
The commands or requests may include requesting access to one or more third party servers (e.g., accessing third-party server126to authenticate an ATM transaction associated with the third-party server126), requesting approval of a purchase or transaction (e.g., a transaction initiated with transaction server114), requesting approval to log into an account associated with the organization (e.g., logging into a secured user account via web server110), or requesting a service over an automated call or IVR system (e.g., via call center server112). The audio processing device124may include one or more processors172and one or more audio processing databases174, which may be any suitable repository of audio/vocal sample data. Information stored on the audio processing device124may be accessed (e.g., retrieved, updated, and added to) via the local network116(and/or network106) by one or more devices (e.g., the manipulation detection module120) of system100. In some embodiments, processor172may be used to implement a natural language processing system that can determine the meaning behind a spoken utterance and convert it to a form that can be understood by other devices. Although described in the above embodiments as being performed by the web server110, the call center server112, the transaction server114, the manipulation detection module120, the database118, the API server122, and the audio processing device124, some or all of those functions may be carried out by a single computing device. The features and other aspects and principles of the disclosed embodiments may be implemented in various environments. Such environments and related applications may be specifically constructed for performing the various processes and operations of the disclosed embodiments or they may include a general-purpose computer or computing platform selectively activated or reconfigured by program code to provide the necessary functionality. Further, the processes disclosed herein may be implemented by a suitable combination of hardware, software, and/or firmware. For example, the disclosed embodiments may implement general purpose machines configured to execute software programs that perform processes consistent with the disclosed embodiments. Alternatively, the disclosed embodiments may implement a specialized apparatus or system configured to execute software programs that perform processes consistent with the disclosed embodiments. Furthermore, although some disclosed embodiments may be implemented by general purpose machines as computer processing instructions, all or a portion of the functionality of the disclosed embodiments may be implemented instead in dedicated electronics hardware. The disclosed embodiments also relate to tangible and non-transitory computer readable media that include program instructions or program code that, when executed by one or more processors, perform one or more computer-implemented operations. The program instructions or program code may include specially designed and constructed instructions or code, and/or instructions and code well-known and available to those having ordinary skill in the computer software arts. For example, the disclosed embodiments may execute high-level and/or low-level software instructions, such as machine code (e.g., that produced by a compiler) and/or high-level code that can be executed by a processor using an interpreter. 
FIG.2is a block diagram (with additional details) of an example manipulation detection module120, as also depicted inFIG.1. According to some embodiments, the user device102, the web server110, the call center server112, the transaction server114, the API server122, the audio processing device124, and the third-party server126, as depicted inFIG.1, may have a similar structure and components that are similar to those described with respect to manipulation detection module120shown inFIG.2. As shown, the manipulation detection module120may include a processor210, an input/output (“I/O”) device220, a memory230containing an operating system (“OS”)240and a program250. In certain example implementations, the manipulation detection module120may be a single server or may be configured as a distributed computer system including multiple servers or computers that interoperate to perform one or more of the processes and functionalities associated with the disclosed embodiments. In some embodiments, the manipulation detection module120may further include a peripheral interface, a transceiver, a mobile network interface in communication with the processor210, a bus configured to facilitate communication between the various components of the manipulation detection module120, and a power source configured to power one or more components of the manipulation detection module120. A peripheral interface, for example, may include the hardware, firmware and/or software that enable(s) communication with various peripheral devices, such as media drives (e.g., magnetic disk, solid state, or optical disk drives), other processing devices, or any other input source used in connection with the disclosed technology. In some embodiments, a peripheral interface may include a serial port, a parallel port, a general-purpose input and output (GPIO) port, a game port, a universal serial bus (USB), a micro-USB port, a high definition multimedia (HDMI) port, a video port, an audio port, a Bluetooth™ port, a near-field communication (NFC) port, another like communication interface, or any combination thereof. In some embodiments, a transceiver may be configured to communicate with compatible devices and ID tags when they are within a predetermined range. A transceiver may be compatible with one or more of: radio-frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), WiFi™, ZigBee™, ambient backscatter communications (ABC) protocols or similar technologies. A mobile network interface may provide access to a cellular network, the Internet, or another wide-area or local area network. In some embodiments, a mobile network interface may include hardware, firmware, and/or software that allow(s) the processor(s)210to communicate with other devices via wired or wireless networks, whether local or wide area, private or public, as known in the art. A power source may be configured to provide an appropriate alternating current (AC) or direct current (DC) to power components. The processor210may include one or more of a microprocessor, microcontroller, digital signal processor, co-processor or the like or combinations thereof capable of executing stored instructions and operating upon stored data. The memory230may include, in some implementations, one or more suitable types of memory (e.g. 
such as volatile or non-volatile memory, random access memory (RAM), read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, flash memory, a redundant array of independent disks (RAID), and the like), for storing files including an operating system, application programs (including, for example, a web browser application, a widget or gadget engine, and or other applications, as necessary), executable instructions and data. In one embodiment, the processing techniques described herein may be implemented as a combination of executable instructions and data stored within the memory230. The processor210may be one or more known processing devices, such as, but not limited to, a microprocessor from the Pentium™ family manufactured by Intel™ or the Turion™ family manufactured by AMD™. The processor210may constitute a single core or multiple core processor that executes parallel processes simultaneously. For example, the processor210may be a single core processor that is configured with virtual processing technologies. In certain embodiments, the processor210may use logical processors to simultaneously execute and control multiple processes. The processor210may implement virtual machine technologies, or other similar known technologies to provide the ability to execute, control, run, manipulate, store, etc. multiple software processes, applications, programs, etc. One of ordinary skill in the art would understand that other types of processor arrangements could be implemented that provide for the capabilities disclosed herein. In accordance with certain example implementations of the disclosed technology, the manipulation detection module120may include one or more storage devices configured to store information used by the processor210(or other components) to perform certain functions related to the disclosed embodiments. In one example, the manipulation detection module120may include the memory230that includes instructions to enable the processor210to execute one or more applications, such as server applications, network communication processes, and any other type of application or software known to be available on computer systems. Alternatively, the instructions, application programs, etc. may be stored in an external storage or available from a memory over a network. The one or more storage devices may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible computer-readable medium. In one embodiment, the manipulation detection module120may include a memory230that includes instructions that, when executed by the processor210, perform one or more processes consistent with the functionalities disclosed herein. Methods, systems, and articles of manufacture consistent with disclosed embodiments are not limited to separate programs or computers configured to perform dedicated tasks. For example, the manipulation detection module120may include the memory230that may include one or more programs250to perform one or more functions of the disclosed embodiments. For example, in some embodiments, the manipulation detection module120may additionally manage dialogue and/or other interactions with the user via a program250. 
In certain example implementations, the program250may include a rule-based platform290for determining a risk tier of a user-initiated request in accordance with a set of predefined rules. In some embodiments, the manipulation detection module120may include a trained machine learning model295for analyzing vocal samples received from a user and determining a command or user-initiated request based on applying natural language processing techniques to the received vocal samples/utterances. Moreover, the processor210may execute one or more programs250located remotely from the system100(such as the system shown inFIG.1). For example, the system100may access one or more remote programs250(such as the rule-based platform290or the trained machine learning model295) that, when executed, perform functions related to disclosed embodiments. According to some embodiments, the trained machine learning model295may be trained by updating an audio processing database174(as discussed above with respect toFIG.1) with communications from users that have been labeled using, for example, a web user interface. The data in the audio processing database174may undergo supervised training in a neural network model using a neural network training algorithm while the model is offline before being deployed in the system100. According to some embodiments, a natural language processing model of the system100may utilize deep learning models such as a convolutional neural network (CNN) and long short-term memory (LSTM). The natural language processing model may also be trained to recognize named entities in addition to intents. For example, a named entity may include persons, places, organizations, account types, and product types. According to some embodiments, when the manipulation detection module120generates a command, it may determine an entity that will execute the command, such as, for example, the API server122, the audio processing device124, or some other device or component. According to some embodiments, at the time the manipulation detection module120generates a new command, the manipulation detection module120may also update the user information database260(or alternatively, external database118) with information about a previous or concurrent transaction or user interaction. The memory230may include one or more memory devices that store data and instructions used to perform one or more features of the disclosed embodiments. The memory230may also include any combination of one or more databases controlled by memory controller devices (e.g., server(s), etc.) or software, such as document management systems, Microsoft™ SQL databases, SharePoint™ databases, Oracle™ databases, Sybase™ databases, or other relational or non-relational databases. The memory230may include software components that, when executed by the processor210, perform one or more processes consistent with the disclosed embodiments. In some embodiments, the memory230may include a user information database260for storing related data to enable the manipulation detection module120to perform one or more of the processes and functionalities associated with the disclosed embodiments. 
The user information database260may include stored data relating to a user or customer profile and user or customer accounts, such as for example, user identification, name, age, sex, birthday, address, account status, preferences, preferred language, greeting name, preferred communication channel, account numbers, order history, delivery history, authorized users associated with one or more accounts, account balances, account payment history, and other such typical account information. The user information database260may further include stored data relating to previous interactions between the organization (or its related entity) and a user. For example, the user information database260may store user interaction data that includes records of previous interactions with a user via a website, SMS, a chat program, a mobile application, an IVR system, or notations taken after speaking with a customer service agent. The user information database260may also include information about business transactions between the organization (or its related entity) and a user or customer that may be obtained from, for example, the transaction server114. The user information database260may also include user feedback data such as an indication of whether an automated interaction with a user was successful, online surveys filled out by a user, surveys answered by a user following previous interactions to the company, digital feedback provided through websites or mobile applications associated with the organization or its related entity (e.g., selecting a smiley face or thumbs up to indicate approval), reviews written by a user, complaint forms filled out by a user, information obtained from verbal interactions with user (e.g., information derived from a transcript of a customer service call with a user or customer that is generated using, for example, voice recognition techniques and/or by audio processing device124) or any other types of communications from a user or customer to the organization or its related entity. According to some embodiments, the functions provided by the user information database may also be provided by a database that is external to the manipulation detection module120, such as the database118as shown inFIG.1. The manipulation detection module120may also be communicatively connected to one or more memory devices (e.g., databases) locally or through a network. The remote memory devices may be configured to store information and may be accessed and/or managed by the manipulation detection module120. By way of example, the remote memory devices may be document management systems, Microsoft™ SQL database, SharePoint™ databases, Oracle™ databases, Sybase™ databases, or other relational or non-relational databases. Systems and methods consistent with disclosed embodiments, however, are not limited to separate databases or even to the use of a database. The manipulation detection module120may also include one or more I/O devices220that may comprise one or more interfaces for receiving signals or input from devices and providing signals or output to one or more devices that allow data to be received and/or transmitted by the manipulation detection module120. 
For example, the manipulation detection module120may include interface components, which may provide interfaces to one or more input devices, such as one or more keyboards, mouse devices, touch screens, track pads, trackballs, scroll wheels, digital cameras, microphones, sensors, and the like, that enable the manipulation detection module120to receive data from one or more users (such as, for example, via the user device102). In example embodiments of the disclosed technology, the manipulation detection module120may include any number of hardware and/or software applications that are executed to facilitate any of the operations. The one or more I/O interfaces may be utilized to receive or collect data and/or user instructions from a wide variety of input devices. Received data may be processed by one or more computer processors as desired in various implementations of the disclosed technology and/or stored in one or more memory devices. While the manipulation detection module120has been described as one form for implementing the techniques described herein, other, functionally equivalent, techniques may be employed. For example, some or all of the functionality implemented via executable instructions may also be implemented using firmware and/or hardware devices such as application specific integrated circuits (ASICs), programmable logic arrays, state machines, etc. Furthermore, other implementations of the manipulation detection module120may include a greater or lesser number of components than those illustrated. FIG.3is a flow diagram300illustrating examples of methods for detecting manipulated vocal audio, in accordance with certain embodiments of the disclosed technology. As shown in step305of method300, the system may receive an utterance from a user. For example, a user of system100may contact call center server112in order to request an action from the system (e.g., authenticating a transaction initiated via transaction server114, validation of a user login initiated via web server110, etc.). The user may contact call center server112and call center server112may provide an IVR response system to the user to request the user for the reason for the call. Accordingly, the user may provide an utterance to the system which the system may interpret to provide the requested action. According to certain embodiments, the utterance may comprise both (i) a request for a service or action from system100and (ii) a vocal sample that may be analyzed to determine whether the vocal sample is manipulated. In step310, the system (e.g., audio processing device124) may transform the utterance from a wavelength domain to a frequency domain. According to some embodiments, the system may implement one of a Fourier transformation, a fast Fourier transformation, a short-time Fourier transformation, or a discrete cosine transformation in order to transform the vocal sample from a wavelength domain to a frequency domain. According to some embodiments, when transforming the received vocal sample using the short-time Fourier transformation, the system sets a window function that allows the system to sample the received utterance at a predetermined sampling rate to determine a series of overlapping discrete signal components in the wavelength domain. The system may apply a Fourier transformation to each of the plurality of overlapping discrete signal components and determine a plurality of amplitudes in the frequency domain associated with the overlapping discrete signal components in the wavelength domain. 
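By way of example, the short-time Fourier transformation described for step 310 may be sketched in Python as follows; the window type, window length, and overlap shown are arbitrary example values rather than parameters taken from the disclosure.

```python
# Illustrative sketch of step 310: transform a vocal sample into the frequency
# domain with a short-time Fourier transform and keep the amplitude values.

import numpy as np
from scipy.signal import stft


def to_spectrogram_amplitudes(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return the matrix of amplitude (magnitude) values of overlapping windowed segments."""
    _, _, zxx = stft(
        samples,
        fs=sample_rate,
        window="hann",
        nperseg=1024,       # window function length (example value)
        noverlap=512,       # overlapping discrete signal components (example value)
    )
    return np.abs(zxx)      # amplitudes in the frequency domain (frequencies x time frames)
```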
In step315, the system may determine a divergence of one or more amplitude values from a predetermined frequency distribution. For example, once the utterance and/or vocal sample has been transformed into the frequency domain, the transformed frequency domain data may include a plurality of amplitude values distributed across a plurality of frequency values. According to some embodiments, the transformed vocal sample may be represented by a spectrogram. A spectrogram may show frequency values along a first axis, time values associated with the vocal sample along a second axis, and amplitude values (e.g., loudness of a given frequency at a given time) along a third axis. The system (e.g., via manipulation detection module120) may determine whether the amplitude values diverge from a predetermined frequency distribution. For example, according to some embodiments, the amplitude (e.g., loudness) values of human vocal frequencies follow a predetermined distribution of leading digits. According to some embodiments, the predetermined distribution is a Benford's distribution. According to some embodiments, the system (e.g., via manipulation detection module120and/or audio processing device124) may isolate a plurality of amplitude values representative of the utterance received in step305and determine whether the leading digits frequency distribution of the selected amplitude values diverges from a predetermined distribution. In other embodiments, the system (e.g., manipulation detection module120and/or audio processing device124) may operate on all of the amplitude values of the utterance received in step305to determine whether the leading digits frequency distribution diverges from a predetermined distribution. In decision block320, the system (e.g., manipulation detection module120) may determine whether the divergence between the selected amplitude values and the predetermined distribution exceeds a predetermined threshold. In some embodiments, the predetermined threshold may be a predetermined p-value. In some embodiments, the predetermined p-value may be p=0.05, although the system may use any p-value to determine whether the divergence from the predetermined distribution is statistically significant. According to some embodiments, the determination comprises determining whether the divergence between the selected amplitude values and a Benford's distribution for leading digits exceeds a predetermined threshold. According to some embodiments, when the divergence exceeds the predetermined threshold, the system determines that the received utterance has been manipulated. When the divergence does not exceed the predetermined threshold, the system may return to step305to listen for a new utterance, and run a similar analysis on the next utterance received from the user. When the divergence exceeds the predetermined threshold, the system may move to step325. According to some embodiments, the divergence calculation may further comprise one of a Jensen-Shannon divergence, a Kullback-Leibler divergence, a symmetrized Renyi divergence, a symmetrized Tsallis divergence, and/or a Kolmogorov-Smirnov test. In step325, the system (e.g., manipulation detection module120) may execute one or more security measures. According to some embodiments, the one or more security measures may include (i) transferring the user from an automated operator to a human operator, (ii) requiring second factor authentication from the user, and/or (iii) denying a user-initiated request. 
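To make the leading-digit comparison of steps 315 and 320 concrete, the sketch below builds the observed leading-digit distribution of the amplitude values, compares it with Benford's distribution using a Jensen-Shannon measure (one of the divergences named above), and applies an example threshold; the 0.05 threshold value is an assumption for illustration only, not a parameter of the disclosure.

```python
# Illustrative sketch of steps 315-320: compare the leading-digit distribution of the
# spectrogram amplitudes against Benford's distribution and flag the sample when the
# divergence exceeds an example threshold.

import numpy as np
from scipy.spatial.distance import jensenshannon

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(d) = log10(1 + 1/d), d = 1..9


def leading_digit_distribution(amplitudes: np.ndarray) -> np.ndarray:
    values = np.abs(amplitudes).ravel()
    values = values[values > 0]
    exponents = np.floor(np.log10(values))
    digits = (values / 10.0 ** exponents).astype(int)   # first significant digit, 1..9
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()


def looks_manipulated(amplitudes: np.ndarray, threshold: float = 0.05) -> bool:
    observed = leading_digit_distribution(amplitudes)
    # Jensen-Shannon distance (square root of the divergence) between observed and Benford.
    divergence = jensenshannon(observed, BENFORD, base=10)
    return divergence > threshold
```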
For example, depending on a risk tier of the user request, the system may execute a different type of security measure. For example, if the user request is associated with a first risk tier (e.g., highest risk), the system may deny the user-initiated request. If the user request is associated with a second risk tier (e.g., medium risk), then the system may transfer the user to a human operator (e.g., via call center server112). When the user request is associated with a third risk tier (e.g., lower risk), the system may request second factor authentication from the user before allowing the user-initiated request. According to some embodiments, the determined risk tier of the user request may be based in part on the statistical significance of the divergence. For example, when the statistical significance of the divergence is at a highest level, the system may determine that the user-initiated request may be associated with the highest risk tier. In some embodiments, the determined risk tier may be based in part on the specific user-initiated request. For example, a user-initiated request to check an account balance may be assigned a lower risk tier than a user-initiated request to change a password or security PIN associated with the user's account. After step325, method300may end. FIG.4is a flow diagram400illustrating examples of methods for detecting manipulated vocal audio, in accordance with certain embodiments of the disclosed technology. As shown in step405of method400, the system may receive a first vocal sample. The first vocal sample may be received from a user requesting an action from the system (e.g., authenticating a transaction initiated via transaction server114, validation of a user login initiated via web server110, etc.). The user may contact call center server112and call center server112may provide an IVR response system to the user to request the user for the reason for the call. Accordingly, the user may provide an utterance to the system which the system may interpret to provide the requested action. According to certain embodiments, the utterance may comprise both (i) a request for a service or action from system100and (ii) a vocal sample that may be analyzed to determine whether the vocal sample is manipulated. In step410, the system (e.g., audio processing device124) may transform the vocal sample from the wavelength domain to the frequency domain. The transformation may be accomplished by one of a Fourier transformation, a fast Fourier transformation, a short-time Fourier transformation, and/or a discrete cosine transformation. According to some embodiments, when transforming the received vocal sample using the short-time Fourier transformation, the system sets a window function that allows the system to sample the received utterance at a predetermined sampling rate to determine a series of overlapping discrete signal components in the wavelength domain. The system may apply a Fourier transformation to each of the plurality of overlapping discrete signal components and determine a plurality of amplitudes in the frequency domain associated with the overlapping discrete signal components in the wavelength domain. In step415, the system (e.g., manipulation detection module120) may determine a first digit frequency distribution of a plurality of amplitudes associated with the transformed vocal sample. 
For example, once the utterance and/or vocal sample has been transformed into the frequency domain, the transformed frequency domain data may include a plurality of amplitude values distributed across a plurality of frequency values. According to some embodiments, the transformed vocal sample may be represented by a spectrogram. A spectrogram may show frequency values along a first axis, time values associated with the vocal sample along a second axis, and amplitude values (e.g., loudness of a given frequency at a given time) along a third axis. The system may (e.g., via audio processing device124and/or manipulation detection module120) select a plurality of amplitudes associated with the received vocal sample and determine their first digit frequency distribution. For example, amplitude values may be measured in decibels and the leading digit of any given amplitude value may be any digit from1to9. However, authentic vocal samples will predominantly have amplitude values with a leading digit of1, in accordance with Benford's distribution. In step420, the system (e.g., manipulation detection module120) may calculate a divergence between the first digit frequency distribution of the selected amplitude values and a predetermined frequency distribution. The system (e.g., via manipulation detection module120) may determine whether the amplitude values diverge from a predetermined frequency distribution. In some embodiments, the predetermined threshold may be a predetermined p-value. In some embodiments, the predetermined p-value may be p=0.05, although the system may use any p-value to determine whether the divergence from the predetermined distribution is statistically significant. For example, according to some embodiments, the amplitude (e.g., loudness) values of certain human vocal frequencies follow a predetermined distribution of leading digits. According to some embodiments, the predetermined distribution is a Benford's distribution. Accordingly, the system (e.g., via manipulation detection module120and/or audio processing device124) may isolate a plurality of amplitude values representative of the vocal sample received in step405and determine whether the leading digits frequency distribution of the selected amplitude values diverges from a predetermined distribution. In other embodiments, the system (e.g., manipulation detection module120and/or audio processing device124) may operate on all of the amplitude values of the vocal sample received in step405to determine whether the leading digits frequency distribution diverges from a predetermined distribution. In decision block425, the system (e.g., manipulation detection module120) may determine whether the divergence between the selected amplitude values and the predetermined distribution exceeds a predetermined threshold. According to some embodiments, the determination comprises determining whether the divergence between the selected amplitude values and a Benford's distribution for leading digits exceeds a predetermined threshold. According to some embodiments, when the divergence exceeds the predetermined threshold, the system determines that the received utterance has been manipulated. When the divergence does not exceed the predetermined threshold, the system may return to step405to listen for a second vocal sample, and run a similar analysis on the next vocal sample received from the user. When the divergence exceeds the predetermined threshold, the system may move to step430. 
According to some embodiments, the divergence calculation may further comprise one of a Jensen-Shannon divergence, a Kullback-Leibler divergence, a symmetrized Renyi divergence, a symmetrized Tsallis divergence, or a Kolmogorov-Smirnov test. In step 430, the system may determine that the vocal sample is manipulated when the divergence between the first digit frequency distribution of the selected amplitude values and the predetermined distribution exceeds a predetermined threshold. In response to the determination, the system (e.g., manipulation detection module 120) may execute one or more security measures in step 430. According to some embodiments, the one or more security measures may include (i) transferring the user from an automated operator to a human operator, (ii) requiring second factor authentication from the user, and/or (iii) denying a user-initiated request. For example, depending on a risk tier of the user request, the system may execute a different type of security measure. For example, if the user request is associated with a first risk tier (e.g., highest risk), the system may deny the user-initiated request. If the user request is associated with a second risk tier (e.g., medium risk), then the system may transfer the user to a human operator (e.g., via call center server 112). When the user request is associated with a third risk tier (e.g., lower risk), the system may request second factor authentication from the user before allowing the user-initiated request. According to some embodiments, the determined risk tier of the user request may be based in part on the statistical significance of the divergence. For example, when the statistical significance of the divergence is at a highest level, the system may determine that the user-initiated request may be associated with the highest risk tier. In some embodiments, the determined risk tier may be based in part on the specific user-initiated request. For example, a user-initiated request to check an account balance may be assigned a lower risk tier than a user-initiated request to change a password or security PIN associated with the user's account. After step 430, method 400 may end. FIG. 5 is a flow diagram 500 illustrating examples of methods for detecting manipulated vocal audio, in accordance with certain embodiments of the disclosed technology. As shown in step 505 of method 500, the system may receive a first vocal sample. The first vocal sample may be received from a user requesting an action from the system (e.g., authenticating a transaction initiated via transaction server 114, validation of a user login initiated via web server 110, etc.). The user may contact call center server 112, and call center server 112 may provide an IVR system to the user to request the reason for the call. Accordingly, the user may provide an utterance to the system, which the system may interpret to provide the requested action. According to certain embodiments, the utterance may comprise both (i) a request for a service or action from system 100 and (ii) a vocal sample that may be analyzed to determine whether the vocal sample is manipulated. In step 510, the system (e.g., audio processing device 124) may transform the vocal sample from the wavelength domain to the frequency domain. The transformation may be accomplished by one of a Fourier transformation, a fast Fourier transformation, a short-time Fourier transformation, or a discrete cosine transformation.
According to some embodiments, when transforming the received vocal sample using the short-time Fourier transformation, the system sets a window function that allows the system to sample the received utterance at a predetermined sampling rate to determine a series of overlapping discrete signal components in the wavelength domain. The system may apply a Fourier transformation to each of the plurality of overlapping discrete signal components and determine a plurality of amplitudes in the frequency domain associated with the overlapping discrete signal components in the wavelength domain. In step 515, the system (e.g., manipulation detection module 120) may determine a first digit frequency distribution of a plurality of amplitudes associated with the transformed vocal sample. For example, once the utterance and/or vocal sample has been transformed into the frequency domain, the transformed frequency domain data may include a plurality of amplitude values distributed across a plurality of frequency values. According to some embodiments, the transformed vocal sample may be represented by a spectrogram. A spectrogram may show frequency values along a first axis, time values associated with the vocal sample along a second axis, and amplitude values (e.g., loudness of a given frequency at a given time) along a third axis. The system may (e.g., via audio processing device 124 and/or manipulation detection module 120) select a plurality of amplitudes associated with the received vocal sample and determine their first digit frequency distribution. For example, amplitude values may be measured in decibels, and the leading digit of any given amplitude value may be any digit from 1 to 9. However, authentic vocal samples will predominantly have amplitude values with a leading digit of 1, in accordance with Benford's distribution. In step 520, the system (e.g., manipulation detection module 120) may calculate a divergence between the first digit frequency distribution of the selected amplitude values and a Benford's frequency distribution. The system may (e.g., via manipulation detection module 120) determine whether the amplitude values diverge from a predetermined frequency distribution. For example, according to some embodiments, the amplitude (e.g., loudness) values of human vocal frequencies follow a predetermined distribution of leading digits. According to some embodiments, the system (e.g., via manipulation detection module 120 and/or audio processing device 124) may isolate a plurality of amplitude values representative of the vocal sample received in step 505 and determine whether the leading digit frequency distribution of the selected amplitude values diverges from a predetermined distribution. In other embodiments, the system (e.g., manipulation detection module 120 and/or audio processing device 124) may operate on all of the amplitude values of the utterance received in step 505 to determine whether the leading digit frequency distribution diverges from a predetermined distribution. In decision block 525, the system (e.g., manipulation detection module 120) may determine whether the divergence between the selected amplitude values and the Benford's distribution exceeds a predetermined threshold. According to some embodiments, the determination comprises determining whether the divergence between the selected amplitude values and a Benford's distribution for leading digits exceeds a predetermined threshold. In some embodiments, the predetermined threshold may be a predetermined p-value.
In some embodiments, the predetermined p-value may be p=0.05, although the system may use any p-value to determine whether the divergence from the predetermined distribution is statistically significant. According to some embodiments, when the divergence exceeds the predetermined threshold, the system determines that the received utterance has been manipulated. When the divergence does not exceed the predetermined threshold, the system may return to step 505 to listen for a second vocal sample, and run a similar analysis on the next vocal sample received from the user. When the divergence exceeds the predetermined threshold, the system may move to step 530. According to some embodiments, the divergence calculation may further comprise one of a Jensen-Shannon divergence, a Kullback-Leibler divergence, a symmetrized Renyi divergence, a symmetrized Tsallis divergence, or a Kolmogorov-Smirnov test. In step 530, the system may determine that the vocal sample is manipulated when the divergence between the first digit frequency distribution of the selected amplitude values and the predetermined distribution exceeds a predetermined threshold. In response to the determination, the system (e.g., manipulation detection module 120) may execute one or more security measures in step 535. According to some embodiments, the one or more security measures may include (i) transferring the user from an automated operator to a human operator, (ii) requiring second factor authentication from the user, and/or (iii) denying a user-initiated request. For example, depending on a risk tier of the user request, the system may execute a different type of security measure. For example, if the user request is associated with a first risk tier (e.g., highest risk), the system may deny the user-initiated request. If the user request is associated with a second risk tier (e.g., medium risk), then the system may transfer the user to a human operator (e.g., via call center server 112). When the user request is associated with a third risk tier (e.g., lower risk), the system may request second factor authentication from the user before allowing the user-initiated request. According to some embodiments, the determined risk tier of the user request may be based in part on the statistical significance of the divergence. For example, when the statistical significance of the divergence is at a highest level, the system may determine that the user-initiated request may be associated with the highest risk tier. In some embodiments, the determined risk tier may be based in part on the specific user-initiated request. For example, a user-initiated request to check an account balance may be assigned a lower risk tier than a user-initiated request to change a password or security PIN associated with the user's account. After step 530, method 500 may end. FIG. 6 is a flow diagram 600 illustrating example methods for executing one or more security measures in response to detecting manipulated vocal audio, in accordance with certain embodiments of the disclosed technology. As shown in step 605 of method 600, the one or more security measures may include transferring the user from an automated operator (e.g., facilitated by call center server 112 with an IVR system) to a human operator. As shown in step 610, the one or more security measures may include requiring second factor authentication from the user before authorizing the user-initiated request. As shown in step 615, the one or more security measures may include denying the user-initiated request.
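The choice among these security measures can be tied to the risk tiers discussed above, as FIG. 7 describes next. The sketch below shows one hypothetical way to express that mapping; the tier names, function name, and return strings are illustrative and do not reflect any particular server wiring from the specification.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = 1    # e.g., password or security PIN changes
    MEDIUM = 2
    LOW = 3     # e.g., account balance inquiries

def apply_security_measure(tier: RiskTier) -> str:
    """Map a determined risk tier to one of the security measures (sketch only)."""
    if tier is RiskTier.HIGH:
        return "deny_request"                  # deny the user-initiated request
    if tier is RiskTier.MEDIUM:
        return "transfer_to_human_operator"    # hand off from the IVR to a human operator
    return "require_second_factor_auth"        # allow the request only after 2FA
```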
FIG. 7 is a flow diagram 700 illustrating example methods for determining a risk tier associated with a user request and executing one or more security measures after determining that a vocal sample is manipulated, in accordance with certain embodiments of the disclosed technology. For example, depending on a risk tier of the user request, the system may execute a different type of security measure. For example, if the user request is associated with a first risk tier (e.g., highest risk), the system may deny the user-initiated request (e.g., the manipulation detection module 120 may issue one or more commands to API server 122, which may transmit instructions to one of transaction server 114 to deny a transaction, web server 110 to deny a login request, or call center server 112 to deny access to sensitive information associated with a user account). If the user request is associated with a second risk tier (e.g., medium risk), then the system may transfer the user to a human operator (e.g., manipulation detection module 120 may issue one or more commands to API server 122, which may transmit instructions to call center server 112). When the user request is associated with a third risk tier (e.g., lower risk), the system may request second factor authentication from the user before allowing the user-initiated request (e.g., manipulation detection module 120 may transmit instructions to API server 122, which may transmit instructions to one of web server 110, call center server 112, or transaction server 114 to request second factor authentication from the user). In step 705 of method 700, the system (e.g., audio processing device 124) may receive a user-initiated request. For example, a user of system 100 may contact call center server 112 in order to request an action from the system (e.g., authenticating a transaction initiated via transaction server 114, validation of a user login initiated via web server 110, etc.). Audio processing device 124 may use natural language processing methods to determine the user-initiated request, and after the vocal sample representative of the user-initiated request has been processed and transformed (e.g., by audio processing device 124), the frequency domain data may be transmitted to manipulation detection module 120. In step 710, manipulation detection module 120 may determine the risk tier associated with the request based in part on the first digit frequency distribution of a plurality of amplitude values, as described in more detail with respect to FIGS. 3-5. Accordingly, in decision block 715, when the divergence between the first digit frequency distribution of amplitudes exceeds a predetermined threshold, the system (e.g., manipulation detection module 120) may determine whether the risk tier of the user-initiated request exceeds a first threshold (e.g., belongs to a first risk tier). When the system determines that the user-initiated request is associated with the first risk tier (e.g., exceeds a first threshold), the system (e.g., manipulation detection module 120) may deny the user-initiated request in step 720. When the system determines that the user-initiated request is not associated with the first risk tier (e.g., the risk tier does not exceed the first threshold), the system may move to decision block 725. In decision block 725, the system (e.g., manipulation detection module 120) may determine whether the risk tier of the user-initiated request exceeds a second threshold (e.g., belongs to a second risk tier).
When the system determines that the user-initiated request is associated with the second risk tier (e.g., exceeds a second threshold), the system (e.g., manipulation detection module 120) may transfer the user to a human operator in step 730. For example, manipulation detection module 120 may transmit instructions to call center server 112 to transfer the user from an IVR system operator to a human operator. When the system determines that the user-initiated request is not associated with the second risk tier (e.g., the risk tier does not exceed the second threshold), the system may move to decision block 735. In decision block 735, the system (e.g., manipulation detection module 120) may determine whether the risk tier of the user-initiated request exceeds a third threshold (e.g., belongs to a third risk tier). When the system determines that the user-initiated request is associated with the third risk tier (e.g., exceeds a third threshold), the system (e.g., manipulation detection module 120) may require second factor authorization from the user before authorizing the user-initiated request in step 740. For example, manipulation detection module 120 may transmit instructions to transaction server 114 to request second factor authorization from the user before completing a transaction associated with the user-initiated request. Similarly, manipulation detection module 120 may transmit instructions to web server 110 to require second factor authorization before completing a login request or change password request associated with the user-initiated request. As used in this application, the terms "component," "module," "system," "server," "processor," "memory," and the like are intended to include one or more computer-related units, such as but not limited to hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets, such as data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal. Certain embodiments and implementations of the disclosed technology are described above with reference to block and flow diagrams of systems and methods and/or computer program products according to example embodiments or implementations of the disclosed technology. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, respectively, can be implemented by computer-executable program instructions.
Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, may be repeated, or may not necessarily need to be performed at all, according to some embodiments or implementations of the disclosed technology. These computer-executable program instructions may be loaded onto a general-purpose computer, a special-purpose computer, a processor, or other programmable data processing apparatus to produce a particular machine, such that the instructions that execute on the computer, processor, or other programmable data processing apparatus create means for implementing one or more functions specified in the flow diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more functions specified in the flow diagram block or blocks. As an example, embodiments or implementations of the disclosed technology may provide for a computer program product, including a computer-usable medium having a computer-readable program code or program instructions embodied therein, said computer-readable program code adapted to be executed to implement one or more functions specified in the flow diagram block or blocks. Likewise, the computer program instructions may be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide elements or steps for implementing the functions specified in the flow diagram block or blocks. Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, can be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions. Certain implementations of the disclosed technology described above with reference to user devices may include mobile computing devices. Those skilled in the art recognize that there are several categories of mobile devices, generally known as portable computing devices that can run on batteries but are not usually classified as laptops. For example, mobile devices can include, but are not limited to portable computers, tablet PCs, internet tablets, PDAs, ultra-mobile PCs (UMPCs), wearable devices, and smart phones. Additionally, implementations of the disclosed technology can be utilized with internet of things (IoT) devices, smart televisions and media devices, appliances, automobiles, toys, and voice command devices, along with peripherals that interface with these devices. In this description, numerous specific details have been set forth. 
It is to be understood, however, that implementations of the disclosed technology may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. References to “one embodiment,” “an embodiment,” “some embodiments,” “example embodiment,” “various embodiments,” “one implementation,” “an implementation,” “example implementation,” “various implementations,” “some implementations,” etc., indicate that the implementation(s) of the disclosed technology so described may include a particular feature, structure, or characteristic, but not every implementation necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one implementation” does not necessarily refer to the same implementation, although it may. Throughout the specification and the claims, the following terms take at least the meanings explicitly associated herein, unless the context clearly dictates otherwise. The term “connected” means that one function, feature, structure, or characteristic is directly joined to or in communication with another function, feature, structure, or characteristic. The term “coupled” means that one function, feature, structure, or characteristic is directly or indirectly joined to or in communication with another function, feature, structure, or characteristic. The term “or” is intended to mean an inclusive “or.” Further, the terms “a,” “an,” and “the” are intended to mean one or more unless specified otherwise or clear from the context to be directed to a singular form. By “comprising” or “containing” or “including” is meant that at least the named element, or method step is present in article or method, but does not exclude the presence of other elements or method steps, even if the other such elements or method steps have the same function as what is named. It is to be understood that the mention of one or more method steps does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified. Although embodiments are described herein with respect to systems or methods, it is contemplated that embodiments with identical or substantially similar features may alternatively be implemented as systems, methods and/or non-transitory computer-readable media. As used herein, unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object, merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. While certain embodiments of this disclosure have been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that this disclosure is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. 
Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. This written description uses examples to disclose certain embodiments of the technology and also to enable any person skilled in the art to practice certain embodiments of this technology, including making and using any apparatuses or systems and performing any incorporated methods. The patentable scope of certain embodiments of the technology is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. EXEMPLARY USE CASES A user or customer may place a call to system 100 (e.g., via call center server 112) in order to verify a purchase, change a password, request a change to an account, etc. The user may be connected to an IVR system, which may request that the user explain why he or she is calling. The system (e.g., audio processing device 124) may receive a user utterance in which the user explains the reason for his or her call. Audio processing device 124 may derive the meaning behind the user request using predetermined rules and natural language processing techniques (e.g., using rule-based platform 290 and/or trained machine learning model 295). Additionally, the same vocal sample may be analyzed in real-time by the system as an additional security measure, to prevent unauthorized account access. For example, audio processing device 124 may transform the received utterance from the user from a wavelength domain into a frequency domain, and may additionally construct a spectrogram using the transformed audio sample. The transformation may occur in substantially real-time. Once the vocal sample has been transformed, the transformed data may be passed to manipulation detection module 120. Manipulation detection module 120 may identify and isolate a plurality of amplitude values that are associated with certain frequencies of human speech. The system may compare the leading digit values of the selected amplitudes to a Benford's distribution. If the voice is manipulated, the leading digits of the selected amplitudes will diverge from a Benford's distribution, which predicts that the leading digit should be represented by the digit "1" approximately 30% of the time. When the system detects that the leading digits of the selected amplitude values diverge from the expected values, the system (e.g., manipulation detection module 120) may transmit instructions to one or more components of system 100 to execute one or more security measures, which may include denying the user-initiated request. When the system determines that the leading digits of the selected amplitude values do not diverge from the expected values according to Benford's distribution, the system may authorize the user-initiated request. Additionally, the analyzed vocal sample may be stored by the system (e.g., on one of database 174, database 260, database 118, etc.) as an authentication fingerprint against which subsequent vocal samples/utterances from the user may be compared in order to authenticate the user on a subsequent call. Examples of the present disclosure relate to systems and methods for detecting manipulated vocal audio.
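Before the individual aspects are summarized, the use case above can be tied together in a short sketch. It reuses the hypothetical helpers from the earlier sketches (stft_magnitudes, leading_digit_distribution, BENFORD, jensen_shannon, apply_security_measure, RiskTier); neither these names nor the 0.05 threshold come from the specification.

```python
def screen_utterance(samples, risk_tier, threshold=0.05):
    """End-to-end sketch: transform the utterance, compare leading digits to
    Benford's law, and select a security measure when the divergence is large."""
    amplitudes = stft_magnitudes(samples).ravel()       # frequency-domain amplitudes
    observed = leading_digit_distribution(amplitudes)   # empirical first-digit frequencies
    divergence = jensen_shannon(observed, BENFORD)
    if divergence > threshold:                          # likely manipulated audio
        return apply_security_measure(risk_tier)
    return "authorize_request"                          # consistent with Benford's law
```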
In one aspect, a system for detecting manipulated vocal audio is disclosed. The system may implement a method according to the disclosed embodiments. The system may include one or more processors and a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors are configured to cause the system to perform steps of a method. The system may receive a communication including a first utterance of a user. The system may transform the first utterance from a wavelength domain to a frequency domain. The system may determine a divergence of one or more amplitude values of the transformed frequency domain from a predetermined frequency distribution. The system may execute one or more security measures when the divergence exceeds a predetermined threshold. In some embodiments, the transformation may further include at least one of a Fourier transformation, a fast Fourier transformation, a short-time Fourier transformation, or a discrete cosine transformation. In some embodiments, the predetermined frequency distribution may include a Benford's distribution. In some embodiments, the one or more security measures may include at least one action selected from (i) transferring the user from an automated operator to a human operator, (ii) requiring second factor authentication from the user, and (iii) denying a user-initiated request. In some embodiments, the transformation may further include sampling the communication at a predetermined sampling rate to create a plurality of overlapping discrete signal components, applying a Fourier transformation to each of the plurality of overlapping discrete signal components, and determining a plurality of amplitudes associated with the overlapping discrete signal components. In some embodiments, determining the divergence may further include determining a first digit frequency distribution of the plurality of amplitudes and calculating a divergence between the first digit frequency distribution and a predetermined frequency distribution. In some embodiments, the predetermined threshold may be based in part on a risk tier associated with a user-initiated request. In some embodiments, the divergence includes one of a Jensen-Shannon divergence, a Kullback-Leibler divergence, a symmetrized Renyi divergence, and a symmetrized Tsallis divergence. In another aspect, a method for detecting manipulated vocal audio is disclosed. The method may include receiving a first vocal sample associated with a user. The method may include transforming the first vocal sample from a wavelength domain into a frequency domain. The method may include determining a first digit frequency distribution of a plurality of amplitudes associated with the transformed vocal sample. The method may include calculating a divergence between the first digit frequency distribution and a predetermined frequency distribution. The method may include determining that the first vocal sample is manipulated when the divergence exceeds a predetermined threshold. The method may include executing one or more security measures in response to determining that the first vocal sample is manipulated. In some embodiments, the transformation may further include at least one of a Fourier transformation, a fast Fourier transformation, a short-time Fourier transformation, or a discrete cosine transformation. In some embodiments, the predetermined frequency distribution includes a Benford's distribution. 
In some embodiments, the one or more security measures include transferring the user from an automated operator to a human operator. In some embodiments, the one or more security measures include requiring second factor authentication from the user. In some embodiments, the one or more security measures include denying a user-initiated request. In some embodiments, the predetermined threshold is based in part on a risk tier associated with a user-initiated request. In some embodiments, the divergence includes one of a Jensen-Shannon divergence, a Kullback-Leibler divergence, a symmetrized Renyi divergence, and a symmetrized Tsallis divergence. In another aspect, a method for detecting manipulated vocal audio is disclosed. The method may include receiving a first vocal sample associated with a user. The method may include performing a Fourier transformation of the first vocal sample from a wavelength domain into a frequency domain. The method may include determining a first digit frequency count for a plurality of amplitudes associated with the transformed first vocal sample. The method may include calculating a divergence between the determined first digit frequency count and a Benford's distribution. The method may include determining that the first vocal sample is manipulated when the divergence exceeds a predetermined threshold. The method may include executing one or more security measures in response to determining that the first vocal sample is manipulated. In some embodiments, the divergence includes one of a Jensen-Shannon divergence, a Kullback-Leibler divergence, a symmetrized Renyi divergence, and a symmetrized Tsallis divergence. In some embodiments, the one or more security measures include at least one action selected from (i) transferring the user from an automated operator to a human operator, (ii) requiring second factor authentication from the user, and (iii) denying a user-initiated request. In some embodiments, the Fourier transformation further includes a short-time Fourier transformation.
81,114
11862180
DETAILED DESCRIPTION Inventive concepts will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of present inventive concepts to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive. Components from one embodiment may be tacitly assumed to be present/used in another embodiment. The following description presents various embodiments of the disclosed subject matter. These embodiments are presented as teaching examples and are not to be construed as limiting the scope of the disclosed subject matter. For example, certain details of the described embodiments may be modified, omitted, or expanded upon without departing from the scope of the described subject matter. The inventive concepts described herein reduce the complexity of the PLC. These embodiments relate to reducing complexity in embodiments where the approach used for packet concealment is sinusoidal modeling in the frequency domain, with an additional low-resolution background noise model to better handle burst errors. In this case, over longer error bursts, the approach proceeds from the sinusoidal model towards the low-resolution noise model. The low-resolution noise model may be updated during the first error frame based on the saved prototype frame. The techniques described may also be used to determine a high quality (and low complexity) frequency band estimate of the energy decay over time for the signal in various frequency bands, which may be used to model the band energies of the concealed frame. When the speech/audio compression is done in the frequency domain, there may already be a spectral representation available in the frequency domain, most often in the modified discrete cosine transform (MDCT) domain. The coefficients of the available spectral representation can in many situations be used to form an alternative spectral shape to replace the complexity of short FFTs. For example, the spectral shapes of the first frame error can be used to create spectral estimates corresponding to those that would have been generated by the short FFTs. In embodiments described herein, the available MDCT coefficients may be used to provide a spectral shape while the energy (or level) for the spectral estimate is based on the energy of the windowed prototype frame. However, the inventors came to the realization that using the MDCT coefficients alone for both shape and level has been found to provide insufficient quality estimates for the two short FFTs that are to be replaced. The advantage of the techniques described below is that one can avoid using the two short FFTs. This is important as the avoidance directly reduces the complexity of the first lost frame. In the first lost frame the complexity is high as it involves both a rather long FFT of the prototype frame and an equally long inverse FFT of the reconstructed spectrum. While the MDCT coefficients available in the decoder do not provide a stable energy estimate, the coefficients can be used for a spectral shape estimation.
To get the level for the spectral estimate, the energy of a windowed prototype frame may be used as this may produce a better estimate of the actual FFT spectrum. Avoiding the complexity of using the two shorter FFTs may result in a slight difference in both temporal characteristics and spectral characteristics. Such differences are of minor importance for the use in the form of a long-term estimate of the background signal, and the slight differences are also not a major issue for the transient detector energy decay estimation. The inventive concept of the reuse of MDCT coefficients (or any other spectral domain information available in the normal coded domain) and the transformation into a spectral shape that can be used instead of the two short FFT transforms reduces complexity and processing overhead of processing the lost frame. This also involves how the MDCT coefficients are grouped into a format that approximates the FFT bins as closely as possible. The decoder apparatus may consist of two units or may be part of the PLC illustrated in FIG. 2 or the decoder apparatus illustrated in FIG. 12 and FIG. 13. The decoder (1201, 1301) may update the spectral shape and frame energy during error free operation. The decoder (1201, 1301) may use the saved spectral shapes and frame energies during the first frame error to generate the long-term spectral estimate that is to be used during error concealment. A third component of the decoder (1201, 1301) may also be used to determine a frequency band decay to be applied in the PLC reconstruction, such as when there is a significant drop in energy. The reuse of MDCT coefficients typically only generates one spectral shape per frame. Having two spectral shapes during the first error frame may be achieved by generating one spectral shape estimate for each good frame and by also saving the spectral shape estimate from the previous good frame. To obtain the correct level of the spectral estimate, the windowed energy of the corresponding PLC-prototype frame may be saved at the end of the good frame processing in an MDCT based decoder. A good frame means a correctly received error free frame, while a bad frame means an erased, i.e. a lost or corrupted, frame. During the lost frame, the second unit uses the two saved spectral shapes and frame energies to generate two spectral estimates corresponding to the ones that would have been generated by the two short FFTs. This reduces complexity and processor overhead. Based on the saved shapes and energies, the third unit may establish the decay factors to be used for each frequency band in the PLC reconstruction of the lost frame. After this, the normal processing of the Phase ECU is continued as before; see international patent application no. WO2014123471 (Appendix 1) or 3GPP TS 26.447 V15.0.0 clause 5.4.3.5. The techniques described herein are not limited to using spectral estimation from MDCT as described above. The techniques can be adapted to work with any other spectral estimation technique that is used in a codec. The following describes the functions of using the MDCT in more detail. To obtain the MDCT coefficients, the MDCT is taken over a 20 ms window with a 10 ms advance. When using one transform, e.g. the MDCT, to make a sub-band estimate of another transform, e.g. the FFT, it is important to make the grouping into sub-bands over the correct coefficients.
The PLC prototype frame saved after good frames is 16 ms in length and the transient detector sub-band analysis module uses two short FFTs of length 4 ms—that is, one quarter of the PLC prototype frame. The actual length of these items depends on the sampling frequency used, which can be from 8 kHz to 48 kHz. These lengths affect the number of spectral bins in each transform. The two short FFT analysis results are used to determine a conversion factor μ as described below. Spectral Shape History Update in Good Frames For the transient analysis, the Phase ECU may use a history of the MDCT based spectral shape and MDCT-synthesis windowed energies to build an image of how the input signal has evolved over time. The spectral shape is calculated based on the decoded MDCT coefficients, which hold a spectral representation of the decoded signal. The spectral shape consists of sub-bands where the number of sub-bands, N_grp, depends on the sampling frequency as seen in Table 1.

TABLE 1 - Phase ECU number of sub-bands
fs              N_grp
8000            4
16000           5
24000           6
32000           7
44100, 48000    8

For good frames, that is when the bad frame indicator indicates the frame is not a bad frame (e.g., BFI=0), the values of spectral shape and frame energy may be updated. These steps are illustrated in the flowchart in FIG. 3. Turning to FIG. 3, at operation 301, a determination is made as to whether BFI=0. Note that the parameters may only be calculated for the current frame. When the frame before was a good frame, the values saved during the last frame may be moved to the buffers designated as second last frames (i.e., the shape_oold buffer(s)). The spectral shape shape_old(k) from the last frame is moved and saved in a second buffer shape_oold(k) as follows in operation 303:

shape_oold(k) = shape_old(k), 0 ≤ k < N_grp.  (1)

Similarly, in operation 305, the last frame's energy is moved to a second buffer E_w_oold as:

E_w_oold = E_w_old.  (2)

These updates may be followed by calculation of new values of spectral shape shape_old(k) and frame energy E_w_old for the last frame buffers in operations 307 and 309. Table 2 illustrates how the bins of the current MDCT coefficients may be divided among the sub-bands. The table entries in Table 2 show start coefficients of each sub-band for an embodiment that may be used in the methods described in international application WO 2014/123471. Other sub-bands may be used for other embodiments.

TABLE 2 - Phase ECU MDCT sub-bands start bin table
MDCT_grp_bins (= grp_bin(k)): {4, 14, 24, 44, 84, 164, 244, 324, 404, 484}

It may be desirable to have the sub-band based spectral shape in the range [0, ..., 1]. This may be achieved by first calculating the total magnitude of the MDCT coefficients (q_d(n)) as:

shape_tot = \sum_{n=0}^{N_MDCT - 1} q_d(n)^2  (3)

where N_MDCT is the number of MDCT coefficients and depends on the sampling frequency, such as the sampling frequencies illustrated in Table 3.

TABLE 3 - Number of MDCT coefficients for different sampling frequencies
fs              N_MDCT
8000            80
16000           160
24000           240
32000           320
44100, 48000    480

The calculated value for shape_tot may then be used to normalize the spectral shape of each sub-band, which may be determined as

shape_old(k) = (1 / shape_tot) \sum_{n=grp_bin(k)}^{grp_bin(k+1)-1} q_d(n)^2,  0 ≤ k < N_grp  (4)

which forms the spectral shape estimate for the new value of the last frame. Note that there may be some MDCT coefficients that are not assigned to the spectral shape. This is a result of not using the DC bin in the corresponding short FFTs.
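A minimal sketch of the good-frame shape calculation in equations (3)-(4) is shown below, assuming the Table 2 start bins delimit consecutive sub-bands; the function name and the NumPy usage are illustrative rather than part of the specification.

```python
import numpy as np

# Illustrative sub-band start bins from Table 2; consecutive entries are treated
# as the edges of each sub-band, so len(bins) - 1 bands are produced.
MDCT_GRP_BINS = [4, 14, 24, 44, 84, 164, 244, 324, 404, 484]

def mdct_spectral_shape(q_d, grp_bins=MDCT_GRP_BINS):
    """Normalised per-sub-band energy of the decoded MDCT coefficients, eqs. (3)-(4)."""
    q_d = np.asarray(q_d, dtype=float)
    shape_tot = np.sum(q_d ** 2)                          # eq. (3)
    shape = np.array([
        np.sum(q_d[grp_bins[k]:grp_bins[k + 1]] ** 2)     # eq. (4), band k
        for k in range(len(grp_bins) - 1)
    ])
    return shape / shape_tot if shape_tot > 0.0 else shape
```

In a good frame the previously stored shape_old and E_w_old values would first be rotated into the shape_oold and E_w_oold buffers per equations (1)-(2), before this new shape is computed.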
To be able to use the spectral shape during reconstruction, the frame energy may be calculated based on the windowed prototype frame. This may be determined as:

E_w_old = \sum_{n=0}^{L_prot - 1} (w_whr(n) · x_prev(n))^2  (5)

where w_whr may be (especially for long-term background approximation estimation) the long FFT spectral analysis window, x_prev is the Phase ECU time domain prototype signal as used to create a substitution for a potentially upcoming lost frame, and L_prot is the number of samples in the x_prev signal, which also corresponds to the length of the time window w_whr. In an alternative embodiment, the overall spectral approximation performance may be balanced between providing a good background estimate and a good estimate for transient offset detection. This balancing may be done in an embodiment by optionally altering the applied w_whr window to differ from the long/16 ms FFT spectral analysis window. One possible approach to alter this is to shorten the window and shift the energy estimation window towards the future so that the energy estimation is further time aligned with the energy content of the short (4 ms) FFT windows. This approach also reduces the complexity of energy alignment calculations. For example, E_w_old may be reduced to the windowed energy of the 3·L_prot/4 (12 ms) most recent synthesized samples, or even the L_prot/2 (8 ms) most recent samples. This may balance the spectral approximation between background estimation (targeting the overall spectral period of 16 ms) and transient offset estimation (targeting the last 4 ms). Turning to FIG. 4, to avoid the use of old values in the secondary buffers after a bad frame or a burst of bad frames, the shape_oold(k) and E_w_oold states of the spectral shape and frame energy may be re-initialized. Therefore, in the case where a good frame (BFI=0, as illustrated by operation 401) is preceded by a bad frame (BFI_prev=1, as illustrated by operation 403), the calculated values are copied to the secondary buffers as described in equations (1) and (2), respectively, in operations 405 and 407. Conversion of Spectral Shape into Short FFT Sub-Band Energies The transient analysis may use the saved spectral shapes and frame energies to analyze how the sub-band energies are evolving over time. These values may be used for two things: the first is the sub-band transient detector and the second is forming a long-term average Ē_tran that may be used to adjust sub-band energies during burst errors. These values form a basis for calculating signal modification values that are used during error bursts.

TABLE 4 - Phase ECU FFT sub-bands start bin table
PhECU_grp_bins (start bins, indexing starts from 0): {1, 3, 5, 9, 17, 33, 49, 65, 81, 97}

The spectral shapes and frame energies are used to generate the approximations of sub-band energies for the two last error free frames. This is illustrated in the flow chart of FIG. 5 when the bad frame indicator indicates a bad frame (i.e., BFI=1) at operation 501. Turning to FIG. 5, the first frame represents the sub-band frame energies before the last frame and may be generated in operation 503 by:

E_oold(k) = μ · shape_oold(k) · E_w_oold,  0 ≤ k < N_grp  (6)

The second sub-band frame energies are for the last frame and may be generated in operation 505 by:

E_old(k) = μ · shape_old(k) · E_w_old,  0 ≤ k < N_grp  (7)

where μ is a scalar constant that depends on the sampling frequency and handles the conversion of the MDCT based spectral shape to an approximation of an FFT based spectral analysis, Ē_tran. An example of μ for various sampling frequencies f_s is shown in Table 5.
TABLE 5 - Phase ECU MDCT to FFT spectral shape conversion factor μ
fs              μ
8000            1.9906
16000           4.0445
24000           6.0980
32000           8.1533
44100, 48000    12.2603

The conversion factor μ may be calculated off-line and depends on the MDCT window and the window used in the FFT for which it serves as an approximation during lost frame reconstruction. To find these coefficients, the PLC should be run with both methods (original FFT analysis and the reduced complexity approximation of the FFT using the MDCT) active to calculate the conversion factor(s). A convenient method for calculating the conversion factor is to use sine waves. One wave may be used in the center of each group interval and the calculation may be started with the coefficient set to one. The correct value may be calculated by comparing the two methods. Note that the bins in Table 4 show the bin grouping for an FFT with an analysis length that is a quarter of the one used for the spectral analysis used by the PLC on the prototype frame, i.e., if the spectral analysis is made using a 16 ms FFT, the bin grouping is for a 4 ms spectral analysis. FIG. 6 illustrates an overview of how the framing and the related frame structure of the MDCT coder is applied for an asymmetrically located MDCT window and with a segment of look ahead zeros—LA_ZEROS. Note that the signal diagram shows that a frame is only decoded up to a point of ¾ of the current frame due to the use of look ahead zeros (LA_ZEROS—⅜ of the frame length) in the MDCT window. The framing affects which part of the current frame is possible to decode and therefore affects the position of the PLC prototype frame that is saved and used in case the next frame is lost. FIG. 6 also illustrates the difference in length of the involved transforms used in the embodiment. Even in an MDCT with a length twice the length of the encoded frame, each spectral point is represented with two coefficients (compare with the FFT, where N samples result in N complex numbers—that is, 2N scalar values), where one may be a time reversal of the other. FIG. 7 illustrates an overview of how the framing and the related frame structure of the MDCT coder is applied to determine the sub-band energies and the spectral shapes as described above. FIG. 7 illustrates the current frame and the previous frame being good frames and shows where in relation to the coding process the method of FIGS. 3 and 9-11 may be performed. FIG. 8 illustrates a graphical representation of the different spectral representations. The PLC spectral analysis is made on a 16 ms time segment—this results in an inter-bin distance of 62.5 Hz. From an N point FFT one gets N/2+1 bins, where the start point is 0 Hz and the last is fs/2 (half the sampling frequency). The same applies for the transient analysis and the short FFTs that are to be replaced—the difference is that the time window is 4 ms, which results in an inter-bin distance of 250 Hz. For the MDCT, which is made over a 20 ms time segment, the inter-bin distance becomes 100 Hz after grouping the time and time-reversed coefficients; for an MDCT of length M there are M/4 coefficients after grouping. The MDCT does not have DC or fs/2 coefficients, so the simplest representation is to have a half-bin offset as shown in FIG. 8. In an embodiment, these spectral estimates for the transient analysis as described above may be used to replace the spectral estimates used in the transient calculation and concealment adaptation as described in international patent application no. WO2014123471 (see Appendix 1).
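A minimal sketch of equations (5)-(7), using the 48 kHz conversion factor from Table 5, is given below; the function names are hypothetical and the window and prototype signal are assumed to be available from the decoder state.

```python
import numpy as np

MU_48K = 12.2603  # conversion factor μ for 44100/48000 Hz, from Table 5

def windowed_frame_energy(x_prev, w_whr):
    """Energy of the windowed PLC prototype frame, eq. (5)."""
    x_prev = np.asarray(x_prev, dtype=float)
    w_whr = np.asarray(w_whr, dtype=float)
    return float(np.sum((w_whr * x_prev) ** 2))

def fft_subband_energy_estimate(shape, E_w, mu=MU_48K):
    """Approximate short-FFT sub-band energies from an MDCT-based shape, eqs. (6)-(7)."""
    return mu * np.asarray(shape, dtype=float) * E_w

# During the first lost frame, the two saved shapes and energies would be combined as:
# E_oold = fft_subband_energy_estimate(shape_oold, E_w_oold)
# E_old  = fft_subband_energy_estimate(shape_old,  E_w_old)
```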
These estimates may also be used in other situations where spectral estimates are used, such as in 3GPP TS 26.447 V15.0.0. For example, turning to FIG. 9, a decoder (1201, 1301) may decode a first audio frame of a received audio signal based on a MDCT in operation 901. In operation 903, the decoder (1201, 1301) may determine values of a first spectral shape based upon MDCT coefficients from the decoded first audio frame and store the determined values of the first spectral shape in a shape_old buffer, the first spectral shape comprising a number of sub-bands. In operation 905, the decoder (1201, 1301) may determine a first frame energy of the first audio frame and store the determined first frame energy in an E_w_old buffer. In operation 907, the decoder (1201, 1301) may decode a second audio frame of the received audio signal based on the MDCT. In operation 909, the decoder (1201, 1301) may move the determined values of the first spectral shape from the shape_old buffer to a shape_oold buffer. Operation 909 may correspond to operation 303 of FIG. 3. In operation 911, the decoder (1201, 1301) may move the determined first frame energy from the E_w_old buffer to an E_w_oold buffer. Operation 911 may correspond to operation 305 of FIG. 3. In operation 913, the decoder (1201, 1301) may determine values of a second spectral shape based upon decoded MDCT coefficients from the decoded second audio frame and store the determined values of the second spectral shape in the shape_old buffer, the second spectral shape comprising the number of sub-bands. In operation 915, the decoder (1201, 1301) may determine a second frame energy of the second audio frame and store the calculated second frame energy in the E_w_old buffer. In operation 917, the decoder (1201, 1301) may transform the values of the first spectral shape and the first frame energy into a first representation of a first fast Fourier transform, FFT, based spectral analysis and transform the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis. In operation 919, the decoder (1201, 1301) may detect, based on the transformed values of the first spectral shape and the values of the second spectral shape, a condition that could lead to suboptimal reconstruction quality of a substitution frame for the lost audio frame when the concealment method is used to create the substitution frame. In operation 921, the decoder (1201, 1301), responsive to detecting the condition, may modify the concealment method by selectively adjusting a spectrum magnitude of a substitution frame spectrum. In one embodiment, the spectral estimates described above may be used to reduce the complexity and processing overhead in the transient calculation and concealment adaptation such as described in international patent application no. WO2014123471 and 3GPP TS 26.447 V15.0.0 clause 5.4.3.5. The E_oold(k) and E_old(k) are used to calculate an energy ratio estimate, and transient detection may be done using the bins of E_oold(k) and E_old(k). For example, turning to FIG. 10, in operation 1001, the sub-band energies of E_oold(k) and E_old(k) may be determined as described above. The frequency group selective transient detection can now be based on the band-wise ratio between the respective band energies of the frames associated with E_oold(k) and E_old(k):

R_old/oold,band(k) = E_old(k) / E_oold(k)

Other ratios may be used. It is to be noted that the interval
I_k = [m_{k-1}+1, ..., m_k] corresponds to the frequency band B_k = [((m_{k-1}+1)/N_part) · f_s, ..., (m_k/N_part) · f_s], where f_s denotes the audio sampling frequency, and N_part corresponds to the size of the frame. The lowest lower frequency band boundary m_0 can be set to 0, but may also be set to a DFT index corresponding to a larger frequency in order to mitigate estimation errors that grow with lower frequencies. The highest upper frequency band boundary m_K can be set to N_part/2, but is preferably chosen to correspond to some lower frequency in which a transient still has a significant audible effect. The ratios may be compared to certain thresholds. For example, a respective upper threshold for (frequency selective) onset detection 1003 and a respective lower threshold for (frequency selective) offset detection 1005 may be used. When the energy ratio is above the upper threshold or below the lower threshold, the concealment method may be modified in operation 1007. These operations correspond to operation 919 of FIG. 9. An example of modifying the concealment method of operation 921 of FIG. 9 is illustrated in FIG. 11. In this embodiment of concealment method modification, the magnitude and phase of a substitution frame spectrum are determined. The magnitude is modified by means of scaling with two factors α(m) and β(m) and the phase is modified with an additive phase component ϑ(m). This leads to the calculation of the substitution frame:

Z(m) = α(m) · β(m) · Y(m) · e^{j(θ_k + ϑ(m))}

where Z(m) is the substitution frame spectrum, α(m) is a first magnitude attenuation factor, β(m) is a second magnitude attenuation factor, Y(m) is a prototype frame, θ_k is a phase shift, and ϑ(m) is an additive phase component. In this embodiment, the number n_burst of observed frame losses in a row is determined, where a burst loss counter is incremented by one upon each frame loss and reset to zero upon the reception of a valid frame. Magnitude adaptation, in operation 1101, is preferably done if the burst loss counter n_burst exceeds some threshold thr_burst, e.g. thr_burst=3, as determined in operation 1103. In that case a value smaller than 1 is used for the attenuation factor, e.g. α(m)=0.1. A further adaptation with regard to the magnitude attenuation factor may be done in case a transient has been detected, based on the indicator R_old/oold,band(k), or alternatively R_old/oold(m) or R_old/oold, having passed a threshold, as determined in operation 1105. In that case a suitable adaptation action in operation 1107 is to modify the second magnitude attenuation factor β(m) such that the total attenuation is controlled by the product of the two factors α(m)·β(m). β(m) may be set in response to an indicated transient. In case an offset is detected, the factor β(m) may be chosen to reflect the energy decrease of the offset. A suitable choice is to set β(m) to the detected gain change:

β(m) = \sqrt{R_old/oold,band(k)} for m ∈ I_k, k = 1 ... K

In case an onset is detected, it is rather found advantageous to limit the energy increase in the substitution frame. In that case the factor can be set to some fixed value of, e.g., 1, meaning that there is no attenuation but not any amplification either. Examples of the phase dithering in operation 1109 are given in international patent application no. WO2014123471 (see Appendix 1) and in 3GPP TS 26.447 V15.0.0 (2018-06), clause 5.4.3.5.3, and need not be described herein in detail. FIG. 12 is a schematic block diagram of a decoder that may be used according to the embodiments.
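Before the decoder structure of FIG. 12 is described, the band-wise transient detection and the magnitude adaptation above can be summarised in a short sketch; the onset and offset threshold values below are illustrative placeholders, while thr_burst = 3 and α(m) = 0.1 follow the examples given in the text.

```python
import numpy as np

def band_energy_ratios(E_old, E_oold, eps=1e-12):
    """Band-wise ratio R_old/oold,band(k) = E_old(k) / E_oold(k)."""
    return np.asarray(E_old, dtype=float) / (np.asarray(E_oold, dtype=float) + eps)

def attenuation_factors(ratios, n_burst, thr_burst=3,
                        thr_onset=10.0, thr_offset=0.1):
    """Factors alpha, beta for Z(m) = alpha(m)*beta(m)*Y(m)*exp(j(theta_k + dither))."""
    alpha = 0.1 if n_burst > thr_burst else 1.0        # stronger attenuation in long bursts
    beta = np.ones_like(ratios)
    offsets = ratios < thr_offset                      # detected energy drop (offset)
    beta[offsets] = np.sqrt(ratios[offsets])           # follow the detected gain change
    # For onsets (ratios > thr_onset) beta stays at 1: no amplification of the substitution frame.
    return alpha, beta
```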
The decoder 1201 comprises an input unit 1203 configured to receive an encoded audio signal. FIG. 12 illustrates the frame loss concealment by a logical frame loss concealment unit 1205, which indicates that the decoder is configured to implement a concealment of a lost audio frame, according to the above-described embodiments. Further, the decoder comprises a controller 1207 for implementing the embodiments described above, including the operations illustrated in FIGS. 3-5 and 9-11, and/or operations discussed below with respect to respective Example Embodiments. For example, the controller 1207 may be configured to detect a condition in the properties of the previously received and reconstructed audio signal, or in the statistical properties of the observed frame losses, for which the substitution of a lost frame according to the original, non-adapted Phase ECU method provides relatively reduced quality. In case such a condition is detected, the controller 1207 may be configured to modify the element of the concealment methods according to which the substitution frame spectrum is calculated, by selectively adjusting the phases or the spectrum magnitudes as described above, and output the audio frame towards a receiver for playback. The receiver may be a device having a loudspeaker, a loudspeaker device, a phone, etc. The decoder may be implemented in hardware. There are numerous variants of circuitry elements that can be used and combined to achieve the functions of the units of the decoder. Such variants are encompassed by the embodiments. Particular examples of hardware implementation of the decoder are implementations in digital signal processor (DSP) hardware and integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry. The decoder described herein could alternatively be implemented, e.g. as illustrated in FIG. 13, i.e. by one or more of a processor 1305 and adequate software 1309 with suitable storage or memory 1311 therefor, in order to reconstruct the audio signal, which includes performing audio frame loss concealment according to the embodiments described herein, as shown in FIGS. 3-5 and 9-11. The incoming encoded audio signal is received by an input (IN) 1303, to which the processor 1305 and the memory 1311 are connected. The decoded and reconstructed audio signal obtained from the software is output from the output (OUT) 1307 towards a receiver for playback. As discussed herein, operations of the decoder 1301 may be performed by the processor 1305. Moreover, modules may be stored in the memory 1311, and these modules may provide instructions so that, when instructions of a module are executed by the processor 1305, the processor 1305 performs respective operations. The technology described above may be used, e.g., in a receiver, which can be used in a mobile device (e.g. mobile phone, laptop) or a stationary device, such as a personal computer. It is to be understood that the choice of interacting units or modules, as well as the naming of the units, are only for exemplary purposes, and they may be configured in a plurality of alternative ways in order to be able to execute the disclosed process actions.
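As a purely illustrative sketch of how such a processor-and-memory realization (cf. FIG. 13) might keep the state used by the operations of FIG. 9, consider the following Python fragment. The class and method names are hypothetical; the MDCT decoding and the windowing of the prototype signal are assumed to be done elsewhere (the frame energy E_w is passed in, since per the embodiments it is computed from the windowed prototype signal), and μ is the sampling-rate-dependent conversion factor discussed earlier. Only the buffer aging and the mapping to FFT-equivalent sub-band energies follow the description above.

```python
import numpy as np

class ConcealmentStateSketch:
    """Illustrative bookkeeping of the shape/energy buffers described above."""

    def __init__(self, grp_bin, n_coeffs):
        self.grp_bin = list(grp_bin)          # sub-band start indices into the MDCT coefficients
        self.n_coeffs = n_coeffs              # number of MDCT coefficients (sampling-rate dependent)
        n_grp = len(self.grp_bin)
        self.shape_old = np.zeros(n_grp)      # shapeold buffer
        self.shape_oold = np.zeros(n_grp)     # shapeoold buffer
        self.e_w_old = 0.0                    # E_wold buffer
        self.e_w_oold = 0.0                   # E_woold buffer
        self.n_burst = 0                      # counter of consecutive lost frames

    def on_good_frame(self, q_d, frame_energy):
        """q_d: decoded MDCT coefficients; frame_energy: windowed prototype energy E_w."""
        self.n_burst = 0
        # Age the buffers (operations 909/911), then refill them (operations 913/915).
        self.shape_oold, self.e_w_oold = self.shape_old, self.e_w_old
        q_d = np.asarray(q_d, dtype=float)
        shape_tot = float(np.sum(q_d ** 2)) or 1.0
        edges = self.grp_bin + [self.n_coeffs]
        self.shape_old = np.array(
            [np.sum(q_d[edges[k]:edges[k + 1]] ** 2) for k in range(len(self.grp_bin))]
        ) / shape_tot
        self.e_w_old = frame_energy

    def on_lost_frame(self, mu):
        """Map stored shapes/energies to FFT-equivalent sub-band energies (operation 917)."""
        self.n_burst += 1
        e_oold = mu * self.shape_oold * self.e_w_oold
        e_old = mu * self.shape_old * self.e_w_old
        return e_oold, e_old
```

On a lost frame, the returned Eoold(k) and Eold(k) would feed the band-wise ratio test sketched earlier.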
ABBREVIATIONS

At least some of the following abbreviations may be used in this disclosure. If there is an inconsistency between abbreviations, preference should be given to how it is used above. If listed multiple times below, the first listing should be preferred over any subsequent listing(s).

Abbreviation: Explanation
ADC: Analog to Digital Converter
BFI: Bad Frame Indicator
BFI_prev: Bad Frame Indicator of previous frame
DAC: Digital to Analog Converter
FFT: Fast Fourier Transform
MDCT: Modified Discrete Cosine Transform

REFERENCES

[1] International patent application no. WO2014123470
[2] International patent application no. WO2014123471
[3] 3GPP TS 26.445 V15.1.0 (clauses 5.3.2.2 and 6.2.4.1), hereby incorporated by reference in its entirety
[4] 3GPP TS 26.447 V15.0.0 (clause 5.4.3.5), hereby incorporated by reference in its entirety

LISTING OF EXAMPLE EMBODIMENTS

Example Embodiments are discussed below. Reference numbers/letters are provided in parentheses by way of example/illustration without limiting example embodiments to particular elements indicated by reference numbers/letters.
1. A method by a computer processor for controlling a concealment method for a lost audio frame of a received audio signal, the method comprising: decoding (901) a first audio frame of the received audio signal based on a modified discrete cosine transform, MDCT; determining (307-309, 903) values of a first spectral shape based upon decoded MDCT coefficients from the decoded audio frame and storing the calculated values of the first spectral shape in a shapeold buffer, the first spectral shape comprising a number of sub-bands; determining (905) a first frame energy of the audio frame and storing the calculated first frame energy in an E_wold buffer; decoding (907) a second audio frame of the received audio signal; moving (303, 909) the calculated values of the first spectral shape from the shapeold buffer to a shapeoold buffer; moving (305, 911) the calculated first frame energy from the E_wold buffer to an E_woold buffer; determining (307-309, 913) values of a second spectral shape based upon decoded MDCT coefficients from the decoded second audio frame and storing the calculated values of the second spectral shape in the shapeold buffer, the second spectral shape comprising the number of sub-bands; determining (915) a second frame energy of the second audio frame and storing the calculated second frame energy in the E_wold buffer; transforming (917) the values of the first spectral shape and the first frame energy into a first representation of a first fast Fourier transform, FFT, based spectral analysis and transforming (917) the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis; detecting (919), based on the first representation of the first FFT and the second representation of the second FFT, a condition that could lead to suboptimal reconstruction quality of a substitution frame for the lost audio frame when the concealment method is used to create the substitution frame for the lost audio frame; and responsive to detecting the condition, modifying (921) the concealment method by selectively adjusting a spectrum magnitude of a substitution frame spectrum.
2. The method of Embodiment 1 wherein determining the values of the first spectral shape based upon decoded MDCT coefficients comprises: determining (307) a total magnitude of the MDCT coefficients; normalizing each sub-band value of the first spectral shape; and storing each normalized sub-band value as a value of the values of the first spectral shape.
3.
The method of Embodiment 2 wherein the total magnitude of the MDCT coefficients is determined in accordance with shape_tot=∑n=0NM⁢D⁢C⁢T-1⁢q_d⁢(n)2where shape_tot is the total magnitude of the MDCT coefficients, NMDCTis a number of MDCT coefficients and depends on a sampling frequency, and q_d(n) are the MDCT coefficients4. The method of any of Embodiments 2-3 where the normalizing of each sub-band is normalized in accordance with shapeold⁡(k)=1shape_tot⁢∑n=grp_bin⁢(k)grp_bin⁢(k+1)-1⁢q_d⁢(n)2,⁢0≤k<Ngrpwhere shapeold(k) is a spectral shape of a sub-band (k), shape_tot is the total magnitude of the MDCT coefficients, q_d(n) are the MDCT coefficients, and Ngrpis a number of the MDCT coefficients, grp_bin(k) is a start index for the MDCT coefficients in sub-band(k), and Ngrpis the number sub-bands.5. The method of any of Embodiments 1-4 wherein frame energy of the first frame energy and the second frame energy is determined in accordance with E_wold=∑n=0Lprot-1⁢(wwhr⁡(n)·xprev⁡(n))2where E_woldis the frame energy, wwhris along FFT spectral analysis window, xprevis a time domain prototype signal used to create a substitution for a potentially upcoming lost frame, and Lprotis a number of samples in the xprevsignal6. The method of any of Embodiments 1-5, wherein transforming the values of the first spectral shape and the first frame energy into the first representation of a first fast FFT based spectral analysis and transforming the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis comprises applying a conversion factor to the values of the first spectral shape and the first frame energy and to the values of the second spectral shape and the second frame energy.7. The method of Embodiment 6 wherein the conversion factor depends on a sampling frequency of the decoding.8. The method of any of Embodiments 4-7, further comprising:transforming the values of the first spectral shape and the first frame energy into the first representation of a first fast FFT based spectral analysis and transforming the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis in accordance with Eoold(k)=μ·shapeoold(k)·E_woold, 0≤k<Ngrp and Eold(k)=μ·shapeold(k)·Ewold, 0≤k<Ngrpwhere Eoold(k) is the first representation, μ is the conversion factor, shapeoold(k) is a spectral shape of a sub-band (k) of the first spectral shape, E_wooldis the first frame energy, Eold(k) is the second representation, shapeold(k) is a spectral shape of a sub-band (k) f the second spectral shape, E_woldis the second frame energy, and Ngrpis the number of sub-bands.9. The method of Embodiment 8 further comprising:determining (1105) if a sub-band transient is above a threshold value based on Eoold(k) and Eold(k);responsive to a sub-band transient being above the threshold value, modifying the concealment method by selectively adjusting (1107) the spectrum magnitude of the substitution frame spectrum.10. The method of Embodiment 9 wherein the substitution frame spectrum is calculated according to an expression of Z(m)=α(m)·β(m)˜Y(m)·ej(θk+ϑ(m))and adjusting the spectrum magnitude comprises adjusting β(m) (1107), where Z(m) is the substitution frame spectrum, α (m) is a first magnitude attenuation factor, β(m) is a second magnitude attenuation factor, Y(m) is a protype frame, θkis a phase shift, and ϑ(m) is an additive phase component.11. 
The method of any of Embodiments 1-10 further comprising:receiving a bad frame indicator (403,501);responsive to receiving the bad frame indicator, flushing the shapeooldbuffer and the E_wooldenergy buffer;receiving a new audio frame of the received audio signal;determining values of a new spectral shape (503) based upon decoded MDCT coefficients from the decoded new audio frame and storing the calculated values of the new spectral shape in the shapeoldbuffer and the shapeooldbuffer (405), the new spectral shape comprising a number of sub-bands; anddetermining a new frame energy (505) of the audio frame and storing the calculated new frame energy in the E_woldbuffer and the E_wooldbuffer (407).12. A decoder apparatus (1201,1301) adapted to perform operations according to any of Embodiments 1-11.13. A decoder apparatus (1201,1301) configured to control a concealment method for a lost audio frame of a received audio signal, the decoder apparatus configured to:decode a first audio frame of the received audio signal based on a modified discrete cosine transform, MDCT;determine values of a first spectral shape based upon decoded MDCT coefficients from the decoded audio frame and store the calculated values of the first spectral shape in a shapeoldbuffer, the first spectral shape comprising a number of sub-bands;determine a first frame energy of the audio frame and store the calculated first frame energy in an E_woldbuffer;decode a second audio frame of the received audio signal;move the calculated values of the first spectral shape from the shapeoldbuffer to a shapeooldbuffer;move the calculated first frame energy from the E_woldbuffer to a E_wooldbuffer;determine values of a second spectral shape based upon decoded MDCT coefficients from the decoded second audio frame and store the calculated values of the second spectral shape in the shapeoldbuffer the second spectral shape comprising the number of sub-bands;determining a second frame energy of the second audio frame and storing the calculated second frame energy in the E_woldbuffer;transform the values of the first spectral shape and the first frame energy into a first representation of a first fast Fourier transform, FFT, based spectral analysis and transform the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis;detect, based on the first representation of the first fast FFT and the second representation of a second FFT, a condition that could lead to suboptimal reconstruction quality of a substitution frame for the lost audio frame when the concealment method is used to create the substitution frame for the lost audio frame; andresponsive to detecting the condition, modify the concealment method by selectively adjusting a spectrum magnitude of a substitution frame spectrum.14. The decoder apparatus of Embodiment 13, wherein the decoder apparatus is configured to perform the operations of Embodiments 2-11.15. 
A decoder apparatus (1201,1301) configured to control a concealment method for a lost audio frame of a received audio signal, the decoder apparatus comprising:a processor (1305); anda memory (1311) storing instructions that, when executed by the processor, cause the decoder apparatus (1201,1301) to perform operations comprising:decoding (901) a first audio frame of the received audio signal based on a modified discrete cosine transform, MDCT;determining (903) values of a first spectral shape based upon decoded MDCT coefficients from the decoded audio frame and storing the calculated values of the first spectral shape in a shapeoldbuffer, the first spectral shape comprising a number of sub-bands;determining (905) a first frame energy of the audio frame and storing the calculated first frame energy in an E_woldbuffer;decoding (907) a second audio frame of the received audio signal;moving (303,909) the calculated values of the first spectral shape from the shapeoldbuffer to a shapeooldbuffer;moving (305,911) the calculated first frame energy from the E_woldbuffer to a E_wooldbuffer;determining (307-309,913) values of a second spectral shape based upon decoded MDCT coefficients from the decoded second audio frame and storing the calculated values of the second spectral shape in the shapeoldbuffer the second spectral shape comprising the number of sub-bands;determining (915) a second frame energy of the second audio frame and storing the calculated second frame energy in the E_woldbuffer;transforming (917) the values of the first spectral shape and the first frame energy into a first representation of a first fast Fourier transform, FFT, based spectral analysis and transforming the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis;detecting (919), based on the first representation of the first fast FFT and the second representation of a second FFT, a condition that could lead to suboptimal reconstruction quality of a substitution frame for the lost audio frame when the concealment method is used to create the substitution frame for the lost audio frame; andresponsive to detecting the condition, modifying (921) the concealment method by selectively adjusting a spectrum magnitude of a substitution frame spectrum.16. The decoder apparatus of Embodiment 1 wherein to determine the values of the first spectral shape based upon decoded MDCT coefficients, the instructions comprise further instructions that, when executed by the processor, cause the apparatus to perform operations comprising:determining (307) a total magnitude of the MDCT coefficients;normalizing each sub-band value of the first spectral shape; andstoring each normalized sub-band value as a value of the values of the first spectral shape.17. The decoder apparatus of Embodiment 16 wherein the total magnitude of the MDCT coefficients is determined in accordance with shape_tot=∑n=0NMDCT-1⁢q_d⁢(n)2where shape_tot is the total magnitude of the MDCT coefficients, NMDCTis a number of MDCT coefficients and depends on a sampling frequency, and q_d(n) are the MDCT coefficients.18. 
The decoder apparatus of any of Embodiments 16-17 where the normalizing of each sub-band is normalized in accordance with shapeold⁡(k)=1shape_tot⁢∑n=grp_bin⁢(k)grp_bin⁢(k+1)-1⁢q_d⁢(n)2,0≤k<Ngrpwhere shapeold(k) is a spectral shape of a sub-band (k), shape_tot is the total magnitude of the MDCT coefficients, q_d(n) are the MDCT coefficients, grp_bin(k) is a start index for the MDCT coefficients in sub-band(k), and Ngrpis the number of sub-bands.19. The decoder apparatus of any of Embodiments 15-18 wherein frame energy of the first frame energy and the second frame energy is determined in accordance with E_wold=∑n=0Lprot-1⁢(wwhr⁡(n)·xprev⁡(n))2where E_woldis the frame energy, wwhris along FFT spectral analysis window, xprevis a time domain prototype signal used to create a substitution for a potentially upcoming lost frame, and Lprotis a number of samples in the xprevsignal.20. The decoder apparatus of any of Embodiments 15-19, wherein to transform the values of the first spectral shape and the first frame energy into the first representation of a first fast FFT based spectral analysis and to transform the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis, the instructions comprise further instructions that, when executed by the processor, cause the apparatus to perform operations comprising:applying a conversion factor to the values of the first spectral shape and the first frame energy and to the values of the second spectral shape and the second frame energy.21. The decoder apparatus of Embodiment 20 wherein the conversion factor depends on a sampling frequency of the decoding.22. The decoder apparatus of any of Embodiments 20-21, further comprising:transforming the values of the first spectral shape and the first frame energy into the first representation of a first fast FFT based spectral analysis and transforming the values of the second spectral shape and the second frame energy into a second representation of a second FFT spectral analysis in accordance with Eoold(k)=μ·shapeoold(k)·E_woold, 0≤k<Ngrp and Eold(k)=μ·shapeold(k)·Ewold, 0≤k<Ngrpwhere Eoold(k) is the first representation, μ is the conversion factor, shapeoold(k) is a spectral shape of a sub-band (k) of the first spectral shape, E_wooldis the first frame energy, Eold(k) is the second representation, shapeold(k) is a spectral shape of a sub-band (k) f the second spectral shape, E_woldis the second frame energy, and Ngrpis the number of sub-bands.23. The decoder apparatus of Embodiment 22 wherein the instructions comprise further instructions that, when executed by the processor, cause the apparatus to perform operations further comprising:determining (1105) if a sub-band transient is above a threshold value based on Eoold(k) and Eold(k); andresponsive to a sub-band transient being above the threshold value, modifying the concealment method by selectively adjusting (1107) the spectrum magnitude of the substitution frame spectrum.24. The decoder apparatus of Embodiment 22 wherein the substitution frame spectrum is calculated according to an expression of Z(m)=α(m)·β(m)·Y(m)·ej(θk+ϑ(m))and adjusting the spectrum magnitude comprises adjusting β(m) (1107), where Z(m) is the substitution frame spectrum, α (m) is a first magnitude attenuation factor, β(m) is a second magnitude attenuation factor, Y(m) is a protype frame, θkis a phase shift, and ϑ(m) is an additive phase component25. 
The decoder apparatus of any of Embodiments 1-10 wherein the instructions comprise further instructions that, when executed by the processor, cause the apparatus to perform operations further comprising:receiving a bad frame indicator (403,501);responsive to receiving the bad frame indicator, flushing the shapeooldbuffer and the E_wooldenergy buffer;receiving a new audio frame of the received audio signal;determining values of a new spectral shape (503) based upon decoded MDCT coefficients from the decoded new audio frame and storing the calculated values of the new spectral shape in the shapeoldbuffer and the shapeooldbuffer (405), the new spectral shape comprising a number of sub-bands; anddetermining a new frame energy (505) of the audio frame and storing the calculated new frame energy in the E_woldbuffer and the E_wooldbuffer (407). ADDITIONAL EXPLANATION Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description. Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein, the disclosed subject matter should not be construed as limited to only the embodiments set forth herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art. Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessor or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. 
In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according one or more embodiments of the present disclosure. The term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, as such as those that are described herein. In the above-description of various embodiments of present inventive concepts, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of present inventive concepts. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which present inventive concepts belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. When an element is referred to as being “connected”, “coupled”, “responsive”, or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected”, “directly coupled”, “directly responsive”, or variants thereof to another element, there are no intervening elements present. Like numbers refer to like elements throughout. Furthermore, “coupled”, “connected”, “responsive”, or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of present inventive concepts. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification. As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but does not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. 
Furthermore, as used herein, the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia,” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation. Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s). These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of present inventive concepts may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module” or variants thereof. It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated, and/or blocks/operations may be omitted without departing from the scope of inventive concepts. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows. Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present inventive concepts. 
All such variations and modifications are intended to be included herein within the scope of present inventive concepts. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the examples of embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of present inventive concepts. Thus, to the maximum extent allowed by law, the scope of present inventive concepts are to be determined by the broadest permissible interpretation of the present disclosure including the examples of embodiments and their equivalents, and shall not be restricted or limited by the foregoing detailed description. Appendix 1 forms a part of this disclosure. Appendix 1 WO2014123471METHOD AND APPARATUS FOR CONTROLLING AUDIO FRAME LOSS CONCEALMENT Technical Field The application relates to methods and apparatuses for controlling a concealment method for a lost audio frame of a received audio signal. Background Conventional audio communication systems transmit speech and audio signals in frames, meaning that the sending side first arranges the signal in short segments or frames of e.g. 20-40 ms which subsequently are encoded and transmitted as a logical unit in e.g. a transmission packet. The receiver decodes each of these units and reconstructs the corresponding signal frames, which in turn are finally output as continuous sequence of reconstructed signal samples. Prior to encoding there is usually an analog to digital (A/D) conversion step that converts the analog speech or audio signal from a microphone into a sequence of audio samples. Conversely, at the receiving end, there is typically a final D/A conversion step that converts the sequence of reconstructed digital signal samples into a time continuous analog signal for loudspeaker playback. However, such transmission system for speech and audio signals may suffer from transmission errors, which could lead to a situation in which one or several of the transmitted frames are not available at the receiver for reconstruction. In that case, the decoder has to generate a substitution signal for each of the erased, i.e. unavailable frames. This is done in the so-called frame loss or error concealment unit of the receiver-side signal decoder. The purpose of the frame loss concealment is to make the frame loss as inaudible as possible and hence to mitigate the impact of the frame loss on the reconstructed signal quality as much as possible. Conventional frame loss concealment methods may depend on the structure or architecture of the codec, e.g. by applying a form of repetition of previously received codec parameters. Such parameter repetition techniques are clearly dependent on the specific parameters of the used codec and hence not easily applicable for other codecs with a different structure. Current frame loss concealment methods may e.g. apply the concept of freezing and extrapolating parameters of a previously received frame in order to generate a substitution frame for the lost frame. These state of the art frame loss concealment methods incorporate some burst loss handling schemes. In general, after a number of frame losses in a row the synthesized signal is attenuated until it is completely muted after long bursts of errors. In addition the coding parameters that are essentially repeated and extrapolated are modified such that the attenuation is accomplished and that spectral peaks are flattened out. 
Current state-of-the-art frame loss concealment techniques typically apply the concept of freezing and extrapolating parameters of a previously received frame in order to generate a substitution frame for the lost frame. Many parametric speech codecs such as linear predictive codecs like AMR or AMR-WB typically freeze the earlier received parameters or use some extrapolation thereof and use the decoder with them. In essence, the principle is to have a given model for coding/decoding and to apply the same model with frozen or extrapolated parameters. The frame loss concealment techniques of the AMR and AMR-WB can be regarded as representative. They are specified in detail in the corresponding standards specifications. Many codecs out of the class of audio codecs apply for coding frequency domain techniques. This means that after some frequency domain transform a coding model is applied on spectral parameters. The decoder reconstructs the signal spectrum from the received parameters and finally transforms the spectrum back to a time signal. Typically, the time signal is reconstructed frame by frame. Such frames are combined by overlap-add techniques to the final reconstructed signal. Even in that case of audio codecs, state-of-the-art error concealment typically applies the same or at least a similar decoding model for lost frames. The frequency domain parameters from a previously received frame are frozen or suitably extrapolated and then used in the frequency-to-time domain conversion. Examples for such techniques are provided with the 3GPP audio codecs according to 3GPP standards. Summary Current state-of-the-art solutions for frame loss concealment typically suffer from quality impairments. The main problem is that the parameter freezing and extrapolation technique and re-application of the same decoder model even for lost frames does not always guarantee a smooth and faithful signal evolution from the previously decoded signal frames to the lost frame. This leads typically to audible signal discontinuities with corresponding quality impact. New schemes for frame loss concealment for speech and audio transmission systems are described. The new schemes improve the quality in case of frame loss over the quality achievable with prior-art frame loss concealment techniques. The objective of the present embodiments is to control a frame loss concealment scheme that preferably is of the type of the related new methods described such that the best possible sound quality of the reconstructed signal is achieved. The embodiments aim at optimizing this reconstruction quality both with respect to the properties of the signal and of the temporal distribution of the frame losses. Particularly problematic for the frame loss concealment to provide good quality are cases when the audio signal has strongly varying properties such as energy onsets or offsets or if it is spectrally very fluctuating. In that case the described concealment methods may repeat the onset, offset or spectral fluctuation leading to large deviations from the original signal and corresponding quality loss. Another problematic case is if bursts of frame losses occur in a row. Conceptually, the scheme for frame loss concealment according to the methods described can cope with such cases, though it turns out that annoying tonal artifacts may still occur. It is another objective of the present embodiments to mitigate such artifacts to the highest possible degree. 
According to a first aspect, a method for a decoder of concealing a lost audio frame comprises detecting in a property of the previously received and reconstructed audio signal, or in a statistical property of observed frame losses, a condition for which the substitution of a lost frame provides relatively reduced quality. In case such a condition is detected, modifying the concealment method by selectively adjusting a phase or a spectrum magnitude of a substitution frame spectrum. According to a second aspect, a decoder is configured to implement a concealment of a lost audio frame, and comprises a controller configured to detect in a property of the previously received and reconstructed audio signal, or in a statistical property of observed frame losses, a condition for which the substitution of a lost frame provides relatively reduced quality. In case such a condition is detected, the controller is configured to modify the concealment method by selectively adjusting a phase or a spectrum magnitude of a substitution frame spectrum. The decoder can be implemented in a device, such as e.g. a mobile phone. According to a third aspect, a receiver comprises a decoder according to the second aspect described above. According to a fourth aspect, a computer program is defined for concealing a lost audio frame, and the computer program comprises instructions which when run by a processor causes the processor to conceal a lost audio frame, in agreement with the first aspect described above. According to a fifth aspect, a computer program product comprises a computer readable medium storing a computer program according to the above-described fourth aspect. An advantage with an embodiment addresses the control of adaptations frame loss concealment methods allowing mitigating the audible impact of frame loss in the transmission of coded speech and audio signals even further over the quality achieved with only the described concealment methods. The general benefit of the embodiments is to provide a smooth and faithful evolution of the reconstructed signal even for lost frames. The audible impact of frame losses is greatly reduced in comparison to using state-of-the-art techniques. Brief Description of the Drawings For a more complete understanding of example embodiments of the present invention, reference is now made to the following description taken in connection with the accompanying drawings in which: FIG.1shows a rectangular window function. FIG.2shows a combination of the Hamming window with the rectangular window. FIG.3shows an example of a magnitude spectrum of a window function. FIG.4illustrates a line spectrum of an exemplary sinusoidal signal with the frequency fk. FIG.5shows a spectrum of a windowed sinusoidal signal with the frequency fk. FIG.6illustrates bars corresponding to the magnitude of grid points of a DFT, based on an analysis frame. FIG.7illustrates a parabola fitting through DFT grid points P1, P2 and P3. FIG.8illustrates a fitting of a main lobe of a window spectrum. FIG.9illustrates a fitting of main lobe approximation function P through DFT grid points P1 and P2. FIG.10is a flow chart illustrating an example method according to embodiments of the invention for controlling a concealment method for a lost audio frame of a received audio signal. FIG.11is a flow chart illustrating another example method according to embodiments of the invention for controlling a concealment method for a lost audio frame of a received audio signal. 
FIG.12illustrates another example embodiment of the invention. FIG.13shows an example of an apparatus according to an embodiment of the invention. FIG.14shows another example of an apparatus according to an embodiment of the invention. FIG.15shows another example of an apparatus according to an embodiment of the invention. Detailed Description The new controlling scheme for the new frame loss concealment techniques described involve the following steps as shown inFIG.10. It should be noted that the method can be implemented in a controller in a decoder.1. Detect conditions in the properties of the previously received and reconstructed audio signal or in the statistical properties of the observed frame losses for which the substitution of a lost frame according to the described methods provides relatively reduced quality,101.2. In case such a condition is detected in step 1, modify the element of the methods according to which the substitution frame spectrum is calculated by Z(m)=Y(m)·ejθkby selectively adjusting the phases or the spectrum magnitudes,102. Sinusoidal Analysis A first step of the frame loss concealment technique to which the new controlling technique may be applied involves a sinusoidal analysis of a part of the previously received signal. The purpose of this sinusoidal analysis is to find the frequencies of the main sinusoids of that signal, and the underlying assumption is that the signal is composed of a limited number of individual sinusoids, i.e. that it is a multi-sine signal of the following type: s⁡(n)=∑k=1Kak·cos⁡(2⁢π⁢fkfs·n+φk). In this equation K is the number of sinusoids that the signal is assumed to consist of. For each of the sinusoids with index k=1 . . . K, akis the amplitude, fkis the frequency, and φkis the phase. The sampling frequency is denominated by fsand the time index of the time discrete signal samples s(n) by n. It is of main importance to find as exact frequencies of the sinusoids as possible. While an ideal sinusoidal signal would have a line spectrum with line frequencies fk, finding their true values would in principle require infinite measurement time. Hence, it is in practice difficult to find these frequencies since they can only be estimated based on a short measurement period, which corresponds to the signal segment used for the sinusoidal analysis described herein; this signal segment is hereinafter referred to as an analysis frame. Another difficulty is that the signal may in practice be time-variant, meaning that the parameters of the above equation vary over time. Hence, on the one hand it is desirable to use a long analysis frame making the measurement more accurate; on the other hand a short measurement period would be needed in order to better cope with possible signal variations. A good trade-off is to use an analysis frame length in the order of e.g. 20-40 ms. A preferred possibility for identifying the frequencies of the sinusoids fkis to make a frequency domain analysis of the analysis frame. To this end the analysis frame is transformed into the frequency domain, e.g. by means of DFT or DCT or similar frequency domain transforms. In case a DFT of the analysis frame is used, the spectrum is given by: X⁡(m)=D⁢F⁢T⁡(w⁡(n)·x⁡(n))=∑n=0L-1e-j⁢2⁢πL⁢m⁢n·w⁡(n)·x⁡(n). In this equation w(n) denotes the window function with which the analysis frame of length L is extracted and weighted. Typical window functions are e.g. rectangular windows that are equal to 1 for n∈[0 . . . L−1] and otherwise 0 as shown inFIG.1. 
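By way of illustration only, the windowed-DFT analysis and peak picking just described might be sketched as follows, using a rectangular window for simplicity (other windows are discussed next). The peak-picking rule and the parameter K are assumptions chosen for this example.

```python
import numpy as np

def sinusoidal_analysis(x, fs, L, K, window=None):
    """Coarse sinusoid frequency estimates f_hat_k = m_k * fs / L from an analysis frame."""
    frame = np.asarray(x, dtype=float)[-L:]            # analysis frame taken from the signal tail
    w = np.ones(L) if window is None else window       # rectangular window by default
    X = np.fft.rfft(w * frame)                         # DFT of the windowed analysis frame
    mag = np.abs(X)
    # Simple peak picking: local maxima of the magnitude spectrum.
    peaks = [m for m in range(1, len(mag) - 1)
             if mag[m] > mag[m - 1] and mag[m] >= mag[m + 1]]
    peaks = sorted(peaks, key=lambda m: mag[m], reverse=True)[:K]   # keep the K strongest
    return np.array(sorted(peaks)) * fs / L, X
```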
It is assumed here that the time indexes of the previously received audio signal are set such that the analysis frame is referenced by the time indexes n=0 . . . L−1. Other window functions that may be more suitable for spectral analysis are, e.g., Hamming window, Hanning window, Kaiser window or Blackman window. A window function that is found to be particular useful is a combination of the Hamming window with the rectangular window. This window has a rising edge shape like the left half of a Hamming window of length L1 and a falling edge shape like the right half of a Hamming window of length L1 and between the rising and falling edges the window is equal to 1 for the length of L-L1, as shown inFIG.2. The peaks of the magnitude spectrum of the windowed analysis frame |X(m)| constitute an approximation of the required sinusoidal frequencies fk. The accuracy of this approximation is however limited by the frequency spacing of the DFT. With the DFT with block length L the accuracy is limited to fs2⁢L. Experiments show that this level of accuracy may be too low in the scope of the methods described herein. Improved accuracy can be obtained based on the results of the following consideration: The spectrum of the windowed analysis frame is given by the convolution of the spectrum of the window function with the line spectrum of the sinusoidal model signal S(Ω), subsequently sampled at the grid points of the DFT: X⁡(m)=∫2⁢πδ⁡(Ω-m·2⁢πL)·(W⁡(Ω)*S⁡(Ω))·d⁢Ω. By using the spectrum expression of the sinusoidal model signal, this can be written as X⁡(m)=12⁢∫2⁢πδ⁡(Ω-m·2⁢πL)⁢∑k=1Kak·((W⁡(Ω+2⁢π⁢fkfs)·e-j⁢φk+W⁡(Ω-2⁢π⁢fkfs)⁢ej⁢φk)·d⁢Ω Hence, the sampled spectrum is given by X⁡(m)=12⁢∑k=1Kak·((W⁡(2⁢π⁡(mL+fkfs))·e-j⁢φk+W⁡(2⁢π⁡(mL-fkfs))·ej⁢φk)) with m=0 . . . L−1. Based on this consideration it is assumed that the observed peaks in the magnitude spectrum of the analysis frame stem from a windowed sinusoidal signal with K sinusoids where the true sinusoid frequencies are found in the vicinity of the peaks. Let mkbe the DFT index (grid point) of the observed kthpeak, then the corresponding frequency is fˆk=mkL·fs which can be regarded an approximation of the true sinusoidal frequency fk. The true sinusoid frequency fkcan be assumed to lie within the interval [(mk-12)·fsL,(mk+12)·fsL]. For clarity it is noted that the convolution of the spectrum of the window function with the spectrum of the line spectrum of the sinusoidal model signal can be understood as a superposition of frequency-shifted versions of the window function spectrum, whereby the shift frequencies are the frequencies of the sinusoids. This superposition is then sampled at the DFT grid points. These steps are illustrated by the following figures.FIG.3displays an example of the magnitude spectrum of a window function.FIG.4shows the magnitude spectrum (line spectrum) of an example sinusoidal signal with a single sinusoid of frequency.FIG.5shows the magnitude spectrum of the windowed sinusoidal signal that replicates and superposes the frequency-shifted window spectra at the frequencies of the sinusoid. The bars inFIG.6correspond to the magnitude of the grid points of the DFT of the windowed sinusoid that are obtained by calculating the DFT of the analysis frame. It should be noted that all spectra are periodic with the normalized frequency parameter Ω where Ω=2π that corresponds to the sampling frequency fs. 
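Returning briefly to the analysis window: the combined Hamming/rectangular window described above (Hamming-shaped rising and falling edges of total length L1 with a flat part equal to 1 of length L-L1 in between) can be constructed as in the following sketch. The function name and the evenness assumption on L1 are illustrative choices, not taken from the description above.

```python
import numpy as np

def hamming_rect_window(L, L1):
    """Rising/falling edges taken from a length-L1 Hamming window, equal to 1 in between."""
    assert 0 < L1 <= L and L1 % 2 == 0, "L1 assumed even and not larger than L"
    ham = np.hamming(L1)
    w = np.ones(L)
    w[:L1 // 2] = ham[:L1 // 2]        # rising edge: left half of the Hamming window
    w[L - L1 // 2:] = ham[L1 // 2:]    # falling edge: right half of the Hamming window
    return w
```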
The previous discussion and the illustration ofFIG.6suggest that a better approximation of the true sinusoidal frequencies can only be found through increasing the resolution of the search over the frequency resolution of the used frequency domain transform. One preferred way to find better approximations of the frequencies fkof the sinusoids is to apply parabolic interpolation. One such approach is to fit parabolas through the grid points of the DFT magnitude spectrum that surround the peaks and to calculate the respective frequencies belonging to the parabola maxima. A suitable choice for the order of the parabolas is 2. In detail the following procedure can be applied:1. Identify the peaks of the DFT of the windowed analysis frame. The peak search will deliver the number of peaks K and the corresponding DFT indexes of the peaks. The peak search can typically be made on the DFT magnitude spectrum or the logarithmic DFT magnitude spectrum.2. For each peak k (with k=1 . . . K) with corresponding DFT index mkfit a parabola through the three points {P1; P2; P3}={(mk−1, log(|X(mk−1)|); (mk, log(|X(mk)|); (mk+1, log(|X(mk+1)|)}. This results in parabola coefficients bk(0), bk(1), bk(2) of the parabola defined by pk(q)=∑i=02bk(i)·qi. This parabola fitting is illustrated inFIG.7.3. For each of the K parabolas calculate the interpolated frequency index {circumflex over (m)}kcorresponding to the value of q for which the parabola has its maximum. Use={circumflex over (m)}k·fs/L as approximation for the sinusoid frequency fk. The described approach provides good results but may have some limitations since the parabolas do not approximate the shape of the main lobe of the magnitude spectrum |W(Ω)| of the window function. An alternative scheme doing this is an enhanced frequency estimation using a main lobe approximation, described as follows. The main idea of this alternative is to fit a function P(q), which approximates the main lobe of ❘"\[LeftBracketingBar]"W⁡(2⁢πL·q)❘"\[RightBracketingBar]", through the grid points of the DFT magnitude spectrum that surround the peaks and to calculate the respective frequencies belonging to the function maxima. The function P(q) could be identical to the frequency-shifted magnitude spectrum ❘"\[LeftBracketingBar]"W⁡(2⁢πL·(q-qˆ))| of the window function. For numerical simplicity it should however rather for instance be a polynomial which allows for straightforward calculation of the function maximum. The following detailed procedure can be applied:1. Identify the peaks of the DFT of the windowed analysis frame. The peak search will deliver the number of peaks K and the corresponding DFT indexes of the peaks. The peak search can typically be made on the DFT magnitude spectrum or the logarithmic DFT magnitude spectrum.2. Derive the function P(q) that approximates the magnitude spectrum ❘"\[LeftBracketingBar]"W⁡(2⁢πL·q)|of the window function or of the logarithmic magnitude spectrum log⁢❘"\[LeftBracketingBar]"W⁡(2⁢πL·q)|for a given interval (q1,q2). The choice of the approximation function approximating the window spectrum main lobe is illustrated byFIG.8.3. For each peak k (with k=1 . . . K) with corresponding DFT index mkfit the frequency-shifted function P(q−{circumflex over (q)}k) through the two DFT grid points that surround the expected true peak of the continuous spectrum of the windowed sinusoidal signal. 
Hence, if |X(mk−1)| is larger than |X(mk+1)| fit P(q−{circumflex over (q)}k) through the points {P1; P2}={(mk−1, log(|X(mk−1)|); (mk, log(|X(mk)|)} and otherwise through the points {P1; P2}={(mk, log(|X(mk)|); (mk+1, log(|X(mk+1)|)}. P(q) can for simplicity be chosen to be a polynomial either of order 2 or 4. This renders the approximation in step 2 a simple linear regression calculation and the calculation of {circumflex over (q)}kstraightforward. The interval (q1,q2) can be chosen to be fixed and identical for all peaks, e.g. (q1,q2)=(−1,1), or adaptive. In the adaptive approach the interval can be chosen such that the function P(q−{circumflex over (q)}k) fits the main lobe of the window function spectrum in the range of the relevant DFT grid points {P1; P2}. The fitting process is visualized inFIG.9.4. For each of the K frequency shift parameters {circumflex over (q)}kfor which the continuous spectrum of the windowed sinusoidal signal is expected to have its peak calculate {circumflex over (f)}k={circumflex over (q)}k·fs/L as approximation for the sinusoid frequency fk. There are many cases where the transmitted signal is harmonic meaning that the signal consists of sine waves which frequencies are integer multiples of some fundamental frequency f0. This is the case when the signal is very periodic like for instance for voiced speech or the sustained tones of some musical instrument. This means that the frequencies of the sinusoidal model of the embodiments are not independent but rather have a harmonic relationship and stem from the same fundamental frequency. Taking this harmonic property into account can consequently improve the analysis of the sinusoidal component frequencies substantially. One enhancement possibility is outlined as follows:1. Check whether the signal is harmonic. This can for instance be done by evaluating the periodicity of signal prior to the frame loss. One straightforward method is to perform an autocorrelation analysis of the signal. The maximum of such autocorrelation function for some time lag τ>0 can be used as an indicator. If the value of this maximum exceeds a given threshold, the signal can be regarded harmonic. The corresponding time lag τ then corresponds to the period of the signal which is related to the fundamental frequency through f0=fsτ. Many linear predictive speech coding methods apply so-called open or closed-loop pitch prediction or CELP coding using adaptive codebooks. The pitch gain and the associated pitch lag parameters derived by such coding methods are also useful indicators if the signal is harmonic and, respectively, for the time lag. A further method for obtaining f0is described below.2. For each harmonic index j within the integer range check whether there is a peak in the (logarithmic) DFT magnitude spectrum of the analysis frame within the vicinity of the harmonic frequency fj=j·f0. The vicinity of fjmay be defined as the delta range around fjwhere delta corresponds to the frequency resolution of the DFT fsL,i.e. the interval [j·f0-fs2·L,j·f0+fs2·L]. In case such a peak with corresponding estimated sinusoidal frequencyis present, supersedeby=j·f0. For the two-step procedure given above there is also the possibility to make the check whether the signal is harmonic and the derivation of the fundamental frequency implicitly and possibly in an iterative fashion without necessarily using indicators from some separate method. An example for such a technique is given as follows: For each f0,pout of a set of candidate values {f0,1. . . 
f0,P} apply the procedure step 2, though without supersedingbut with counting how many DFT peaks are present within the vicinity around the harmonic frequencies, i.e. the integer multiples of f0,p. Identify the fundamental frequency f0,pmaxfor which the largest number of peaks at or around the harmonic frequencies is obtained. If this largest number of peaks exceeds a given threshold, then the signal is assumed to be harmonic. In that case f0,pmaxcan be assumed to be the fundamental frequency with which step 2 is then executed leading to enhanced sinusoidal frequencies. A more preferable alternative is however first to optimize the fundamental frequency f0based on the peak frequenciesthat have been found to coincide with harmonic frequencies. Assume a set of M harmonics, i.e. integer multiples {n1. . . nM} of some fundamental frequency that have been found to coincide with some set of M spectral peaks at frequencies(m), m=1 . . . M, then the underlying (optimized) fundamental frequency f0,optcan be calculated to minimize the error between the harmonic frequencies and the spectral peak frequencies. If the error to be minimized is the mean square error E2=∑m=1M(nm·f0-fˆk⁡(m))2, then the optimal fundamental frequency is calculated as f0,opt=∑m=1Mnm·f^k⁡(m)∑m=1Mnm2. The initial set of candidate values {f0,1. . . f0,P} can be obtained from the frequencies of the DFT peaks or the estimated sinusoidal frequencies {circumflex over (f)}k. A further possibility to improve the accuracy of the estimated sinusoidal frequencies {circumflex over (f)}kis to consider their temporal evolution. To that end, the estimates of the sinusoidal frequencies from a multiple of analysis frames can be combined for instance by means of averaging or prediction. Prior to averaging or prediction a peak tracking can be applied that connects the estimated spectral peaks to the respective same underlying sinusoids. Applying the Sinusoidal Model The application of a sinusoidal model in order to perform a frame loss concealment operation described herein may be described as follows. It is assumed that a given segment of the coded signal cannot be reconstructed by the decoder since the corresponding encoded information is not available. It is further assumed that a part of the signal prior to this segment is available. Let y(n) with n=0 . . . N−1 be the unavailable segment for which a substitution frame z(n) has to be generated and y(n) with n<0 be the available previously decoded signal. Then, in a first step a prototype frame of the available signal of length L and start index n−1is extracted with a window function w(n) and transformed into frequency domain, e.g. by means of DFT: Y-1(m)=∑n=0L-1y⁡(n-n-1)·w⁡(n)·ej⁢2⁢πL⁢nm. The window function can be one of the window functions described above in the sinusoidal analysis. Preferably, in order to save numerical complexity, the frequency domain transformed frame should be identical with the one used during sinusoidal analysis. In a next step the sinusoidal model assumption is applied. According to that the DFT of the prototype frame can be written as follows: Y-1(m)=12⁢∑k=1Kak·((W⁡(2⁢π⁡(mL+fkfs))·e-j⁢φk+W⁡(2⁢π⁡(mL-fkfs))·ej⁢φk)). The next step is to realize that the spectrum of the used window function has only a significant contribution in a frequency range close to zero. 
As illustrated inFIG.3the magnitude spectrum of the window function is large for frequencies close to zero and small otherwise (within the normalized frequency range from −π to π, corresponding to half the sampling frequency). Hence, as an approximation it is assumed that the window spectrum W(m) is non-zero only for an interval M=[−mmin, mmax], with mminand mmaxbeing small positive numbers. In particular, an approximation of the window function spectrum is used such that for each k the contributions of the shifted window spectra in the above expression are strictly non-overlapping. Hence in the above equation for each frequency index there is always only at maximum the contribution from one summand, i.e. from one shifted window spectrum. This means that the expression above reduces to the following approximate expression: Y^-1(m)=ak2·W⁡(2⁢π⁡(mL-fkfs))·ej⁢φk for non-negative m∈Mkand for each k. Herein, Mkdenotes the integer interval Mk=[round(fkfs·L)-mmin,k,round(fkfs·L)+mmax,k], where mmin,kand mmax,kfulfill the above explained constraint such that the intervals are not overlapping. A suitable choice for mmin,kand mmax,kis to set them to a small integer value δ, e.g. δ=3. If however the DFT indices related to two neighboring sinusoidal frequencies fkand fk+1are less than 2δ, then δ is set to floor(round(fk+1fs·L)·round(fkfs·L)2) such that it is ensured that the intervals are not overlapping. The function floor (⋅) is the closest integer to the function argument that is smaller or equal to it. The next step according to the embodiment is to apply the sinusoidal model according to the above expression and to evolve its K sinusoids in time. The assumption that the time indices of the erased segment compared to the time indices of the prototype frame differs by samples means that the phases of the sinusoids advance by θk=2⁢π·fkfs⁢n-1. Hence, the DFT spectrum of the evolved sinusoidal model is given by: Y0(m)=12⁢∑k=1Kak·((W⁡(2⁢π⁡(mL+fkfs))·e-j⁡(φk+θk)+W⁡(2⁢π⁡(mL-fkfs))·ej⁡(φk+θk))). Applying again the approximation according to which the shifted window function spectra do no overlap gives: Y^0(m)=ak2·W⁡(2⁢π⁡(mL-fkfs))·ej⁡(φk+θk) for non-negative m c Mk and for each k. Comparing the DFT of the prototype frame Y−1(m) with the DFT of evolved sinusoidal model Y0(m) by using the approximation, it is found that the magnitude spectrum remains unchanged while the phase is shifted by θk=2⁢π·fkfs⁢n-1, for each m∈Mk. Hence, the frequency spectrum coefficients of the prototype frame in the vicinity of each sinusoid are shifted proportional to the sinusoidal frequency fkand the time difference between the lost audio frame and the prototype frame n−1. Hence, according to the embodiment the substitution frame can be calculated by the following expression: z(n)=IDTF{Z(m)} withZ(m)=Y(m)·ejθkfor non-negativem∈Mkand for eachk. A specific embodiment addresses phase randomization for DFT indices not belonging to any interval Mk. As described above, the intervals Mk, k=1 . . . K have to be set such that they are strictly non-overlapping which is done using some parameter δ which controls the size of the intervals. It may happen that δ is small in relation to the frequency distance of two neighboring sinusoids. Hence, in that case it happens that there is a gap between two intervals. Consequently, for the corresponding DFT indices m no phase shift according to the above expression Z(m)=Y(m)·ejθkis defined. 
A suitable choice according to this embodiment is to randomize the phase for these indices, yielding Z(m)=Y(m)·ej2πrand(−), where the function rand(⋅) returns some random number. It has been found beneficial for the quality of the reconstructed signals to optimize the size of the intervals Mk. In particular, the intervals should be larger if the signal is very tonal, i.e. when it has clear and distinct spectral peaks. This is the case for instance when the signal is harmonic with a clear periodicity. In other cases where the signal has less pronounced spectral structure with broader spectral maxima, it has been found that using small intervals leads to better quality. This finding leads to a further improvement according to which the interval size is adapted according to the properties of the signal. One realization is to use a tonality or a periodicity detector. If this detector identifies the signal as tonal, the δ-parameter controlling the interval size is set to a relatively large value. Otherwise, the δ-parameter is set to relatively smaller values. Based on the above, the audio frame loss concealment methods involve the following steps:1. Analyzing a segment of the available, previously synthesized signal to obtain the constituent sinusoidal frequencies fkof a sinusoidal model, optionally using an enhanced frequency estimation.2. Extracting a prototype frame y−1from the available previously synthesized signal and calculate the DFT of that frame.3. Calculating the phase shift θkfor each sinusoid k in response to the sinusoidal frequency fkand the time advance between the prototype frame and the substitution frame. Optionally in this step the size of the interval M may have been adapted in response to the tonality of the audio signal.4. For each sinusoid k advancing the phase of the prototype frame DFT with θkselectively for the DFT indices related to a vicinity around the sinusoid frequency fk.5. Calculating the inverse DFT of the spectrum obtained in step 4. Signal and Frame Loss Property Analysis and Detection The methods described above are based on the assumption that the properties of the audio signal do not change significantly during the short time duration from the previously received and reconstructed signal frame and a lost frame. In that case it is a very good choice to retain the magnitude spectrum of the previously reconstructed frame and to evolve the phases of the sinusoidal main components detected in the previously reconstructed signal. There are however cases where this assumption is wrong which are for instance transients with sudden energy changes or sudden spectral changes. A first embodiment of a transient detector according to the invention can consequently be based on energy variations within the previously reconstructed signal. This method, illustrated inFIG.11, calculates the energy in a left part and a right part of some analysis frame113. The analysis frame may be identical to the frame used for sinusoidal analysis described above. A part (either left or right) of the analysis frame may be the first or respectively the last half of the analysis frame or e.g. the first or respectively the last quarter of the analysis frame,110. The respective energy calculation is done by summing the squares of the samples in these partial frames: Eleft=∑n=0Npart-1y2(n-nleft),and⁢Eright=∑n=0Npart-1y2(n-nright). Herein y(n) denotes the analysis frame, nleftand nrightdenote the respective start indices of the partial frames that are both of size Npart. 
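The phase-evolution step of the concealment method summarized above, together with the partial-frame energies just defined, can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions: the per-sinusoid interval is simply clamped to ±δ bins around each peak rather than enforcing the non-overlap rule described earlier, Y_prev is assumed to have length L, and the function and parameter names are illustrative.

```python
import numpy as np

def evolve_prototype_spectrum(Y_prev, sin_freqs, fs, L, n_m1, delta=3):
    # Advance the phase of the prototype-frame DFT by theta_k = 2*pi*f_k/fs * n_m1
    # for the bins in a small interval around each sinusoid; the mirrored bins get
    # the conjugate shift so that the substitution frame stays real-valued.
    Z = np.array(Y_prev, dtype=complex)
    for f_k in sin_freqs:
        m_k = int(round(f_k / fs * L))
        theta = 2.0 * np.pi * f_k / fs * n_m1
        lo, hi = max(1, m_k - delta), min(L // 2, m_k + delta)
        Z[lo:hi + 1] *= np.exp(1j * theta)
        Z[L - hi:L - lo + 1] *= np.exp(-1j * theta)
    return np.fft.ifft(Z).real        # substitution frame z(n)

def partial_frame_energy_ratio(frame, n_part):
    # Energies of the first ("left") and last ("right") n_part samples of the
    # analysis frame, and the ratio used by the transient detector below.
    e_left = float(np.sum(frame[:n_part] ** 2))
    e_right = float(np.sum(frame[-n_part:] ** 2))
    return e_left, e_right, e_left / max(e_right, 1e-12)
```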
Now the left and right partial frame energies are used for the detection of a signal discontinuity. This is done by calculating the ratio Rl/r = Eleft/Eright. A discontinuity with sudden energy decrease (offset) can be detected if the ratio Rl/r exceeds some threshold (e.g. 10),115. Similarly a discontinuity with sudden energy increase (onset) can be detected if the ratio Rl/r is below some other threshold (e.g. 0.1),117. In the context of the above described concealment methods it has been found that the above defined energy ratio may in many cases be a too insensitive indicator. In particular in real signals and especially music there are cases where a tone at some frequency suddenly emerges while some other tone at some other frequency suddenly stops. Analyzing such a signal frame with the above-defined energy ratio would in any case lead to a wrong detection result for at least one of the tones since this indicator is insensitive to different frequencies. A solution to this problem is described in the following embodiment. The transient detection is now done in the time-frequency plane. The analysis frame is again partitioned into a left and a right partial frame,110. Though now, these two partial frames are (after suitable windowing with e.g. a Hamming window,111) transformed into the frequency domain, e.g. by means of a Npart-point DFT,112: Yleft(m)=DFT{y(n−nleft)}Npart and Yright(m)=DFT{y(n−nright)}Npart, with m=0 . . . Npart−1. Now the transient detection can be done frequency selectively for each DFT bin with index m. Using the powers of the left and right partial frame magnitude spectra, for each DFT index m a respective energy ratio can be calculated,113, as Rl/r(m) = |Yleft(m)|^2/|Yright(m)|^2. Experiments show that frequency selective transient detection with DFT bin resolution is relatively imprecise due to statistical fluctuations (estimation errors). It was found that the quality of the operation is rather enhanced when making the frequency selective transient detection on the basis of frequency bands. Let Ik=[mk−1+1, . . . , mk] specify the kth interval, k=1 . . . K, covering the DFT bins from mk−1+1 to mk, then these intervals define K frequency bands. The frequency group selective transient detection can now be based on the band-wise ratio between the respective band energies of the left and right partial frames: Rl/r,band(k) = (Σm∈Ik|Yleft(m)|^2)/(Σm∈Ik|Yright(m)|^2). It is to be noted that the interval Ik={mk−1+1, . . . , mk} corresponds to the frequency band Bk = [(mk−1+1)·fs/Npart, . . . , mk·fs/Npart], where fs denotes the audio sampling frequency. The lowest lower frequency band boundary m0 can be set to 0 but may also be set to a DFT index corresponding to a larger frequency in order to mitigate estimation errors that grow with lower frequencies. The highest upper frequency band boundary mK can be set to Npart/2 but is preferably chosen to correspond to some lower frequency in which a transient still has a significant audible effect. A suitable choice for these frequency band sizes or widths is to make them of equal size with e.g. a width of several hundred Hz. Another preferred way is to make the frequency band widths follow the size of the human auditory critical bands, i.e. to relate them to the frequency resolution of the auditory system.
This means approximately to make the frequency band widths equal for frequencies up to 1 kHz and to increase them exponentially above 1 kHz. Exponential increase means for instance to double the frequency bandwidth when incrementing the band index k. As described in the first embodiment of the transient detector that was based on an energy ratio of two partial frames, any of the ratios related to band energies or DFT bin energies of two partial frames are compared to certain thresholds. A respective upper threshold for (frequency selective) offset detection115and a respective lower threshold for (frequency selective) onset detection117is used. A further audio signal dependent indicator that is suitable for an adaptation of the frame loss concealment method can be based on the codec parameters transmitted to the decoder. For instance, the codec may be a multi-mode codec like ITU-T G.718. Such codec may use particular codec modes for different signal types and a change of the codec mode in a frame shortly before the frame loss may be regarded as an indicator for a transient. Another useful indicator for adaptation of the frame loss concealment is a codec parameter related to a voicing property and the transmitted signal. Voicing relates to highly periodic speech that is generated by a periodic glottal excitation of the human vocal tract. A further preferred indicator is whether the signal content is estimated to be music or speech. Such an indicator can be obtained from a signal classifier that may typically be part of the codec. In case the codec performs such a classification and makes a corresponding classification decision available as a coding parameter to the decoder, this parameter is preferably used as signal content indicator to be used for adapting the frame loss concealment method. Another indicator that is preferably used for adaptation of the frame loss concealment methods is the burstiness of the frame losses. Burstiness of frame losses means that there occur several frame losses in a row, making it hard for the frame loss concealment method to use valid recently decoded signal portions for its operation. A state-of-the-art indicator is the number nburstof observed frame losses in a row. This counter is incremented with one upon each frame loss and reset to zero upon the reception of a valid frame. This indicator is also used in the context of the present example embodiments of the invention. Adaptation of the Frame Loss Concealment Method In case the steps carried out above indicate a condition suggesting an adaptation of the frame loss concealment operation the calculation of the spectrum of the substitution frame is modified. While the original calculation of the substitution frame spectrum is done according to the expression Z(m)=Y(m)·ejθk, now an adaptation is introduced modifying both magnitude and phase. The magnitude is modified by means of scaling with two factors α(m) and β(m) and the phase is modified with an additive phase component9(m). This leads to the following modified calculation of the substitution frame: Z(m)=α(m)·β(m)·Y(m)·ej(θk+ϑ(m)). It is to be noted that the original (non-adapted) frame-loss concealment methods is used if α(m)=1, β(m)=1, and ϑ(m)=0. These respective values are hence the default. The general objective with introducing magnitude adaptations is to avoid audible artifacts of the frame loss concealment method. Such artifacts may be musical or tonal sounds or strange sounds arising from repetitions of transient sounds. 
Such artifacts would in turn lead to quality degradations, which avoidance is the objective of the described adaptations. A suitable way to such adaptations is to modify the magnitude spectrum of the substitution frame to a suitable degree. FIG.12illustrates an embodiment of concealment method modification. Magnitude adaptation,123, is preferably done if the burst loss counter nburstexceeds some threshold thrburst, e.g. thrburst=3,121. In that case a value smaller than 1 is used for the attenuation factor, e.g. α(m)=0.1. It has however been found that it is beneficial to perform the attenuation with gradually increasing degree. One preferred embodiment which accomplishes this is to define a logarithmic parameter specifying a logarithmic increase in attenuation per frame, att_per_frame. Then, in case the burst counter exceeds the threshold the gradually increasing attenuation factor is calculated by α(m)=10c·att_per_frame(nburst−thrburst). Here the constant c is mere a scaling constant allowing to specify the parameter att_per_frame for instance in decibels (dB). An additional preferred adaptation is done in response to the indicator whether the signal is estimated to be music or speech. For music content in comparison with speech content it is preferable to increase the threshold thrburstand to decrease the attenuation per frame. This is equivalent with performing the adaptation of the frame loss concealment method with a lower degree. The background of this kind of adaptation is that music is generally less sensitive to longer loss bursts than speech. Hence, the original, i.e. the unmodified frame loss concealment method is still preferable for this case, at least for a larger number of frame losses in a row. A further adaptation of the concealment method with regards to the magnitude attenuation factor is preferably done in case a transient has been detected based on that the indicator Rl/r,band(k) or alternatively Rl/r(m) or Rl/rhave passed a threshold,122. In that case a suitable adaptation action,125, is to modify the second magnitude attenuation factor β(m) such that the total attenuation is controlled by the product of the two factors α(m)·β(m). β(m) is set in response to an indicated transient. In case an offset is detected the factor β(m) is preferably be chosen to reflect the energy decrease of the offset. A suitable choice is to set β(m) to the detected gain change: β(m)=√{square root over (Rl/r,band(k))}, form∈lk,k=1 . . .K. In case an onset is detected it is rather found advantageous to limit the energy increase in the substitution frame. In that case the factor can be set to some fixed value of e.g. 1, meaning that there is no attenuation but not any amplification either. In the above it is to be noted that the magnitude attenuation factor is preferably applied frequency selectively, i.e. with individually calculated factors for each frequency band. In case the band approach is not used, the corresponding magnitude attenuation factors can still be obtained in an analogue way. β(m) can then be set individually for each DFT bin in case frequency selective transient detection is used on DFT bin level. Or, in case no frequency selective transient indication is used at all β(m) can be globally identical for all m. A further preferred adaptation of the magnitude attenuation factor is done in conjunction with a modification of the phase by means of the additional phase component ϑ(m)127. 
In case for a given m such a phase modification is used, the attenuation factor β(m) is reduced even further. Preferably, even the degree of phase modification is taken into account. If the phase modification is only moderate, β(m) is only scaled down slightly, while if the phase modification is strong, β(m) is scaled down to a larger degree. The general objective with introducing phase adaptations is to avoid too strong tonality or signal periodicity in the generated substitution frames, which in turn would lead to quality degradations. A suitable way to such adaptations is to randomize or dither the phase to a suitable degree. Such phase dithering is accomplished if the additional phase component ϑ(m) is set to a random value scaled with some control factor: ϑ(m)=a(m)·rand(⋅). The random value obtained by the function rand(⋅) is for instance generated by some pseudo-random number generator. It is here assumed that it provides a random number within the interval [0, 2π]. The scaling factor α(m) in the above equation control the degree by which the original phase θkis dithered. The following embodiments address the phase adaptation by means of controlling this scaling factor. The control of the scaling factor is done in an analogue way as the control of the magnitude modification factors described above. According to a first embodiment scaling factor a(m) is adapted in response to the burst loss counter. If the burst loss counter nburstexceeds some threshold thrburst, e.g. thrburst=3, a value larger than 0 is used, e.g. a(m)=0.2. It has however been found that it is beneficial to perform the dithering with gradually increasing degree. One preferred embodiment which accomplishes this is to define a parameter specifying an increase in dithering per frame, dith_increase_per_frame. Then in case the burst counter exceeds the threshold the gradually increasing dithering control factor is calculated by a(m)=dith_increase_per_frame·(nburst−thrburst). It is to be noted in the above formula that α(m) has to be limited to a maximum value of 1 for which full phase dithering is achieved. It is to be noted that the burst loss threshold value thrburstused for initiating phase dithering may be the same threshold as the one used for magnitude attenuation. However, better quality can be obtained by setting these thresholds to individually optimal values, which generally means that these thresholds may be different. An additional preferred adaptation is done in response to the indicator whether the signal is estimated to be music or speech. For music content in comparison with speech content it is preferable to increase the threshold thrburstmeaning that phase dithering for music as compared to speech is done only in case of more lost frames in a row. This is equivalent with performing the adaptation of the frame loss concealment method for music with a lower degree. The background of this kind of adaptation is that music is generally less sensitive to longer loss bursts than speech. Hence, the original, i.e. unmodified frame loss concealment method is still preferable for this case, at least for a larger number of frame losses in a row. A further preferred embodiment is to adapt the phase dithering in response to a detected transient. In that case a stronger degree of phase dithering can be used for the DFT bins m for which a transient is indicated either for that bin, the DFT bins of the corresponding frequency band or of the whole frame. 
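The burst-loss driven magnitude attenuation and phase-dithering control described in the preceding paragraphs can be sketched as follows. This is a minimal sketch; the choice c = -1/20, which lets att_per_frame_db be specified as an attenuation in dB per frame, and the default parameter values are assumptions made for illustration only.

```python
def burst_attenuation(n_burst, thr_burst=3, att_per_frame_db=1.0):
    # alpha(m) = 10 ** (c * att_per_frame * (n_burst - thr_burst)); c is chosen
    # here as -1/20 so that att_per_frame_db is an attenuation in dB per frame
    # (this particular value of c is an assumption, not taken from the text).
    if n_burst <= thr_burst:
        return 1.0
    c = -1.0 / 20.0
    return 10.0 ** (c * att_per_frame_db * (n_burst - thr_burst))

def phase_dither_control(n_burst, thr_burst=3, dith_increase_per_frame=0.2):
    # a(m) grows linearly with every additional lost frame beyond the threshold
    # and is capped at 1, which corresponds to full phase dithering.
    if n_burst <= thr_burst:
        return 0.0
    return min(1.0, dith_increase_per_frame * (n_burst - thr_burst))
```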
Part of the schemes described address optimization of the frame loss concealment method for harmonic signals and particularly for voiced speech. In case the methods using an enhanced frequency estimation as described above are not realized another adaptation possibility for the frame loss concealment method optimizing the quality for voiced speech signals is to switch to some other frame loss concealment method that specifically is designed and optimized for speech rather than for general audio signals containing music and speech. In that case, the indicator that the signal comprises a voiced speech signal is used to select another speech-optimized frame loss concealment scheme rather than the schemes described above. The embodiments apply to a controller in a decoder, as illustrated inFIG.13.FIG.13is a schematic block diagram of a decoder according to the embodiments. The decoder130comprises an input unit132configured to receive an encoded audio signal. The figure illustrates the frame loss concealment by a logical frame loss concealment-unit134, which indicates that the decoder is configured to implement a concealment of a lost audio frame, according to the above-described embodiments. Further the decoder comprises a controller136for implementing the embodiments described above. The controller136is configured to detect conditions in the properties of the previously received and reconstructed audio signal or in the statistical properties of the observed frame losses for which the substitution of a lost frame according to the described methods provides relatively reduced quality. In case such a condition is detected, the controller136is configured to modify the element of the concealment methods according to which the substitution frame spectrum is calculated by Z(m)=Y(m)·ejθkby selectively adjusting the phases or the spectrum magnitudes. The detection can be performed by a detector unit146and modifying can be performed by a modifier unit148as illustrated inFIG.14. The decoder with its including units could be implemented in hardware. There are numerous variants of circuitry elements that can be used and combined to achieve the functions of the units of the decoder. Such variants are encompassed by the embodiments. Particular examples of hardware implementation of the decoder is implementation in digital signal processor (DSP) hardware and integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry. The decoder150described herein could alternatively be implemented e.g. as illustrated inFIG.15, i.e. by one or more of a processor154and adequate software155with suitable storage or memory156therefore, in order to reconstruct the audio signal, which includes performing audio frame loss concealment according to the embodiments described herein, as shown inFIG.13. The incoming encoded audio signal is received by an input (IN)152, to which the processor154and the memory156are connected. The decoded and reconstructed audio signal obtained from the software is outputted from the output (OUT)158. The technology described above may be used e.g. in a receiver, which can be used in a mobile device (e.g. mobile phone, laptop) or a stationary device, such as a personal computer. It is to be understood that the choice of interacting units or modules, as well as the naming of the units are only for exemplary purpose, and may be configured in a plurality of alternative ways in order to be able to execute the disclosed process actions. 
It should also be noted that the units or modules described in this disclosure are to be regarded as logical entities and not with necessity as separate physical entities. It will be appreciated that the scope of the technology disclosed herein fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of this disclosure is accordingly not to be limited. Reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed hereby. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the technology disclosed herein, for it to be encompassed hereby. In the preceding description, for purposes of explanation and not limitation, specific details are set forth such as particular architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the disclosed technology. However, it will be apparent to those skilled in the art that the disclosed technology may be practiced in other embodiments and/or combinations of embodiments that depart from these specific details. That is, those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the disclosed technology. In some instances, detailed descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the disclosed technology with unnecessary detail. All statements herein reciting principles, aspects, and embodiments of the disclosed technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, e.g. any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that the figures herein can represent conceptual views of illustrative circuitry or other functional units embodying the principles of the technology, and/or various processes which may be substantially represented in computer readable medium and executed by a computer or processor, even though such computer or processor may not be explicitly shown in the figures. The functions of the various elements including functional blocks may be provided through the use of hardware such as circuit hardware and/or hardware capable of executing software in the form of coded instructions stored on computer readable medium. Thus, such functions and illustrated functional blocks are to be understood as being either hardware-implemented and/or computer-implemented, and thus machine-implemented. The embodiments described above are to be understood as a few illustrative examples of the present invention. It will be understood by those skilled in the art that various modifications, combinations and changes may be made to the embodiments without departing from the scope of the present invention. In particular, different part solutions in the different embodiments can be combined in other configurations, where technically possible.
11862181
DETAILED DESCRIPTION The inventive concept will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the inventive concept are shown. This inventive concept may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. Like numbers refer to like elements throughout the description. Any step or feature illustrated by dashed lines should be regarded as optional. In the following description, terms channel coherence and spatial coherence are interchangeably used. When two mono encoders each having its own DTX system working separately on the signals in each of the two stereo channels are used, different energy and spectral shape in the two different signals can be transmitted. In most realistic cases the difference in energy and spectral shape between the signal in the left channel and the signal in the right channel will not be large but there could still be a big difference in how wide the stereo image of the signal is perceived. If the random sequences used to generate the comfort noise is synchronized between the signal in the left channel and the signal in the right channel the result will be a stereo signal sounds with a very narrow stereo image and which gives the sensation of the sound originating from within the head of the user. If instead the signal in the left channel and the signal in the right channel would not be synchronized it would give the opposite effect, i.e. a signal with a very wide stereo image. In most cases the original background noise will have a stereo image that is somewhere in-between these two extremes which mean that there would be an annoying difference in the stereo image when the transmitting device switches between active speech encoding and non-active noise encoding. The perceived stereo image width of the original background noise might also change during a call, e.g. because the user of the transmitting device is moving around and/or because of things occurring in the background. A system with two mono encoders each having its own DTX system has no mechanism to follow these changes. One additional issue with using a dual mono DTX system is that the VAD decision will not be synchronized between the two channels, which might lead to audible artifacts when e.g. the signal in the left channel is encoded with active encoding and the signal in the right channel is encoded with the low bit rate comfort noise encoding. It might also lead to that the random sequence will be synchronized in some time instances and unsynchronized in others, resulting in a stereo image that toggles between being extremely wide and extremely narrow over time. Hence, there is still a need for an improved generation of comfort noise for two or more channels. FIG.1is a schematic diagram illustrating a communication network100where embodiments presented herein can be applied. The communication network100comprises a transmitting node200acommunicating with a receiving node200bover a communications link110. The transmitting node200amight communicate with the receiving node200bover a direct communication link110or over an indirect communication link110via one or more other devices, nodes, or entities, such as network nodes, etc. in the communication network100. 
In some aspects the transmitting node200ais part of a radio transceiver device200and the receiving node200bis part of another radio transceiver device200. Additionally, in some aspects the radio transceiver device200comprises both the transmitting node200aand the receiving node200b. There could be different examples of radio transceiver devices. Examples include, but are not limited to, portable wireless devices, mobile stations, mobile phones, handsets, wireless local loop phones, user equipment (UE), smartphones, laptop computers, and tablet computers. As disclosed above, a DTX system can be used in order to transmit encoded speech/audio only when needed.FIG.2is a schematic block diagram of a DTX system300for one or more audio channels. The DTX system300could be part of, collocated with, or implemented in, the transmitting node200a. Input audio is provided to a VAD310, a speech/audio encoder320and a CNG encoder330. The speech/audio encoder is activated when the VAD indicates that the signal contains speech or audio and the CNG encoder is activated when the VAD indicates that the signal contains background noise. The VAD correspondingly selectively controls whether to transmit the output from the speech/audio encoder or the CNG encoder. Issues with existing mechanisms for generation of comfort noise for two or more channels have been disclosed above. The embodiments disclosed herein therefore relate to mechanisms for supporting generation of comfort noise for at least two audio channels at a receiving node200band for generation of comfort noise for at least two audio channels at a receiving node200b. In order to obtain such mechanisms there is provided a transmitting node200a, a method performed by the transmitting node200a, a computer program product comprising code, for example in the form of a computer program, that when run on processing circuitry of the transmitting node200a, causes the transmitting node200ato perform the method. In order to obtain such mechanisms there is further provided a receiving node200b, a method performed by the receiving node200b, and a computer program product comprising code, for example in the form of a computer program, that when run on processing circuitry of the receiving node200b, causes the receiving node200bto perform the method. Reference is now made toFIG.3illustrating a method for supporting generation of comfort noise for at least two audio channels at a receiving node200bas performed by the transmitting node200aaccording to embodiments. S104: The transmitting node200adetermines a spatial coherence between audio signals on the respective audio channels. At least one spatial coherence value Cb,mper frame m and frequency band b is determined to form a vector of spatial coherence values Cm. A vector Ĉpred,m(q)of predicted spatial coherence values Ĉpred,b,m(q)is formed by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m. The first coherence prediction Ĉ1,b1m(q)and the second coherence prediction Ĉ2,b,mare combined using a weight factor α. S106: The transmitting node200adetermines the weight factor α based on a bit-budget Bmavailable for encoding the vector of spatial coherence values in each frame m. S110: The transmitting node200asignals information such that the weight factor α can be reconstructed in the receiving node200b, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node200b. 
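By way of illustration, the VAD-controlled switching between active speech/audio encoding and comfort noise encoding described above can be sketched as follows. The objects and their method names are hypothetical stand-ins for the blocks of the DTX system, not part of the embodiments.

```python
def dtx_encode_frame(frame, vad, speech_encoder, cng_encoder):
    # Route the input frame to the active speech/audio encoder or to the
    # comfort noise (CNG) encoder depending on the VAD decision, and tag the
    # payload so the receiver knows how to decode it.
    if vad.is_active(frame):
        return {"type": "active", "payload": speech_encoder.encode(frame)}
    return {"type": "noise", "payload": cng_encoder.encode(frame)}
```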
Embodiments relating to further details of supporting generation of comfort noise for at least two audio channels at a receiving node200bas performed by the transmitting node200awill now be disclosed. In some aspects each frequency band b is represented by one single reconstructed spatial coherence value Ĉb,mper frame m and frequency band b. In some aspects each frequency band b is represented by more than one reconstructed spatial coherence value Ĉb,mper frame m and frequency band b to more accurately describe the shape of the spatial coherence within each frequency band b. One example would be to approximate the coherence within a frequency band b with a function, C(k)=ab*k+Kb, for limit(b)≤k<limit(b+1), where aband Kbare the two values to be encoded for each frequency band b, where k is the frequency bin index, and where limit(b) denotes the lowest frequency bin of frequency band b. In some aspects limit(b) is provided as a function or lookup table. The herein disclosed embodiments are applicable to a stereo encoder and decoder architecture as well as for a multi-channel encoder and decoder where the channel coherence is considered in channel pairs. In some aspects the stereo encoder receives a channel pair [l(m, n) r(m, n)] as input, where l(m, n) and r(m, n) denote the input signals for the left and right channel, respectively, for sample index n of frame m. The signal is processed in frames of length N samples at a sampling frequency fs, where the length of the frame might include an overlap (such as a look-ahead and/or memory of past samples). As inFIG.2a stereo CNG encoder is activated when the stereo encoder VAD indicates that the signal contains background noise. The signal is transformed to frequency domain by means of e.g. a discrete Fourier transform (DFT) or any other suitable filter-bank or transform such as quadrature mirror filter (QMF), Hybrid QMF or modified discrete cosine transform (MDCT). In case a DFT or MDCT transform is used, the input signal is typically windowed before the transform, resulting in the channel pair [lwin(m, n) rwin(m, n)] determined according to: [lwin(m, n)rwin(m, n)]=[l(m, n)win(n)r(m, n)win(n)],n=0,1,2, . . . , N−1. Hence, in some aspects the audio signals l(m, n), r(m, n), for frame index m and sample index n, of the at least two audio channels are windowed to form respective windowed signals lwin(m, n), rwin(m, n) before the spectral characteristics are determined. The choice of window might generally depend on various parameters, such as time and frequency resolution characteristics, algorithmic delay (overlap length), reconstruction properties, etc. The thus windowed channel pair [lwin(m, n) rwin(m, n)] is then transformed according to: [L⁡(m,k)⁢R⁡(m,k)]=[DFT⁡(lwin(m,n))⁢DF⁢T⁡(rw⁢i⁢n(m,n))],{n=0,1,2,…,N-1k=0,1,2,…,N-1m=0,1,2,…. A general definition of the channel coherence Cgen(f) for frequency f is given by: Cg⁢e⁢n(f)=❘"\[LeftBracketingBar]"Sx⁢y(f)❘"\[RightBracketingBar]"2Sxx(f)⁢Sy⁢y(f) where Sxx(f) and Syy(f) represent the respective power spectrum of the two channels x and y, and Sxy(f) is the cross power spectrum of the two channels x and y. In a DFT based solution, the spectra may be represented by the DFT spectra. 
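For illustration, the channel coherence and its reduction to per-band values (as described below) can be sketched in Python. In this sketch the cross- and auto-spectra are averaged over several windowed frames, since a single-frame estimate of the ratio is identically one; this averaging, the use of the mean as the per-band representative value, and the function names are assumptions made for the sketch.

```python
import numpy as np

def channel_coherence(l_frames, r_frames, window):
    # C(k) = |S_xy(k)|^2 / (S_xx(k) * S_yy(k)), with the cross- and auto-spectra
    # accumulated over a list of windowed frames of equal length.
    s_xy = 0.0
    s_xx = 0.0
    s_yy = 0.0
    for l_n, r_n in zip(l_frames, r_frames):
        L = np.fft.fft(np.asarray(l_n) * window)
        R = np.fft.fft(np.asarray(r_n) * window)
        s_xy = s_xy + np.conj(L) * R
        s_xx = s_xx + np.abs(L) ** 2
        s_yy = s_yy + np.abs(R) ** 2
    return np.abs(s_xy) ** 2 / np.maximum(s_xx * s_yy, 1e-12)

def band_coherence(c_bins, band_limits):
    # One representative coherence value per frequency band, here simply the
    # mean over the DFT bins of each band (band_limits holds the bin edges).
    return np.array([np.mean(c_bins[lo:hi])
                     for lo, hi in zip(band_limits[:-1], band_limits[1:])])
```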
In some aspects the spatial coherence C(m, k) for frame index m and sample index k is determined as: C(m, k) = |L(m, k)*·R(m, k)|^2/(|L(m, k)|^2·|R(m, k)|^2), where L(m, k) is the spectrum of the windowed audio signal lwin(m, n), where R(m, k) is the spectrum of the windowed audio signal rwin(m, n), and where * denotes the complex conjugate. The above expression for the coherence is commonly computed with a high frequency resolution. One reason for this is that for some parts of the coherence calculation the left and right power spectra Sxx(f) and Syy(f) are needed with high resolution for other purposes in a typical audio encoder. A typical value with a sampling frequency fs=48 kHz and frame length of 20 ms would be 960 frequency bins for the channel coherence. For an application of DTX where it is crucial to keep the bit rate for encoding inactive (i.e. non-speech) segments low it is not feasible to transmit the channel coherence with high frequency resolution. To reduce the number of bits to encode the channel coherence values, the spectrum can be divided into frequency bands as shown inFIG.4. The number of frequency bands is typically in the order of 2-50 for the full audible bandwidth of 20-20000 Hz. All frequency bands might have equal frequency-wise width, but more common in audio coding applications is to match the width of each frequency band to the human perception of audio, thus resulting in comparatively narrow frequency bands for the low frequencies and increasing widths of the frequency bands for higher frequencies. In some aspects the spatial coherence is divided into frequency bands of non-equal lengths. For example, the frequency bands can be created using the ERB-rate scale, where ERB is short for equivalent rectangular bandwidth. The coherence representative values given per frequency band form the vector of spatial coherence values Cm=[C1,m C2,m . . . Cb,m . . . CNbnd,m], where Nbnd is the number of frequency bands, b is the frequency band index and m is the frame index. The vector of spatial coherence values Cm is then encoded to be stored or transmitted to a decoder of the receiving node200b. Particularly, according to an embodiment the transmitting node200a is configured to perform (optional) steps S102, S110a. S102: The transmitting node200a determines spectral characteristics of the audio signals on the input audio channels. S110a: The transmitting node200a signals information about the spectral characteristics to the receiving node200b. This information can e.g. be the filter coefficients obtained through Linear Prediction Analysis or the magnitude spectrum obtained through a Discrete Fourier Transform. Step S110a could be performed as part of step S110. If the number of bits available to encode the vector of spatial coherence values Cm for a given frame m varies between frames and there is an intra-frame coding scheme designed to efficiently encode Cm, where this coding scheme has the property that it is possible to truncate the number of encoded bits if the bit budget is not met, then the herein disclosed embodiments can be used to further enhance the intra-frame coding scheme. Therefore, according to an embodiment the first coherence prediction Ĉ1,b,m(q) is defined by an intra-frame prediction Ĉintra,b,m(q) of the vector of spatial coherence values.
Further, according to an embodiment the second prediction Ĉ2,b,mis defined by an inter-frame coherence prediction Ĉinter,b,mof the vector of spatial coherence values. The at least one reconstructed spatial coherence value Ĉb,mis then formed based on a predicted spatial coherence value Ĉpred,b,m(q). In cases where the background noise is stable or changing slowly, the frame-to-frame variation in the coherence band values Cb,mwill be small. Hence, an inter-frame prediction using the values from previous frame will often be a good approximation which yields a small prediction residual and a small residual coding bit rate. Particularly, according to an embodiment the predicted spatial coherence value Ĉpred,b,m(q)is determined according to: Ĉpred,b,m(q)=αĈintra,b,m(q)+(1−α)Ĉinter,b,m, where the resulting prediction Ĉpred,b,m(q)thus is a sum of the intra-frame prediction) Ĉintra,b,m(q)and the inter-frame prediction Ĉinter,b,m. A balance can thereby be found between taking advantage of the inter-frame correlation of the spatial coherence whilst minimizing the risk of error propagation in case of frame loss. In general terms, the weight factor α can take a value in the range from 0 to 1, i.e. from only using information from the current frame (α=1) to only using information from the previous frame (α=0) and anything in-between (0<α<1). It is in some aspects desirable to use an as high weight factor α as possible since a lower weight factor α might make the encoding more sensitive to lost frames. But selection of the weight factor α has to be balanced with the bit budget Bmper frame m since a lower value of the weight factor α commonly yields less encoded bits. The value of the weight factor α used in the encoding has to, at least implicitly, be known in the decoder at the receiving node200b. That is, information about the weight factor α has to be encoded and transmitted (as in step S110) to the decoder at the receiving node200b. Further aspects of how to provide the information about the weight factor α will be disclosed below. It is further assumed that the bit budget Bmfor frame m for encoding the spatial coherence is known in the decoder at the receiving node200bwithout explicit signaling from the transmitting node200a. In this respect the value of the bit budget Bmis thus explicitly signalled to the receiving node200b. It comes as a side effect, since the decoder at the receiving node200bknows how to interpret the bitstream it also knows how many bits have been decoded. The remaining bits are simply found at the decoder at the receiving node200bby subtracting the decoded number of bits from the total bit budget (which is also known). In some aspects, based on the bit-budget Bma set of candidate weight factors is selected and a trial encoding (without performing the rate-truncation strategy as disclosed below) with the combined prediction and residual encoding scheme is performed for all these candidate weight factors in order to find the total number of encoded bits, given the candidate weight factor used. Particularly, according to an embodiment the weight factor α is determined by selecting a set of at least two candidate weight factors and performing trial encoding of the vector of spatial coherence values for each candidate weight factor. In some aspects, which candidate weight factors to use during the trial encoding is based on the bit-budget Bm. 
In this respect, the candidate weight factors might be determined by means of performing a table lookup with the bit-budget Bm as input or by inputting the bit-budget Bm to a function. The table lookup might be performed on table values obtained through training on a set of background noise. The trial encoding for each candidate weight factor yields a respective total number of encoded bits for the vector of spatial coherence values. The weight factor α might then be selected depending on whether the total number of encoded bits for the candidate weight factors fits within the bit-budget Bm or not. Particularly, according to an embodiment the weight factor α is selected as the largest candidate weight factor for which the total number of encoded bits fits within the bit-budget Bm. According to an embodiment the weight factor α is selected as the candidate weight factor yielding the fewest total number of encoded bits when the total number of encoded bits does not fit within the bit-budget Bm for any of the candidate weight factors. That is, if all candidate weight factors lead to a total number of encoded bits being within the bit-budget Bm, the highest candidate weight factor is selected as the weight factor α. Likewise, if only the lowest or none of the candidate weight factors lead to a total number of bits within the bit-budget Bm, the candidate weight factor that leads to the lowest number of bits is selected as the weight factor α. Which of the candidate weight factors is selected is then signaled to the decoder at the receiving node200b. Further aspects of the intra-frame prediction and the inter-frame prediction will now be disclosed. For each frame m, the encoder at the transmitting node200a receives a vector Cm to encode, a memory of the last reconstructed vector Ĉm-1, and a bit budget Bm. A variable Bcurr,m, to keep track of the bits spent, is initialized to zero, Bcurr,m=0. Bits spent in preceding encoding steps may be included in Bm and Bcurr,m. In that case the bit budget in the step outlined can be written as Bm−Bcurr,m. In some aspects the transmitting node200a selects the predictor set P(q) which gives the smallest prediction error. That is, the predictor set is selected out of the available predictor sets P(q), q=1, 2, . . . , Nq, such that: q* = arg min over q′=1, 2, . . . , Nq of Σb=2..Nbnd |Cintra,b,m(q′)−Cb,m|^2. Here, b=1 is omitted since the prediction is zero and the contribution to the error will be the same for all predictor sets. The selected predictor set index is stored and Bcurr,m is increased with the required number of bits, e.g., Bcurr,m:=Bcurr,m+z, where z denotes the number of bits required to encode the selected predictor set P(q*). Since the first coefficient cannot rely on prediction from previous coefficients, it might, optionally, be desirable to encode this coefficient separately. For instance, the first coefficient might be encoded using a scalar quantizer to produce the reconstructed value ĈSQ,1,m. In that case: Ĉintra,1,m=ĈSQ,1,m instead of: Ĉintra,1,m(q)=0. Alternatively, Ĉintra,1,m(q) is given by an average value C̄: Ĉintra,1,m(q)=C̄. If the first coefficient indeed is encoded, the bits for the encoding are then added to the spent number of bits, e.g. Bcurr,m:=Bcurr,m+z1, where z1 denotes the number of bits used to encode the first coefficient.
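The predictor set selection above can be sketched as follows. Representing each predictor set as an Nbnd×Nbnd matrix of weights Pb,i and forming the prediction from the preceding, here unquantized, coefficients are assumptions made for the sketch; the encoder proper uses the already reconstructed values Ĉi,m as described below.

```python
import numpy as np

def select_predictor_set(c_m, predictor_sets):
    # q* minimizes sum over bands b >= 2 of |C_intra_b - C_b|^2; band 1 is
    # skipped since its prediction is the same for every set. Each predictor
    # set is assumed to be an Nbnd x Nbnd NumPy array of weights P_b,i, and
    # the prediction for band b is formed from the preceding coefficients.
    best_q, best_err = 0, float("inf")
    for q, P in enumerate(predictor_sets):
        err = 0.0
        for b in range(1, len(c_m)):              # 0-based index b -> band b+1
            c_intra = float(np.dot(P[b, :b], c_m[:b]))
            err += (c_intra - c_m[b]) ** 2
        if err < best_err:
            best_q, best_err = q, err
    return best_q
```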
An illustrative example where the trial encoding is performed for two candidate weight factors αlowand αhigh, resulting in the number of bits Bcurrlow,mand Bcurrhigh,m, respectively, needed for the encoding of the vector of spatial coherence values will now be disclosed. Using Bcurr,mas the input, two candidate weight factors αlowand αhighare obtained, either by means of performing a table lookup with the bit-budget Bmas input or by inputting the bit-budget Bmto a function. Trial encoding is performed without the rate-truncation strategy described below for each candidate weight factor αlowand αhigh, yielding two values Bcurrlow,mand Bcurrhigh,mof the number of bits needed for the encoding. Based on this, one of the two candidate weight factors αlowand αhighis selected according for the encoding as follows: α={αhigh,Bcurrhigh,m≤Bmαlow,Bcurrlow≤Bm<Bcurrhigh,marg⁢min⁡(Bcurr,m),min⁡(Bcurrlow,m,Bcurrhigh,m)>Bm. The selected weight factor α is encoded using one bit, e.g. “0” for αlowand “1” for αhigh. The third alternative in the expression above for the weight factor α should be interpreted as follows: If both candidate weight factors αlowand αhighyield a resulting number of encoded bits that exceeds the bit budget Bm, then the candidate weight factor yielding the lowest number of encoded bits is selected. For each of the frequency bands b=1,2, . . .Nbnd, the following steps are then performed. The transmitting node200aobtains an intra-frame prediction value Ĉintra,b,m(q). For the first frequency band, b=1, there are no preceding coherence values encoded. In this case, the intra-frame prediction may thus be encoded as disclosed above. For the remaining frequency bands b=2,3, . . . , Nbnd, the intra-frame prediction Ĉintra,b,mis based on the previously encoded coherence values. That is: Ĉintra,b,m(q)=Σi=1b−1Pb,i(q)Ĉi,m. The transmitting node200aobtains an inter-frame prediction value Ĉinter,b,mbased on previously reconstructed elements of the vector of spatial coherence values from one or more preceding frames. An example of an inter-frame prediction value is to, for frequency band b use the last reconstructed value for frequency band b. That is, Ĉinter,b,m=Ĉb,m-1. The transmitting node200aforms a weighted prediction Ĉpred,b,m(q), based on the intra-frame prediction Ĉintra,b,m(q)and the inter-frame prediction Ĉinter,b,m, according to the above expression for the predicted spatial coherence value Ĉpred,b,m(q). That is, Ĉpred,b,m(q)=αĈintra,b,m(q)+(1−α)Ĉinter,b,m. The transmitting node200athen determines a prediction residual rb,m=Cb,m−Ĉpred,b,m. The prediction residual may be quantized using a scalar quantizer and then encoded with a variable length code scheme such that fewer bits are consumed for smaller residuals. Some examples for encoding the residual are by means of Huffman coding, Golomb-Rice coding or a unary code (where the latter is the same as the Golomb-Rice coding with divisor 1). For the residual encoding, the remaining bit budget Bm-Bcurr,mneeds to be considered. If there are not sufficiently many remaining bits to encode the residual rb,m, a bit rate truncation strategy can be applied. One possible strategy is to encode the largest possible residual value, assuming that the smaller residual values cost fewer bits. Another strategy is to set the residual value to zero, which could be the most common prediction residual value and would be encoded with one bit. Hence, according to an embodiment the transmitting node200ais configured to perform (optional) steps S108, S110b. 
S108: The transmitting node200adetermines a quantized prediction error per frame m and frequency band b by subtracting the at least one predicted spatial coherence value Ĉpred,b,m(q)from the vector of spatial coherence values. S110b: The transmitting node200asignals information about the quantized prediction error to the receiving node200b. Step S110bcould be performed as part of step S110. If there are no bits remaining within the bit budget, i.e. Bm=Bcurr,m, then the residual might be set to zero without sending the index to the bitstream. The decoder at the receiving node200bcan also detect that the bit budget has run out and use the zero residual rb,m=0 without explicit signaling. The receiving node200bcould then derive a reconstructed spatial coherence value Ĉb,m, using the reconstructed prediction residual {circumflex over (r)}b,mfrom the scalar quantizer and the predicted spatial coherence value Ĉpred,b,m(q), Ĉb,m=Ĉpred,b,m(q){circumflex over (r)}b,m. It should be noted that the reconstructed spatial coherence value Ĉb,mis similarly derived at the encoder where previously encoded coherence values Ĉi,mare used in the intra-frame prediction for frame m, and previously reconstructed elements from one or more preceding frames are used in the inter-frame prediction, e.g. the last reconstructed value Ĉb,m-1for frequency band b. Reference is now made toFIG.5illustrating a method for generation of comfort noise for at least two audio channels at a receiving node200bas performed by the receiving node200baccording to embodiments. In general terms, the receiving node200bis configured to reproduce the first and second prediction of the coherence value based on information obtained from the transmitting node200a. In some aspects the receiving node200bperform operations corresponding to those of the transmitting node200a, starting with reception of necessary information. S202: The receiving node200breceives information about the weight factor α from the transmitting node200a. This enables the receiving node200bto reproduce the first and second prediction identical to the ones in the transmitting node200a. The receiving node200b, then performs essentially the same steps as the transmitting node200a. S204: The receiving node200bdetermines a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using the weight factor α. S206: The receiving node200bdetermines the weight factor α based on a bit-budget Bmavailable for encoding the vector of spatial coherence values in each frame and the received information. S208: The receiving node200bgenerates comfort noise for the at least two audio channels based on the weighted combination of the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,m. Embodiments relating to further details of generation of comfort noise for at least two audio channels at a receiving node200bas performed by the receiving node200bwill now be disclosed. 
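Putting the weight-factor selection of the illustrative two-candidate example and the per-band reconstruction together, a minimal sketch follows. The quantization, the variable-length residual coding and the bit counting are abstracted away, and the function names and the returned one-bit index are illustrative assumptions.

```python
def select_weight_factor(alpha_low, alpha_high, bits_low, bits_high, bit_budget):
    # Prefer alpha_high if its trial encoding fits the bit budget, otherwise
    # alpha_low if that one fits, otherwise the candidate needing fewer bits.
    # The returned flag ("1"/"0") stands for the one-bit index signalled to
    # the decoder.
    if bits_high <= bit_budget:
        return alpha_high, "1"
    if bits_low <= bit_budget:
        return alpha_low, "0"
    return (alpha_low, "0") if bits_low <= bits_high else (alpha_high, "1")

def reconstruct_band(c_intra, c_inter, alpha, r_hat, bits_left):
    # C_pred = alpha*C_intra + (1-alpha)*C_inter; the reconstructed value adds
    # the decoded residual, or a zero residual once the bit budget has run out
    # (a condition the decoder can detect without explicit signalling).
    c_pred = alpha * c_intra + (1.0 - alpha) * c_inter
    residual = r_hat if bits_left > 0 else 0.0
    return c_pred + residual
```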
In general terms, the embodiments as disclosed above with reference to the transmitting node200aare also applicable to the receiving node200bas modified where needed. As disclosed above, according to an embodiment the transmitting node200asignals information about the spectral characteristics to the receiving node200b. Therefore, according to an embodiment the receiving node200bis configured to perform (optional) steps S202aand S208a: S202a: The receiving node200breceives information about spectral characteristics of the audio signals. S208a: The receiving node200bgenerates the comfort noise also based on the information about the spectral characteristics. In some aspects step S202ais performed as part of step S202and step S208ais performed as part of step S202. As disclosed above, according to an embodiment the transmitting node200asignals information about the quantized prediction error to the receiving node200b. Therefore, according to an embodiment the receiving node200bis configured to perform (optional) steps S202aand S208a: S202b: The receiving node200breceives information about a quantized prediction error per frame m and frequency band b. S208b: The receiving node200badds the quantized prediction error to the vector of spatial coherence values as part of generating the comfort noise. In some aspects step S202bis performed as part of step S202and step S208bis performed as part of step S202. In some aspects the weight factor α is determined by selecting a set of at least two candidate weight factors and using the received information about the weight factor α to select which candidate weight factors to use during trial encoding. FIG.6schematically illustrates, in terms of a number of functional units, the components of a transmitting node200aaccording to an embodiment. Processing circuitry210is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions stored in a computer program product1010a(as inFIG.10), e.g. in the form of a storage medium230. The processing circuitry210may further be provided as at least one application specific integrated circuit (ASIC), or field programmable gate array (FPGA). Particularly, the processing circuitry210is configured to cause the transmitting node200ato perform a set of operations, or steps, as disclosed above. For example, the storage medium230may store the set of operations, and the processing circuitry210may be configured to retrieve the set of operations from the storage medium230to cause the transmitting node200ato perform the set of operations. The set of operations may be provided as a set of executable instructions. Thus the processing circuitry210is thereby arranged to execute methods as herein disclosed. In an embodiment the transmitting node200afor supporting generation of comfort noise for at least two audio channels at a receiving node comprises a processing circuitry210. The processing circuitry is configured to cause the transmitting node to determine a spatial coherence between audio signals on the respective audio channels, wherein at least one spatial coherence value Cb,mper frame m and frequency band b is determined to form a vector of spatial coherence values. A vector of predicted spatial coherence values Ĉpred,b,m(q)is formed by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m. 
The first coherence prediction Ĉ1,b,m(q) and the second coherence prediction Ĉ2,b,m are combined using a weight factor α. The weight factor α is determined based on a bit-budget Bm available for encoding the vector of spatial coherence values in each frame m. The transmitting node is further caused to signal information about the weight factor α to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node.

The storage medium 230 may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. The transmitting node 200a may further comprise a communications interface 220 for communications with a receiving node 200b. As such the communications interface 220 may comprise one or more transmitters and receivers, comprising analogue and digital components. The processing circuitry 210 controls the general operation of the transmitting node 200a, e.g. by sending data and control signals to the communications interface 220 and the storage medium 230, by receiving data and reports from the communications interface 220, and by retrieving data and instructions from the storage medium 230. Other components, as well as the related functionality, of the transmitting node 200a are omitted in order not to obscure the concepts presented herein.

FIG. 7 schematically illustrates, in terms of a number of functional modules, the components of a transmitting node 200a according to an embodiment. The transmitting node 200a of FIG. 7 comprises a number of functional modules: a determine module 210a configured to perform step S102, a determine module 210b configured to perform step S104, a determine module 210c configured to perform step S106, a determine module 210d configured to perform step S108, and a signal module 210e configured to perform step S110. The signal module 210e might further be configured to perform any of steps S110a and S110b. In general terms, each functional module 210a-210e may be implemented in hardware or in software. Preferably, one or more or all functional modules 210a-210e may be implemented by the processing circuitry 210, possibly in cooperation with the communications interface 220 and/or the storage medium 230. The processing circuitry 210 may thus be arranged to fetch, from the storage medium 230, instructions as provided by a functional module 210a-210e and to execute these instructions, thereby performing any steps of the transmitting node 200a as disclosed herein.

FIG. 8 schematically illustrates, in terms of a number of functional units, the components of a receiving node 200b according to an embodiment. Processing circuitry 410 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions stored in a computer program product 1010b (as in FIG. 10), e.g. in the form of a storage medium 430. The processing circuitry 410 may further be provided as at least one application specific integrated circuit (ASIC), or field programmable gate array (FPGA). Particularly, the processing circuitry 410 is configured to cause the receiving node 200b to perform a set of operations, or steps, as disclosed above.
For example, the storage medium430may store the set of operations, and the processing circuitry410may be configured to retrieve the set of operations from the storage medium430to cause the receiving node200bto perform the set of operations. The set of operations may be provided as a set of executable instructions. Thus the processing circuitry410is thereby arranged to execute methods as herein disclosed. In an embodiment the receiving node200bfor generation of comfort noise for at least two audio channels at the receiving node comprises processing circuitry410. The processing circuitry is configured to cause the receiving node to receive information about a weight factor α from the transmitting node, and to determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values. The vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using the weight factor α. The weight factor α is determined based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame and the received information. The receiving node is further caused to generate comfort noise for the at least two audio channels based on the weighted combination of the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,m. The storage medium430may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. The receiving node200bmay further comprise a communications interface420for communications with a transmitting node200a. As such the communications interface420may comprise one or more transmitters and receivers, comprising analogue and digital components. The processing circuitry410controls the general operation of the receiving node200be.g. by sending data and control signals to the communications interface420and the storage medium430, by receiving data and reports from the communications interface420, and by retrieving data and instructions from the storage medium430. Other components, as well as the related functionality, of the receiving node200bare omitted in order not to obscure the concepts presented herein. FIG.9schematically illustrates, in terms of a number of functional modules, the components of a receiving node200baccording to an embodiment. The receiving node200bofFIG.9comprises a number of functional modules; a receive module410aconfigured to perform step S202, a determine module410bconfigured to perform step S204, a determine module410cconfigured to perform step S206, and a generate module410dconfigured to perform step S208. In some aspects the receive module410ais further configured to perform any of steps S202aand S202b. In some aspects the generate module410dis further configured to perform any of steps S208aand S208b. The receiving node200bofFIG.9may further comprise a number of optional functional modules. In general terms, each functional module410a-410dmay be implemented in hardware or in software. 
Preferably, one or more or all functional modules 410a-410d may be implemented by the processing circuitry 410, possibly in cooperation with the communications interface 420 and/or the storage medium 430. The processing circuitry 410 may thus be arranged to fetch, from the storage medium 430, instructions as provided by a functional module 410a-410d and to execute these instructions, thereby performing any steps of the receiving node 200b as disclosed herein.

The transmitting node 200a and/or the receiving node 200b may be provided as a standalone device or as a part of at least one further device. For example, as in the example of FIG. 1, in some aspects the transmitting node 200a is part of a radio transceiver device 200. Hence, in some aspects there is provided a radio transceiver device 200 comprising a transmitting node 200a and/or a receiving node 200b as herein disclosed. Alternatively, functionality of the transmitting node 200a and/or the receiving node 200b may be distributed between at least two devices, or nodes. These at least two nodes, or devices, may either be part of the same network part or may be spread between at least two such network parts. Thus, a first portion of the instructions performed by the transmitting node 200a and/or the receiving node 200b may be executed in a first device, and a second portion of the instructions performed by the transmitting node 200a and/or the receiving node 200b may be executed in a second device; the herein disclosed embodiments are not limited to any particular number of devices on which the instructions performed by the transmitting node 200a and/or the receiving node 200b may be executed. Hence, the methods according to the herein disclosed embodiments are suitable to be performed by a transmitting node 200a and/or the receiving node 200b residing in a cloud computational environment. Therefore, although a single processing circuitry 210, 410 is illustrated in FIGS. 6 and 8, the processing circuitry 210, 410 may be distributed among a plurality of devices, or nodes. The same applies to the functional modules 210a-210e, 410a-410d of FIGS. 7 and 9 and the computer programs 1020a, 1020b of FIG. 10 (see below).

FIG. 10 shows one example of a computer program product 1010a, 1010b comprising computer readable means 1030. On this computer readable means 1030, a computer program 1020a can be stored, which computer program 1020a can cause the processing circuitry 210 and thereto operatively coupled entities and devices, such as the communications interface 220 and the storage medium 230, to execute methods according to embodiments described herein. The computer program 1020a and/or computer program product 1010a may thus provide means for performing any steps of the transmitting node 200a as herein disclosed. On this computer readable means 1030, a computer program 1020b can be stored, which computer program 1020b can cause the processing circuitry 410 and thereto operatively coupled entities and devices, such as the communications interface 420 and the storage medium 430, to execute methods according to embodiments described herein. The computer program 1020b and/or computer program product 1010b may thus provide means for performing any steps of the receiving node 200b as herein disclosed. In the example of FIG. 10, the computer program product 1010a, 1010b is illustrated as an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc.
The computer program product1010a,1010bcould also be embodied as a memory, such as a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or an electrically erasable programmable read-only memory (EEPROM) and more particularly as a non-volatile storage medium of a device in an external memory such as a USB (Universal Serial Bus) memory or a Flash memory, such as a compact Flash memory. Thus, while the computer program1020a,1020bis here schematically shown as a track on the depicted optical disk, the computer program1020a,1020bcan be stored in any way which is suitable for the computer program product1010a,1010b. Here now follows a set of example embodiments to further describe the concepts presented herein. 1. A method for supporting generation of comfort noise for at least two audio channels at a receiving node, the method being performed by a transmitting node, the method comprising: determining a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using a weight factor α; determining the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame m; and signaling information about the weight factor α to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node. 2. The method according to item 1, further comprising: determining spectral characteristics of the audio signals on the input audio channels; and signaling information about the spectral characteristics to the receiving node. 3. The method according to item 1, further comprising: determining a quantized prediction error per frame m and frequency band b by subtracting said at least one predicted spatial coherence value Ĉpred,b,m(q)from the vector of spatial coherence values; and signaling information about the quantized prediction error to the receiving node. 4. The method according to item 1, wherein the weight factor α is determined by selecting a set of at least two candidate weight factors and performing trial encoding of the vector of spatial coherence values for each candidate weight factor. 5. The method according to item 4, wherein the trial encoding for each candidate weight factor yields a respective total number of encoded bits for the vector of spatial coherence values, and wherein the weight factor α is selected depending on whether the total number of encoded bits for the candidate weight factors fits within the bit-budget Bmor not. 6. The method according to item 1, wherein the first coherence prediction Ĉ1,b,m(q)is defined by an intra-frame prediction Ĉintra,b,m(q)of the vector of spatial coherence values. 7. The method according to item 1, wherein the second prediction Ĉ2,b,mis defined by an inter-frame coherence prediction Ĉinter,b,mof the vector of spatial coherence values. 8. The method according to items 6 and 7, wherein said at least one predicted spatial coherence value Ĉb,m(q)is defined by a prediction value Ĉpred,b,m. 9. 
The method according to item 8, wherein the prediction value Ĉpred,b,m(q) is determined according to: Ĉpred,b,m=αĈintra,b,m+(1−α)Ĉinter,b,m. 10. The method according to items 5 and 9, wherein the weight factor α is selected as the largest candidate weight factor for which the total number of encoded bits fits within the bit-budget Bm. 11. The method according to items 5 and 9, wherein the weight factor α is selected as the candidate weight factor yielding the fewest total number of encoded bits when the total number of encoded bits does not fit within the bit-budget Bm for any of the candidate weight factors. 12. The method according to any of items 4, 5, 10, or 11, wherein the trial encoding is performed for two candidate weight factors αlow and αhigh, resulting in the number of bits Bcurrlow,m and Bcurrhigh,m, respectively, needed for the encoding of the vector of spatial coherence values. 13. The method according to item 12, wherein the weight factor α is selected according to:

α = αhigh, if Bcurrhigh,m ≤ Bm;
α = αlow, if Bcurrlow,m ≤ Bm < Bcurrhigh,m;
α = arg min(Bcurr,m), if min(Bcurrlow,m, Bcurrhigh,m) > Bm.

14. The method according to any of items 4, 5, 10, 11, 12, or 13, wherein which candidate weight factors to use during the trial encoding is based on the bit-budget Bm. 15. The method according to item 14, wherein the candidate weight factors are determined by means of performing a table lookup with the bit-budget Bm as input, or by inputting the bit-budget Bm to a function. 16. The method according to item 15, wherein the table lookup is performed on table values obtained through training on a set of background noise. 17. A method for generation of comfort noise for at least two audio channels at a receiving node, the method being performed by the receiving node, the method comprising: receiving information about a weight factor α from the transmitting node; determining a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q) per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q) and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q) and the second coherence prediction Ĉ2,b,m are combined using the weight factor α; determining the weight factor α based on a bit-budget Bm available for encoding a vector of spatial coherence values in each frame and the received information; and generating comfort noise for the at least two audio channels based on the weighted combination of the first coherence prediction Ĉ1,b,m(q) and the second coherence prediction Ĉ2,b,m. 18. The method according to item 17, further comprising: receiving information about spectral characteristics of the audio signals; and generating the comfort noise also based on the information about the spectral characteristics. 19. The method according to item 17, further comprising: receiving information about a quantized prediction error per frame m and frequency band b; and adding the quantized prediction error to the vector of spatial coherence values as part of generating the comfort noise. 20. The method according to item 17, wherein the weight factor α is determined by selecting a set of at least two candidate weight factors and using the received information about the weight factor α to select which candidate weight factors to use during trial encoding.
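As a hedged illustration of the selection rule of items 12 and 13, the following C sketch picks the weight factor from two trial encodings: the largest candidate that fits the bit budget is taken, and if neither fits, the candidate needing the fewest bits is taken. The function name is hypothetical, and breaking a tie in favour of αlow when both bit counts are equal is an assumption.

/* Hedged sketch of the selection rule of items 12 and 13. B_low and B_high
 * are the bit counts from trial encodings with alpha_low and alpha_high. */
double select_weight_factor(double alpha_low, double alpha_high,
                            int B_low, int B_high, int B_m)
{
    if (B_high <= B_m)
        return alpha_high;      /* largest candidate that fits the bit budget */
    if (B_low <= B_m)
        return alpha_low;       /* only the low candidate fits the budget     */
    /* neither candidate fits: take the one yielding the fewest encoded bits */
    return (B_low <= B_high) ? alpha_low : alpha_high;
}

Items 14 to 16 then determine which candidate pair to try in the first place, e.g. by a table lookup on the bit-budget Bm.

21.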
A transmitting node for supporting generation of comfort noise for at least two audio channels at a receiving node, the transmitting node comprising processing circuitry, the processing circuitry being configured to cause the transmitting node to: determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using a weight factor α; determine the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame m; and signaling information about the weight factor α to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node. 22. A transmitting node for supporting generation of comfort noise for at least two audio channels at a receiving node, the transmitting node comprising: a determine module configured to determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using a weight factor α; a determine module configured to determine the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame m; and a signal module configured to signaling information about the weight factor α to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node. 23. The transmitting node according to item 21 or 22, further being configured to perform the method according to any of items 2 to 16. 24. 
A receiving node for generation of comfort noise for at least two audio channels at the receiving node, the receiving node comprising processing circuitry, the processing circuitry being configured to cause the receiving node to: receive information about a weight factor α from the transmitting node; determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using the weight factor α; determine the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame and the received information; and generate comfort noise for the at least two audio channels based on the weighted combination of the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,m. 25. A receiving node for generation of comfort noise for at least two audio channels at the receiving node, the receiving node comprising: a receive module configured to receive information about a weight factor α from the transmitting node; a determine module configured to determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using the weight factor α; a determine module configured to determine the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame and the received information; and a generate module configured to generate comfort noise for the at least two audio channels based on the weighted combination of the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,m. 26. The receiving node according to item 24 or 25, further being configured to perform the method according to any of items 18 to 20 27. A radio transceiver device, the radio transceiver device comprising a transmitting node according to any of items 21 to 23, and/or a receiving node according to any of items 24 to 26. 28. 
A computer program for supporting generation of comfort noise for at least two audio channels at a receiving node, the computer program comprising computer code which, when run on processing circuitry (210) of a transmitting node, causes the transmitting node to: determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using a weight factor α; determine the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame m; and signaling information about the weight factor α to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node. 29. A computer program for generation of comfort noise for at least two audio channels at a receiving node, the computer program comprising computer code which, when run on processing circuitry of the receiving node, causes the receiving node to: receive information about a weight factor α from the transmitting node; determine a spatial coherence between audio signals on the respective audio channels, wherein at least one predicted spatial coherence value Ĉpred,b,m(q)per frame m and frequency band b is determined to form a vector of predicted spatial coherence values, wherein the vector of predicted spatial coherence values is represented by a weighted combination of a first coherence prediction Ĉ1,b,m(q)and a second coherence prediction Ĉ2,b,m, wherein the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,mare combined using the weight factor α; determine the weight factor α based on a bit-budget Bmavailable for encoding a vector of spatial coherence values in each frame and the received information; and generate comfort noise for the at least two audio channels based on the weighted combination of the first coherence prediction Ĉ1,b,m(q)and the second coherence prediction Ĉ2,b,m. 30. A computer program product comprising a computer program according to at least one of items 28 and 29, and a computer readable storage medium on which the computer program is stored. Generally, all terms used in the example embodiments and appended claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the element, apparatus, component, means, module, step, etc.” are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, module, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated. The inventive concept has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the inventive concept, as defined by the appended claims.
56,938
11862182
DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 shows a frequency-domain audio decoder supporting transform length switching in accordance with an embodiment of the present application. The frequency-domain audio decoder of FIG. 1 is generally indicated using reference sign 10 and comprises a frequency-domain coefficient extractor 12, a scaling factor extractor 14, an inverse transformer 16, and a combiner 18. At their inputs, the frequency-domain coefficient extractor 12 and the scaling factor extractor 14 have access to an inbound data stream 20. Outputs of the frequency-domain coefficient extractor 12 and the scaling factor extractor 14 are connected to respective inputs of the inverse transformer 16. The output of the inverse transformer 16, in turn, is connected to an input of the combiner 18. The latter outputs the reconstructed audio signal at an output 22 of decoder 10.

The frequency-domain coefficient extractor 12 is configured to extract frequency-domain coefficients 24 of frames 26 of the audio signal from the data stream 20. The frequency-domain coefficients 24 may be MDCT coefficients or may belong to some other transform such as another lapped transform. In a manner described further below, the frequency-domain coefficients 24 belonging to a certain frame 26 describe the audio signal's spectrum within the respective frame 26 in a varying spectro-temporal resolution. The frames 26 represent temporal portions into which the audio signal is sequentially subdivided in time. Put together, the frequency-domain coefficients 24 of all frames represent a spectrogram 28 of the audio signal. The frames 26 may, for example, be of equal length.

Because the kind of audio content of the audio signal changes over time, it may be disadvantageous to describe the spectrum of each frame 26 with a continuous spectro-temporal resolution by use of, for example, transforms having a constant transform length which spans, for example, the time-length of each frame 26, i.e. involves sample values within this frame 26 of the audio signal as well as time-domain samples preceding and succeeding the respective frame. Pre-echo artifacts may, for example, result from lossily transmitting the spectrum of the respective frame in the form of the frequency-domain coefficients 24. Accordingly, in a manner further outlined below, the frequency-domain coefficients 24 of a respective frame 26 describe the spectrum of the audio signal within this frame 26 in a switchable spectro-temporal resolution by switching between different transform lengths.

As far as the frequency-domain coefficient extractor 12 is concerned, however, the latter circumstance is transparent. The frequency-domain coefficient extractor 12 operates independently of any signalization signaling the just-mentioned switching between different spectro-temporal resolutions for the frames 26. The frequency-domain coefficient extractor 12 may use entropy coding in order to extract the frequency-domain coefficients 24 from the data stream 20. For example, the frequency-domain coefficient extractor may use context-based entropy decoding, such as variable-context arithmetic decoding, to extract the frequency-domain coefficients 24 from the data stream 20, assigning to each of the frequency-domain coefficients 24 the same context regardless of the aforementioned signalization signaling the spectro-temporal resolution of the frame 26 to which the respective frequency-domain coefficient belongs.
Alternatively, and as a second example, the extractor12may use Huffman decoding and define a set of Huffman codewords irrespective of said signalization specifying the resolution of frame26. Different possibilities exist for the way the frequency-domain coefficients24describe the spectrogram28. For example, the frequency-domain coefficients24may merely represent some prediction residual. For example, the frequency-domain coefficients may represent a residual of a prediction which, at least partially, has been obtained by stereo prediction from another audio signal representing a corresponding audio channel or downmix out of a multi-channel audio signal to which the signal spectrogram28belongs. Alternatively, or additionally to a prediction residual, the frequency-domain coefficients24may represent a sum (mid) or a difference (side) signal according to the M/S stereo paradigm [5]. Further, frequency-domain coefficients24may have been subject to temporal noise shaping. Beyond that, the frequency-domain coefficients12are quantized and in order to keep the quantization error below a psycho-acoustic detection (or masking) threshold, for example, the quantization step size is spectrally varied in a manner controlled via respective scaling factors associated with the frequency-domain coefficients24. The scaling factor extractor14is responsible for extracting the scaling factors from the data stream20. Briefly spending a little bit more detail on the switching between different spectro-temporal resolutions from frame to frame, the following is noted. As will be described in more detail below, the switching between different spectro-temporal resolutions will indicate that either, within a certain frame26, all frequency-domain coefficients24belong to one transform, or that the frequency-domain coefficients24of the respective frame26actually belong to different transforms such as, for example, two transforms, the transform length of which is half the transform length of the just-mentioned one transform. The embodiment described hereinafter with respect to the figures assumes the switching between one transform on the one hand and two transforms on the other hand, but in fact, a switching between the one transform and more than two transforms would, in principle, be feasible as well with the embodiments given below being readily transferable to such alternative embodiments. FIG.1illustrates, using hatching, the exemplary case that the current frame is of the type represented by two short transforms, one of which has been derived using a trailing half of current frame26, and the other one of which has been obtained by transforming a leading half of the current frame26of the audio signal. Due to the shortened transform length the spectral resolution at which the frequency-domain coefficients24describe the spectrum of frame26is reduced, namely halved in case of using two short transforms, while the temporal resolution is increased, namely doubled in the present case. InFIG.1, for example, the frequency-domain coefficients24shown hatched shall belong to the leading transform, whereas the non-hatched ones shall belong to the trailing transform. Spectrally co-located frequency-domain coefficients24, thus, describe the same spectral component of the audio signal within frame26, but at slightly different time instances, namely at two consecutive transform windows of the transform splitting frame. 
In data stream20, the frequency-domain coefficients24are transmitted in an interleaved manner so that spectrally corresponding frequency-domain coefficients of the two different transforms immediately follow each other. In even other words, the frequency-domain coefficients24of a split transform frame, i.e. a frame26for which the transform splitting is signaled in the data stream20, are transmitted such that if the frequency-domain coefficients24as received from the frequency-domain coefficient extractor12would be sequentially ordered in a manner as if they were frequency-domain coefficients of a long transform, then they are arranged in this sequence in an interleaved manner so that spectrally co-located frequency-domain coefficients24immediately neighbor each other and the pairs of such spectrally co-located frequency-domain coefficients24are ordered in accordance with a spectral/frequency order. Interestingly, ordered in such a manner, the sequence of interleaved frequency-domain coefficients24look similar to a sequence of frequency-domain coefficients24having been obtained by one long transform. Again, as far as the frequency-domain coefficient extractor12is concerned, the switching between different transform lengths or spectro-temporal resolutions in units of the frames26is transparent for the same, and accordingly, the context selection for entropy-coding the frequency-domain coefficients24in a context-adaptive manner results in the same context being selected—irrespective of the current frame actually being a long transform frame or the current frame being of the split transform type without extractor12knowing thereabout. For example, the frequency-domain coefficient extractor12may select the context to be employed for a certain frequency-domain coefficient based on already coded/decoded frequency-domain coefficients in a spectro-temporal neighborhood with this spectro-temporal neighborhood being defined in the interleaved state depicted inFIG.1. This has the following consequence. Imagine, a currently coded/decoded frequency-domain coefficient24was part of the leading transform indicated using hatching inFIG.1. An immediately spectrally adjacent frequency-domain coefficient would then actually be a frequency-domain coefficient24of the same leading transform (i.e. a hatched one inFIG.1). Nevertheless, however, the frequency-domain coefficient extractor12uses for context selection, a frequency-domain coefficient24belonging to the trailing transform, namely the one being spectrally neighboring (in accordance with a reduced spectral resolution of the shortened transform), assuming that the latter would be the immediate spectral neighbor of one long transform of the current frequency-domain coefficient24. Likewise, in selecting the context for a frequency-domain coefficient24of a trailing transform, the frequency-domain coefficient extractor12would use as an immediate spectral neighbor a frequency-domain coefficient24belonging to the leading transform, and being actually spectrally co-located to that coefficient. In particular, the decoding order defined among coefficients24of current frame26could lead, for example, from lowest frequency to highest frequency. Similar observations are valid in case of the frequency-domain coefficient extractor12being configured to entropy decode the frequency-domain coefficients24of a current frame26in groups/tuples of immediately consecutive frequency-domain coefficients24when ordered non-de-interleaved. 
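As a concrete illustration of the interleaving described above, the following C sketch (with hypothetical names) writes the two half-length transforms into the single long-transform-like coefficient order on which the extractor, the context selection and the tuple-wise decoding discussed here all operate: even indices carry the first-in-time (leading) transform and odd indices the second-in-time (trailing) transform, with indexing starting at zero.

/* Hedged sketch: merge the leading and trailing half-length transforms into
 * one interleaved spectrum of length N, so that spectrally co-located
 * coefficients become immediate neighbours. Names are illustrative. */
void interleave_split_transform(const float *leading,   /* N/2 coefficients */
                                const float *trailing,  /* N/2 coefficients */
                                float *interleaved,     /* N coefficients   */
                                int N)
{
    for (int k = 0; k < N / 2; k++) {
        interleaved[2 * k]     = leading[k];   /* even index: first-in-time transform  */
        interleaved[2 * k + 1] = trailing[k];  /* odd index: second-in-time transform  */
    }
}

A decoder that is unaware of the split-transform signalization simply keeps this interleaved sequence and treats it as one long transform, which is what makes the scheme parse like a regular long-transform frame.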
Instead of using the tuple of spectrally neighboring frequency-domain coefficients24solely belonging to the same short transform, the frequency-domain coefficient extractor12would select the context for a certain tuple of a mixture of frequency-domain coefficients24belonging to different short transforms on the basis of a spectrally neighboring tuple of such a mixture of frequency-domain coefficients24belonging to the different transforms. Due to the fact that, as indicated above, in the interleaved state, the resulting spectrum as obtained by two short transforms looks very similar to a spectrum obtained by one long transform, the entropy coding penalty resulting from the agnostic operation of frequency-domain coefficient extractor12with respect to the transform length switching is low. The description of decoder10is resumed with the scaling factor extractor14which is, as mentioned above, responsible for extracting the scaling factors of the frequency-domain coefficients24from data stream20. The spectral resolution at which scale factors are assigned to the frequency-domain coefficients24is coarser than the comparatively fine spectral resolution supported by the long transform. As illustrated by curly brackets30, the frequency-domain coefficients24may be grouped into multiple scale factor bands. The subdivision in the scale factor bands may be selected based on psycho-acoustic thoughts and may, for example, coincide with the so-called Bark (or critical) bands. As the scaling factor extractor14is agnostic for the transform length switching, just as frequency-domain coefficient extractor12is, scaling factor extractor14assumes each frame26to be subdivided into a number of scale factor bands30which is equal, irrespective of the transform length switching signalization, and extracts for each such scale factor band30a scale factor32. At the encoder-side, the attribution of the frequency-domain coefficients24to these scale factor bands30is done in the non-de-interleaved state illustrated inFIG.1. As a consequence, as far as frames26corresponding to the split transform are concerned, each scale factor32belongs to a group populated by both, frequency-domain coefficients24of the leading transform, and frequency-domain coefficients24of the trailing transform. The inverse transformer16is configured to receive for each frame26the corresponding frequency-domain coefficients24and the corresponding scale factors32and subject the frequency-domain coefficients24of the frame26, scaled according to the scale factors32, to an inverse transformation to acquire time-domain portions of the audio signal. A lapped transform may be used by inverse transformer16such as, for example, a modified discrete cosine transform (MDCT). The combiner18combines the time-domain portions to obtain the audio signal such as by use of, for example, a suitable overlap-add process resulting in, for example, time-domain aliasing cancelation within the overlapping portions of the time-domain portions output by inverse transformer16. Naturally, the inverse transformer16is responsive to the aforementioned transform length switching signaled within the data stream20for the frames26. The operation of inverse transformer16is described in more detail with respect toFIG.2. FIG.2shows a possible internal structure of the inverse transformer16in more detail. 
As indicated inFIG.2, the inverse transformer16receives for a current frame the frequency-domain coefficients24associated with that frame, as well as the corresponding scale factors32for de-quantizing the frequency-domain coefficients24. Further, the inverse transformer16is controlled by the signalization34which is present in data stream20for each frame. The inverse transformer16may further be controlled via other components of the data stream20optionally comprised therein. In the following description, the details concerning these additional parameters are described. As shown inFIG.2, the inverse transformer16ofFIG.2comprises a de-quantizer36, an activatable de-interleaver38and an inverse transformation stage40. For the ease of understanding the following description, the inbound frequency-domain coefficients24as derived for the current frame from frequency-domain coefficient extractor12are shown to be numbered from 0 to N−1. Again, as the frequency-domain coefficient extractor12is agnostic to, i.e. operates independent from, signalization34, frequency-domain coefficient extractor12provides the inverse transformer16with frequency-domain coefficients24in the same manner irrespective of the current frame being of the split transform type, or the 1-transform type, i.e. the number of frequency-domain coefficients24is N in the present illustrative case and the association of the indices 0 to N−1 to the N frequency-domain coefficients24also remains the same irrespective of the signalization34. In case of the current frame being of the one or long transform type, the indices 0 to N−1 correspond to the ordering of the frequency-domain coefficients24from the lower frequency to the highest frequency, and in case of the current frame being of the split transform type, the indices correspond to the order to the frequency-domain coefficients when spectrally arranged according to their spectral order, but in an interleaved manner so that every second frequency-domain coefficient24belongs to the trailing transform, whereas the others belong to the leading transform. Similar facts hold true for the scale factors32. As the scale factor extractor14operates in a manner agnostic with respect to signalization34, the number and order as well as the values of scale factors32arriving from scale factor extractor14is independent from the signalization34, with the scale factors32inFIG.2being exemplarily denoted as S0to SMwith the index corresponding to the sequential order among the scale factor bands with which these scale factors are associated. In a manner similar to frequency-domain coefficient extractor12and scale factor extractor14, the de-quantizer36may operate agnostically with respect to, or independently from, signalization34. De-quantizer36de-quantizes, or scales, the inbound frequency-domain coefficients24using the scale factor associated with the scale factor band to which the respective frequency-domain coefficients belong. Again, the membership of the inbound frequency-domain coefficients24to the individual scale factor bands, and thus the association of the inbound frequency-domain coefficients24to the scale factors32, is independent from the signalization34, and the inverse transformer16thus subjects the frequency-domain coefficients24to scaling according to the scale factors32at a spectral resolution which is independent from the signalization. 
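A minimal sketch of this signalization-independent scaling, under the assumption that the scale factor bands are given as an offset table over the interleaved coefficient order, could look as follows; the names and the offset table are illustrative only.

/* Hedged sketch: scale the N interleaved coefficients band by band. The
 * offset table has num_bands + 1 entries; band b covers the index range
 * [band_offsets[b], band_offsets[b+1]). The optional interpolation of the
 * scale factors mentioned below is not shown. */
void scale_by_scale_factor_bands(float *coeffs,               /* N coefficients, interleaved order */
                                 const float *scale_factors,  /* one scale factor per band         */
                                 const int *band_offsets,     /* num_bands + 1 offsets             */
                                 int num_bands)
{
    for (int b = 0; b < num_bands; b++)
        for (int i = band_offsets[b]; i < band_offsets[b + 1]; i++)
            coeffs[i] *= scale_factors[b];
}

With band offsets such as {0, 4, 10, ...} this reproduces the prose example that follows, where coefficients x0 to x3 are scaled by s0 and x4 to x9 by s1.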
For example, de-quantizer36, independent from signalization34, assigns frequency-domain coefficients with indices 0 to 3 to the first scale factor band and accordingly the first scale factor S0, the frequency-domain coefficients with indices 4 to 9 to the second scale factor band and thus scale factor S1and so forth. The scale factor bounds are merely meant to be illustrative. The de-quantizer36could, for example, in order to de-quantize the frequency-domain coefficients24perform a multiplication using the associated scale factor, i.e. compute frequency-domain coefficient x0to be x0·s0, x1to be x1·s0, . . . , x3to be x3·s0, x4to be x4·s1, . . . , x9to be x9·s1, and so on. Alternatively, the de-quantizer36may perform an interpolation of the scale factors actually used for de-quantization of the frequency-domain coefficients24from the coarse spectral resolution defined by the scale factor bands. The interpolation may be independent from the signalization34. Alternatively, however, the latter interpolation may be dependent on the signalization in order to account for the different spectro-temporal sampling positions of the frequency-domain coefficients24depending on the current frame being of the split transform type or one/long transform type. FIG.2illustrates that up to the input side of activatable de-interleaver38, the order among the frequency-domain coefficients24remains the same and the same applies, at least substantially, with respect to the overall operation up to that point.FIG.2shows that upstream of activatable de-interleaver38, further operations may be performed by the inverse transformer16. For example, inverse transformer16could be configured to perform noise filling onto the frequency-domain coefficients24. For example, in the sequence of frequency-domain coefficients24scale factor bands, i.e. groups of inbound frequency-domain coefficients in the order following indices 0 to N−1, could be identified, where all frequency-domain coefficients24of the respective scale factor bands are quantized to zero. Such frequency-domain coefficients could be filled, for example, using artificial noise generation such as, for example, using a pseudorandom number generator. The strength/level of the noise filled into a zero-quantized scale factor band could be adjusted using the scale factor of the respective scale factor band as same is not needed for scaling since the spectral coefficients therein are all zero. Such a noise filling is shown inFIG.2at40and described in more detail in an embodiment in patent EP2304719A1 [6]. FIG.2shows further that inverse transformer16may be configured to support joint-stereo coding and/or inter-channel stereo prediction. In the framework of inter-channel stereo prediction, the inverse transformer16could, for example, predict42the spectrum in the non-de-interleaved arrangement represented by the order of indices 0 to N−1 from another channel of the audio signal. That is, it could be that the frequency-domain coefficients24describe the spectrogram of a channel of a stereo audio signal, and that the inverse transformer16is configured to treat the frequency-domain coefficients24as a prediction residual of a prediction signal derived from the other channel of this stereo audio signal. This inter-channel stereo prediction could be, for example, performed at some spectral granularity independent from signalization34. 
The complex prediction parameters44controlling the complex stereo prediction42could for example activate the complex stereo prediction42for certain ones of the aforementioned scale factor bands. For each scale factor band for which complex prediction is activated by way of the complex prediction parameter44, the scaled frequency-domain coefficients24, arranged in the order of 0 to N−1, residing within the respective scale factor band, would be summed-up with the inter-channel prediction signal obtained from the other channel of the stereo audio signal. A complex factor contained within the complex prediction parameters44for this respective scale factor band could control the prediction signal. Further, within the joint-stereo coding framework, the inverse transformer16could be configured to perform MS decoding46. That is, decoder10ofFIG.1could perform the operations described so far twice, once for a first channel and another time for a second channel of a stereo audio signal, and controlled via MS parameters within the data stream20, the inverse transformer16could MS decode these two channels or leave them as they are, namely as left and right channels of the stereo audio signal. The MS parameters48could switch between MS coding on a frame level or even at some finer level such as in units of scale factor bands or groups thereof. In case of activated MS decoding, for example, the inverse transformer16could form a sum of the corresponding frequency-domain coefficients24in the coefficients' order 0 to N−1, with corresponding frequency-domain coefficients of the other channel of the stereo audio signal, or a difference thereof. FIG.2then shows that the activatable de-interleaver38is responsive to the signalization34for the current frame in order to, in case of the current frame being signaled by signalization34to be a split transform frame, de-interleave the inbound frequency-domain coefficients so as to obtain two transforms, namely a leading transform50and a trailing transform52, and to leave the frequency-domain coefficients interleaved so as to result in one transform54in case of the signalization34indicating the current frame to be a long transform frame. In case of de-interleaving, de-interleaver38forms one transform out of50and52, a first short transform out of the frequency-domain coefficients having even indices, and the other short transform out of the frequency-domain coefficients at the uneven index positions. For example, the frequency-domain coefficients of even index could form the leading transform (when starting at index 0), whereas the others form the trailing transform. Transforms50and52are subject to inverse transformation of shorter transform length resulting in time-domain portions56and58, respectively. Combiner18ofFIG.1correctly positions time-domain portions56and58in time, namely the time-domain portion56resulting from the leading transform50in front of the time-domain portion58resulting from the trailing transform52, and performs the overlap-and-add process there-between and with time-domain portions resulting from preceding and succeeding frames of the audio signal. In case of non-de-interleaving, the frequency-domain coefficients arriving at the interleaver38constitute the long transform54as they are, and inverse transformation stage40performs an inverse transform thereon so as to result in a time-domain portion60spanning over, and beyond, the current frame's26whole time interval. 
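The activatable de-interleaver just described can be sketched as follows, with hypothetical names: for a split-transform frame the even-indexed coefficients form the leading half-length transform and the odd-indexed coefficients the trailing one, after which each half would be fed to a half-length inverse transform, whereas for a long-transform frame the interleaved sequence is passed on unchanged to the full-length inverse transform.

/* Hedged sketch of the de-interleaving controlled by the split-transform
 * signalization. The inverse transforms themselves are not shown; the two
 * output halves would be handed to two half-length inverse MDCTs, while a
 * non-split frame keeps the input array as the single long transform. */
void deinterleave_split_transform(const float *interleaved, /* N coefficients            */
                                  float *leading,           /* out: N/2 leading coeffs   */
                                  float *trailing,          /* out: N/2 trailing coeffs  */
                                  int N)
{
    for (int k = 0; k < N / 2; k++) {
        leading[k]  = interleaved[2 * k];      /* even indices */
        trailing[k] = interleaved[2 * k + 1];  /* odd indices  */
    }
}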
The combiner18combines the time-domain portion60with respective time-domain portions resulting from preceding and succeeding frames of the audio signal. The frequency-domain audio decoder described so far enables transform length switching in a manner which allows to be compatible with frequency-domain audio decoders which are not responsive to signalization34. In particular, such “old fashioned” decoders would erroneously assume that frames which are actually signaled by signalization34to be of the split transform type, to be of the long transform type. That is, they would erroneously leave the split-type frequency-domain coefficients interleaved and perform an inverse transformation of the long transform length. However, the resulting quality of the affected frames of the reconstructed audio signal would still be quite reasonable. The coding efficiency penalty, in turn, is still quite reasonable, too. The coding efficiency penalty results from the disregarding signalization34as the frequency-domain coefficients and scale factors are encoded without taking into account the varying coefficients' meaning and exploiting this variation so as to increase coding efficiency. However, the latter penalty is comparatively small compared to the advantage of allowing backward compatibility. The latter statement is also true with respect to the restriction to activate and deactivate noise filler40, complex stereo prediction42and MS decoding46merely within continuous spectral portions (scale factor bands) in the de-interleaved state defined by indices 0 to N−1 inFIG.2. The opportunity to render control these coding tools specifically for the type of frame (e.g. having two noise levels) could possibly provide advantages, but the advantages are overcompensated by the advantage of having backward compatibility. FIG.2shows that the decoder ofFIG.1could even be configured to support TNS coding while nevertheless keeping the backward compatibility with decoders being insensitive for signalization34. In particular,FIG.2illustrates the possibility of performing inverse TNS filtering after any complex stereo prediction42and MS decoding46, if any. In order to maintain backward compatibility, the inverse transformer16is configured to perform inverse TNS filtering62onto a sequence of N coefficients irrespective of signalization34using respective TNS coefficients64. By this measure, the data stream20codes the TNS coefficients64equally, irrespective of signalization34. That is, the number of TNS coefficients and the way of coding same is the same. However, the inverse transformer16is configured to differently apply the TNS coefficients64. In case of the current frame being a long transform frame, inverse TNS filtering is performed onto the long transform54, i.e. the frequency-domain coefficients sequentialized in the interleaved state, and in case of the current frame being signaled by signalization34as a split transform frame, inverse transformer16inverse TNS filters62a concatenation of leading transform50and trailing transform52, i.e. the sequence of frequency-domain coefficients of indices 0, 2, . . . , N−2, 1, 3, 5, . . . , N−1. Inverse TNS filtering62may, for example, involve inverse transformer16applying a filter, the transfer function of which is set according to the TNS coefficients64onto the de-interleaved or interleaved sequence of coefficients having passed the sequence of processing upstream de-interleaver38. 
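A hedged sketch of this coefficient-order handling for the inverse TNS filtering is given below. A generic all-pole filter run across frequency stands in for the actual TNS synthesis filter (the real tool works with lattice coefficients and per-window filter ranges, which are ignored here); the point of the sketch is only the ordering: the interleaved sequence is filtered as is for a long-transform frame, whereas for a split-transform frame the concatenation 0, 2, ..., N−2, 1, 3, ..., N−1 is filtered, written here with a temporary de-interleave and re-interleave as in the FIG. 3 variant described further below.

/* Hedged sketch: apply a stand-in inverse TNS filter either to the
 * interleaved long spectrum or to the de-interleaved concatenation
 * leading||trailing, then restore the interleaved order. */
static void allpole_across_frequency(float *x, int len, const float *a, int order)
{
    /* x[i] -= sum_j a[j-1] * x[i-j]; a[] holds the "order" prediction coefficients */
    for (int i = 0; i < len; i++)
        for (int j = 1; j <= order && j <= i; j++)
            x[i] -= a[j - 1] * x[i - j];
}

void inverse_tns_over_frame(float *interleaved, int N, int split_transform,
                            const float *a, int order)
{
    if (!split_transform) {
        allpole_across_frequency(interleaved, N, a, order);   /* long transform: as is */
        return;
    }
    float tmp[2048];                                /* assumes N <= 2048             */
    for (int k = 0; k < N / 2; k++) {               /* order 0,2,...,N-2,1,3,...,N-1 */
        tmp[k]         = interleaved[2 * k];
        tmp[N / 2 + k] = interleaved[2 * k + 1];
    }
    allpole_across_frequency(tmp, N, a, order);
    for (int k = 0; k < N / 2; k++) {               /* restore the interleaved order */
        interleaved[2 * k]     = tmp[k];
        interleaved[2 * k + 1] = tmp[N / 2 + k];
    }
}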
Thus, an “old fashioned” decoder which accidentally treats frames of the split transform type as long transform frames, applies TNS coefficients64which have been generated by an encoder by analyzing a concatenation of two short transforms, namely50and52, onto transform54and accordingly produces, by way of the inverse transform applied onto transform54, an incorrect time-domain portion60. However, even this quality degradation at such decoders might be endurable for listeners in case of restricting the use of such split transform frames to occasions where the signal represents rain or applause or the like. For the sake of completeness,FIG.3shows that inverse TNS filtering62of inverse transformer16may also be inserted elsewhere into the sequence of processing shown inFIG.2. For example, the inverse TNS filtering62could be positioned upstream the complex stereo prediction42. In order to keep the de-interleaved domain downstream and upstream the inverse TNS filtering62,FIG.3shows that in that case the frequency domain coefficients24are merely preliminarily de-interleaved66, in order to perform the inverse TNS filtering68within the de-interleaved concatenated state where the frequency-domain coefficients24as processed so far are in the order of indices 0, 2, 4, . . . , N−2, 1, 3, . . . , N−3, N−1, whereupon the de-interleaving is reversed70so as to obtain the frequency-domain coefficients in the inversely TNS filtered version in their interleaved order 0, 1, 2, . . . , N−1 again. The position of the inverse TNS filtering62within the sequence of processing steps shown inFIG.2could be fixed or could be signaled via the data stream20such as, for example, on a frame by frame basis or at some other granularity. It should be noted that, for sake of alleviating the description, the above embodiments concentrated on the juxtaposition of long transform frames and split transform frames only. However, embodiments of the present application may well be extended by the introduction of frames of other transform type such as frames of eight short transforms. In this regard, it should be noted that the afore-mentioned agnosticism, merely relates to frames distinguished, by way of a further signalization, from such other frames of any third transform type so that an “old fashioned” decoder, by inspecting the further signalization contained in all frames, accidentally treats split transform frames as long transform frames, and merely the frames distinguished from the other frames (all except for split transform and long transform frames) would comprise signalization34. As far as such other frames (all except for split transform and long transform frames) are concerned, it is noted that the extractors'12and14mode of operation such as context selection and so forth could depend on the further signalization, that is, said mode of operation could be different from the mode of operation applied for split transform and long transform frames. Before describing a suitable encoder fitting to the decoder embodiments described above, an implementation of the above embodiments is described which would be suitable for accordingly upgrading xHE-AAC-based audio coders/decoders to allow the support of transform splitting in a backward-compatible manner. That is, in the following a possibility is described how to perform transform length splitting in an audio codec which is based on MPEG-D xHE-AAC (USAC) with the objective of improving the coding quality of certain audio signals at low bit rates. 
The transform splitting tool is signaled semi-backward compatibly such that legacy xHE-AAC decoders can parse and decode bitstreams according to the above embodiments without obvious audio errors or drop-outs. As will be shown hereinafter, this semi-backward compatible signalization exploits unused possible values of a frame syntax element controlling, in a conditionally coded manner, the usage of noise filling. While legacy xHE-AAC decoders are not sensitive for these possible values of the respective noise filling syntax element, enhanced audio decoders are. In particular, the implementation described below enables, in line with the embodiments described above, to offer an intermediate transform length for coding signals similar to rain or applause, advantageously a split long block, i.e. two sequential transforms, each of half or a quarter of the spectral length of a long block, with a maximum time overlap between these transforms being less than a maximum temporal overlap between consecutive long blocks. To allow coded bitstreams with transform splitting, i.e. signalization34, to be read and parsed by legacy xHE-AAC decoders, splitting should be used in a semi-backward compatible way: the presence of such a transform splitting tool should not cause legacy decoders to stop—or not even start—decoding. Readability of such bitstreams by xHE-AAC infrastructure can also facilitate market adoption. To achieve the just mentioned objective of semi-backward compatibility for using transform splitting in the context of xHE-AAC or its potential derivatives, a transform splitting is signaled via the noise filling signalization of xHE-AAC. In compliance with the embodiments described above, in order to build transform splitting into xHE-AAC coders/decoders, instead of a frequency-domain (FD) stop-start window sequence a split transform consisting of two separate, half-length transforms may be used. The temporally sequential half-length transforms are interleaved into a single stop-start like block in a coefficient-by-coefficient fashion for decoders which do not support transform splitting, i.e. legacy xHE-AAC decoders. The signaling via noise filling signalization is performed as described hereafter. In particular, the 8-bit noise filling side information may be used to convey transform splitting. This is feasible because the MPEG-D standard [4] states that all 8 bits are transmitted even if the noise level to be applied is zero. In that situation, some of the noise-fill bits can be reused for transform splitting, i.e. for signalization34. Semi-backward compatibility regarding bitstream parsing and playback by legacy xHE-AAC decoders may be ensured as follows. Transform splitting is signaled via a noise level of zero, i.e. the first three noise-fill bits all having a value of zero, followed by five non-zero bits (which traditionally represent a noise offset) containing side information concerning the transform splitting as well as the missing noise level. Since a legacy xHE-AAC decoder disregards the value of the 5-bit offset if the 3-bit noise level is zero, the presence of transform splitting signalization34only has an effect on the noise filling in the legacy decoder: noise filling is turned off since the first three bits are zero, and the remainder of the decoding operation runs as intended. 
In particular, a split transform is processed like a traditional stop-start block with a full-length inverse transform (due to the above mentioned coefficient interleaving) and no de-interleaving is performed. Hence, a legacy decoder still offers “graceful” decoding of the enhanced data stream/bitstream20because it does not need to mute the output signal22or even abort the decoding upon reaching a frame of the transform splitting type. Naturally, such a legacy decoder is unable to provide a correct reconstruction of split transform frames, leading to deteriorated quality in affected frames in comparison with decoding by an appropriate decoder in accordance withFIG.1, for instance. Nonetheless, assuming the transform splitting is used as intended, i.e. only on transient or noisy input at low bitrates, the quality through an xHE-AAC decoder should be better than if the affected frames dropped out due to muting or otherwise led to obvious playback errors. Concretely, an extension of an xHE-AAC coder/decoder towards transform splitting could be as follows. In accordance with the above description, the new tool to be used for xHE-AAC could be called transform splitting (TS). It would be a new tool in the frequency-domain (FD) coder of xHE-AAC or, for example, MPEG-H 3D-Audio being based on USAC [4]. Transform splitting would then be usable on certain transient signal passages as an alternative to regular long transforms (which lead to time-smearing, especially pre-echo, at low bitrates) or eight-short transforms (which lead to spectral holes and bubble artifacts at low bitrates). TS might then be signaled semi-backward-compatibly by FD coefficient interleaving into a long transform which can be parsed correctly by a legacy MPEG-D USAC decoder. A description of this tool would be similar to the above description. When TS is active in a long transform, two half-length MDCTs are employed instead of one full-length MDCT, and the coefficients of the two MDCTs, i.e.50and52, are transmitted in a line-by-line interleaved fashion. The interleaved transmission had already been used, for example, in case of FD (stop-)start transforms, with the coefficients of the first-in-time MDCT placed at even and the coefficients of the second-in-time MDCT placed at odd indices (where the indexing begins at zero), but a decoder not being able to handle stop-start transforms would not have been able to correctly parse the data stream. That is, owing to the different contexts used for entropy coding the frequency-domain coefficients of such a stop-start transform, and owing to a syntax tailored to the halved transforms, any decoder not able to support stop-start windows would have had to disregard the respective stop-start window frames. Briefly referring back to the embodiment described above, this means that the decoder ofFIG.1could, beyond the description brought forward so far, be able to alternatively support further transform lengths, i.e. a subdivision of certain frames26into even more than two transforms, using a signalization which extends signalization34. With regard to the juxtaposition of transform subdivisions of frames26other than the split transform activated using signalization34, however, FD coefficient extractor12and scaling factor extractor14would be sensitive to this signalization in that their mode of operation would change in dependence on that extra signalization in addition to signalization34.
Further, a streamlined transmission of TNS coefficients, MS parameters and complex prediction parameters, tailored to the signaled transform type other than the split transform type according to56and59, would necessitate that each decoder has to be able to be responsive to, i.e. understand, the signalization selecting between these “known transform types” or frames including the long transform type according to60, and other transform types such as one subdividing frames into eight short transforms as in case of AAC, for example. In that case, this “known signalization” would identify frames for which signalization34signals the split transform type, as frames of the long transform type so that decoders not able to understand signalization34, treat these frames as long transform frames rather than frames of other types, such as 8-short-transform type frames. Back again to the description of a possible extension of xHE-AAC, certain operational constraints could be provided in order to build a TS tool into this coding framework. For example, TS could be allowed to be used only in an FD long-start or stop-start window. That is, the underlying syntax-element window_sequence could be requested to be equal to 1. Besides, due to the semi-backward-compatible signaling, it may be a requirement that TS can only be applied when the syntax element noiseFilling is one in the syntax container UsacCoreConfig( ). When TS is signaled to be active, all FD tools except for TNS and inverse MDCT operate on the interleaved (long) set of TS coefficients. This allows for the reuse of the scale factor band offset and long-transform arithmetic coder tables as well as the window shapes and overlap lengths. In the following, terms and definitions are presented which are used in the following in order to explain as to how the USAC standard described in [4] could be extended to offer the backward-compatible TS functionality, wherein sometimes reference is made to sections within that standard for the interested reader. A new data element could be:split_transform binary flag indicating whether TS is utilized in the current frame and channel New help elements could be:window_sequence FD window sequence type for the current frame and channel (section 6.2.9)noise_offset noise-fill offset to modify scale factors of zero-quantized bands (section 7.2)noise_level noise-fill level representing amplitude of added spectrum noise (section 7.2)half_transform_length one half of coreCoderFrameLength (ccfl, the transform length, section 6.1.1)half_lowpass_line one half of the number of MDCT lines transmitted for the current channel. The decoding of an FD (stop-) start transform using transform splitting (TS) in the USAC framework could be performed on purely sequential steps as follows: First, a decoding of split_transform and half_lowpass_line could be performed. split_transform actually would not represent an independent bit-stream element but is derived from the noise filling elements, noise_offset and noise_level, and in case of a UsacChannelPairElement( ) the common_window flag in StereoCoreToolInfo( ) If noiseFilling==0, split_transform is 0. Otherwise,if ((noiseFilling !=0) && (noise_level==0)) {split_transform=(noise_offset & 16)/16;noise_level=(noise_offset & 14)/2;noise_offset=(noise_offset & 1)*16;}else {split_transform=0;} In other words, if noise_level==0, noise_offset contains the split_transform flag followed by 4 bit of noise filling data, which are then rearranged. 
Since this operation changes the values of noise_level and noise_offset, it has to be executed before the noise filling process of section 7.2. Furthermore, if common_window==1 in a UsacChannelPairElement( ), split_transform is determined only in the left (first) channel; the right channel's split_transform is set equal to (i.e. copied from) the left channel's split_transform, and the above pseudo-code is not executed in the right channel. half_lowpass_line is determined from the “long” scale factor band offset table, swb_offset_long_window, and the max_sfb of the current channel, or in case of stereo and common_window==1, max_sfb_ste:
lowpass_sfb = max_sfb_ste in elements with StereoCoreToolInfo( ) and common_window==1, and
lowpass_sfb = max_sfb otherwise.
Based on the igFilling flag, half_lowpass_line is derived:
if (igFilling != 0) {
lowpass_sfb = max(lowpass_sfb, ig_stop_sfb);
}
half_lowpass_line = swb_offset_long_window[lowpass_sfb]/2;
Then, as a second step, de-interleaving of the half-length spectra for temporal noise shaping would be performed. After spectrum de-quantization, noise filling, and scale factor application and prior to the application of Temporal Noise Shaping (TNS), the TS coefficients in spec[ ] are de-interleaved using a helper buffer[ ]:
for (i=0, i2=0; i<half_lowpass_line; i+=1, i2+=2) {
spec[i] = spec[i2]; /* isolate 1st window */
buffer[i] = spec[i2+1]; /* isolate 2nd window */
}
for (i=0; i<half_lowpass_line; i+=1) {
spec[i+half_lowpass_line] = buffer[i]; /* copy 2nd window */
}
The in-place de-interleaving effectively places the two half-length TS spectra on top of each other, and the TNS tool now operates as usual on the resulting full-length pseudo-spectrum. Referring to the above, such a procedure has been described with respect toFIG.3. Then, as the third step, temporary re-interleaving would be used along with two sequential inverse MDCTs. If common_window==1 in the current frame or the stereo decoding is performed after TNS decoding (tns_on_lr==0 in section 7.8), spec[ ] has to be re-interleaved temporarily into a full-length spectrum:
for (i=0; i<half_lowpass_line; i+=1) {
buffer[i] = spec[i]; /* copy 1st window */
}
for (i=0, i2=0; i<half_lowpass_line; i+=1, i2+=2) {
spec[i2] = buffer[i]; /* merge 1st window */
spec[i2+1] = spec[i+half_lowpass_line]; /* merge 2nd window */
}
The resulting pseudo-spectrum is used for stereo decoding (section 7.7) and to update dmx_re_prev[ ] (sections 7.7.2 and A.1.4). In case of tns_on_lr==0, the stereo-decoded full-length spectra are again de-interleaved by repeating the process of section A.1.3.2. Finally, the 2 inverse MDCTs are calculated with ccfl and the channel's window_shape of the current and last frame. See section 7.9 andFIG.1. Some modification may be made to the complex prediction stereo decoding of xHE-AAC. An implicit semi-backward-compatible signaling method may alternatively be used in order to build TS into xHE-AAC. The above described an approach which employs one bit in a bit-stream to signal usage of the inventive transform splitting, contained in split_transform, to an inventive decoder. In particular, such signaling (let's call it explicit semi-backward-compatible signaling) allows the following legacy bitstream data (here the noise filling side-information) to be used independently of the inventive signal: in the present embodiment, the noise filling data does not depend on the transform splitting data, and vice versa.
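As a purely illustrative companion to the de-interleaving and temporary re-interleaving loops given above, the following Python sketch performs the same index shuffling on a flat coefficient list; spec and half_lowpass_line are assumed to be available, and the function names are not part of the standard.

# Minimal sketch of de-interleaving / re-interleaving the split transform
# coefficients, mirroring the loops above. `spec` is a flat list of
# interleaved coefficients; only the first 2*half_lowpass_line entries matter.

def deinterleave_ts(spec, half_lowpass_line):
    # Place the two half-length TS spectra on top of each other for TNS.
    first = [spec[2 * i] for i in range(half_lowpass_line)]       # even lines: 1st window
    second = [spec[2 * i + 1] for i in range(half_lowpass_line)]  # odd lines: 2nd window
    spec[:half_lowpass_line] = first
    spec[half_lowpass_line:2 * half_lowpass_line] = second
    return spec

def reinterleave_ts(spec, half_lowpass_line):
    # Temporarily rebuild the legacy-compatible full-length pseudo-spectrum
    # needed by the stereo tools.
    merged = [0] * (2 * half_lowpass_line)
    merged[0::2] = spec[:half_lowpass_line]                       # 1st window -> even lines
    merged[1::2] = spec[half_lowpass_line:2 * half_lowpass_line]  # 2nd window -> odd lines
    spec[:2 * half_lowpass_line] = merged
    return spec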
For example, noise filling data consisting of all-zeros (noise_level=noise_offset=0) may be transmitted while split_transform may hold any possible value (being a binary flag, either 0 or 1). In cases where such strict independence between the legacy and the inventive bit-stream data is not necessitated and the inventive signal is a binary decision, the explicit transmission of a signaling bit can be avoided, and said binary decision can be signaled by the presence or absence of what may be called implicit semi-backward-compatible signaling. Taking again the above embodiment as an example, the usage of transform splitting could be transmitted by simply using the inventive signaling: If noise_level is zero and, at the same time, noise_offset is not zero, then split_transform is set equal to 1. If both noise_level and noise_offset are not zero, split_transform is set equal to 0. A dependence of the inventive implicit signal on the legacy noise-fill signal arises when both noise_level and noise_offset are zero. In this case, it is unclear whether legacy or inventive implicit signaling is being used. To avoid such ambiguity, the value of split_transform has to be defined in advance. In the present example, it is appropriate to define split_transform=0 if the noise filling data consists of all-zeros, since this is what legacy encoders without transform splitting shall signal when noise filling is not to be used in a frame. The issue which remains to be solved in case of implicit semi-backward-compatible signaling is how to signal split_transform==1 and no noise filling at the same time. As explained, the noise-fill data do not have to be all-zero, and if a noise magnitude of zero is requested, noise_level ((noise_offset & 14)/2 as above) has to equal 0. This leaves only a noise_offset ((noise_offset & 1)*16 as above) greater than 0 as a solution. Fortunately, the value of noise_offset is ignored if no noise filling is performed in a decoder based on USAC [4], so this approach turns out to be feasible in the present embodiment. Therefore, the signaling of split_transform in the pseudo-code as above could be modified as follows, using the saved TS signaling bit to transmit 2 bits (4 values) instead of 1 bit for noise_offset:if ((noiseFilling !=0) && (noise_level==0) && (noise_offset !=0)) {split_transform=1;noise_level=(noise_offset & 28)/4;noise_offset=(noise_offset & 3)*8;}else {split_transform=0;} Accordingly, applying this alternative, the description of USAC could be extended using the following description. The tool description would be largely the same. That is, When Transform splitting (TS) is active in a long transform, two half-length MDCTs are employed instead of one full-length MDCT. The coefficients of the two MDCTs are transmitted in a line-by-line interleaved fashion as a traditional frequency domain (FD) transform, with the coefficients of the first-in-time MDCT placed at even and the coefficients of the second-in-time MDCT placed at odd indices. Operational constraints could necessitate that TS can only be used in a FD long-start or stop-start window (window_sequence==1) and that TS can only be applied when noiseFilling is 1 in UsacCoreConfig( ) When TS is signaled, all FD tools except for TNS and inverse MDCT operate on the interleaved (long) set of TS coefficients. This allows the reuse of the scale factor band offset and long-transform arithmetic coder tables as well as the window shapes and overlap lengths. 
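A minimal Python sketch of the implicit variant just described is given below. It assumes only that the three noise filling elements have already been parsed, mirrors the modified pseudo-code above, and is not an excerpt from the standard.

# Sketch of implicit semi-backward-compatible signaling: split_transform is
# inferred from the noise filling elements instead of occupying its own bit.

def derive_split_transform_implicit(noiseFilling, noise_level, noise_offset):
    if noiseFilling != 0 and noise_level == 0 and noise_offset != 0:
        split_transform = 1
        noise_level = (noise_offset & 28) // 4   # 3 bits of noise level
        noise_offset = (noise_offset & 3) * 8    # 2 bits of noise offset
    else:
        split_transform = 0                      # includes the all-zero (legacy) case
    return split_transform, noise_level, noise_offset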
Terms and definitions used hereinafter involve the following help elements:
common_window: indicates if channel 0 and channel 1 of a CPE use identical window parameters (see ISO/IEC 23003-3:2012 section 6.2.5.1.1).
window_sequence: FD window sequence type for the current frame and channel (see ISO/IEC 23003-3:2012 section 6.2.9).
tns_on_lr: indicates the mode of operation for TNS filtering (see ISO/IEC 23003-3:2012 section 7.8.2).
noiseFilling: this flag signals the usage of the noise filling of spectral holes in the FD core coder (see ISO/IEC 23003-3:2012 section 6.1.1.1).
noise_offset: noise-fill offset to modify scale factors of zero-quantized bands (see ISO/IEC 23003-3:2012 section 7.2).
noise_level: noise-fill level representing the amplitude of added spectrum noise (see ISO/IEC 23003-3:2012 section 7.2).
split_transform: binary flag indicating whether TS is utilized in the current frame and channel.
half_transform_length: one half of coreCoderFrameLength (ccfl, the transform length, see ISO/IEC 23003-3:2012 section 6.1.1).
half_lowpass_line: one half of the number of MDCT lines transmitted for the current channel.
The decoding process involving TS could be described as follows. In particular, the decoding of an FD (stop-)start transform with TS is performed in three sequential steps as follows. First, decoding of split_transform and half_lowpass_line is performed. The help element split_transform does not represent an independent bit-stream element but is derived from the noise filling elements, noise_offset and noise_level, and in case of a UsacChannelPairElement( ), the common_window flag in StereoCoreToolInfo( ). If noiseFilling==0, split_transform is 0. Otherwise,
if ((noiseFilling != 0) && (noise_level == 0)) {
split_transform = 1;
noise_level = (noise_offset & 28)/4;
noise_offset = (noise_offset & 3)*8;
}
else {
split_transform = 0;
}
In other words, if noise_level==0, noise_offset contains the split_transform flag followed by 4 bits of noise filling data, which are then rearranged. Since this operation changes the values of noise_level and noise_offset, it has to be executed before the noise filling process of ISO/IEC 23003-3:2012 section 7.2. Furthermore, if common_window==1 in a UsacChannelPairElement( ), split_transform is determined only in the left (first) channel; the right channel's split_transform is set equal to (i.e. copied from) the left channel's split_transform, and the above pseudo-code is not executed in the right channel. The help element half_lowpass_line is determined from the “long” scale factor band offset table, swb_offset_long_window, and the max_sfb of the current channel, or in case of stereo and common_window==1, max_sfb_ste:
lowpass_sfb = max_sfb_ste in elements with StereoCoreToolInfo( ) and common_window==1, and
lowpass_sfb = max_sfb otherwise.
Based on the igFilling flag, half_lowpass_line is derived:
if (igFilling != 0) {
lowpass_sfb = max(lowpass_sfb, ig_stop_sfb);
}
half_lowpass_line = swb_offset_long_window[lowpass_sfb]/2;
Then, de-interleaving of the half-length spectra for temporal noise shaping is performed.
After spectrum de-quantization, noise filling, and scale factor application and prior to the application of Temporal Noise Shaping (TNS), the TS coefficients in spec[ ] are de-interleaved using a helper buffer[ ]:
for (i=0, i2=0; i<half_lowpass_line; i+=1, i2+=2) {
spec[i] = spec[i2]; /* isolate 1st window */
buffer[i] = spec[i2+1]; /* isolate 2nd window */
}
for (i=0; i<half_lowpass_line; i+=1) {
spec[i+half_lowpass_line] = buffer[i]; /* copy 2nd window */
}
The in-place de-interleaving effectively places the two half-length TS spectra on top of each other, and the TNS tool now operates as usual on the resulting full-length pseudo-spectrum. Finally, temporary re-interleaving and two sequential inverse MDCTs may be used: if common_window==1 in the current frame or the stereo decoding is performed after TNS decoding (tns_on_lr==0 in section 7.8), spec[ ] has to be re-interleaved temporarily into a full-length spectrum:
for (i=0; i<half_lowpass_line; i+=1) {
buffer[i] = spec[i]; /* copy 1st window */
}
for (i=0, i2=0; i<half_lowpass_line; i+=1, i2+=2) {
spec[i2] = buffer[i]; /* merge 1st window */
spec[i2+1] = spec[i+half_lowpass_line]; /* merge 2nd window */
}
The resulting pseudo-spectrum is used for stereo decoding (ISO/IEC 23003-3:2012 section 7.7) and to update dmx_re_prev[ ] (ISO/IEC 23003-3:2012 section 7.7.2), and in case of tns_on_lr==0, the stereo-decoded full-length spectra are again de-interleaved by repeating the de-interleaving process described above. Finally, the 2 inverse MDCTs are calculated with ccfl and the channel's window_shape of the current and last frame. The processing for TS follows the description given in ISO/IEC 23003-3:2012 section “7.9 Filterbank and block switching”. The following additions should be taken into account. The TS coefficients in spec[ ] are de-interleaved using a helper buffer[ ] with N, the window length based on the window_sequence value:
for (i=0, i2=0; i<N/2; i+=1, i2+=2) {
spec[0][i] = spec[i2]; /* isolate 1st window */
buffer[i] = spec[i2+1]; /* isolate 2nd window */
}
for (i=0; i<N/2; i+=1) {
spec[1][i] = buffer[i]; /* copy 2nd window */
}
The IMDCT for the half-length TS spectrum is then defined as:
x_{(0,1),n} = \frac{2}{N} \sum_{k=0}^{N/4-1} spec[(0,1)][k] \cos\left( \frac{4\pi}{N} (n + n_0) \left(k + \frac{1}{2}\right) \right), \quad \text{for } 0 \le n < \frac{N}{2}
Subsequent windowing and block switching steps are defined in the next subsections. Transform splitting with STOP_START_SEQUENCE would look like the following description: a STOP_START_SEQUENCE in combination with transform splitting was depicted inFIG.2. It comprises two overlapped and added half-length windows56,58with a length of N_l/2, which is 1024 (960, 768). N_s is set to 256 (240, 192), respectively. The windows (0,1) for the two half-length IMDCTs are given as follows:
W_{(0,1)}(n) = \begin{cases} 0, & \text{for } 0 \le n < \frac{N_l/2 - N_s}{4} \\ W_{(0,1),LEFT,N_s}\left(n - \frac{N_l/2 - N_s}{4}\right), & \text{for } \frac{N_l/2 - N_s}{4} \le n < \frac{N_l/2 + N_s}{4} \\ 1, & \text{for } \frac{N_l/2 + N_s}{4} \le n < \frac{3 N_l/2 - N_s}{4} \\ W_{(0,1),RIGHT,N_s}\left(n + \frac{N_s}{2} - \frac{3 N_l/2 - N_s}{4}\right), & \text{for } \frac{3 N_l/2 - N_s}{4} \le n < \frac{3 N_l/2 + N_s}{4} \\ 0, & \text{for } \frac{3 N_l/2 + N_s}{4} \le n < N_l/2 \end{cases}
where for the first IMDCT the windows
W_{0,LEFT,N_s}(n) = \begin{cases} W_{KBD\_LEFT,N_s}(n), & \text{if window\_shape\_previous\_block}==1 \\ W_{SIN\_LEFT,N_s}(n), & \text{if window\_shape\_previous\_block}==0 \end{cases}
W_{0,RIGHT,N_s}(n) = \begin{cases} W_{KBD\_RIGHT,N_s}(n), & \text{if window\_shape}==1 \\ W_{SIN\_RIGHT,N_s}(n), & \text{if window\_shape}==0 \end{cases}
are applied, and for the second IMDCT the windows
W_{1,LEFT,N_s}(n) = \begin{cases} W_{KBD\_LEFT,N_s}(n), & \text{if window\_shape}==1 \\ W_{SIN\_LEFT,N_s}(n), & \text{if window\_shape}==0 \end{cases}
W_{1,RIGHT,N_s}(n) = \begin{cases} W_{KBD\_RIGHT,N_s}(n), & \text{if window\_shape}==1 \\ W_{SIN\_RIGHT,N_s}(n), & \text{if window\_shape}==0 \end{cases}
are applied. The overlap and add between the two half-length windows resulting in the windowed time domain values z_{i,n} is described as follows.
Here, N_l is set to 2048 (1920, 1536) and N_s to 256 (240, 192), respectively:
z_{i,n}(n) = \begin{cases} 0, & \text{for } 0 \le n < N_s \\ x_{0,n-N_s} \cdot W_0(n-N_s), & \text{for } N_s \le n < \frac{2 N_l - N_s}{4} \\ x_{0,n-N_s} \cdot W_0(n-N_s) + x_{1,n-(N_l/2-N_s)} \cdot W_1(n-(N_l/2-N_s)), & \text{for } \frac{2 N_l - N_s}{4} \le n < \frac{2 N_l + N_s}{4} \\ x_{1,n-(N_l/2-N_s)} \cdot W_1(n-(N_l/2-N_s)), & \text{for } \frac{2 N_l + N_s}{4} \le n < N \end{cases}
Transform splitting with LONG_START_SEQUENCE would look like the following description: the LONG_START_SEQUENCE in combination with transform splitting is depicted inFIG.4. It comprises three windows defined as follows, where N_l/2 is set to 1024 (960, 768) and N_s is set to 256 (240, 192), respectively.
W_0(n) = \begin{cases} 1, & \text{for } 0 \le n < \frac{3 N_l/2 - N_s}{4} \\ W_{0,RIGHT,N_s}\left(n + \frac{N_s}{2} - \frac{3 N_l/2 - N_s}{4}\right), & \text{for } \frac{3 N_l/2 - N_s}{4} \le n < \frac{3 N_l/2 + N_s}{4} \\ 0, & \text{for } \frac{3 N_l/2 + N_s}{4} \le n < N_l/2 \end{cases}
W_1(n) = \begin{cases} 0, & \text{for } 0 \le n < \frac{N_l/2 - N_s}{4} \\ W_{1,LEFT,N_s}\left(n - \frac{N_l/2 - N_s}{4}\right), & \text{for } \frac{N_l/2 - N_s}{4} \le n < \frac{N_l/2 + N_s}{4} \\ 1, & \text{for } \frac{N_l/2 + N_s}{4} \le n < \frac{3 N_l/2 - N_s}{4} \\ W_{1,RIGHT,N_s}\left(n + \frac{N_s}{2} - \frac{3 N_l/2 - N_s}{4}\right), & \text{for } \frac{3 N_l/2 - N_s}{4} \le n < \frac{3 N_l/2 + N_s}{4} \\ 0, & \text{for } \frac{3 N_l/2 + N_s}{4} \le n < N_l/2 \end{cases}
The left/right window halves are given by:
W_{1,LEFT,N_s}(n) = \begin{cases} W_{KBD\_LEFT,N_s}(n), & \text{if window\_shape}==1 \\ W_{SIN\_LEFT,N_s}(n), & \text{if window\_shape}==0 \end{cases}
W_{(0,1),RIGHT,N_s}(n) = \begin{cases} W_{KBD\_RIGHT,N_s}(n), & \text{if window\_shape}==1 \\ W_{SIN\_RIGHT,N_s}(n), & \text{if window\_shape}==0 \end{cases}
The third window equals the left half of a LONG_START_WINDOW:
W_2(n) = \begin{cases} W_{LEFT,N_l}(n), & \text{for } 0 \le n < N_l/2 \\ 1, & \text{for } N_l/2 \le n < N_l \end{cases}
with
W_{LEFT,N_l}(n) = \begin{cases} W_{KBD\_LEFT,N_l}(n), & \text{if window\_shape\_previous\_block}==1 \\ W_{SIN\_LEFT,N_l}(n), & \text{if window\_shape\_previous\_block}==0 \end{cases}
The overlap and add between the two half-length windows resulting in intermediate windowed time domain values \tilde{Z}_{i,n} is described as follows. Here, N_l is set to 2048 (1920, 1536) and N_s to 256 (240, 192), respectively:
\tilde{Z}_{i,n}(n) = \begin{cases} -x_{0,2N_s-n-1} \cdot W_0(2N_s-n-1), & \text{for } 0 \le n < N_s \\ x_{0,n-N_s} \cdot W_0(n-N_s), & \text{for } N_s \le n < \frac{2 N_l - N_s}{4} \\ x_{0,n-N_s} \cdot W_0(n-N_s) + x_{1,n-(N_l/2-N_s)} \cdot W_1(n-(N_l/2-N_s)), & \text{for } \frac{2 N_l - N_s}{4} \le n < \frac{2 N_l + N_s}{4} \\ x_{1,n-(N_l/2-N_s)} \cdot W_1(n-(N_l/2-N_s)), & \text{for } \frac{2 N_l + N_s}{4} \le n < N \end{cases}
The final windowed time domain values Z_{i,n} are obtained by applying W_2:
Z_{i,n}(n) = \tilde{Z}_{i,n}(n) \cdot W_2(n), \quad \text{for } 0 \le n < N_l
Regardless of whether explicit or implicit semi-backward-compatible signaling is being used, both of which were described above, some modification may be necessitated to the complex prediction stereo decoding of xHE-AAC in order to achieve meaningful operation on the interleaved spectra. The modification to complex prediction stereo decoding could be implemented as follows. Since the FD stereo tools operate on an interleaved pseudo-spectrum when TS is active in a channel pair, no changes are necessitated to the underlying M/S or Complex Prediction processing. However, the derivation of the previous frame's downmix dmx_re_prev[ ] and the computation of the downmix MDST dmx_im[ ] in ISO/IEC 23003-3:2012 section 7.7.2 need to be adapted if TS is used in either channel in the last or current frame:
use_prev_frame has to be 0 if the TS activity changed in either channel from the last to the current frame. In other words, dmx_re_prev[ ] should not be used in that case due to transform length switching.
If TS was or is active, dmx_re_prev[ ] and dmx_re[ ] specify interleaved pseudo-spectra and have to be de-interleaved into their corresponding two half-length TS spectra for correct MDST calculation.
Upon TS activity, 2 half-length MDST downmixes are computed using adapted filter coefficients (Tables 1 and 2) and interleaved into a full-length spectrum dmx_im[ ] (just like dmx_re[ ]).
window_sequence: downmix MDST estimates are computed for each group window pair. use_prev_frame is evaluated only for the first of the two half-window pairs.
For the remaining window pair, the preceding window pair is used in the MDST estimate, which implies use_prev_frame=1. Window shapes: the MDST estimation parameters for the current window, which are filter coefficients as described below, depend on the shapes of the left and right window halves. For the first window, this means that the filter parameters are a function of the current and previous frames' window_shape flags. The remaining window is only affected by the current window_shape.
TABLE 1: MDST Filter Parameters for Current Window (filter_coefs), valid for the current window sequences LONG_START_SEQUENCE and STOP_START_SEQUENCE:
Left half: sine shape, right half: sine shape: [0.185618f, −0.000000f, 0.627371f, 0.000000f, −0.627371f, 0.000000f, −0.185618f]
Left half: KBD shape, right half: KBD shape: [0.203599f, −0.000000f, 0.633701f, 0.000000f, −0.633701f, 0.000000f, −0.203599f]
Left half: sine shape, right half: KBD shape: [0.194609f, 0.006202f, 0.630536f, 0.000000f, −0.630536f, −0.006202f, −0.194609f]
Left half: KBD shape, right half: sine shape: [0.194609f, −0.006202f, 0.630536f, 0.000000f, −0.630536f, 0.006202f, −0.194609f]
TABLE 2: MDST Filter Parameters for Previous Window (filter_coefs_prev), valid for the current window sequences LONG_START_SEQUENCE and STOP_START_SEQUENCE, with the same coefficients whether the left half of the current window is sine shaped or KBD shaped:
[0.038498, 0.039212, 0.039645, 0.039790, 0.039645, 0.039212, 0.038498]
Finally,FIG.5shows, for the sake of completeness, a possible frequency-domain audio encoder supporting transform length switching fitting to the embodiments outlined above. That is, the encoder ofFIG.5, which is generally indicated using reference sign100, is able to encode an audio signal102into data stream20in a manner so that the decoder ofFIG.1and the corresponding variants described above are able to take advantage of the transform splitting mode for some of the frames, whereas “old-fashioned” decoders are still able to process TS frames without parsing errors or the like. The encoder100ofFIG.5comprises a transformer104, an inverse scaler106, a frequency-domain coefficient inserter108and a scale factor inserter110. The transformer104receives the audio signal102to be encoded and is configured to subject time-domain portions of the audio signal to transformation to obtain frequency-domain coefficients for frames of the audio signal. In particular, as became clear from the above discussion, transformer104decides on a frame-by-frame basis which subdivision of these frames26into transforms, or transform windows, is used. As described above, the frames26may be of equal length and the transform may be a lapped transform using overlapping transforms of different lengths.FIG.5illustrates, for example, that a frame26ais subject to one long transform, a frame26bis subject to transform splitting, i.e. to two transforms of half length, and a further frame26cis shown to be subject to more than two, i.e. 2^n>2, even shorter transforms of 2^−n times the long transform length. As described above, by this measure, the encoder100is able to adapt the spectro-temporal resolution of the spectrogram represented by the lapped transform performed by transformer104to the time-varying audio content or kind of audio content of audio signal102. That is, frequency-domain coefficients result at the output of transformer104representing a spectrogram of audio signal102.
The inverse scaler106is connected to the output of transformer104and is configured to inversely scale, and concurrently quantize, the frequency-domain coefficients according to scale factors. Notably, the inverse scaler operates on the frequency coefficients as they are obtained by transformer104. That is, inverse scaler106necessarily needs to be aware of the transform length assignment or transform mode assignment to frames26. Note also that the inverse scaler106needs to determine the scale factors. Inverse scaler106is, to this end, for example, part of a feedback loop which evaluates a psycho-acoustic masking threshold determined for audio signal102so as to keep the quantization noise, which is introduced by the quantization and gradually set according to the scale factors, below the psycho-acoustic threshold of detection as far as possible, with or without obeying some bitrate constraint. At the output of inverse scaler106, scale factors and inversely scaled and quantized frequency-domain coefficients are output, and the scale factor inserter110is configured to insert the scale factors into data stream20, whereas frequency-domain coefficient inserter108is configured to insert the frequency-domain coefficients of the frames of the audio signal, inversely scaled and quantized according to the scale factors, into data stream20. In a manner corresponding to the decoder, both inserters108and110operate irrespective of the transform mode associated with the frames26as far as the juxtaposition of frames26aof the long transform mode and frames26bof the transform splitting mode is concerned. In other words, inserters110and108operate independently of the signalization34mentioned above which the transformer104is configured to signal in, or insert into, data stream20for frames26aand26b, respectively. In other words, in the above embodiment, it is the transformer104which appropriately arranges the transform coefficients of long transform and split transform frames, namely by plain serial arrangement or by interleaving, and the inserter108operates independently of transformer104. But in a more general sense it suffices if the frequency-domain coefficient inserter's independence from the signalization is restricted to the insertion of a sequence of the frequency-domain coefficients of each long transform and split transform frame of the audio signal, inversely scaled according to scale factors, into the data stream in that, depending on the signalization, the sequence of frequency-domain coefficients is formed by sequentially arranging the frequency-domain coefficients of the one transform of a respective frame in a non-interleaved manner in case of the frame being a long transform frame, and by interleaving the frequency-domain coefficients of the more than one transform of the respective frame in case of the respective frame being a split transform frame.
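Purely as an illustration of the two arrangements just described, and not as the actual encoder ofFIG.5, the following Python sketch shows how the coefficient sequence of one frame could be formed before being handed to the frequency-domain coefficient inserter; the function and variable names are hypothetical.

# Illustrative sketch: forming the coefficient sequence of a frame on the
# encoder side. A long transform frame is passed on serially; a split
# transform frame interleaves its two half-length transforms line by line,
# so the result can be treated like one long transform spectrum.

def arrange_frame_coefficients(transforms, split_transform):
    if not split_transform:
        (long_spectrum,) = transforms           # one long transform
        return list(long_spectrum)
    first, second = transforms                  # two half-length transforms
    interleaved = []
    for a, b in zip(first, second):
        interleaved.append(a)                   # first-in-time -> even index
        interleaved.append(b)                   # second-in-time -> odd index
    return interleaved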
As far as the frequency-domain coefficient inserter108is concerned, the fact that same operates independent from the signalization34distinguishing between frames26aon the one hand and frames26bon the other hand, means that inserter108inserts the frequency-domain coefficients of the frames of the audio signal, inversely scaled according to the scale factors, into the data stream20in a sequential manner in case of one transform performed for the respective frame, in a non-interleaved manner, and inserts the frequency-domain coefficients of the respective frames using interleaving in case of more than one transform performed for the respective frame, namely two in the example ofFIG.5. However, as already denoted above, the transform splitting mode may also be implemented differently so as to split-up the one transform into more than two transforms. Finally, it should be noted that the encoder ofFIG.5may also be adapted to perform all the other additional coding measures outlined above with respect toFIG.2such as the MS coding, the complex stereo prediction42and the TNS with, to this end, determining the respective parameters44,48and64thereof. Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, some one or more of the most important method steps may be executed by such an apparatus. Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable. Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed. Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier. Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. 
The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet. A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver. In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus. While this invention has been described in terms of several advantageous embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
REFERENCES
[1] Internet Engineering Task Force (IETF), RFC 6716, “Definition of the Opus Audio Codec,” Proposed Standard, September 2012. Available online at http://tools.ietf.org/html/rfc6716.
[2] International Organization for Standardization, ISO/IEC 14496-3:2009, “Information Technology—Coding of audio-visual objects—Part 3: Audio,” Geneva, Switzerland, August 2009.
[3] M. Neuendorf et al., “MPEG Unified Speech and Audio Coding—The ISO/MPEG Standard for High-Efficiency Audio Coding of All Content Types,” in Proc. 132nd Convention of the AES, Budapest, Hungary, April 2012. Also to appear in the Journal of the AES, 2013.
[4] International Organization for Standardization, ISO/IEC 23003-3:2012, “Information Technology—MPEG audio—Part 3: Unified speech and audio coding,” Geneva, January 2012.
[5] J. D. Johnston and A. J. Ferreira, “Sum-Difference Stereo Transform Coding,” in Proc. IEEE ICASSP-92, Vol. 2, March 1992.
[6] N. Rettelbach et al., European Patent EP2304719A1, “Audio Encoder, Audio Decoder, Methods for Encoding and Decoding an Audio Signal, Audio Stream and Computer Program,” April 2011.
71,489
11862183
DETAILED DESCRIPTION Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings. However, various alterations and modifications may be made to the examples. Here, the examples are not construed as limited to the disclosure and should be understood to include all changes, equivalents, and replacements within the idea and the technical scope of the disclosure. The terminology used herein is for the purpose of describing particular examples only and is not to be limiting of the examples. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises/comprising” and/or “includes/including” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains consistent with and after an understanding of the present disclosure. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. In the description of example embodiments, detailed description of structures or functions that are thereby known after an understanding of the disclosure of the present application will be omitted when it is deemed that such description will cause ambiguous interpretation of the example embodiments. In addition, terms such as first, second, A, B, (a), (b), and the like may be used herein to describe components. Each of these terminologies is not used to define an essence, order, or sequence of a corresponding component but used merely to distinguish the corresponding component from other component(s). Throughout the specification, when an element, such as a layer, region, or substrate, is described as being “on,” “connected to,” or “coupled to” another element, it may be directly “on,” “connected to,” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when an element is described as being “directly on,” “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. Likewise, expressions, for example, “between” and “immediately between” and “adjacent to” and “immediately adjacent to” may also be construed as described in the foregoing. Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings. Regarding the reference numerals assigned to the elements in the drawings, it should be noted that the same elements will be designated by the same reference numerals, wherever possible, even though they are shown in different drawings. FIG.1is a diagram illustrating an example of an encoder and an example of a decoder according to example embodiments. 
The present disclosure relates to a technology for effectively removing long-term redundancy and short-term redundancy when encoding and decoding an audio signal by sequentially using a recurrent encoding model, a recurrent decoding model, a nonrecurrent encoding model, and a nonrecurrent decoding model. Referring toFIG.1, an encoder101may encode an input signal to generate a bitstream, and a decoder102may decode the bitstream received from the encoder101to generate an output signal. The encoder101and the decoder102may each include a processor, and the respective processors of the encoder101and the decoder102may perform an encoding method and a decoding method. The input signal described herein may be an original audio signal that is a target to be encoded and may include a plurality of frames. The output signal described herein may be an audio signal reconstructed from the encoded input signal by the decoder102. The recurrent encoding model and the recurrent decoding model may each be a deep learning-based neural network model used to effectively remove long-term redundancy. For example, the recurrent encoding model and the recurrent decoding model may be the encoder part and the decoder part of an autoencoder with a recurrent structure for signal compression and reconstruction. For example, a recurrent part of the recurrent autoencoder may be implemented using one of the popular recurrent networks, such as a recurrent neural network (RNN), long short-term memory (LSTM), a gated recurrent unit (GRU), and the like. These exemplary recurrent networks may have an internal network structure such as a fully-connected network (FCN), a convolutional neural network (CNN), and the like. The recurrent encoding model, as a model configured to encode a current frame of an input signal, may be effective in removing long-term redundancy for the current frame using history information about previous frames of the input signal. Thus, the recurrent encoding model may eliminate the long-term redundancy in the input signal and then output the resulting feature information. The recurrent decoding model may reconstruct the current frame of the input signal using the history information about previous frames of the input signal and the feature information of the current frame. The history information represents the long-term redundancy contained in the past input frames, and it is utilized as a common input to the recurrent encoding and decoding models. The recurrent encoding model and the recurrent decoding model are not limited to the foregoing examples, and various neural network models that are available to those having ordinary skill in the art may also be used. In contrast to the recurrent models, the nonrecurrent encoding model and the nonrecurrent decoding model may each be a deep learning-based neural network model used to effectively remove short-term redundancy of a current frame independently of previous frames of an input signal. For example, the nonrecurrent encoding model and the nonrecurrent decoding model may be the encoder part and the decoder part of an autoencoder without a recurrent structure for signal compression and reconstruction. For example, the nonrecurrent autoencoder may be implemented using various types of autoencoders, such as a deterministic autoencoder, a variational autoencoder (VAE), and the like. These exemplary nonrecurrent neural networks may have an internal network structure such as an FCN, a CNN, and the like.
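As one concrete, purely illustrative reading of the model families mentioned above, the following PyTorch sketch pairs a small recurrent encoder/decoder (GRU-based, so the history information corresponds to the GRU state shared between the two models) with a fully-connected nonrecurrent autoencoder. All layer types, sizes and names are assumptions made for this sketch and are not prescribed by the embodiments.

import torch
import torch.nn as nn

FRAME, FEAT, HID = 512, 64, 256   # frame length, feature size, history size (arbitrary)

class RecurrentEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # uses the current frame and the shared history information as inputs
        self.net = nn.Sequential(nn.Linear(FRAME + HID, HID), nn.Tanh(), nn.Linear(HID, FEAT))

    def forward(self, frame, history):
        return self.net(torch.cat([frame, history], dim=-1))   # first feature information

class RecurrentDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(FEAT, HID)      # updates the history information
        self.to_frame = nn.Linear(HID, FRAME)

    def forward(self, feat_q, history):
        history = self.rnn(feat_q, history)   # updated history, reused at the next time step
        return self.to_frame(history), history

class NonrecurrentAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FRAME, HID), nn.ReLU(), nn.Linear(HID, FEAT))
        self.dec = nn.Sequential(nn.Linear(FEAT, HID), nn.ReLU(), nn.Linear(HID, FRAME))

    def forward(self, residual_frame):
        feat = self.enc(residual_frame)       # short-term redundancy removal
        return self.dec(feat), feat

Under these assumptions, one frame of the residual structure described in the following would flow roughly as: the recurrent encoder produces the first feature information from the frame and the shared history, the recurrent decoder reconstructs the frame and updates the history, the difference between the frame and that reconstruction is fed to the nonrecurrent autoencoder, and the two reconstructions are added at the decoder.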
The nonrecurrent encoding model may encode the current frame of the input signal independently of previous frames of the input signal by removing short-term redundancy in the input signal and outputting the resulting feature information. The nonrecurrent decoding model may decode the feature information of the input signal independently of the previous frames to compute an output signal. The nonrecurrent encoding model and the nonrecurrent decoding model are not limited to the foregoing examples, and various neural network models that are available to those having ordinary skill in the art may also be used. A detailed method of training the recurrent encoding and decoding models and the nonrecurrent encoding and decoding models according to an example embodiment will be described hereinafter with reference toFIG.5. According to an example embodiment, in a residual structure encoding and decoding method, the encoder101may compute feature information of an input signal using a recurrent encoding model, and quantize the feature information. The encoder101may decode the quantized feature information to compute an output signal using a recurrent decoding model. The encoder101may then compute a residual signal by subtracting the output signal from the input signal. The encoder101may compute feature information of the residual signal using a nonrecurrent encoding model and quantize the feature information of the residual signal. The encoder101may convert the quantized feature information of the input signal and of the residual signal into bitstreams, respectively, and multiplex them into an overall bitstream. Herein, the feature information of the residual signal computed using the nonrecurrent encoding model may be referred to as the second feature information, and the feature information of the input signal computed using the recurrent encoding model may be referred to as the first feature information. The decoder102may demultiplex the overall bitstream into the bitstream of the first feature information and the bitstream of the second feature information, and dequantize them to reconstruct the quantized first feature information and the quantized second feature information, respectively. The decoder102may then compute the first output signal from the quantized first feature information using the recurrent decoding model, and compute the second output signal from the quantized second feature information using the nonrecurrent decoding model. The first output signal described herein may correspond to an input signal reconstructed by the recurrent decoding model, and the second output signal described herein may correspond to a residual signal reconstructed by the nonrecurrent decoding model. The decoder102may reconstruct a final output signal by adding the first output signal and the second output signal. According to another example embodiment, in a nested structure encoding and decoding method, the encoder101may compute feature information of an input signal using a nonrecurrent encoding model, and then compute further feature information from the feature information obtained by the nonrecurrent encoding model using a recurrent encoding model. The encoder101may quantize this further feature information and convert it to a bitstream. The feature information obtained by the nonrecurrent encoding model may be referred to herein as the first feature information, and the feature information obtained by the recurrent encoding model may be referred to herein as the second feature information.
The nonrecurrent encoding model may be used to compute the first feature information for the input signal, and the recurrent encoding model may be used to compute the second feature information for the first feature information. The recurrent encoding model may encode the first feature information of a current frame of the input signal using history information about the first feature information of previous frames of the input signal to output the second feature information. The second feature information may be converted to a bitstream through quantization. The decoder102may dequantize the bitstream to produce the quantized second feature information. The decoder102may compute the first feature information from the quantized second feature information using the recurrent decoding model, and compute an output signal from the first feature information using the nonrecurrent decoding model. The recurrent decoding model may compute the first feature information from the second feature information using the history information about the first feature information of the previous frames. The nonrecurrent decoding model may compute the output signal from the first feature information. A detailed method of training the recurrent encoding and decoding models and the nonrecurrent encoding and decoding models according to another example embodiment will be described hereinafter with reference toFIG.9. FIG.2is a diagram illustrating an example of a configuration of neural network models included in an encoder and a decoder in a residual structure encoding and decoding method according to an example embodiment. The encoder101may compute the first feature information from an input signal201using a recurrent encoding model202. The first feature information may correspond to the feature information computed by the recurrent encoding model202. The encoder101may use the input signal201at the current time step and history information as inputs to the recurrent encoding model202to encode the input signal201. The recurrent encoding model202may be a neural network model that is trained to compute the first feature information using the input signal201and the history information. The encoder101may produce a quantized first feature information as an input to a recurrent decoding model203and the first bitstream by quantizing the first feature information obtained by the recurrent encoding model202. The encoder101may compute an output signal by decoding the quantized first feature information using the recurrent decoding model203. The encoder101may input the quantized first feature information and the history information to the recurrent decoding model203. The output signal may correspond to a signal reconstructed by the recurrent decoding model203. The encoder101may internally compute a updated history information at the current time step using the quantized first feature information and the input history information in the recurrent decoding model203. The encoder101may compute the first output signal using the updated history information. The updated history information at the current time may be used as input history information for the recurrent encoding model202and the recurrent decoding model203to encode an input signal at the next time step. Thus, the recurrent encoding model202and the recurrent decoding model203of the encoder101may share the history information at each time step. The encoder101may determine a residual signal by subtracting the first output signal203from the input signal201. 
The residual signal may correspond to an error signal indicating a difference between the input signal201and the output signal of the recurrent decoding model203. The encoder101may compute the second feature information from the residual signal using the nonrecurrent encoding model204. The nonrecurrent encoding model204may be a neural network model that is trained to compute the second feature information from the residual signal. The encoder101may produce the second bitstream by quantizing the second feature information obtained by the nonrecurrent encoding model204. The encoder101may produce an overall bitstream205by multiplexing the first and the second bitstream. The decoder102may receive the overall bitstream205, and reconstruct the quantized first feature information and the quantized second feature information by demultiplexing the overall bitstream into the first and the second bitstream and dequantizing the respective bitstream. The decoder102may compute the first output signal from the quantized first feature information using a recurrent decoding model206, and compute the second output signal from the quantized second feature information using a nonrecurrent decoding model207. The decoder102may compute the final output signal208by adding the first output signal and the second output signal. The first output signal may correspond to an output signal computed by the recurrent decoding model206, and the second output signal may correspond to an output signal computed by the nonrecurrent decoding model207. The decoder102may compute the updated history information at the current time from the first feature information and the input history information using the recurrent decoding model206, and compute the first output signal using the updated history information. This foregoing process may be the same as one performed in the recurrent decoding model203of the encoder101, and thus the recurrent decoding model203of the encoder101and the recurrent decoding model206of the decoder102may compute the first output signal from the quantized first feature information using history information synchronized between encoder101and decoder102at each time step. FIG.3is a flowchart illustrating an example of an encoding method using a neural network model according to an example embodiment. An input signal of an encoder may indicate a frame consisting of a predefined number of samples at a specific time step t. An overlap interval may exist across frames. The encoder may operate on a frame-by-frame basis. In operation301, the encoder may compute the first feature information of an input signal using a recurrent encoding model. The encoder may compute the first feature information by feeding the input signal and the history information to the recurrent encoding model. The history information may be initialized to arbitrary values at an initial time step (t=0), and then be updated to new history information through decoding process at each time step t using history information at time step t−1 and the first feature information of the input signal obtained through the recurrent encoding model. Thus, information of previous time steps may be maintained during encoding operation. The history information described herein may be history or state information that is transferred from a current time step to a next time step through a recurrent path of a recurrent neural network. 
The history information may be updated at each time step using the history information and the input signal, and the updated history information may be used to compute the history information at the next time step. In operation302, the encoder may produce the first bitstream by quantizing the first feature information computed using the recurrent encoding model. In operation303, the encoder may extract the quantized first feature information from the bitstream. In operation304, the encoder may compute an output signal from the quantized first feature information. The encoder may update the history information using the first feature information and the input history information in a recurrent decoding model, and compute the first output signal from the updated history information. The updated history information may be used as an input to the recurrent encoding model and the recurrent decoding model for encoding an input signal at the next time step. In operation305, the encoder may compute a residual signal by subtracting the first output signal from the input signal. In operation306, the encoder may compute the second feature information from the residual signal using a nonrecurrent encoding model. In operation307, the encoder may produce the second bitstream by quantizing the second feature information. The encoder may multiplex the first bitstream and the second bitstream, and transmit the resulting overall bitstream to a decoder. FIG.4is a flowchart illustrating an example of a decoding method using a neural network model according to an example embodiment. In operation401, a decoder may demultiplex the overall bitstream received from the encoder, and dequantize the first bitstream and the second bitstream to reconstruct the quantized first feature information and the quantized second feature information. In operation402, the decoder may compute the first output signal from the quantized first feature information. The decoder may compute the first output signal from the quantized first feature information and the history information using a recurrent decoding model. The history information updated in the decoding process may be used to compute the first output signal at the next time step. In operation403, the decoder may compute the second output signal from the quantized second feature information using a nonrecurrent decoding model. The first output signal may be an output signal computed using the recurrent decoding model. The second output signal may be a reconstructed residual signal which is an output signal computed using the nonrecurrent decoding model. In operation404, the decoder may reconstruct an input signal by adding the first output signal and the second output signal. FIG.5is a flowchart illustrating an example of a method of training a neural network model according to an example embodiment. In an audio database provided for training the encoding and decoding models, each audio material may be divided into multiple frames consisting of N consecutive audio samples, and the frames are then arranged into multiple groups of temporally-consecutive T frames. Groups of T frames may be grouped randomly into multiple sets of B groups. A training process for a recurrent neural network model in the example embodiment may be iteratively performed on the B frames corresponding to each time step in the set of (B×T) frames. The B frames corresponding to each time step may be referred to as a batch.
That is, a batch corresponding to each time step may be sequentially fed to the recurrent neural network model. According to an example embodiment, the history information for a recurrent encoding model and a recurrent decoding model may be initialized to preset values, for example, zeros. In operation501, an encoder or decoder may compute the first feature information of an input batch. The encoder or decoder may compute the first feature information from the input batch at the time step t and the history information at the time step t−1 using the recurrent encoding model. The first feature information may be an one-dimensional (1D) vector, a two-dimensional (2D) matrix or a multi-dimensional tensor for each frame in the input batch depending on a structure of a recurrent neural network. In operation502, the encoder or decoder may quantize the first feature information. The encoder or decoder may compute the quantized first feature information through quantization and dequantization of the first feature information. The quantization may generally be a non-differentiable operation, and thus model parameters may not be updated using error backpropagation required in the training process. Thus, in the training process, a relaxed quantization method such as softmax quantization, may be applied to quantize the first feature information. In operation503, the encoder or decoder may compute the first output batch from the quantized first feature information. The encoder or decoder may compute the updated history information from the quantized first feature information and the history information using the recurrent decoding model, and then compute the first output batch from the updated history information. The first output batch may correspond to an input batch reconstructed by the recurrent decoding model. In operation504, the encoder or decoder may update model parameters of the recurrent encoding model and the recurrent decoding model based on a difference between the first output batch and the input batch. The encoder or decoder may update model parameters of the recurrent encoding model and the recurrent decoding model to minimize a loss function based on the difference between the first output batch and the input batch. For example, the encoder or decoder may determine the first loss function for updating the model parameters of the recurrent encoding and decoding models by a weighted sum of a signal distortion as the difference measure between the first output batch and the input batch and an entropy loss corresponding to an estimated number of bits required to encode the first feature information. The entropy may be calculated using a probability distribution corresponding to histogram of symbols used to quantize the first feature information, and indicate a lower bound of number of bits required for an actual conversion of a bitstream. The entropy loss may be included in an overall loss function for the purpose of controlling a bit rate of the encoder. The signal distortion may be measured using norm-based methods such as mean squared error (MSE). The encoder or decoder may update the model parameters of the recurrent encoding and decoding models such that the first loss function is minimized in the training process. For example, the encoder or decoder may update the model parameters of the recurrent encoding and decoding models by using an error backpropagation based on the first loss function. 
The encoder or decoder may iteratively perform operations 501 through 504 at every time step, from t=0 to t=T−1. The encoder or decoder may iterate over multiple epochs until the recurrent encoding and decoding models are sufficiently trained. In operation 505, the encoder or decoder may compute a residual batch by subtracting the first output batch of the trained recurrent encoding and decoding models from the input batch in order to train a nonrecurrent encoding model and a nonrecurrent decoding model. The residual batch may be calculated by subtracting the first output batch reconstructed using the trained recurrent encoding and decoding models from the original input batch. By applying the foregoing process to the entire training database, it is possible to construct a residual database for training the nonrecurrent encoding and decoding models. For subsequent operations, the residual database may be divided into frames of N samples, and then a training process for a nonrecurrent neural network model may be performed on batches of B frames. In operation 506, the encoder or decoder may compute the second feature information by encoding the residual batch using the nonrecurrent encoding model. The second feature information may be a 1D vector, a 2D matrix or a multi-dimensional tensor for each frame in the input batch depending on a structure of a nonrecurrent neural network. In operation 507, the encoder or decoder may compute the quantized second feature information through quantization and dequantization of the second feature information. The quantization operation may generally be non-differentiable, and thus model parameters may not be updated using the error backpropagation required in the training process. Thus, in the training process, a relaxed quantization method, such as softmax quantization, may be applied to quantize the second feature information. In operation 508, the encoder or decoder may compute the second output batch from the quantized second feature information using the nonrecurrent decoding model. In operation 509, the encoder or decoder may update model parameters of the nonrecurrent encoding and decoding models based on a difference between the residual batch and the second output batch. The encoder or decoder may update the model parameters of the nonrecurrent encoding and decoding models to minimize the second loss function based on the difference between the residual batch and the second output batch. For example, the second loss function for updating the model parameters of the nonrecurrent encoding and decoding models may be determined to be a weighted sum of a signal distortion, as the difference measure between the residual batch and the second output batch, and an entropy loss corresponding to an estimated number of bits required to encode the second feature information. The signal distortion may be measured using a norm-based method such as MSE. The encoder or decoder may update the model parameters of the nonrecurrent encoding and decoding models such that the second loss function is minimized in the training process. For example, the encoder or decoder may update the model parameters of the nonrecurrent encoding and decoding models through error backpropagation based on the second loss function. The encoder or decoder may iterate over multiple epochs until the nonrecurrent encoding and decoding models are sufficiently trained.
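Purely as an illustration of such a loss function, the Python/NumPy sketch below combines an MSE distortion term with an entropy estimate obtained from the histogram of quantization symbols; the function name, the weighting factor lambda_rate, and the stand-in data are assumptions of this sketch rather than parts of the described embodiments.

import numpy as np

def codec_loss(input_batch, output_batch, symbols, num_symbols, lambda_rate=0.01):
    # Signal distortion measured with a norm-based method (here MSE).
    distortion = np.mean((input_batch - output_batch) ** 2)
    # Entropy estimated from the probability distribution (histogram) of the
    # symbols used to quantize the feature information; this approximates the
    # number of bits needed to encode the features.
    counts = np.bincount(symbols.ravel(), minlength=num_symbols).astype(float)
    probs = counts / counts.sum()
    probs = probs[probs > 0]
    entropy_bits = -np.sum(probs * np.log2(probs)) * symbols.size
    # Weighted sum controlling the trade-off between quality and bit rate.
    return distortion + lambda_rate * entropy_bits

# Stand-in data: a batch of B frames with N samples, a noisy reconstruction,
# and hypothetical quantization symbol indices drawn from a 16-symbol alphabet.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 512))
y = x + 0.1 * rng.standard_normal((8, 512))
q = rng.integers(0, 16, size=(8, 20))
loss = codec_loss(x, y, q, num_symbols=16)

Minimizing such a loss by error backpropagation would correspond to the parameter updates described for operations 504 and 509 above.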
FIG.6is a diagram illustrating an example of a configuration of neural network models included in an encoder and a decoder in a nested structure encoding and decoding method according to another example embodiment. According to another example embodiment, the encoder101may compute the first feature information from an input signal601using a nonrecurrent encoding model602. The first feature information may correspond to a feature of the input signal601obtained by the nonrecurrent encoding model602. The encoder101may compute the second feature information from the first feature information and the history information using a recurrent encoding model603. According to another example embodiment, the feature information computed by the nonrecurrent encoding model602may be referred as to the first feature information, and the feature information computed by the recurrent encoding model603may be referred as to the second feature information. The nonrecurrent encoding model602may be used to compute the first feature information from the input signal601, and the recurrent encoding model603may be used to compute the second feature information from the first feature information. To encode the first feature information of the input signal601at the current time step, the recurrent encoding model603may compute the second feature information using the first feature information and the history information. The encoder101may produce a bitstream by quantizing the second feature information, and feed the quantized second feature information obtained through dequantization of the bitstream to a recurrent decoding model604. The encoder101may compute the updated history information from the quantized second feature information and the history information, using the recurrent decoding model604. The updated history information may be used as a history information for the recurrent encoding model603and the recurrent decoding model604to encode first feature information at the next time step. The decoder102may receive the bitstream and reconstruct the quantized second feature information through dequantization. The decoder102may compute the first feature information from the quantized second feature information using a recurrent decoding model606. The decoder102may compute the updated history information from the quantized second feature information and the history information, and compute the first feature information from the updated history information using the recurrent decoding model606. This may be the same process as one performed by the recurrent decoding model604of the encoder101, and thus the recurrent decoding model604of the encoder101and the recurrent decoding model606of the decoder102may decode the quantized second feature information using history information synchronized between encoder101and decoder102. The decoder102may compute an output signal608from the first feature information using a nonrecurrent decoding model607. The decoder102may compute the output signal608by feeding the first feature information to the nonrecurrent decoding model607. FIG.7is a flowchart illustrating an example of an encoding method using a neural network model according to another example embodiment. According to another example embodiment, an input signal of an encoder may correspond to a frame of a predefined number of samples at a specific time step t. An overlap interval may exist across frames. The encoder may operate on a frame-by-frame basis according to another example embodiment. 
In operation701, the encoder may compute the first feature information of an input signal using a nonrecurrent encoding model. In operation702, the encoder may compute the second feature information from the first feature information using a recurrent encoding model. In operation703, the encoder may convert the second feature information to a bitstream by quantizing the second feature information. The encoder may update the history information using the quantized second feature information and the history information, and compute the first feature information from the updated history information using a recurrent decoding model. FIG.8is a flowchart illustrating an example of a decoding method using a neural network model according to another example embodiment. In operation801, a decoder may reconstruct the quantized second feature information from a bitstream received from an encoder using dequantization. In operation802, the decoder may compute the first feature information from the quantized second feature information. The decoder may compute the first feature information from the quantized second feature information and the history information using a recurrent decoding model. Herein, the history information updated in such a decoding process may be used to compute the first feature information at the next time step. In operation803, the decoder may compute an output signal from the first feature information. Thus, the decoder may reconstruct an input signal by decoding the first feature information using a nonrecurrent neural network model. FIG.9is a flowchart illustrating an example of a method of training a neural network model according to another example embodiment. In an audio database provided for training the encoding and decoding models, each audio materials may be divided into multiple frames of N consecutive audio samples, and then frames are arranged into multiple groups of temporally-consecutive T frames. Groups of T frames may be grouped randomly into multiple sets of B groups. According to another example embodiment, a training process for a neural network model may be iteratively performed on B frames corresponding to each time step in the set of (B×T) frames. The B frames corresponding to each time step may be referred as to batch. That is, a batch corresponding to each time step may be sequentially fed to the neural network model. According to another example embodiment, the history information for a recurrent encoding model and a recurrent decoding model may be initialized to preset values, for example, zeros. In operation901, an encoder or decoder may compute the first feature information of an input batch using a nonrecurrent encoding model. The first feature information may be an 1D vector, a 2D matrix or a multiple-dimensional tensor for each frame in the input batch depending on a structure of a nonrecurrent neural network. In operation902, the encoder or decoder may compute the second feature information from the first feature information using a recurrent encoding model. The recurrent encoding model may compute the second feature information using the history information and the first feature information. The history information and the second feature information may be a 1D vector, a 2D matrix, or a multi-dimensional tensor for each frame in the batch depending on a structure of a recurrent neural network. In operation903, the encoder or decoder may quantize the second feature information. 
The encoder or decoder may compute the quantized second feature information through quantization and dequantization of the second feature information. The quantization may generally be a non-differentiable operation, and thus model parameters may not be updated through error backpropagation required in the training process. Thus, in the training process, a relaxed quantization method such as softmax quantization, may be applied to quantize the second feature information. In operation904, the encoder or decoder may compute the first feature information from the quantized second feature information using a recurrent decoding model. The encoder or decoder may compute the updated history information using the quantized second feature information and the history information in the recurrent decoding model. The encoder or decoder may then compute the first feature information from the updated history information. In operation905, the encoder or decoder may compute an output batch from the reconstructed first feature information using a nonrecurrent decoding model. In operation906, the encoder or decoder may update model parameters of the nonrecurrent encoding and decoding models and the recurrent encoding and decoding models to minimize a loss function based on a difference between the input batch and the output batch. For example, the loss function for updating the model parameters of the nonrecurrent encoding and decoding models and the recurrent encoding and decoding models may be determined to be a weighted sum of a signal distortion as the difference measure between the input batch and the output batch and an entropy loss corresponding to an estimated number of bits required to encode the second feature information. The signal distortion may be measured using a norm-based method such as MSE. The encoder or decoder may update the model parameters of the nonrecurrent encoding and decoding models and the recurrent encoding and decoding models such that the loss function is minimized in the training process. For example, the encoder or decoder may update the model parameters of the nonrecurrent encoding and decoding models and the recurrent encoding and decoding models through error backpropagation based on the loss function. The encoder or decoder may iteratively perform operations901through906at every time step from t=0 to t=T−1. The encoder or decoder may iteratively perform on multiple epochs until the parameters of the nonrecurrent encoding and decoding models and the recurrent encoding and decoding models are sufficiently trained. According to example embodiments described herein, it is possible to effectively remove long-term redundancy and short-term redundancy when encoding and decoding an audio signal. The units described herein may be implemented using hardware components and software components. For example, the hardware components may include microphones, amplifiers, band-pass filters, audio to digital convertors, non-transitory computer memory and processing devices. A processing device may be implemented using one or more general-purpose or special purpose computers, such as, for example, a processor, a controller and an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor or any other device capable of responding to and executing instructions in a defined manner. 
The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device also may access, store, manipulate, process, and create data in response to execution of the software. For purpose of simplicity, the description of a processing device is used as singular; however, one skilled in the art will appreciated that a processing device may include multiple processing elements and multiple types of processing elements. For example, a processing device may include multiple processors or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors. The software may include a computer program, a piece of code, an instruction, or some combination thereof, to independently or collectively instruct or configure the processing device to operate as desired. Software and data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. The software and data may be stored by one or more non-transitory computer readable recording mediums. The non-transitory computer readable recording medium may include any data storage device that can store data which can be thereafter read by a computer system or processing device. The methods according to the above-described example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations of the above-described example embodiments. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of example embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM discs, DVDs, and/or Blue-ray discs; magneto-optical media such as optical discs; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory (e.g., USB flash drives, memory cards, memory sticks, etc.), and the like. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The above-described devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa. While this disclosure includes specific examples, it will be apparent to one of ordinary skill in the art that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. 
Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
42,324
11862184
DETAILED DESCRIPTION OF THE INVENTION
In the following, the encoded audio signal is, as an example, a SBR-encoded audio signal, but the invention is not limited to encoded audio signals of this kind. This also holds for the kind of encoded audio signal into which the SBR-encoded audio signal is transcoded, or for the kind of corresponding signals or spectra that are processed in any intermediate step. Here, this is, as one example of many possibilities, an IGF-encoded audio signal. To transcode SBR data into an IGF representation, at least some of the following steps are done:
Replace SBR copy-up content by IGF compliant copy-up material.
Insertion of a delay compensation of QMF with respect to MDCT for data synchronization.
Mapping of the spectral high band envelope derived by SBR (through QMF based energy measurement) onto a MCLT representation.
Mapping the underlying SBR time-frequency grid onto that of IGF: the mapping function is adapted according to the different types of windowing schemes, to derive MCLT energies from QMF energies.
Advantageously, application of an energy correction factor to eliminate any bias and minimize residual error.
Advantageously, translation of remaining SBR side information (e.g. noise floor, tonality aka. inverse filtering level, etc.) into suitable IGF parameters: e.g. the inverse filtering level in SBR is mapped to a suitable whitening level in IGF to provide optimal perceptual quality.
FIG. 1 shows the core signal 101 of an access unit of the encoded audio signal having a limited first spectral width, reaching here from zero to a frequency fxo. The parameters of the encoded audio signal describe the spectrum above this core signal 101, reaching here to the frequency 2*fxo. This has to be compared with the spectrum shown in FIG. 2. Here, an upsampled spectrum 1 comprises the same information content as the core signal of FIG. 1 and carries zero values for the frequencies above this core signal. The second spectral width reaches in this example from zero to the frequency of 2*fxo. For transcoding of SBR data into an IGF representation, one has to map QMF energies to MCLT energy values. This is described in detail in the following, starting with a comparison of the QMF and MCLT transforms: Let x be a discrete audio signal sampled with a sample rate SR. If a QMF transform is applied to the signal x, one obtains

X_QMF[l, k] = QMF(x, t)

where t is the start sample of the transformation, l is the timeslot index and k = 0, 1, . . . , m−1 is a frequency line up to m, the Nyquist frequency line. If a windowed MCLT transform is applied to the signal x, the result is

X_MCLT[b, i] = MCLT(x, b)

where b is the start block of the transformation and i = 0, 1, . . . , N−1 are the frequency lines up to the Nyquist frequency line N. Exemplary parameters also used in the following discussion: With the QMF transform, a prototype length of 640 samples with a hopsize of 64 samples is used. This results in m=64 for the Nyquist frequency line. If, for example, for the MCLT a long window size of 2048 with 50% overlap is used, the hopsize is 1024 and therefore N=1024 for the Nyquist frequency line. The overlapping windowing, generally, eliminates blocking artifacts. During analysis with such an exemplary configuration, 32 QMF timeslots are needed to cover the same amount of samples as the MCLT transform, see FIG. 3. This FIG. 3 also illustrates the data synchronization where the sub-samples of QMF are aligned with the longer window of MCLT.
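For orientation, the following short Python sketch only checks the framing relations implied by these exemplary parameters; the variable names are chosen for this sketch and the numerical values are the exemplary ones given above, not a limitation.

# Exemplary transform parameters from the discussion above.
qmf_bands = 64                  # m: number of QMF sub-bands
qmf_hop = 64                    # samples advanced per QMF timeslot
mclt_window = 2048              # long MCLT window with 50% overlap
mclt_hop = mclt_window // 2     # 1024, so N = 1024 frequency lines

timeslots_per_window = mclt_window // qmf_hop   # 32 QMF timeslots cover one MCLT window
timeslots_per_hop = mclt_hop // qmf_hop         # 16 timeslots advance per MCLT block
lines_per_band = mclt_hop // qmf_bands          # each QMF band spans 16 MCLT lines
assert (timeslots_per_window, timeslots_per_hop, lines_per_band) == (32, 16, 16)

The last relation is also the reason why a cross-over band x0 in QMF units corresponds to the MCLT line 16*x0 in the energy formulas below.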
To prepare the QMF energies of the SBR-encoded audio signal for mapping, a window w is applied to temporally consecutive QMF values, in the same way as the time domain samples are windowed in the MCLT. This QMF windowing is shown in FIG. 4. To map QMF energies properly to MCLT energies, both transforms need to be delay aligned. Then, for the conversion of the QMF and MCLT energies, the following formulas hold:

E_MCLT[b] = Σ_{i=16·x0}^{N−1} X_MCLT[b, i]²

E_QMF[b] = Σ_{l=l0}^{l0+31} w[l−l0] · Σ_{k=x0}^{m−1} X_QMF[l, k]²,  with l0 = 16·b,

where x0 is the SBR cross-over frequency. The next step is to convert the respective energy values from the QMF transform to the MCLT transform. SBR frames help to define signal features using the granularity of temporal/spectral envelopes. The mapping of spectral envelopes has been investigated as part of the mapping technique definition. The information imparted by the temporal resolution of the adaptive SBR grids is translated to the techniques of temporal adaptation in IGF. A time domain signal analyzed with a QMF filterbank has a time resolution of a sub-sample. The highest temporal resolution of SBR energies is over a time-slot, i.e. two sub-samples. The tradeoff between time and frequency resolution can be realized from the combination of time-slots and the choice of sub-band grouping. The different types of frames allow a variable number of time/frequency segments in a frame. As such, the signal characteristics are preserved by the envelopes which are quantized in grids. The adaptive resolution of time/frequency in IGF can be realized using the different types of MCLT windows. As experiments have shown, the energies of a QMF sub-band can be collected in accordance with the MCLT block under comparison. This motivates the incorporation of block switching during the energy mapping. The energies thus collected into sub-bands can be interpolated over the MCLT frequency bins. Thereafter, the IGF side information can be derived for envelope shaping during the source spectrum transposition. Based on experiments, the QMF block energy can be calculated over 32 overlapping subsamples in a long block. To reduce the error of mapping to the MCLT block energy, the QMF energy calculation involves an application of weighting coefficients of the MCLT prototype window. It is expected that the choice of an appropriate MCLT window helps in the conservation of the signal features defined by the temporal envelopes of QMF. These calculations are advantageously performed offline and before the usage of the apparatus or the method. FIG. 5 shows the result of an example measurement in which the logarithmic energies of E_QMF and E_MCLT were compared (E′(QMF) and E′(MCLT)). This allows to calculate in the logarithmic domain:

E′(QMF) + φ_LB ≈ E′(MCLT),  b = 1, 2, . . . , B.

This proves that the energy values can be converted by using a constant scale factor s for the, thus, linear mapping in the linear domain:

s · E_QMF[b] ≈ E_MCLT[b],  b = 1, 2, . . . , B

where the scale factor s is given by:

s = 10^(φ_LB/10)

and B is the total number of blocks which were measured. The mean offset φ_LB is determined in one embodiment over all blocks by clipping all outliers to a 10% confidence interval:

φ_LB = (1/B) · Σ_{b=1}^{B} clip10(φ_LB[b]) = (1/B) · Σ_{b=1}^{B} clip10(10·log10(E_MCLT[b]/E_QMF[b]))

This confidence interval allows to clip data samples with an excessive deviation from the mean. Exemplary measurements have shown a bias-free and precise match of energies with approx. 1 dB peak error. Utilizing this mapping, it is possible to convert SBR energy values transmitted in a bitstream containing a SBR-encoded audio signal into corresponding IGF energy values.
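One possible NumPy sketch of this energy mapping is given below; the array layouts (X_mclt indexed as [block, line], X_qmf as [timeslot, band]), the Hann window used as w, and the percentile-based reading of the 10% confidence interval are assumptions made for this sketch, while the summations and the scale factor follow the formulas given above.

import numpy as np

def block_energies(X_mclt, X_qmf, w, x0):
    # E_MCLT[b]: sum of squared MCLT lines above the cross-over (16*x0 ... N-1).
    B = X_mclt.shape[0]
    e_mclt = np.array([np.sum(X_mclt[b, 16 * x0:] ** 2) for b in range(B)])
    # E_QMF[b]: windowed sum over 32 timeslots starting at l0 = 16*b of the
    # squared QMF values above the cross-over band x0.
    e_qmf = np.array([
        sum(w[l] * np.sum(X_qmf[16 * b + l, x0:] ** 2) for l in range(32))
        for b in range(B)
    ])
    return e_mclt, e_qmf

def mean_offset_db(e_mclt, e_qmf, clip_percent=10.0):
    # Per-block offset in dB, with outliers clipped to a confidence interval.
    phi = 10.0 * np.log10(e_mclt / e_qmf)
    lo, hi = np.percentile(phi, [clip_percent / 2, 100 - clip_percent / 2])
    return float(np.mean(np.clip(phi, lo, hi)))

# Stand-in data with the exemplary dimensions (B blocks, N = 1024 MCLT lines,
# m = 64 QMF bands, cross-over band x0 chosen arbitrarily for the sketch).
rng = np.random.default_rng(0)
B, N, m, x0 = 4, 1024, 64, 20
X_mclt = rng.standard_normal((B, N))
X_qmf = rng.standard_normal((16 * (B - 1) + 32, m))
w = np.hanning(32)
e_mclt, e_qmf = block_energies(X_mclt, X_qmf, w, x0)
phi_lb = mean_offset_db(e_mclt, e_qmf)
s = 10.0 ** (phi_lb / 10.0)     # s * E_QMF[b] approximates E_MCLT[b]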
The constant scale factor in the shown example is less than 20 and about 18 in the log domain. The converted IGF energy values can be fed directly into an IGF decoder, or, alternatively, can be assembled into an IGF output bitstream. Experiments have shown that the mean offset φ_LB in the log domain has a value lower than 20. It was found that the mean offset φ_LB lies between 16 and 17, or in one case has a value of about 7. So, the mean offset φ_LB has values between 7 and 17. Further experiments have shown that the mean offset φ_LB depends on the type of windows used. Obtained values are shown in the following table:

Window type, notation      φ         std_φ
long blocks, LB            16.0236   0.8785
short blocks, SB           7.2606    0.5661
long start, Lstart         16.5683   0.5578
long stop, Lstop           16.5769   1.1006

FIG. 6 shows a stop-start window sequence for illustrating the dependency of the scale factor on the used window sequence. In the shown example, the frame f of the SBR-encoded audio signal contains 32 sub-samples of QMF. The first window type ws(f, 0) of the sequence spans over the complete frame data, i.e. a block of t_h sub-samples. The following window ws(f, 1) overlaps ws(f, 0) while spanning over t_h/2 sub-samples of the frame f and t_h/2 of the following frame f+1. The frames of SBR grids can be made available as blocks of QMF energy grids with the relation—in this shown embodiment—that one frame generates two blocks of QMF subsamples. In the following, an IGF decoder for decoding a SBR-encoded audio signal is explained using one embodiment. A typical 2:1 SBR decoder is e.g. described in M. Neuendorf et al., "The ISO/MPEG Unified Speech and Audio Coding Standard—Consistent High Quality for All Content Types and at All Bit Rates", J. Audio Eng. Soc., vol. 61, no. 12, pp. 956-977, December 2013, and shown in FIG. 7. An embodiment of an inventive transcoder in the form of a block diagram is shown in FIG. 8. The SBR-encoded audio signal 100 comprising access units 100′ is fed to a demultiplexer 1 extracting a core signal 101 and a set of parameters 102 allowing to reconstruct the missing parts of the audio signal. The core signal 101 is fed to the upsampler 2, which is here embodied by a MDCT splitter, and the set of parameters 102 is fed to the parameter converter, which in this depiction is shown as comprising two separate elements. In this example, the set of parameters 102 especially refers to the spectral envelope provided by the SBR-encoded audio signal. In this example, the time slots 0-15 of a frame of the SBR-encoded audio signal are transmitted to the upper parameter converter element and the time slots 16-31 are transmitted to the lower parameter converter element. The time slot numbers still refer to the exemplary parameters used for the discussion of the conversion of the parameters from QMF to MCLT. In each subsection of the parameter converter 3, at least the parameters referring to the spectral envelope are converted, which is done via the above explained conversion of the QMF data to the MCLT data. The resulting converted parameters 104, 104′ are suitable for usage in the intelligent gap filling and are fed to the spectral gap filling processor 4, comprising two multiplexers, in order to be merged with a corresponding upsampled spectrum 103, 103′ derived by the upsampler 2 from the core signal 101. The result comprises two access units 1. AU′ and 2. AU′ as output of the multiplexers of the spectral gap filling processor 4. Both access units 1. AU′ and 2. AU′ are fed to an adder 5, wherein the second access unit 2. AU′ is delayed by a delay element 6.
The result of the adder 5 is a transcoded audio signal 200, which in the shown embodiment is especially an IGF-encoded audio signal having the two access units 1. AU and 2. AU. The upsampler 2 is explained using the exemplary embodiment depicted in FIG. 10, in which the upsampler 2 is labeled as MDCT Splitter. The upsampler 2 comprises a spectrum upsampler 20 for upsampling the spectrum of the core signal 101 (having e.g. 1024 lines) of the original SBR-encoded audio signal. The upsampled spectrum 110 (if the upsampling is done, for example, by the factor 2, the resulting signal has 2048 lines) undergoes an inverse modified discrete cosine transform performed by an IMDCT converter 21 as one example of an inverse transform. The time domain signal 111 thus obtained (consisting of time domain samples) undergoes an overlap-add (designated OA) and is thus split into two signals. Both signals have e.g. 1024 lines, and the—here such drawn—lower signal is affected by a delay 24 of the overlap-add corresponding to 1024 lines. Both signals then undergo a modified discrete cosine transform performed by two MDCT converters 23, leading to two upsampled spectra 103 as output of the upsampler 2. The effect of the two MDCT converters 23 is shown in FIG. 11. In this picture, 1. MDCT refers to the upper MDCT converter 23 shown in FIG. 10, and 2. MDCT refers to the lower MDCT converter 23. Output of IMDCT refers to the inverse modified discrete cosine transformed upsampled core signal 111. Further, there is an overlap-add OA provided to the IMDCT converter 21 with e.g. 2048 samples. For details of the MDCT see, e.g., WO 2014/128197 A1, especially pages 14-16. Alternatively, not an MDCT transform and an IMDCT transform are performed, but a Fast Fourier and an Inverse Fast Fourier Transform. The apparatus shown in FIG. 9 allows to decode a—here SBR (Spectral Band Replication)—encoded audio signal 100 into an audio signal 300, as one example for the processing of such an encoded audio signal 100. For this purpose, the apparatus comprises a demultiplexer 1, which generates from an access unit 100′ of the SBR-encoded audio signal 100 the core signal 101 and a set of parameters 102. The set of parameters 102 describes the spectrum above the core signal, i.e. describes the missing parts. The core signal 101 is submitted to an upsampler 2, here embodied as a MDCT splitter, for upsampling the core signal 101. This is due to the fact that the core signal of a SBR-encoded audio signal has a reduced sampling rate compared to the core signal of an IGF-encoded audio signal. The details of an embodiment of the upsampler 2 were explained with regard to FIG. 10. The set of parameters 102 is submitted to a parameter converter 3, which is here embodied by two converter elements or units. The access unit 100′ comprises at least a frame covering temporally consecutive timeslots. Here, there are 32 timeslots. The parameters of the first timeslots, covering timeslots 0 to 15, are fed to the upper parameter converter unit and the parameters of the second timeslots, ranging from 16 to 31, are fed to the lower parameter converter unit to be converted. The parameters of the encoded audio signal and the converted parameters refer to different filter banks, e.g. the Quadrature Mirror Filter (QMF) and the Modulated Complex Lapped Transform (MCLT), respectively. Therefore, the parameter converter units insert a delay compensation into the parameters of the SBR-encoded audio signal for synchronization.
Further, the parameter converter units map a time-frequency grid which is underlying the time slots of the SBR-encoded audio signal, using a windowing performed—advantageously beforehand—on the parameters, using a window applied to time signals using filter banks of the Modulated Complex Lapped Transform. The resulting converted parameters 104, 104′ are fed to the two components (1. IGF and 2. IGF) of the spectral gap filling processor 4 for merging the upsampled spectra 103, 103′ with the corresponding converted parameters 104, 104′. Corresponding implies, in the depicted embodiment, that the converted parameters 104 derived from the first set of timeslots are merged with the upsampled spectrum provided by "MDCT 1." shown in FIG. 10, and that the converted parameters 104′ derived from the second set of timeslots are merged with the delayed upsampled spectrum provided by "MDCT 2.". The results of these mergers are transformed by two IMDCT converters 7 using an inverse modified discrete cosine transform into time signals and are overlap-added (delay 8 and adder 9) to the desired audio signal 300. FIG. 12 shows an example for upsampling core signals with a 3:8 ratio. In this case, the upsampler stores the core signals of three temporally consecutive access units: the access unit 100′ (this is the above discussed and hence "current" access unit) and the two foregoing access units 100″ and 100′″. These three core signals are added and afterwards divided into eight upsampled spectra. In the—not shown—case that upsampling of the core signals is done with a 3:4 ratio, the upsampler also stores the core signals of three temporally consecutive access units. These core signals are also added but are divided into four upsampled spectra. Similarly, two core signals from two access units are needed for one upsampled spectrum if a certain overlap is desired. FIG. 13 illustrates schematically the overlap-add. The explanation follows the rows from top to bottom. Given are three access units: AU0, AU1, and AU2, each having a core signal with 1024 data points. The corresponding spectra of the core signals are filled up with zeros appended after the spectra of the core signals. The filled-up spectra have 2048 data points. These spectra are transformed into the time domain, with signals having 2*2048=4096 data points. For these time signals, overlapping parts of the signals are added up, the overlap referring to a first half of one and a second half of another time signal. The resulting sum time signals have 2048 data points, as from each of the foregoing time signals just a half is used. Hence, from the three access units AU0, AU1, and AU2, three time signals are obtained. From the time signal stemming from AU0, the second half is added with the first half of the time signal obtained from AU1. The second half of the time signal derived from AU1 is added with the first half of the time signal obtained from AU2. Due to this, three access units provide, in this example of an overlap of 50%, two overlap-added time signals, both having 2048 data points. These two overlap-added time signals are afterwards transformed into the frequency domain (using e.g. a Fast Fourier Transformation or any other suitable transform), yielding the first and the second upsampled spectrum, both having 1024 data points. In FIG. 14 the inventive apparatus is shown once more. In this depicted embodiment, the encoded audio signal 100 contains access units, of which three are shown: AU0, AU1, and AU2.
These access units are fed to the demultiplexer 1, which extracts the respective core signals CS0, CS1, and CS2 and the respective parameters P0, P1, and P2 for describing the missing parts of the audio signal. The core signals CS0, CS1, and CS2 are submitted to the upsampler 2, which upsamples the core signals and produces for each core signal two upsampled spectra: US1, US2 for CS0, US3, US4 for CS1, and US5, US6 for CS2. The parameters, on the other hand, are fed to the parameter converter 3, yielding converted parameters cP0, cP1, and cP2. The spectral gap filling processor 4 processes the upsampled spectra US1, US2, US3, US4, US5, and US6 using the corresponding converted parameters cP0, cP1, and cP2. For example, the first upsampled spectrum US1 of the first access unit AU0 is processed with a first sub-set of the converted parameters cP0 and the second upsampled spectrum US2 of the first access unit AU0 is processed with a second sub-set of the converted parameters cP0. The output of the spectral gap filling processor 4 is, e.g., an audio signal or a transcoded audio signal. FIG. 15 shows the main steps of the inventive method for processing the encoded audio signal 100. In a step 1000, from the encoded audio signal 100—or to be more precise: from one access unit of the encoded audio signal 100—the core signal and a set of parameters are generated or extracted. The following steps can be performed in an arbitrarily given sequence or in parallel. The core signal is upsampled in step 1001, which yields especially two temporally consecutive upsampled spectra. The parameters are converted in step 1002 into converted parameters being applicable to the upsampled spectra. Finally, the upsampled spectra and the converted parameters—additionally also other parameters obtained from the access unit of the encoded audio signal—are processed in step 1003. The output of this processing is, e.g., an audio signal as a time signal or a differently encoded and, thus, transcoded audio signal. Usually, the encoded audio signal also contains further parameters for describing the original audio signal and for reconstructing the missing parts during the decoding of the encoded audio signal. The inventive processing technique helps e.g. in the conversion of SBR side information to IGF for envelope shaping during high frequency (HF) synthesis. Additional control parameters indicate the HF spectrum where the noise-to-tonality ratio, in spite of envelope shaping, does not match the input signal. This nature in audio is observed in signals like woodwind music instruments, or in rooms with reverberation. The higher frequencies in these cases are not harmonic or highly tonal and can be perceived as noisy in comparison to lower frequencies. The formants in the signal are estimated using an inverse prediction error filter at the encoder. A level of inverse filtering is decided so as to match the input signal features. This level is signaled by SBR. As the envelope shaping in the HF spectrum does not help to reduce the tonality of the spectrum completely, a pre-whitening filter with different levels of a frequency dependent chirp factor can be applied to the linear prediction error filter for flattening of formants. These anomalous signal characteristics are addressed by SBR using an Inverse Filtering Tool, while IGF uses a Whitening Tool. The degree of pre-whitening is mapped to separate levels in the two technologies.
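Returning to the overlap-add based splitting described in connection with FIG. 13, a minimal Python/NumPy sketch of that step could look as follows; the stand-in random time signals and the use of an FFT for the final transform into the frequency domain are assumptions of this sketch (an MDCT/IMDCT pair may be used instead, as discussed above).

import numpy as np

def overlap_add_split(time_sigs):
    # time_sigs: three time domain signals (one per access unit), each obtained
    # from a zero-padded core spectrum by an inverse transform. The second half
    # of each signal overlaps the first half of the following one (50% overlap).
    out = []
    for prev, cur in zip(time_sigs[:-1], time_sigs[1:]):
        half = len(prev) // 2
        out.append(prev[half:] + cur[:half])
    return out  # three access units yield two overlap-added time signals

# Stand-in data: three "time signals" of 4096 samples each (AU0, AU1, AU2).
rng = np.random.default_rng(1)
time_sigs = [rng.standard_normal(4096) for _ in range(3)]
summed = overlap_add_split(time_sigs)                  # two signals of 2048 samples
upsampled_spectra = [np.fft.rfft(s) for s in summed]   # back to the frequency domain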
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus. Also, aspects of the apparatus for transcoding a SBR-encoded audio signal may be valid for the apparatus for decoding a SBR-encoded audio signal and vice versa. The same holds for the corresponding methods. While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
22,183
11862185
NOTATION AND NOMENCLATURE Throughout this disclosure, including in the claims, the expression performing an operation “on” a signal or data (e.g., filtering, scaling, transforming, or applying gain to, the signal or data) is used in a broad sense to denote performing the operation directly on the signal or data, or on a processed version of the signal or data (e.g., on a version of the signal that has undergone preliminary filtering or pre-processing prior to performance of the operation thereon). Throughout this disclosure, including in the claims, the expression “audio processing unit” or “audio processor” is used in a broad sense, to denote a system, device, or apparatus, configured to process audio data. Examples of audio processing units include, but are not limited to encoders, transcoders, decoders, codecs, pre-processing systems, post-processing systems, and bitstream processing systems (sometimes referred to as bitstream processing tools). Virtually all consumer electronics, such as mobile phones, televisions, laptops, and tablet computers, contain an audio processing unit or audio processor. Throughout this disclosure, including in the claims, the term “couples” or “coupled” is used in a broad sense to mean either a direct or indirect connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections. Moreover, components that are integrated into or with other components are also coupled to each other. DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION The MPEG-4 AAC standard contemplates that an encoded MPEG-4 AAC bitstream includes metadata indicative of each type of high frequency reconstruction (“HFR”) processing to be applied (if any is to be applied) by a decoder to decode audio content of the bitstream, and/or which controls such HFR processing, and/or is indicative of at least one characteristic or parameter of at least one HFR tool to be employed to decode audio content of the bitstream. Herein, we use the expression “SBR metadata” to denote metadata of this type which is described or mentioned in the MPEG-4 AAC standard for use with spectral band replication (“SBR”). As appreciated by one skilled in the art, SBR is a form of HFR. SBR is preferably used as a dual-rate system, with the underlying codec operating at half the original sampling-rate, while SBR operates at the original sampling rate. The SBR encoder works in parallel with the underlying core codec, albeit at a higher sampling-rate. Although SBR is mainly a post process in the decoder, important parameters are extracted in the encoder in order to ensure the most accurate high frequency reconstruction in the decoder. The encoder estimates the spectral envelope of the SBR range for a time and frequency range/resolution suitable for the current input signal segments characteristics. The spectral envelope is estimated by a complex QMF analysis and subsequent energy calculation. The time and frequency resolutions of the spectral envelopes can be chosen with a high level of freedom, in order to ensure the best suited time frequency resolution for the given input segment. 
The envelope estimation needs to consider that a transient in the original, mainly situated in the high frequency region (for instance a high-hat), will be present to a minor extent in the SBR generated highband prior to envelope adjustment, since the highband in the decoder is based on the low band where the transient is much less pronounced compared to the highband. This aspect imposes different requirements for the time frequency resolution of the spectral envelope data, compared to ordinary spectral envelope estimation as used in other audio coding algorithms. Apart from the spectral envelope, several additional parameters are extracted representing spectral characteristics of the input signal for different time and frequency regions. Since the encoder naturally has access to the original signal as well as information on how the SBR unit in the decoder will create the high-band, given the specific set of control parameters, it is possible for the system to handle situations where the lowband constitutes a strong harmonic series and the highband, to be recreated, mainly constitutes random signal components, as well as situations where strong tonal components are present in the original highband without counterparts in the lowband, upon which the highband region is based. Furthermore, the SBR encoder works in close relation to the underlying core codec to assess which frequency range should be covered by SBR at a given time. The SBR data is efficiently coded prior to transmission by exploiting entropy coding as well as channel dependencies of the control data, in the case of stereo signals. The control parameter extraction algorithms typically need to be carefully tuned to the underlying codec at a given bitrate and a given sampling rate. This is due to the fact that a lower bitrate, usually implies a larger SBR range compared to a high bitrate, and different sampling rates correspond to different time resolutions of the SBR frames. An SBR decoder typically includes several different parts. It comprises a bitstream decoding module, a high frequency reconstruction (HFR) module, an additional high frequency components module, and an envelope adjuster module. The system is based around a complex valued QMF filterbank (for high-quality SBR) or a real-valued QMF filterbank (for low-power SBR). Embodiments of the invention are applicable to both high-quality SBR and low-power SBR. In the bitstream extraction module, the control data is read from the bitstream and decoded. The time frequency grid is obtained for the current frame, prior to reading the envelope data from the bitstream. The underlying core decoder decodes the audio signal of the current frame (albeit at the lower sampling rate) to produce time-domain audio samples. The resulting frame of audio data is used for high frequency reconstruction by the HFR module. The decoded lowband signal is then analyzed using a QMF filterbank. The high frequency reconstruction and envelope adjustment is subsequently performed on the subband samples of the QMF filterbank. The high frequencies are reconstructed from the low-band in a flexible way, based on the given control parameters. Furthermore, the reconstructed highband is adaptively filtered on a subband channel basis according to the control data to ensure the appropriate spectral characteristics of the given time/frequency region. 
The top level of an MPEG-4 AAC bitstream is a sequence of data blocks ("raw_data_block" elements), each of which is a segment of data (herein referred to as a "block") that contains audio data (typically for a time period of 1024 or 960 samples) and related information and/or other data. Herein, we use the term "block" to denote a segment of an MPEG-4 AAC bitstream comprising audio data (and corresponding metadata and optionally also other related data) which determines or is indicative of one (but not more than one) "raw_data_block" element. Each block of an MPEG-4 AAC bitstream can include a number of syntactic elements (each of which is also materialized in the bitstream as a segment of data). Seven types of such syntactic elements are defined in the MPEG-4 AAC standard. Each syntactic element is identified by a different value of the data element "id_syn_ele." Examples of syntactic elements include a "single_channel_element( )," a "channel_pair_element( )," and a "fill_element( )." A single channel element is a container including audio data of a single audio channel (a monophonic audio signal). A channel pair element includes audio data of two audio channels (that is, a stereo audio signal). A fill element is a container of information including an identifier (e.g., the value of the above-noted element "id_syn_ele") followed by data, which is referred to as "fill data." Fill elements have historically been used to adjust the instantaneous bit rate of bitstreams that are to be transmitted over a constant rate channel. By adding the appropriate amount of fill data to each block, a constant data rate may be achieved. In accordance with embodiments of the invention, the fill data may include one or more extension payloads that extend the type of data (e.g., metadata) capable of being transmitted in a bitstream. Fill data containing a new type of data may optionally be used by a device receiving the bitstream (e.g., a decoder) to extend the functionality of the device. Thus, as can be appreciated by one skilled in the art, fill elements are a special type of data structure and are different from the data structures typically used to transmit audio data (e.g., audio payloads containing channel data). In some embodiments of the invention, the identifier used to identify a fill element may consist of a three bit unsigned integer transmitted most significant bit first ("uimsbf") having a value of 0x6. In one block, several instances of the same type of syntactic element (e.g., several fill elements) may occur. Another standard for encoding audio bitstreams is the MPEG Unified Speech and Audio Coding (USAC) standard (ISO/IEC 23003-3:2012). The MPEG USAC standard describes encoding and decoding of audio content using spectral band replication processing (including SBR processing as described in the MPEG-4 AAC standard, and also including other enhanced forms of spectral band replication processing). This processing applies spectral band replication tools (sometimes referred to herein as "enhanced SBR tools" or "eSBR tools") of an expanded and enhanced version of the set of SBR tools described in the MPEG-4 AAC standard. Thus, eSBR (as defined in the USAC standard) is an improvement to SBR (as defined in the MPEG-4 AAC standard).
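Purely for illustration of how a fill element can be recognized by its identifier, the following Python sketch reads a 3-bit "id_syn_ele" value from a bit-serial buffer and compares it with the value 0x6 mentioned above; the BitReader helper and the contrived input bytes are assumptions of this sketch, and the full MPEG-4 AAC element syntax (including the fill data that follows the identifier) is not reproduced here.

ID_FIL = 0x6   # value of "id_syn_ele" identifying a fill element (3-bit uimsbf)

class BitReader:
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0          # pos counts bits, MSB first
    def read(self, nbits: int) -> int:
        val = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            bit = (byte >> (7 - self.pos % 8)) & 1
            val = (val << 1) | bit
            self.pos += 1
        return val

def next_element_id(reader: BitReader) -> int:
    # Each syntactic element starts with a 3-bit "id_syn_ele" value,
    # transmitted most significant bit first.
    return reader.read(3)

# Contrived example: the first three bits are 0b110 = 0x6, i.e. a fill element.
r = BitReader(bytes([0b11000000]))
assert next_element_id(r) == ID_FIL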
Herein, we use the expression “enhanced SBR processing” (or “eSBR processing”) to denote spectral band replication processing using at least one eSBR tool (e.g., at least one eSBR tool which is described or mentioned in the MPEG USAC standard) which is not described or mentioned in the MPEG-4 AAC standard. Examples of such eSBR tools are harmonic transposition and QMF-patching additional pre-processing or “pre-flattening.” A harmonic transposer of integer order T maps a sinusoid with frequency ω into a sinusoid with frequency Tω, while preserving signal duration. Three orders, T=2, 3, 4, are typically used in sequence to produce each part of the desired output frequency range using the smallest possible transposition order. If output above the fourth order transposition range is required, it may be generated by frequency shifts. When possible, near critically sampled baseband time domains are created for the processing to minimize computational complexity. The harmonic transposer may either be QMF or DFT based. When using the QMF based harmonic transposer, the bandwidth extension of the core coder time-domain signal is carried out entirely in the QMF domain, using a modified phase-vocoder structure, performing decimation followed by time stretching for every QMF subband. Transposition using several transpositions factors (e.g., T=2, 3, 4) is carried out in a common QMF analysis/synthesis transform stage. Since the QMF based harmonic transposer does not feature signal adaptive frequency domain oversampling, the corresponding flag in the bitstream (sbrOversamplingFlag[ch]) may be ignored. When using the DFT based harmonic transposer, the factor 3 and 4 transposers (3rd and 4th order transposers) are preferably integrated into the factor 2 transposer (2nd order transposer) by means of interpolation to reduce complexity. For each frame (corresponding to coreCoderFrameLength core coder samples), the nominal “full size” transform size of the transposer is first determined by the signal adaptive frequency domain oversampling flag (sbrOversamplingFlag[ch]) in the bitstream. When sbrPatchingMode==1, indicating that linear transposition is to be used to generate the highband, an additional step may be introduced to avoid discontinuities in the shape of the spectral envelope of the high frequency signal being input to the subsequent envelope adjuster. This improves the operation of the subsequent envelope adjustment stage, resulting in a highband signal that is perceived to be more stable. The operation of the additional preprocessing is beneficial for signal types where the coarse spectral envelope of the low band signal being used for high frequency reconstruction displays large variations in level. However, the value of the bitstream element may be determined in the encoder by applying any kind of signal dependent classification. The additional pre-processing is preferably activated through a one bit bitstream element, bs_sbr_preprocessing. When bs_sbr_preprocessing is set to one, the additional processing is enabled. When bs_sbr_preprocessing is set to zero, the additional pre-processing is disabled. The additional processing preferable utilizes a preGain curve that is used by the high frequency generator to scale the lowband, XLow, for each patch. 
For example, the preGain curve may be calculated according to:

preGain(k) = 10^((meanNrg − lowEnvSlope(k))/20),  0 ≤ k < k0

where k0 is the first QMF subband in the master frequency band table and lowEnvSlope is calculated using a function that computes coefficients of a best fitting polynomial (in a least-squares sense), such as polyfit( ). For example,

polyfit(3, k0, x_lowband, lowEnv, lowEnvSlope);

may be employed (using a third degree polynomial), where

lowEnv(k) = 10·log10(φ_k(0,0)/(numTimeSlots·RATE)) + 6,  0 ≤ k < k0

where x_lowband(k)=[0 . . . k0−1], numTimeSlots is the number of SBR envelope time slots that exist within a frame, RATE is a constant indicating the number of QMF subband samples per timeslot (e.g., 2), φ_k is a linear prediction filter coefficient (potentially obtained from the covariance method), and where

meanNrg = (1/k0) · Σ_{k=0}^{k0−1} lowEnv(k).

A bitstream generated in accordance with the MPEG USAC standard (sometimes referred to herein as a "USAC bitstream") includes encoded audio content and typically includes metadata indicative of each type of spectral band replication processing to be applied by a decoder to decode audio content of the USAC bitstream, and/or metadata which controls such spectral band replication processing and/or is indicative of at least one characteristic or parameter of at least one SBR tool and/or eSBR tool to be employed to decode audio content of the USAC bitstream. Herein, we use the expression "enhanced SBR metadata" (or "eSBR metadata") to denote metadata indicative of each type of spectral band replication processing to be applied by a decoder to decode audio content of an encoded audio bitstream (e.g., a USAC bitstream) and/or which controls such spectral band replication processing, and/or is indicative of at least one characteristic or parameter of at least one SBR tool and/or eSBR tool to be employed to decode such audio content, but which is not described or mentioned in the MPEG-4 AAC standard. An example of eSBR metadata is the metadata (indicative of, or for controlling, spectral band replication processing) which is described or mentioned in the MPEG USAC standard but not in the MPEG-4 AAC standard. Thus, eSBR metadata herein denotes metadata which is not SBR metadata, and SBR metadata herein denotes metadata which is not eSBR metadata. A USAC bitstream may include both SBR metadata and eSBR metadata. More specifically, a USAC bitstream may include eSBR metadata which controls the performance of eSBR processing by a decoder, and SBR metadata which controls the performance of SBR processing by the decoder. In accordance with typical embodiments of the present invention, eSBR metadata (e.g., eSBR-specific configuration data) is included (in accordance with the present invention) in an MPEG-4 AAC bitstream (e.g., in the sbr_extension( ) container at the end of an SBR payload). Performance of eSBR processing, during decoding of an encoded bitstream using an eSBR tool set (comprising at least one eSBR tool), by a decoder regenerates the high frequency band of the audio signal, based on replication of sequences of harmonics which were truncated during encoding. Such eSBR processing typically adjusts the spectral envelope of the generated high frequency band and applies inverse filtering, and adds noise and sinusoidal components in order to recreate the spectral characteristics of the original audio signal.
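Referring back to the preGain pre-flattening formula given earlier in this section, the following NumPy sketch evaluates preGain(k) from a given lowEnv curve; the use of numpy.polyfit/polyval in place of the polyfit( ) routine mentioned above and the stand-in envelope values are assumptions of this sketch, not part of the described processing chain.

import numpy as np

def pre_gain(low_env):
    # low_env: lowEnv(k) in dB for the k0 low-band QMF subbands (0 <= k < k0).
    k0 = len(low_env)
    x_lowband = np.arange(k0)
    # Third degree least-squares polynomial fit of the low-band envelope,
    # evaluated on the same grid; this plays the role of lowEnvSlope(k).
    coeffs = np.polyfit(x_lowband, low_env, 3)
    low_env_slope = np.polyval(coeffs, x_lowband)
    mean_nrg = np.sum(low_env) / k0
    return 10.0 ** ((mean_nrg - low_env_slope) / 20.0)

# Stand-in envelope (dB values) for k0 = 8 subbands, purely for illustration.
gains = pre_gain(np.array([30.0, 28.0, 25.0, 24.0, 20.0, 18.0, 15.0, 12.0]))

The resulting gains scale the lowband XLow per patch, attenuating large level variations of the coarse spectral envelope before envelope adjustment.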
In accordance with typical embodiments of the invention, eSBR metadata is included (e.g., a small number of control bits which are eSBR metadata are included) in one or more of metadata segments of an encoded audio bitstream (e.g., an MPEG-4 AAC bitstream) which also includes encoded audio data in other segments (audio data segments). Typically, at least one such metadata segment of each block of the bitstream is (or includes) a fill element (including an identifier indicating the start of the fill element), and the eSBR metadata is included in the fill element after the identifier.FIG.1is a block diagram of an exemplary audio processing chain (an audio data processing system), in which one or more of the elements of the system may be configured in accordance with an embodiment of the present invention. The system includes the following elements, coupled together as shown: encoder 1, delivery subsystem 2, decoder 3, and post-processing unit 4. In variations on the system shown, one or more of the elements are omitted, or additional audio data processing units are included. In some implementations, encoder 1 (which optionally includes a pre-processing unit) is configured to accept PCM (time-domain) samples comprising audio content as input, and to output an encoded audio bitstream (having format which is compliant with the MPEG-4 AAC standard) which is indicative of the audio content. The data of the bitstream that are indicative of the audio content are sometimes referred to herein as “audio data” or “encoded audio data.” If the encoder is configured in accordance with a typical embodiment of the present invention, the audio bitstream output from the encoder includes eSBR metadata (and typically also other metadata) as well as audio data. One or more encoded audio bitstreams output from encoder 1 may be asserted to encoded audio delivery subsystem 2. Subsystem 2 is configured to store and/or deliver each encoded bitstream output from encoder 1. An encoded audio bitstream output from encoder 1 may be stored by subsystem 2 (e.g., in the form of a DVD or Blu ray disc), or transmitted by subsystem 2 (which may implement a transmission link or network), or may be both stored and transmitted by subsystem 2. Decoder 3 is configured to decode an encoded MPEG-4 AAC audio bitstream (generated by encoder 1) which it receives via subsystem 2. In some embodiments, decoder 3 is configured to extract eSBR metadata from each block of the bitstream, and to decode the bitstream (including by performing eSBR processing using the extracted eSBR metadata) to generate decoded audio data (e.g., streams of decoded PCM audio samples). In some embodiments, decoder 3 is configured to extract SBR metadata from the bitstream (but to ignore eSBR metadata included in the bitstream), and to decode the bitstream (including by performing SBR processing using the extracted SBR metadata) to generate decoded audio data (e.g., streams of decoded PCM audio samples). Typically, decoder 3 includes a buffer which stores (e.g., in a non-transitory manner) segments of the encoded audio bitstream received from subsystem 2. Post-processing unit 4 ofFIG.1is configured to accept a stream of decoded audio data from decoder 3 (e.g., decoded PCM audio samples), and to perform post processing thereon. Post-processing unit may also be configured to render the post-processed audio content (or the decoded audio received from decoder 3) for playback by one or more speakers. 
FIG.2is a block diagram of an encoder (100) which is an embodiment of the inventive audio processing unit. Any of the components or elements of encoder100may be implemented as one or more processes and/or one or more circuits (e.g., ASICs, FPGAs, or other integrated circuits), in hardware, software, or a combination of hardware and software. Encoder100includes encoder105, stuffer/formatter stage107, metadata generation stage106, and buffer memory109, connected as shown. Typically also, encoder100includes other processing elements (not shown). Encoder100is configured to convert an input audio bitstream to an encoded output MPEG-4 AAC bitstream. Metadata generator106is coupled and configured to generate (and/or pass through to stage107) metadata (including eSBR metadata and SBR metadata) to be included by stage107in the encoded bitstream to be output from encoder100. Encoder105is coupled and configured to encode (e.g., by performing compression thereon) the input audio data, and to assert the resulting encoded audio to stage107for inclusion in the encoded bitstream to be output from stage107. Stage107is configured to multiplex the encoded audio from encoder105and the metadata (including eSBR metadata and SBR metadata) from generator106to generate the encoded bitstream to be output from stage107, preferably so that the encoded bitstream has format as specified by one of the embodiments of the present invention. Buffer memory109is configured to store (e.g., in a non-transitory manner) at least one block of the encoded audio bitstream output from stage107, and a sequence of the blocks of the encoded audio bitstream is then asserted from buffer memory109as output from encoder100to a delivery system. FIG.3is a block diagram of a system including decoder (200) which is an embodiment of the inventive audio processing unit, and optionally also a post-processor (300) coupled thereto. Any of the components or elements of decoder200and post-processor300may be implemented as one or more processes and/or one or more circuits (e.g., ASICs, FPGAs, or other integrated circuits), in hardware, software, or a combination of hardware and software. Decoder200comprises buffer memory201, bitstream payload deformatter (parser)205, audio decoding subsystem202(sometimes referred to as a “core” decoding stage or “core” decoding subsystem), eSBR processing stage203, and control bit generation stage204, connected as shown. Typically also, decoder200includes other processing elements (not shown). Buffer memory (buffer)201stores (e.g., in a non-transitory manner) at least one block of an encoded MPEG-4 AAC audio bitstream received by decoder200. In operation of decoder200, a sequence of the blocks of the bitstream is asserted from buffer201to deformatter205. In variations on theFIG.3embodiment (or theFIG.4embodiment to be described), an APU which is not a decoder (e.g., APU500ofFIG.6) includes a buffer memory (e.g., a buffer memory identical to buffer201) which stores (e.g., in a non-transitory manner) at least one block of an encoded audio bitstream (e.g., an MPEG-4 AAC audio bitstream) of the same type received by buffer201ofFIG.3orFIG.4(i.e., an encoded audio bitstream which includes eSBR metadata). 
With reference again toFIG.3, deformatter205is coupled and configured to demultiplex each block of the bitstream to extract SBR metadata (including quantized envelope data) and eSBR metadata (and typically also other metadata) therefrom, to assert at least the eSBR metadata and the SBR metadata to eSBR processing stage203, and typically also to assert other extracted metadata to decoding subsystem202(and optionally also to control bit generator204). Deformatter205is also coupled and configured to extract audio data from each block of the bitstream, and to assert the extracted audio data to decoding subsystem (decoding stage)202. The system ofFIG.3optionally also includes post-processor300. Post-processor300includes buffer memory (buffer)301and other processing elements (not shown) including at least one processing element coupled to buffer301. Buffer301stores (e.g., in a non-transitory manner) at least one block (or frame) of the decoded audio data received by post-processor300from decoder200. Processing elements of post-processor300are coupled and configured to receive and adaptively process a sequence of the blocks (or frames) of the decoded audio output from buffer301, using metadata output from decoding subsystem202(and/or deformatter205) and/or control bits output from stage204of decoder200. Audio decoding subsystem202of decoder200is configured to decode the audio data extracted by parser205(such decoding may be referred to as a “core” decoding operation) to generate decoded audio data, and to assert the decoded audio data to eSBR processing stage203. The decoding is performed in the frequency domain and typically includes inverse quantization followed by spectral processing. Typically, a final stage of processing in subsystem202applies a frequency domain-to-time domain transform to the decoded frequency domain audio data, so that the output of subsystem202is time domain, decoded audio data. Stage203is configured to apply SBR tools and eSBR tools indicated by the SBR metadata and the eSBR metadata (extracted by parser205) to the decoded audio data (i.e., to perform SBR and eSBR processing on the output of decoding subsystem202using the SBR and eSBR metadata) to generate the fully decoded audio data which is output (e.g., to post-processor300) from decoder200. Typically, decoder200includes a memory (accessible by subsystem202and stage203) which stores the deformatted audio data and metadata output from deformatter205, and stage203is configured to access the audio data and metadata (including SBR metadata and eSBR metadata) as needed during SBR and eSBR processing. The SBR processing and eSBR processing in stage203may be considered to be post-processing on the output of core decoding subsystem202. Optionally, decoder200also includes a final upmixing subsystem (which may apply parametric stereo (“PS”) tools defined in the MPEG-4 AAC standard, using PS metadata extracted by deformatter205and/or control bits generated in subsystem204) which is coupled and configured to perform upmixing on the output of stage203to generate fully decoded, upmixed audio which is output from decoder200. Alternatively, post-processor300is configured to perform upmixing on the output of decoder200(e.g., using PS metadata extracted by deformatter205and/or control bits generated in subsystem204).
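For orientation, a hedged C sketch of the per-block data flow just described (buffer, deformatter 205, core decoding subsystem 202, SBR/eSBR stage 203) is given below. All type names, function names, and the stub bodies are invented for illustration; they only mirror the order of operations in the text, not any normative interface.

#include <stdio.h>

typedef struct { int dummy; } SbrMetadata;            /* quantized envelopes, etc. */
typedef struct { int sbrPatchingMode; } EsbrMetadata;  /* plus the other eSBR flags */

static void deformat(const unsigned char *block, SbrMetadata *sbr,
                     EsbrMetadata *esbr, const unsigned char **audio)
{   /* stand-in for deformatter 205: split audio payload, SBR and eSBR metadata */
    sbr->dummy = 0; esbr->sbrPatchingMode = 0; *audio = block;
}
static void core_decode(const unsigned char *audio, float *lowband, int n)
{   /* stand-in for core subsystem 202: inverse quantization, spectral processing,
       then a frequency-to-time transform producing the time-domain lowband      */
    (void)audio; for (int i = 0; i < n; i++) lowband[i] = 0.0f;
}
static void sbr_esbr_process(const float *lowband, int n, const SbrMetadata *sbr,
                             const EsbrMetadata *esbr, float *out)
{   /* stand-in for stage 203: regenerate the highband using SBR + eSBR metadata */
    (void)sbr; (void)esbr; for (int i = 0; i < n; i++) out[i] = lowband[i];
}

int main(void)
{
    unsigned char block[64] = { 0 };   /* one block delivered from buffer memory 201 */
    const unsigned char *audio;
    SbrMetadata sbr; EsbrMetadata esbr;
    float lowband[1024], wideband[1024];

    deformat(block, &sbr, &esbr, &audio);
    core_decode(audio, lowband, 1024);
    sbr_esbr_process(lowband, 1024, &sbr, &esbr, wideband);
    printf("decoded one block (sbrPatchingMode=%d)\n", esbr.sbrPatchingMode);
    return 0;
}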
In response to metadata extracted by deformatter205, control bit generator204may generate control data, and the control data may be used within decoder200(e.g., in a final upmixing subsystem) and/or asserted as output of decoder200(e.g., to post-processor300for use in post-processing). In response to metadata extracted from the input bitstream (and optionally also in response to control data), stage204may generate (and assert to post-processor300) control bits indicating that decoded audio data output from eSBR processing stage203should undergo a specific type of post-processing. In some implementations, decoder200is configured to assert metadata extracted by deformatter205from the input bitstream to post-processor300, and post-processor300is configured to perform post-processing on the decoded audio data output from decoder200using the metadata. FIG.4is a block diagram of an audio processing unit (“APU”) (210) which is another embodiment of the inventive audio processing unit. APU210is a legacy decoder which is not configured to perform eSBR processing. Any of the components or elements of APU210may be implemented as one or more processes and/or one or more circuits (e.g., ASICs, FPGAs, or other integrated circuits), in hardware, software, or a combination of hardware and software. APU210comprises buffer memory201, bitstream payload deformatter (parser)215, audio decoding subsystem202(sometimes referred to as a “core” decoding stage or “core” decoding subsystem), and SBR processing stage213, connected as shown. Typically also, APU210includes other processing elements (not shown). APU210may represent, for example, an audio encoder, decoder or transcoder. Elements201and202of APU210are identical to the identically numbered elements of decoder200(ofFIG.3) and the above description of them will not be repeated. In operation of APU210, a sequence of blocks of an encoded audio bitstream (an MPEG-4 AAC bitstream) received by APU210is asserted from buffer201to deformatter215. Deformatter215is coupled and configured to demultiplex each block of the bitstream to extract SBR metadata (including quantized envelope data) and typically also other metadata therefrom, but to ignore eSBR metadata that may be included in the bitstream in accordance with any embodiment of the present invention. Deformatter215is configured to assert at least the SBR metadata to SBR processing stage213. Deformatter215is also coupled and configured to extract audio data from each block of the bitstream, and to assert the extracted audio data to decoding subsystem (decoding stage)202. Audio decoding subsystem202of APU210is configured to decode the audio data extracted by deformatter215(such decoding may be referred to as a “core” decoding operation) to generate decoded audio data, and to assert the decoded audio data to SBR processing stage213. The decoding is performed in the frequency domain. Typically, a final stage of processing in subsystem202applies a frequency domain-to-time domain transform to the decoded frequency domain audio data, so that the output of subsystem202is time domain, decoded audio data. Stage213is configured to apply SBR tools (but not eSBR tools) indicated by the SBR metadata (extracted by deformatter215) to the decoded audio data (i.e., to perform SBR processing on the output of decoding subsystem202using the SBR metadata) to generate the fully decoded audio data which is output (e.g., to post-processor300) from APU210.
Typically, APU210includes a memory (accessible by subsystem202and stage213) which stores the deformatted audio data and metadata output from deformatter215, and stage213is configured to access the audio data and metadata (including SBR metadata) as needed during SBR processing. The SBR processing in stage213may be considered to be post-processing on the output of core decoding subsystem202. Optionally, APU210also includes a final upmixing subsystem (which may apply parametric stereo (“PS”) tools defined in the MPEG-4 AAC standard, using PS metadata extracted by deformatter215) which is coupled and configured to perform upmixing on the output of stage213to generate fully decoded, upmixed audio which is output from APU210. Alternatively, a post-processor is configured to perform upmixing on the output of APU210(e.g., using PS metadata extracted by deformatter215and/or control bits generated in APU210). Various implementations of encoder100, decoder200, and APU210are configured to perform different embodiments of the inventive method. In accordance with some embodiments, eSBR metadata is included (e.g., a small number of control bits which are eSBR metadata are included) in an encoded audio bitstream (e.g., an MPEG-4 AAC bitstream), such that legacy decoders (which are not configured to parse the eSBR metadata, or to use any eSBR tool to which the eSBR metadata pertains) can ignore the eSBR metadata but nevertheless decode the bitstream to the extent possible without use of the eSBR metadata or any eSBR tool to which the eSBR metadata pertains, typically without any significant penalty in decoded audio quality. However, eSBR decoders configured to parse the bitstream to identify the eSBR metadata and to use at least one eSBR tool in response to the eSBR metadata, will enjoy the benefits of using at least one such eSBR tool. Therefore, embodiments of the invention provide a means for efficiently transmitting enhanced spectral band replication (eSBR) control data or metadata in a backward-compatible fashion. Typically, the eSBR metadata in the bitstream is indicative of (e.g., is indicative of at least one characteristic or parameter of) one or more of the following eSBR tools (which are described in the MPEG USAC standard, and which may or may not have been applied by an encoder during generation of the bitstream): harmonic transposition; and QMF-patching additional pre-processing (pre-flattening). For example, the eSBR metadata included in the bitstream may be indicative of values of the parameters (described in the MPEG USAC standard and in the present disclosure): sbrPatchingMode[ch], sbrOversamplingFlag[ch], sbrPitchInBinsFlag[ch], sbrPitchInBins[ch], and bs_sbr_preprocessing. Herein, the notation X[ch], where X is some parameter, denotes that the parameter pertains to channel (“ch”) of audio content of an encoded bitstream to be decoded. For simplicity, we sometimes omit the expression [ch], and assume the relevant parameter pertains to a channel of audio content. Herein, the notation X[ch][env], where X is some parameter, denotes that the parameter pertains to SBR envelope (“env”) of channel (“ch”) of audio content of an encoded bitstream to be decoded. For simplicity, we sometimes omit the expressions [env] and [ch], and assume the relevant parameter pertains to an SBR envelope of a channel of audio content.
During decoding of an encoded bitstream, performance of harmonic transposition during an eSBR processing stage of the decoding (for each channel, “ch”, of audio content indicated by the bitstream) is controlled by the following eSBR metadata parameters: sbrPatchingMode[ch]; sbrOversamplingFlag[ch]; sbrPitchInBinsFlag[ch]; and sbrPitchInBins[ch]. The value “sbrPatchingMode[ch]” indicates the transposer type used in eSBR: sbrPatchingMode[ch]=1 indicates linear transposition patching as described in Section 4.6.18 of the MPEG-4 AAC standard (as used with either high-quality SBR or low-power SBR); sbrPatchingMode[ch]=0 indicates harmonic SBR patching as described in Section 7.5.3 or 7.5.4 of the MPEG USAC standard. The value “sbrOversamplingFlag[ch]” indicates the use of signal adaptive frequency domain oversampling in eSBR in combination with the DFT based harmonic SBR patching as described in Section 7.5.3 of the MPEG USAC standard. This flag controls the size of the DFTs that are utilized in the transposer: 1 indicates signal adaptive frequency domain oversampling enabled as described in Section 7.5.3.1 of the MPEG USAC standard; 0 indicates signal adaptive frequency domain oversampling disabled as described in Section 7.5.3.1 of the MPEG USAC standard. The value “sbrPitchInBinsFlag[ch]” controls the interpretation of the sbrPitchInBins[ch] parameter: 1 indicates that the value in sbrPitchInBins[ch] is valid and greater than zero; 0 indicates that the value of sbrPitchInBins[ch] is set to zero. The value “sbrPitchInBins[ch]” controls the addition of cross product terms in the SBR harmonic transposer. The value sbrPitchInBins[ch] is an integer value in the range [0,127] and represents the distance measured in frequency bins for a 1536-line DFT acting on the sampling frequency of the core coder. In the case that an MPEG-4 AAC bitstream is indicative of an SBR channel pair whose channels are not coupled (rather than a single SBR channel), the bitstream is indicative of two instances of the above syntax (for harmonic or non-harmonic transposition), one for each channel of the sbr_channel_pair_element( ). The harmonic transposition of the eSBR tool typically improves the quality of decoded musical signals at relatively low cross over frequencies. Non-harmonic transposition (that is, legacy spectral patching) typically improves speech signals. Hence, a starting point in the decision as to which type of transposition is preferable for encoding specific audio content is to select the transposition method depending on speech/music detection, with harmonic transposition being employed on the musical content and spectral patching on the speech content. Performance of pre-flattening during eSBR processing is controlled by the value of a one-bit eSBR metadata parameter known as “bs_sbr_preprocessing”, in the sense that pre-flattening is either performed or not performed depending on the value of this single bit. When the SBR QMF-patching algorithm, as described in Section 4.6.18.6.3 of the MPEG-4 AAC standard, is used, the step of pre-flattening may be performed (when indicated by the “bs_sbr_preprocessing” parameter) in an effort to avoid discontinuities in the shape of the spectral envelope of a high frequency signal being input to a subsequent envelope adjuster (the envelope adjuster performs another stage of the eSBR processing). The pre-flattening typically improves the operation of the subsequent envelope adjustment stage, resulting in a highband signal that is perceived to be more stable.
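To make the roles of these four parameters concrete, the short C sketch below gathers them into a per-channel structure and converts sbrPitchInBins to a frequency distance using the 1536-line DFT relationship stated above. The structure layout, the function name, and the example core sampling rate are illustrative assumptions, not definitions from the MPEG-4 AAC or USAC standards.

#include <stdio.h>

/* Per-channel eSBR harmonic-transposition controls as described above
 * (illustrative structure, not a structure defined by any standard). */
typedef struct {
    int sbrPatchingMode;      /* 1: linear (legacy) patching, 0: harmonic patching      */
    int sbrOversamplingFlag;  /* 1: signal adaptive f-domain oversampling (DFT patching) */
    int sbrPitchInBinsFlag;   /* 1: sbrPitchInBins carries a valid non-zero value        */
    int sbrPitchInBins;       /* 0..127, distance in bins of a 1536-line DFT             */
} EsbrChannelParams;

/* Convert sbrPitchInBins to a distance in Hz: the value counts bins of a
 * 1536-line DFT operating at the core coder sampling frequency. */
static double pitch_distance_hz(const EsbrChannelParams *p, double core_sample_rate)
{
    int bins = p->sbrPitchInBinsFlag ? p->sbrPitchInBins : 0;
    return bins * core_sample_rate / 1536.0;
}

int main(void)
{
    EsbrChannelParams ch = { 0, 1, 1, 42 };   /* harmonic patching, example pitch value */
    double fs_core = 24000.0;                 /* example core sampling rate (Hz)        */
    printf("cross-product pitch distance: %.1f Hz\n", pitch_distance_hz(&ch, fs_core));
    return 0;
}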
The overall bitrate requirement for including in an MPEG-4 AAC bitstream eSBR metadata indicative of the above-mentioned eSBR tools (harmonic transposition and pre-flattening) is expected to be on the order of a few hundred bits per second because only the differential control data needed to perform eSBR processing is transmitted in accordance with some embodiments of the invention. Legacy decoders can ignore this information because it is included in a backward compatible manner (as will be explained later). Therefore, the detrimental effect on bitrate associated with the inclusion of eSBR metadata is negligible, for a number of reasons, including the following: (a) the bitrate penalty (due to including the eSBR metadata) is a very small fraction of the total bitrate because only the differential control data needed to perform eSBR processing is transmitted (and not a simulcast of the SBR control data); and (b) the tuning of SBR related control information does not typically depend on the details of the transposition. Examples of when the control data does depend on the operation of the transposer are discussed later in this application. Thus, embodiments of the invention provide a means for efficiently transmitting enhanced spectral band replication (eSBR) control data or metadata in a backward-compatible fashion. This efficient transmission of the eSBR control data reduces memory requirements in decoders, encoders, and transcoders employing aspects of the invention, while having no tangible adverse effect on bitrate. Moreover, the complexity and processing requirements associated with performing eSBR in accordance with embodiments of the invention are also reduced because the SBR data needs to be processed only once and not simulcast, which would be the case if eSBR were treated as a completely separate object type in MPEG-4 AAC instead of being integrated into the MPEG-4 AAC codec in a backward-compatible manner. Next, with reference toFIG.7, we describe elements of a block (“raw_data_block”) of an MPEG-4 AAC bitstream in which eSBR metadata is included in accordance with some embodiments of the present invention.FIG.7is a diagram of a block (a “raw_data_block”) of the MPEG-4 AAC bitstream, showing some of the segments thereof. A block of an MPEG-4 AAC bitstream may include at least one “single_channel_element( )” (e.g., the single channel element shown inFIG.7), and/or at least one “channel_pair_element( )” (not specifically shown inFIG.7although it may be present), including audio data for an audio program. The block may also include a number of “fill_elements” (e.g., fill element 1 and/or fill element 2 ofFIG.7) including data (e.g., metadata) related to the program. Each “single_channel_element( )” includes an identifier (e.g., “ID1” ofFIG.7) indicating the start of a single channel element, and can include audio data indicative of a different channel of a multi-channel audio program. Each “channel_pair_element( )” includes an identifier (not shown inFIG.7) indicating the start of a channel pair element, and can include audio data indicative of two channels of the program. A fill_element (referred to herein as a fill element) of an MPEG-4 AAC bitstream includes an identifier (“ID2” ofFIG.7) indicating the start of a fill element, and fill data after the identifier. The identifier ID2 may consist of a three bit unsigned integer transmitted most significant bit first (“uimsbf”) having a value of 0x6.
The fill data can include an extension_payload( ) element (sometimes referred to herein as an extension payload) whose syntax is shown in Table 4.57 of the MPEG-4 AAC standard. Several types of extension payloads exist and are identified through the “extension_type” parameter, which is a four bit unsigned integer transmitted most significant bit first (“uimsbf”). The fill data (e.g., an extension payload thereof) can include a header or identifier (e.g., “header1” ofFIG.7) which indicates a segment of fill data which is indicative of an SBR object (i.e., the header initializes an “SBR object” type, referred to as sbr_extension_data( ) in the MPEG-4 AAC standard). For example, a spectral band replication (SBR) extension payload is identified with the value of ‘1101’ or ‘1110’ for the extension_type field in the header, with the identifier ‘1101’ identifying an extension payload with SBR data and ‘1110’ identifying an extension payload with SBR data with a Cyclic Redundancy Check (CRC) to verify the correctness of the SBR data. When the header (e.g., the extension_type field) initializes an SBR object type, SBR metadata (sometimes referred to herein as “spectral band replication data,” and referred to as sbr_data( ) in the MPEG-4 AAC standard) follows the header, and at least one spectral band replication extension element (e.g., the “SBR extension element” of fill element 1 ofFIG.7) can follow the SBR metadata. Such a spectral band replication extension element (a segment of the bitstream) is referred to as an “sbr_extension( )” container in the MPEG-4 AAC standard. A spectral band replication extension element optionally includes a header (e.g., “SBR extension header” of fill element 1 ofFIG.7). The MPEG-4 AAC standard contemplates that a spectral band replication extension element can include PS (parametric stereo) data for audio data of a program. The MPEG-4 AAC standard contemplates that when the header of a fill element (e.g., of an extension payload thereof) initializes an SBR object type (as does “header1” ofFIG.7) and a spectral band replication extension element of the fill element includes PS data, the fill element (e.g., the extension payload thereof) includes spectral band replication data, and a “bs_extension_id” parameter whose value (i.e., bs_extension_id=2) indicates that PS data is included in a spectral band replication extension element of the fill element. In accordance with some embodiments of the present invention, eSBR metadata (e.g., a flag indicative of whether enhanced spectral band replication (eSBR) processing is to be performed on audio content of the block) is included in a spectral band replication extension element of a fill element. For example, such a flag is indicated in fill element 1 ofFIG.7, where the flag occurs after the header (the “SBR extension header” of fill element 1) of “SBR extension element” of fill element 1. Optionally, such a flag and additional eSBR metadata are included in a spectral band replication extension element after the spectral band replication extension element's header (e.g., in the SBR extension element of fill element 1 inFIG.7, after the SBR extension header). In accordance with some embodiments of the present invention, a fill element which includes eSBR metadata also includes a “bs_extension_id” parameter whose value (e.g., bs_extension_id=3) indicates that eSBR metadata is included in the fill element and that eSBR processing is to be performed on audio content of the relevant block.
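The following C sketch illustrates, under stated assumptions, how a parser might recognize an SBR extension payload inside a fill element: it reads the 3-bit id_syn_ele, the 4-bit count field, and the 4-bit extension_type, and checks for the values ‘1101’ and ‘1110’ mentioned above. The bit reader, the constant names, and the toy byte sequence are made up for this example; handling of the count field semantics and of the payload body is omitted.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Values taken from the description above; the enum names are invented. */
enum { ID_FIL = 0x6,            /* 3-bit id_syn_ele value that starts a fill element */
       EXT_SBR_DATA     = 0xD,  /* extension_type '1101': SBR data                   */
       EXT_SBR_DATA_CRC = 0xE   /* extension_type '1110': SBR data with CRC          */
};

/* Minimal MSB-first bit reader (uimsbf), just enough for this example. */
typedef struct { const uint8_t *buf; size_t bitpos; } BitReader;
static unsigned read_bits(BitReader *br, unsigned n)
{
    unsigned v = 0;
    while (n--) {
        v = (v << 1) | ((br->buf[br->bitpos >> 3] >> (7 - (br->bitpos & 7))) & 1);
        br->bitpos++;
    }
    return v;
}

int main(void)
{
    /* toy bits: id_syn_ele = 110 (0x6), count = 0001, extension_type = 1101 */
    const uint8_t toy[] = { 0xC3, 0xA0 };
    BitReader br = { toy, 0 };

    if (read_bits(&br, 3) == ID_FIL) {                 /* fill element found             */
        unsigned count = read_bits(&br, 4);            /* length field (not used here)   */
        unsigned extension_type = read_bits(&br, 4);   /* first field of the payload     */
        if (extension_type == EXT_SBR_DATA || extension_type == EXT_SBR_DATA_CRC)
            printf("SBR extension payload (count field = %u)\n", count);
    }
    return 0;
}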
In accordance with some embodiments of the invention, eSBR metadata is included in a fill element (e.g., fill element 2 ofFIG.7) of an MPEG-4 AAC bitstream other than in a spectral band replication extension element (SBR extension element) of the fill element. This is because fill elements containing an extension_payload( ) with SBR data or SBR data with a CRC do not contain any other extension payload of any other extension type. Therefore, in embodiments where eSBR metadata is stored in its own extension payload, a separate fill element is used to store the eSBR metadata. Such a fill element includes an identifier (e.g., “ID2” ofFIG.7) indicating the start of a fill element, and fill data after the identifier. The fill data can include an extension_payload( ) element (sometimes referred to herein as an extension payload) whose syntax is shown in Table 4.57 of the MPEG-4 AAC standard. The fill data (e.g., an extension payload thereof) includes a header (e.g., “header2” of fill element 2 ofFIG.7) which is indicative of an eSBR object (i.e., the header initializes an enhanced spectral band replication (eSBR) object type), and the fill data (e.g., an extension payload thereof) includes eSBR metadata after the header. For example, fill element 2 ofFIG.7includes such a header (“header2”) and also includes, after the header, eSBR metadata (i.e., the “flag” in fill element 2, which is indicative of whether enhanced spectral band replication (eSBR) processing is to be performed on audio content of the block). Optionally, additional eSBR metadata is also included in the fill data of fill element 2 ofFIG.7, after header2. In the embodiments being described in the present paragraph, the header (e.g., header2 ofFIG.7) has an identification value which is not one of the conventional values specified in Table 4.57 of the MPEG-4 AAC standard, and is instead indicative of an eSBR extension payload (so that the header's extension_type field indicates that the fill data includes eSBR metadata). In a first class of embodiments, the invention is an audio processing unit (e.g., a decoder), comprising: a memory (e.g., buffer201ofFIG.3or4) configured to store at least one block of an encoded audio bitstream (e.g., at least one block of an MPEG-4 AAC bitstream); a bitstream payload deformatter (e.g., element205ofFIG.3or element215ofFIG.4) coupled to the memory and configured to demultiplex at least one portion of said block of the bitstream; and a decoding subsystem (e.g., elements202and203ofFIG.3, or elements202and213ofFIG.4), coupled and configured to decode at least one portion of audio content of said block of the bitstream, wherein the block includes: a fill element, including an identifier indicating a start of the fill element (e.g., the “id_syn_ele” identifier having value 0x6, of Table 4.85 of the MPEG-4 AAC standard), and fill data after the identifier, wherein the fill data includes: at least one flag identifying whether enhanced spectral band replication (eSBR) processing is to be performed on audio content of the block (e.g., using spectral band replication data and eSBR metadata included in the block). The flag is eSBR metadata, and an example of the flag is the sbrPatchingMode flag. Another example of the flag is the harmonicSBR flag. Both of these flags indicate whether a base form of spectral band replication or an enhanced form of spectral band replication is to be performed on the audio data of the block.
The base form of spectral band replication is spectral patching, and the enhanced form of spectral band replication is harmonic transposition. In some embodiments, the fill data also includes additional eSBR metadata (i.e., eSBR metadata other than the flag). The memory may be a buffer memory (e.g., an implementation of buffer201ofFIG.4) which stores (e.g., in a non-transitory manner) the at least one block of the encoded audio bitstream. It is estimated that the complexity of performance of eSBR processing (using the eSBR harmonic transposition and pre-flattening) by an eSBR decoder during decoding of an MPEG-4 AAC bitstream which includes eSBR metadata (indicative of these eSBR tools) would be as follows (for typical decoding with the indicated parameters): harmonic transposition (16 kbps, 14400/28800 Hz), DFT based: 3.68 WMOPS (weighted million operations per second); QMF based: 0.98 WMOPS; QMF-patching pre-processing (pre-flattening): 0.1 WMOPS. It is known that DFT based transposition typically performs better than the QMF based transposition for transients. In accordance with some embodiments of the present invention, a fill element (of an encoded audio bitstream) which includes eSBR metadata also includes a parameter (e.g., a “bs_extension_id” parameter) whose value (e.g., bs_extension_id=3) signals that eSBR metadata is included in the fill element and that eSBR processing is to be performed on audio content of the relevant block, and/or a parameter (e.g., the same “bs_extension_id” parameter) whose value (e.g., bs_extension_id=2) signals that an sbr_extension( ) container of the fill element includes PS data. For example, as indicated in Table 1 below, such a parameter having the value bs_extension_id=2 may signal that an sbr_extension( ) container of the fill element includes PS data, and such a parameter having the value bs_extension_id=3 may signal that an sbr_extension( ) container of the fill element includes eSBR metadata:

TABLE 1

bs_extension_id    Meaning
0                  Reserved
1                  Reserved
2                  EXTENSION_ID_PS
3                  EXTENSION_ID_ESBR

In accordance with some embodiments of the invention, the syntax of each spectral band replication extension element which includes eSBR metadata and/or PS data is as indicated in Table 2 below (in which “sbr_extension( )” denotes a container which is the spectral band replication extension element, “bs_extension_id” is as described in Table 1 above, “ps_data” denotes PS data, and “esbr_data” denotes eSBR metadata):

TABLE 2

sbr_extension(bs_extension_id, num_bits_left)
{
    switch (bs_extension_id) {
    case EXTENSION_ID_PS:
        num_bits_left -= ps_data( );        Note 1
        break;
    case EXTENSION_ID_ESBR:
        num_bits_left -= esbr_data( );      Note 2
        break;
    default:
        bs_fill_bits;
        num_bits_left = 0;
        break;
    }
}
Note 1: ps_data( ) returns the number of bits read.
Note 2: esbr_data( ) returns the number of bits read.

In an exemplary embodiment, the esbr_data( ) referred to in Table 2 above is indicative of values of the following metadata parameters: 1. the one-bit metadata parameter, “bs_sbr_preprocessing”; and 2. for each channel (“ch”) of audio content of the encoded bitstream to be decoded, each of the above-described parameters: “sbrPatchingMode[ch]”; “sbrOversamplingFlag[ch]”; “sbrPitchInBinsFlag[ch]”; and “sbrPitchInBins[ch]”. For example, in some embodiments, the esbr_data( ) may have the syntax indicated in Table 3, to indicate these metadata parameters:
TABLE 3

Syntax                                              No. of bits
esbr_data(id_aac, bs_coupling)
{
    bs_sbr_preprocessing;                           1
    if (id_aac == ID_SCE) {
        if (sbrPatchingMode[0] == 0) {              1
            sbrOversamplingFlag[0];                 1
            if (sbrPitchInBinsFlag[0])              1
                sbrPitchInBins[0];                  7
            else
                sbrPitchInBins[0] = 0;
        } else {
            sbrOversamplingFlag[0] = 0;
            sbrPitchInBins[0] = 0;
        }
    } else if (id_aac == ID_CPE) {
        if (bs_coupling) {
            if (sbrPatchingMode[0,1] == 0) {        1
                sbrOversamplingFlag[0,1];           1
                if (sbrPitchInBinsFlag[0,1])        1
                    sbrPitchInBins[0,1];            7
                else
                    sbrPitchInBins[0,1] = 0;
            } else {
                sbrOversamplingFlag[0,1] = 0;
                sbrPitchInBins[0,1] = 0;
            }
        } else { /* bs_coupling == 0 */
            if (sbrPatchingMode[0] == 0) {          1
                sbrOversamplingFlag[0];             1
                if (sbrPitchInBinsFlag[0])          1
                    sbrPitchInBins[0];              7
                else
                    sbrPitchInBins[0] = 0;
            } else {
                sbrOversamplingFlag[0] = 0;
                sbrPitchInBins[0] = 0;
            }
            if (sbrPatchingMode[1] == 0) {          1
                sbrOversamplingFlag[1];             1
                if (sbrPitchInBinsFlag[1])          1
                    sbrPitchInBins[1];              7
                else
                    sbrPitchInBins[1] = 0;
            } else {
                sbrOversamplingFlag[1] = 0;
                sbrPitchInBins[1] = 0;
            }
        }
    }
}
Note: bs_sbr_preprocessing is defined as described in section 6.2.12 of ISO/IEC 23003-3:2012. sbrPatchingMode[ch], sbrOversamplingFlag[ch], sbrPitchInBinsFlag[ch] and sbrPitchInBins[ch] are defined as described in section 7.5 of ISO/IEC 23003-3:2012.

The above syntax enables an efficient implementation of an enhanced form of spectral band replication, such as harmonic transposition, as an extension to a legacy decoder. Specifically, the eSBR data of Table 3 includes only those parameters needed to perform the enhanced form of spectral band replication that are not either already supported in the bitstream or directly derivable from parameters already supported in the bitstream. All other parameters and processing data needed to perform the enhanced form of spectral band replication are extracted from pre-existing parameters in already-defined locations in the bitstream. For example, an MPEG-4 HE-AAC or HE-AAC v2 compliant decoder may be extended to include an enhanced form of spectral band replication, such as harmonic transposition. This enhanced form of spectral band replication is in addition to the base form of spectral band replication already supported by the decoder. In the context of an MPEG-4 HE-AAC or HE-AAC v2 compliant decoder, this base form of spectral band replication is the QMF spectral patching SBR tool as defined in Section 4.6.18 of the MPEG-4 AAC Standard. When performing the enhanced form of spectral band replication, an extended HE-AAC decoder may reuse many of the bitstream parameters already included in the SBR extension payload of the bitstream. The specific parameters that may be reused include, for example, the various parameters that determine the master frequency band table. These parameters include bs_start_freq (parameter that determines the start of the master frequency table), bs_stop_freq (parameter that determines the stop of the master frequency table), bs_freq_scale (parameter that determines the number of frequency bands per octave), and bs_alter_scale (parameter that alters the scale of the frequency bands). The parameters that may be reused also include parameters that determine the noise band table (bs_noise_bands) and the limiter band table parameters (bs_limiter_bands). Accordingly, in various embodiments, at least some of the equivalent parameters specified in the USAC standard are omitted from the bitstream, thereby reducing control overhead in the bitstream.
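A minimal C sketch of the dispatch of Table 2 follows, to highlight why the signaling is backward compatible: a decoder that recognizes EXTENSION_ID_ESBR consumes esbr_data( ), while a decoder that does not simply falls into the default branch and skips the payload as fill bits. The read_ps_data/read_esbr_data stubs, the supports_esbr argument, and the placeholder bit counts are inventions of this example, not normative code.

#include <stdio.h>

/* Values from Table 1 above. */
enum { EXTENSION_ID_PS = 2, EXTENSION_ID_ESBR = 3 };

static int read_ps_data(void)   { return 0;  }   /* placeholder: bits consumed by ps_data( )   */
static int read_esbr_data(void) { return 16; }   /* placeholder: bits consumed by esbr_data( ) */

static void sbr_extension(int bs_extension_id, int num_bits_left, int supports_esbr)
{
    switch (bs_extension_id) {
    case EXTENSION_ID_PS:
        num_bits_left -= read_ps_data();
        break;
    case EXTENSION_ID_ESBR:
        if (supports_esbr) {
            num_bits_left -= read_esbr_data();
            break;
        }
        /* legacy decoder: fall through and treat the payload as fill bits */
    default:
        num_bits_left = 0;    /* bs_fill_bits: skip the remaining payload  */
        break;
    }
    printf("bits left after extension %d: %d\n", bs_extension_id, num_bits_left);
}

int main(void)
{
    sbr_extension(EXTENSION_ID_ESBR, 24, 1);   /* eSBR-aware decoder uses the data */
    sbr_extension(EXTENSION_ID_ESBR, 24, 0);   /* legacy decoder just skips it     */
    return 0;
}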
Typically, where a parameter specified in the AAC standard has an equivalent parameter specified in the USAC standard, the equivalent parameter specified in the USAC standard has the same name as the parameter specified in the AAC standard, e.g. the envelope scalefactor EOrigMapped. However, the equivalent parameter specified in the USAC standard typically has a different value, which is “tuned” for the enhanced SBR processing defined in the USAC standard rather than for the SBR processing defined in the AAC standard. In order to improve the subjective quality for audio content with harmonic frequency structure and strong tonal characteristics, in particular at low bitrates, activation of enhanced SBR is recommended. The values of the corresponding bitstream element (i.e. esbr_data( )), controlling these tools, may be determined in the encoder by applying a signal dependent classification mechanism, as illustrated by the sketch following this paragraph. Generally, the usage of the harmonic patching method (sbrPatchingMode==0) is preferable for coding music signals at very low bitrates, where the core codec may be considerably limited in audio bandwidth. This is especially true if these signals include a pronounced harmonic structure. Contrarily, the usage of the regular SBR patching method (sbrPatchingMode==1) is preferred for speech and mixed signals, since it provides a better preservation of the temporal structure in speech. In order to improve the performance of the harmonic transposer, a pre-processing step can be activated (bs_sbr_preprocessing==1) that strives to avoid the introduction of spectral discontinuities of the signal going into the subsequent envelope adjuster. The operation of the tool is beneficial for signal types where the coarse spectral envelope of the low band signal being used for high frequency reconstruction displays large variations in level. In order to improve the transient response of the harmonic SBR patching, signal adaptive frequency domain oversampling can be applied (sbrOversamplingFlag==1). Since signal adaptive frequency domain oversampling increases the computational complexity of the transposer, but only brings benefits for frames which contain transients, the use of this tool is controlled by the bitstream element, which is transmitted once per frame and per independent SBR channel. A decoder operating in the proposed enhanced SBR mode typically needs to be able to switch between legacy and enhanced SBR patching. Therefore, delay may be introduced which can be as long as the duration of one core audio frame, depending on decoder setup. Typically, the delay for both legacy and enhanced SBR patching will be similar. In addition to the numerous parameters, other data elements may also be reused by an extended HE-AAC decoder when performing an enhanced form of spectral band replication in accordance with embodiments of the invention. For example, the envelope data and noise floor data may also be extracted from the bs_data_env (envelope scalefactors) and bs_noise_env (noise floor scalefactors) data and used during the enhanced form of spectral band replication. In essence, these embodiments exploit the configuration parameters and envelope data already supported by a legacy HE-AAC or HE-AAC v2 decoder in the SBR extension payload to enable an enhanced form of spectral band replication requiring as little extra transmitted data as possible.
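The following hedged C sketch illustrates such a signal dependent classification on the encoder side. The thresholds (16 kbps, 20 dB) and the analysis inputs (is_music, has_transient, lowband_level_variation_db) are assumptions standing in for real speech/music, transient, and envelope-variation detectors; only the mapping onto sbrPatchingMode, bs_sbr_preprocessing, and sbrOversamplingFlag reflects the guidance in the preceding paragraph.

#include <stdio.h>

typedef struct {
    int bs_sbr_preprocessing;
    int sbrPatchingMode;       /* 0 = harmonic patching, 1 = regular SBR patching */
    int sbrOversamplingFlag;   /* decided per frame, per independent SBR channel  */
} EsbrEncoderDecision;

static EsbrEncoderDecision classify_frame(int is_music, int bitrate_bps,
                                          int has_transient,
                                          double lowband_level_variation_db)
{
    EsbrEncoderDecision d;

    /* Harmonic patching for tonal/music content at very low bitrates,
       regular patching otherwise (better temporal structure for speech). */
    d.sbrPatchingMode = (is_music && bitrate_bps <= 16000) ? 0 : 1;

    /* Pre-flattening helps when the coarse lowband envelope varies strongly. */
    d.bs_sbr_preprocessing = (lowband_level_variation_db > 20.0) ? 1 : 0;

    /* Signal adaptive oversampling only pays off on frames with transients. */
    d.sbrOversamplingFlag = (d.sbrPatchingMode == 0 && has_transient) ? 1 : 0;

    return d;
}

int main(void)
{
    EsbrEncoderDecision d = classify_frame(1, 12000, 1, 25.0);
    printf("sbrPatchingMode=%d bs_sbr_preprocessing=%d sbrOversamplingFlag=%d\n",
           d.sbrPatchingMode, d.bs_sbr_preprocessing, d.sbrOversamplingFlag);
    return 0;
}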
The metadata was originally tuned for a base form of HFR (e.g., the spectral translation operation of SBR), but in accordance with embodiments, is used for an enhanced form of HFR (e.g., the harmonic transposition of eSBR). As previously discussed, the metadata generally represents operating parameters (e.g., envelope scale factors, noise floor scale factors, time/frequency grid parameters, sinusoid addition information, variable cross over frequency/band, inverse filtering mode, envelope resolution, smoothing mode, frequency interpolation mode) tuned and intended to be used with the base form of HFR (e.g., linear spectral translation). However, this metadata, combined with additional metadata parameters specific to the enhanced form of HFR (e.g., harmonic transposition), may be used to efficiently and effectively process the audio data using the enhanced form of HFR. Accordingly, extended decoders that support an enhanced form of spectral band replication may be created in a very efficient manner by relying on already defined bitstream elements (for example, those in the SBR extension payload) and adding only those parameters needed to support the enhanced form of spectral band replication (in a fill element extension payload). This data reduction feature combined with the placement of the newly added parameters in a reserved data field, such as an extension container, substantially reduces the barriers to creating a decoder that supports an enhanced form of spectral band replication by ensuring that the bitstream is backwards-compatible with legacy decoders not supporting the enhanced form of spectral band replication. In Table 3, the number in the right column indicates the number of bits of the corresponding parameter in the left column. In some embodiments, the SBR object type defined in MPEG-4 AAC is updated to contain the SBR-Tool and aspects of the enhanced SBR (eSBR) Tool as signaled in the SBR extension element (bs_extension_id==EXTENSION_ID_ESBR). If a decoder detects and supports this SBR extension element, the decoder employs the signaled aspects of the enhanced SBR Tool. The SBR object type updated in this manner is referred to as SBR enhancements. In some embodiments, the invention is a method including a step of encoding audio data to generate an encoded bitstream (e.g., an MPEG-4 AAC bitstream), including by including eSBR metadata in at least one segment of at least one block of the encoded bitstream and audio data in at least one other segment of the block. In typical embodiments, the method includes a step of multiplexing the audio data with the eSBR metadata in each block of the encoded bitstream. In typical decoding of the encoded bitstream in an eSBR decoder, the decoder extracts the eSBR metadata from the bitstream (including by parsing and demultiplexing the eSBR metadata and the audio data) and uses the eSBR metadata to process the audio data to generate a stream of decoded audio data. Another aspect of the invention is an eSBR decoder configured to perform eSBR processing (e.g., using at least one of the eSBR tools known as harmonic transposition or pre-flattening) during decoding of an encoded audio bitstream (e.g., an MPEG-4 AAC bitstream) which does not include eSBR metadata. An example of such a decoder will be described with reference toFIG.5.
The eSBR decoder (400) ofFIG.5includes buffer memory201(which is identical to memory201ofFIGS.3and4), bitstream payload deformatter215(which is identical to deformatter215ofFIG.4), audio decoding subsystem202(sometimes referred to as a “core” decoding stage or “core” decoding subsystem, and which is identical to core decoding subsystem202ofFIG.3), eSBR control data generation subsystem401, and eSBR processing stage203(which is identical to stage203ofFIG.3), connected as shown. Typically also, decoder400includes other processing elements (not shown). In operation of decoder400, a sequence of blocks of an encoded audio bitstream (an MPEG-4 AAC bitstream) received by decoder400is asserted from buffer201to deformatter215. Deformatter215is coupled and configured to demultiplex each block of the bitstream to extract SBR metadata (including quantized envelope data) and typically also other metadata therefrom. Deformatter215is configured to assert at least the SBR metadata to eSBR processing stage203. Deformatter215is also coupled and configured to extract audio data from each block of the bitstream, and to assert the extracted audio data to decoding subsystem (decoding stage)202. Audio decoding subsystem202of decoder400is configured to decode the audio data extracted by deformatter215(such decoding may be referred to as a “core” decoding operation) to generate decoded audio data, and to assert the decoded audio data to eSBR processing stage203. The decoding is performed in the frequency domain. Typically, a final stage of processing in subsystem202applies a frequency domain-to-time domain transform to the decoded frequency domain audio data, so that the output of subsystem202is time domain, decoded audio data. Stage203is configured to apply SBR tools (and eSBR tools) indicated by the SBR metadata (extracted by deformatter215) and by eSBR metadata generated in subsystem401, to the decoded audio data (i.e., to perform SBR and eSBR processing on the output of decoding subsystem202using the SBR and eSBR metadata) to generate the fully decoded audio data which is output from decoder400. Typically, decoder400includes a memory (accessible by subsystem202and stage203) which stores the deformatted audio data and metadata output from deformatter215(and optionally also subsystem401), and stage203is configured to access the audio data and metadata as needed during SBR and eSBR processing. The SBR processing in stage203may be considered to be post-processing on the output of core decoding subsystem202. Optionally, decoder400also includes a final upmixing subsystem (which may apply parametric stereo (“PS”) tools defined in the MPEG-4 AAC standard, using PS metadata extracted by deformatter215) which is coupled and configured to perform upmixing on the output of stage203to generate fully decoded, upmixed audio which is output from decoder400. Parametric stereo is a coding tool that represents a stereo signal using a linear downmix of the left and right channels of the stereo signal and sets of spatial parameters describing the stereo image. Parametric stereo typically employs three types of spatial parameters: (1) inter-channel intensity differences (IID) describing the intensity differences between the channels; (2) inter-channel phase differences (IPD) describing the phase differences between the channels; and (3) inter-channel coherence (ICC) describing the coherence (or similarity) between the channels. The coherence may be measured as the maximum of the cross-correlation as a function of time or phase.
These three parameters generally enable a high quality reconstruction of the stereo image. However, the IPD parameters only specify the relative phase differences between the channels of the stereo input signal and do not indicate the distribution of these phase differences over the left and right channels. Therefore, a fourth type of parameter describing an overall phase offset or overall phase difference (OPD) may additionally be used. In the stereo reconstruction process, consecutive windowed segments of both the received downmix signal, s[n], and a decorrelated version of the received downmix, d[n], are processed together with the spatial parameters to generate the left (lk(n)) and right (rk(n)) reconstructed signals according to:

lk(n) = H11(k,n) sk(n) + H21(k,n) dk(n)
rk(n) = H12(k,n) sk(n) + H22(k,n) dk(n)

where H11, H12, H21 and H22 are defined by the stereo parameters. The signals lk(n) and rk(n) are finally transformed back to the time domain by means of a frequency-to-time transform. Control data generation subsystem401ofFIG.5is coupled and configured to detect at least one property of the encoded audio bitstream to be decoded, and to generate eSBR control data (which may be or include eSBR metadata of any of the types included in encoded audio bitstreams in accordance with other embodiments of the invention) in response to at least one result of the detection step. The eSBR control data is asserted to stage203to trigger application of individual eSBR tools or combinations of eSBR tools upon detecting a specific property (or combination of properties) of the bitstream, and/or to control the application of such eSBR tools. For example, in order to control performance of eSBR processing using harmonic transposition, some embodiments of control data generation subsystem401would include: a music detector (e.g., a simplified version of a conventional music detector) for setting the sbrPatchingMode[ch] parameter (and asserting the set parameter to stage203) in response to detecting that the bitstream is or is not indicative of music; a transient detector for setting the sbrOversamplingFlag[ch] parameter (and asserting the set parameter to stage203) in response to detecting the presence or absence of transients in the audio content indicated by the bitstream; and/or a pitch detector for setting the sbrPitchInBinsFlag[ch] and sbrPitchInBins[ch] parameters (and asserting the set parameters to stage203) in response to detecting the pitch of audio content indicated by the bitstream. Other aspects of the invention are audio bitstream decoding methods performed by any embodiment of the inventive decoder described in this paragraph and the preceding paragraph. Aspects of the invention include an encoding or decoding method of the type which any embodiment of the inventive APU, system or device is configured (e.g., programmed) to perform. Other aspects of the invention include a system or device configured (e.g., programmed) to perform any embodiment of the inventive method, and a computer readable medium (e.g., a disc) which stores code (e.g., in a non-transitory manner) for implementing any embodiment of the inventive method or steps thereof. For example, the inventive system can be or include a programmable general purpose processor, digital signal processor, or microprocessor, programmed with software or firmware and/or otherwise configured to perform any of a variety of operations on data, including an embodiment of the inventive method or steps thereof.
Such a general purpose processor may be or include a computer system including an input device, a memory, and processing circuitry programmed (and/or otherwise configured) to perform an embodiment of the inventive method (or steps thereof) in response to data asserted thereto. Embodiments of the present invention may be implemented in hardware, firmware, or software, or a combination of both (e.g., as a programmable logic array). Unless otherwise specified, the algorithms or processes included as part of the invention are not inherently related to any particular computer or other apparatus. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, the invention may be implemented in one or more computer programs executing on one or more programmable computer systems (e.g., an implementation of any of the elements ofFIG.1, or encoder100ofFIG.2(or an element thereof), or decoder200ofFIG.3(or an element thereof), or decoder210ofFIG.4(or an element thereof), or decoder400ofFIG.5(or an element thereof)) each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion. Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language. For example, when implemented by computer software instruction sequences, various functions and steps of embodiments of the invention may be implemented by multithreaded software instruction sequences running in suitable digital signal processing hardware, in which case the various devices, steps, and functions of the embodiments may correspond to portions of the software instructions. Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be implemented as a computer-readable storage medium, configured with (i.e., storing) a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein. A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the claims. Numerous modifications and variations of the present invention are possible in light of the above teachings. For example, in order to facilitate efficient implementations, phase-shifts may be used in combination with the complex QMF analysis and synthesis filter banks. 
The analysis filterbank is responsible for filtering the time-domain lowband signal generated by the core decoder into a plurality of subbands (e.g., QMF subbands). The synthesis filterbank is responsible for combining the regenerated highband produced by the selected HFR technique (as indicated by the received sbrPatchingMode parameter) with the decoded lowband to produce a wideband output audio signal. A given filterbank implementation operating in a certain sample-rate mode, e.g., normal dual-rate operation or down-sampled SBR mode, should not, however, have phase-shifts that are bitstream dependent. The QMF banks used in SBR are a complex-exponential extension of the theory of cosine modulated filter banks. It can be shown that alias cancellation constraints become obsolete when extending the cosine modulated filterbank with complex-exponential modulation. Thus, for the SBR QMF banks, both the analysis filters, hk(n), and synthesis filters, fk(n), may be defined by:

hk(n) = fk(n) = p0(n) exp{ i (π/M) (k + 1/2) (n − N/2) },  0 ≤ n ≤ N; 0 ≤ k < M    (1)

where p0(n) is a real-valued symmetric or asymmetric prototype filter (typically, a low-pass prototype filter), M denotes the number of channels and N is the prototype filter order. The number of channels used in the analysis filterbank may be different than the number of channels used in the synthesis filterbank. For example, the analysis filterbank may have 32 channels and the synthesis filterbank may have 64 channels. When operating the synthesis filterbank in down-sampled mode, the synthesis filterbank may have only 32 channels. Since the subband samples from the filter bank are complex-valued, an additive possibly channel-dependent phase-shift step may be appended to the analysis filterbank. These extra phase-shifts need to be compensated for before the synthesis filter bank. While the phase-shifting terms in principle can be of arbitrary values without destroying the operation of the QMF analysis/synthesis-chain, they may also be constrained to certain values for conformance verification. The SBR signal will be affected by the choice of the phase factors while the low pass signal coming from the core decoder will not. The audio quality of the output signal is not affected. The coefficients of the prototype filter, p0(n), may be defined with a length, L, of 640, as shown in Table 4 below.
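As a small worked example of equation (1), the C sketch below builds the complex-exponentially modulated filter for one subband from a prototype. The toy sizes (M=8, N=15) and the dummy raised-cosine prototype are assumptions chosen so the example stays self-contained; an actual SBR filterbank would use M=32 or 64 and the 640-tap prototype p0(n) of Table 4 below.

#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Modulate the real prototype p0(n) into the complex filter for subband k,
 * per equation (1): h_k(n) = p0(n) * exp(i*(pi/M)*(k + 1/2)*(n - N/2)). */
static void qmf_modulate(const double *p0, int N, int M, int k, double complex *hk)
{
    const double PI = acos(-1.0);
    for (int n = 0; n <= N; n++)
        hk[n] = p0[n] * cexp(I * (PI / M) * (k + 0.5) * (n - N / 2.0));
}

int main(void)
{
    enum { M = 8, N = 15 };                 /* toy sizes for illustration only */
    const double PI = acos(-1.0);
    double p0[N + 1];
    double complex h3[N + 1];

    for (int n = 0; n <= N; n++)            /* dummy symmetric low-pass prototype */
        p0[n] = 0.5 - 0.5 * cos(2.0 * PI * n / N);

    qmf_modulate(p0, N, M, 3, h3);          /* analysis/synthesis filter, subband k = 3 */
    printf("h_3(0) = %.4f %+.4fi\n", creal(h3[0]), cimag(h3[0]));
    return 0;
}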
TABLE 4np0(n)00.00000000001−0.00055252862−0.00056176923−0.00049475184−0.00048752275−0.00048937916−0.00050407147−0.00052265648−0.00054665659−0.000567780210−0.000587093011−0.000613274712−0.000631249313−0.000654033314−0.000677769015−0.000694161416−0.000715773617−0.000725504318−0.000744094119−0.000749059820−0.000768137121−0.000772484822−0.000783433223−0.000777986924−0.000780366425−0.000780144926−0.000775797727−0.000763079328−0.000753000129−0.000731935730−0.000721539131−0.000691793732−0.000665041533−0.000634159434−0.000594611835−0.000556457636−0.000514557237−0.000460632538−0.000409512139−0.000350117540−0.000289698141−0.000209833742−0.000144638043−0.0000617334440.0000134949450.0001094383460.0002043017470.0002949531480.0004026540490.0005107388500.0006239376510.0007458025520.0008608443530.0009885988540.0011250155550.0012577884560.0013902494570.0015443219580.0016868083590.0018348265600.0019841140610.0021461583620.0023017254630.0024625616640.0026201758650.0027870464660.0029469447670.0031125420680.0032739613690.0034418874700.0036008268710.0037603922720.0039207432730.0040819753740.0042264269750.0043730719760.0045209852770.0046606460780.0047932560790.0049137603800.0050393022810.0051407353820.0052461166830.0053471681840.0054196775850.0054876040860.0055475714870.0055938023880.0056220643890.0056455196900.0056389199910.0056266114920.0055917128930.0055404363940.0054753783950.0053838975960.0052715758970.0051382275980.0049839687990.00481094691000.00460395301010.00438018611020.00412516421030.00384564081040.00354012461050.00320918851060.00284467571070.00245085401080.00202741761090.00157846821100.00109023291110.00058322641120.0000276045113−0.0005464280114−0.0011568135115−0.0018039472116−0.0024826723117−0.0031933778118−0.0039401124119−0.0047222596120−0.0055337211121−0.0063792293122−0.0072615816123−0.0081798233124−0.0091325329125−0.0101150215126−0.0111315548127−0.01218499951280.01327182201290.01439046661300.01554055531310.01673247121320.01794333811330.01918724311340.02045317931350.02174675501360.02306801691370.02441609921380.02578758471390.02718594291400.02860721731410.03005026571420.03150176081430.03297540811440.03446209481450.03596975601460.03748128501470.03900536791480.04053491701490.04206490941500.04360975421510.04514884051520.04668430271530.04821657201540.04973857551550.05125561551560.05276307461570.05424527681580.05571736481590.05716164501600.05859156831610.05998374801620.06134551711630.06268578081640.06397158981650.06522471061660.06643675121670.06760759851680.06870438281690.06976302441700.07076287101710.07170026731720.07256825831730.07336202551740.07410036421750.07474525581760.07531373361770.07580083581780.07619924791790.07649921701800.07670934901810.07681739751820.07682300111830.07672049241840.07650507181850.07617483211860.07573057561870.07515762551880.07446643941890.07364060051900.07267746421910.07158263641920.07035330731930.06896640131940.06745250211950.06576906681960.06394448051970.06196027791980.05981665701990.05751526912000.05504600342010.05240938212020.04959786762030.04663033052040.04347687822050.04014582782060.03664181162070.03295839302080.02908240062090.02503075612100.02079970722110.01637012582120.01176238322130.00696368622140.0019765601215−0.0032086896216−0.0085711749217−0.0141288827218−0.0198834129219−0.0258227288220−0.0319531274221−0.0382776572222−0.0447806821223−0.0514804176224−0.0583705326225−0.0654409853226−0.0726943300227−0.0801372934228−0.0877547536229−0.0955533352230−0.1035329531231−0.1116826931232−0.1200077984233−0.1285002850234−0.1371551761235−0.1459766491236−0.1549607071237−0.16409588552
38−0.1733808172239−0.1828172548240−0.1923966745241−0.2021250176242−0.2119735853243−0.2219652696244−0.2320690870245−0.2423016884246−0.2526480309247−0.2631053299248−0.2736634040249−0.2843214189250−0.2950716717251−0.3059098575252−0.3168278913253−0.3278113727254−0.3388722693255−0.34999141222560.36115899032570.37237955462580.38363500132590.39492117612600.40623176762610.41756968962620.42891199202630.44025537542640.45159965352650.46293080852660.47424532142670.48552530912680.49677082542690.50798175002700.51912349702710.53022408952720.54125534482730.55220512582740.56307891402750.57385241312760.58454032352770.59511230862780.60557835382790.61591099322800.62612426952810.63619801072820.64612696952830.65590163022840.66551398802850.67496631902860.68423532932870.69332823762880.70223887192890.71094104262900.71944626342910.72774489002920.73582117582930.74368278632940.75131374562950.75870807602960.76586748652970.77277808812980.77942875192990.78583531203000.79197358413010.79784664133020.80344857513030.80876950043040.81381912703050.81857760043060.82304198903070.82722753473080.83110384573090.83469373613100.83797173373110.84095413923120.84362382813130.84598184693140.84803157773150.84978051983160.85119715243170.85230470353180.85310209493190.85357205733200.85373856003210.85357205733220.85310209493230.85230470353240.85119715243250.84978051983260.84803157773270.84598184693280.84362382813290.84095413923300.83797173373310.83469373613320.83110384573330.82722753473340.82304198903350.81857760043360.81381912703370.80876950043380.80344857513390.79784664133400.79197358413410.78583531203420.77942875193430.77277808813440.76586748653450.75870807603460.75131374563470.74368278633480.73582117583490.72774489003500.71944626343510.71094104263520.70223887193530.69332823763540.68423532933550.67496631903560.66551398803570.65590163023580.64612696953590.63619801073600.62612426953610.61591099323620.60557835383630.59511230863640.58454032353650.57385241313660.56307891403670.55220512583680.54125534483690.53022408953700.51912349703710.50798175003720.49677082543730.48552530913740.47424532143750.46293080853760.45159965353770.44025537543780.42891199203790.41756968963800.40623176763810.39492117613820.38363500133830.3723795546384−0.3611589903385−0.3499914122386−0.3388722693387−0.3278113727388−0.3168278913389−0.3059098575390−0.2950716717391−0.2843214189392−0.2736634040393−0.2631053299394−0.2526480309395−0.2423016884396−0.2320690870397−0.2219652696398−0.2119735853399−0.2021250176400−0.1923966745401−0.1828172548402−0.1733808172403−0.1640958855404−0.1549607071405−0.1459766491406−0.1371551761407−0.1285002850408−0.1200077984409−0.1116826931410−0.1035329531411−0.0955533352412−0.0877547536413−0.0801372934414−0.0726943300415−0.0654409853416−0.0583705326417−0.0514804176418−0.0447806821419−0.0382776572420−0.0319531274421−0.0258227288422−0.0198834129423−0.0141288827424−0.0085711749425−0.00320868964260.00197656014270.00696368624280.01176238324290.01637012584300.02079970724310.02503075614320.02908240064330.03295839304340.03664181164350.04014582784360.04347687824370.04663033054380.04959786764390.05240938214400.05504600344410.05751526914420.05981665704430.06196027794440.06394448054450.06576906684460.06745250214470.06896640134480.07035330734490.07158263644500.07267746424510.07364060054520.07446643944530.07515762554540.07573057564550.07617483214560.07650507184570.07672049244580.07682300114590.07681739754600.07670934904610.07649921704620.07619924794630.07580083584640.07531373364650.07474525584660.07410036424670.07336202554680.07256825834690.07170026734700.07076287104
710.06976302444720.06870438284730.06760759854740.06643675124750.06522471064760.06397158984770.06268578084780.06134551714790.05998374804800.05859156834810.05716164504820.05571736484830.05424527684840.05276307464850.05125561554860.04973857554870.04821657204880.04668430274890.04514884054900.04360975424910.04206490944920.04053491704930.03900536794940.03748128504950.03596975604960.03446209484970.03297540814980.03150176084990.03005026575000.02860721735010.02718594295020.02578758475030.02441609925040.02306801695050.02174675505060.02045317935070.01918724315080.01794333815090.01673247125100.01554055535110.0143904666512−0.0132718220513−0.0121849995514−0.0111315548515−0.0101150215516−0.0091325329517−0.0081798233518−0.0072615816519−0.0063792293520−0.0055337211521−0.0047222596522−0.0039401124523−0.0031933778524−0.0024826723525−0.0018039472526−0.0011568135527−0.00054642805280.00002760455290.00058322645300.00109023295310.00157846825320.00202741765330.00245085405340.00284467575350.00320918855360.00354012465370.00384564085380.00412516425390.00438018615400.00460395305410.00481094695420.00498396875430.00513822755440.00527157585450.00538389755460.00547537835470.00554043635480.00559171285490.00562661145500.00563891995510.00564551965520.00562206435530.00559380235540.00554757145550.00548760405560.00541967755570.00534716815580.00524611665590.00514073535600.00503930225610.00491376035620.00479325605630.00466064605640.00452098525650.00437307195660.00422642695670.00408197535680.00392074325690.00376039225700.00360082685710.00344188745720.00327396135730.00311254205740.00294694475750.00278704645760.00262017585770.00246256165780.00230172545790.00214615835800.00198411405810.00183482655820.00168680835830.00154432195840.00139024945850.00125778845860.00112501555870.00098859885880.00086084435890.00074580255900.00062393765910.00051073885920.00040265405930.00029495315940.00020430175950.00010943835960.0000134949597−0.0000617334598−0.0001446380599−0.0002098337600−0.0002896981601−0.0003501175602−0.0004095121603−0.0004606325604−0.0005145572605−0.0005564576606−0.0005946118607−0.0006341594608−0.0006650415609−0.0006917937610−0.0007215391611−0.0007319357612−0.0007530001613−0.0007630793614−0.0007757977615−0.0007801449616−0.0007803664617−0.0007779869618−0.0007834332619−0.0007724848620−0.0007681371621−0.0007490598622−0.0007440941623−0.0007255043624−0.0007157736625−0.0006941614626−0.0006777690627−0.0006540333628−0.0006312493629−0.0006132747630−0.0005870930631−0.0005677802632−0.0005466565633−0.0005226564634−0.0005040714635−0.0004893791636−0.0004875227637−0.0004947518638−0.0005617692639−0.0005525280 The prototype filter, p0(n), may also be derived from Table 4 by one or more mathematical operations such as rounding, subsampling, interpolation, and decimation. Although the tuning of SBR related control information does not typically depend of the details of the transposition (as previously discussed), in some embodiments certain elements of the control data may be simulcasted in the eSBR extension container (bs_extension_idEXTENSION_ID_ESBR) to improve the quality of the regenerated signal. 
Some of the simulcasted elements may include the noise floor data (for example, noise floor scale factors and a parameter indicating the direction, either in the frequency or time direction, of delta coding for each noise floor), the inverse filtering data (for example, a parameter indicating the inverse filtering mode selected from no inverse filtering, a low level of inverse filtering, an intermediate level of inverse filtering, and a strong level of inverse filtering), and the missing harmonics data (for example, a parameter indicating whether a sinusoid should be added to a specific frequency band of the regenerated highband). All of these elements rely on a synthesized emulation of the decoder's transposer performed in the encoder and therefore, if properly tuned for the selected transposer, may increase the quality of the regenerated signal. Specifically, in some embodiments, the missing harmonics and inverse filtering control data is transmitted in the eSBR extension container (along with the other bitstream parameters of Table 3) and tuned for the harmonic transposer of eSBR. The additional bitrate required to transmit these two classes of metadata for the harmonic transposer of eSBR is relatively low. Therefore, sending tuned missing harmonic and/or inverse filtering control data in the eSBR extension container will increase the quality of audio produced by the transposer while only minimally affecting bitrate. To ensure backward-compatibility with legacy decoders, the parameters tuned for the spectral translation operation of SBR may also be sent in the bitstream as part of the SBR control data using either implicit or explicit signaling. Complexity of a decoder with the SBR enhancements as described in this application must be limited so as not to significantly increase the overall computational complexity of the implementation. Preferably, the PCU (MOPS) for the SBR object type is at or below 4.5 when using the eSBR tool, and the RCU for the SBR object type is at or below 3 when using the eSBR tool. The approximated processing power is given in Processor Complexity Units (PCU), specified in integer numbers of MOPS. The approximated RAM usage is given in RAM Complexity Units (RCU), specified in integer numbers of kWords (1000 words). The RCU numbers do not include working buffers that can be shared between different objects and/or channels. Also, the PCU is proportional to sampling frequency. PCU values are given in MOPS (Million Operations per Second) per channel, and RCU values in kWords per channel. For compressed data, like HE-AAC coded audio, which can be decoded by different decoder configurations, special attention is needed. In this case, decoding can be done in a backward-compatible fashion (AAC only) as well as in an enhanced fashion (AAC+SBR). If compressed data permits both backward-compatible and enhanced decoding, and if the decoder is operating in enhanced fashion such that it is using a post-processor that inserts some additional delay (e.g., the SBR post-processor in HE-AAC), then it must ensure that this additional time delay incurred relative to the backward-compatible mode, as described by a corresponding value of n, is taken into account when presenting the composition unit.
In order to ensure that composition time stamps are handled correctly (so that audio remains synchronized with other media), the additional delay introduced by the post-processing, given in number of samples (per audio channel) at the output sample rate, is 3010 when the decoder operation mode includes the SBR enhancements (including eSBR) as described in this application. Therefore, for an audio composition unit, the composition time applies to the 3011-th audio sample within the composition unit when the decoder operation mode includes the SBR enhancements as described in this application. In order to improve the subjective quality for audio content with harmonic frequency structure and strong tonal characteristics, in particular at low bitrates, the SBR enhancements should be activated. The values of the corresponding bitstream element (i.e., esbr_data()), controlling these tools, may be determined in the encoder by applying a signal dependent classification mechanism. Generally, the usage of the harmonic patching method (sbrPatchingMode==0) is preferable for coding music signals at very low bitrates, where the core codec may be considerably limited in audio bandwidth. This is especially true if these signals include a pronounced harmonic structure. In contrast, the usage of the regular SBR patching method is preferred for speech and mixed signals, since it provides a better preservation of the temporal structure in speech. In order to improve the performance of the MPEG-4 SBR transposer, a pre-processing step can be activated (bs_sbr_preprocessing==1) that avoids the introduction of spectral discontinuities of the signal going into the subsequent envelope adjuster. The operation of the tool is beneficial for signal types where the coarse spectral envelope of the low band signal being used for high frequency reconstruction displays large variations in level. In order to improve the transient response of the harmonic SBR patching (sbrPatchingMode==0), signal adaptive frequency domain oversampling can be applied (sbrOversamplingFlag==1). Since signal adaptive frequency domain oversampling increases the computational complexity of the transposer, but only brings benefits for frames which contain transients, the use of this tool is controlled by the bitstream element, which is transmitted once per frame and per independent SBR channel. Typical bit rate setting recommendations for HE-AACv2 with SBR enhancements (that is, enabling the harmonic transposer of the eSBR tool) correspond to 20-32 kbps for stereo audio content at sampling rates of either 44.1 kHz or 48 kHz. The relative subjective quality gain of the SBR enhancements increases towards the lower bit rate boundary, and a properly configured encoder allows this range to be extended to even lower bit rates. The bit rates provided above are recommendations only and may be adapted for specific service requirements. A decoder operating in the proposed enhanced SBR mode typically needs to be able to switch between legacy and enhanced SBR patching. Therefore, delay may be introduced which can be as long as the duration of one core audio frame, depending on decoder setup. Typically, the delay for both legacy and enhanced SBR patching will be similar. It is to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
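As an informal illustration of how the bitstream elements discussed above steer the decoder, the Python sketch below dispatches between regular SBR patching and harmonic transposition based on sbrPatchingMode, applies the optional tools, and records the 3010-sample post-processing delay used for composition-time alignment. The regenerate_* and preprocess_lowband callables are hypothetical placeholders passed in by the caller, not functions of any real decoder library.

POST_PROCESSING_DELAY_SAMPLES = 3010  # per audio channel, at the output sample rate

def regenerate_highband(lowband_subbands, metadata,
                        sbr_patching_mode, bs_sbr_preprocessing, sbr_oversampling_flag,
                        regenerate_spectral_translation, regenerate_harmonic_transposition,
                        preprocess_lowband):
    # Hypothetical dispatch: the callables stand for the actual HFR routines.
    if sbr_patching_mode == 0:
        # Harmonic transposition (eSBR); signal adaptive frequency domain
        # oversampling only brings benefits for frames containing transients.
        return regenerate_harmonic_transposition(
            lowband_subbands, metadata, oversample=(sbr_oversampling_flag == 1))
    # Regular SBR patching (spectral translation); optional pre-processing avoids
    # spectral discontinuities in the signal going into the envelope adjuster.
    if bs_sbr_preprocessing == 1:
        lowband_subbands = preprocess_lowband(lowband_subbands)
    return regenerate_spectral_translation(lowband_subbands, metadata)

def composition_time_sample_index():
    # With the SBR enhancements active, the composition time applies to the
    # 3011-th sample of the composition unit (index 3010, counting from zero).
    return POST_PROCESSING_DELAY_SAMPLES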
Any reference numerals contained in the following claims are for illustrative purposes only and should not be used to construe or limit the claims in any manner whatsoever. Various aspects of the present invention may be appreciated from the following enumerated example embodiments (EEEs): EEE 1. A method for performing high frequency reconstruction of an audio signal, the method comprising:receiving an encoded audio bitstream, the encoded audio bitstream including audio data representing a lowband portion of the audio signal and high frequency reconstruction metadata;decoding the audio data to generate a decoded lowband audio signal;extracting from the encoded audio bitstream the high frequency reconstruction metadata, the high frequency reconstruction metadata including operating parameters for a high frequency reconstruction process, the operating parameters including a patching mode parameter located in a backward-compatible extension container of the encoded audio bitstream, wherein a first value of the patching mode parameter indicates spectral translation and a second value of the patching mode parameter indicates harmonic transposition by phase-vocoder frequency spreading;filtering the decoded lowband audio signal to generate a filtered lowband audio signal;regenerating a highband portion of the audio signal using the filtered lowband audio signal and the high frequency reconstruction metadata, wherein the regenerating includes spectral translation if the patching mode parameter is the first value and the regenerating includes harmonic transposition by phase-vocoder frequency spreading if the patching mode parameter is the second value; andcombining the filtered lowband audio signal with the regenerated highband portion to form a wideband audio signal,wherein the filtering, regenerating, and combining are performed as a post-processing operation with a delay of 3010 samples per audio channel or less. EEE 2. The method of EEE 1 wherein the encoded audio bitstream further includes a fill element with an identifier indicating a start of the fill element and fill data after the identifier, wherein the fill data includes the backward-compatible extension container. EEE 3. The method of EEE 2 wherein the identifier is a three bit unsigned integer transmitted most significant bit first and having a value of 0x6. EEE 4. The method of EEE 2 or EEE 3, wherein the fill data includes an extension payload, the extension payload includes spectral band replication extension data, and the extension payload is identified with a four bit unsigned integer transmitted most significant bit first and having a value of ‘1101’ or ‘1110’, and, optionally,wherein the spectral band replication extension data includes:an optional spectral band replication header,spectral band replication data after the header, anda spectral band replication extension element after the spectral band replication data, and wherein the flag is included in the spectral band replication extension element. EEE 5. The method of any one of EEEs 1-4 wherein the high frequency reconstruction metadata includes envelope scale factors, noise floor scale factors, time/frequency grid information, or a parameter indicating a crossover frequency. EEE 6. 
The method of any one of EEEs 1-5 wherein the backward-compatible extension container further includes a flag indicating whether additional preprocessing is used to avoid discontinuities in a shape of a spectral envelope of the highband portion when the patching mode parameter equals the first value, wherein a first value of the flag enables the additional preprocessing and a second value of the flag disables the additional preprocessing. EEE 7. The method of EEE 6 wherein the additional preprocessing includes calculating a pre-gain curve using a linear prediction filter coefficient. EEE 8. The method of any one of EEEs 1-5 wherein the backward-compatible extension container further includes a flag indicating whether signal adaptive frequency domain oversampling is to be applied when the patching mode parameter equals the second value, wherein a first value of the flag enables the signal adaptive frequency domain oversampling and a second value of the flag disables the signal adaptive frequency domain oversampling. EEE 9. The method of EEE 8 wherein the signal adaptive frequency domain oversampling is applied only for frames containing a transient. EEE 10. The method of any one of the previous EEEs wherein the harmonic transposition by phase-vocoder frequency spreading is performed with an estimated complexity at or below 4.5 million of operations per second and 3 kWords of memory. EEE 11. A non-transitory computer readable medium containing instructions that when executed by a processor perform the method of any of the EEEs 1-10. EEE 12. A computer program product having instructions which, when executed by a computing device or system, cause said computing device or system to execute the method of any of the EEEs 1-10. EEE 13. An audio processing unit for performing high frequency reconstruction of an audio signal, the audio processing unit comprising:an input interface for receiving an encoded audio bitstream, the encoded audio bitstream including audio data representing a lowband portion of the audio signal and high frequency reconstruction metadata;a core audio decoder for decoding the audio data to generate a decoded lowband audio signal;a deformatter for extracting from the encoded audio bitstream the high frequency reconstruction metadata, the high frequency reconstruction metadata including operating parameters for a high frequency reconstruction process, the operating parameters including a patching mode parameter located in a backward-compatible extension container of the encoded audio bitstream, wherein a first value of the patching mode parameter indicates spectral translation and a second value of the patching mode parameter indicates harmonic transposition by phase-vocoder frequency spreading;an analysis filterbank for filtering the decoded lowband audio signal to generate a filtered lowband audio signal;a high frequency regenerator for reconstructing a highband portion of the audio signal using the filtered lowband audio signal and the high frequency reconstruction metadata, wherein the reconstructing includes a spectral translation if the patching mode parameter is the first value and the reconstructing includes harmonic transposition by phase-vocoder frequency spreading if the patching mode parameter is the second value; anda synthesis filterbank for combining the filtered lowband audio signal with the regenerated highband portion to form a wideband audio signal,wherein the analysis filterbank, high frequency regenerator, and synthesis filterbank are performed in a 
post-processor with a delay of 3010 samples per audio channel or less. EEE 14. The audio processing unit of EEE 13 wherein the harmonic transposition by phase-vocoder frequency spreading is performed with an estimated complexity at or below 4.5 million of operations per second and 3 kWords of memory.
97,860
11862186
Like reference numerals refer to corresponding parts throughout the drawings.
DESCRIPTION OF IMPLEMENTATIONS
FIG.1is a block diagram of an operating environment100of a digital assistant according to some implementations. The terms “digital assistant,” “virtual assistant,” “intelligent automated assistant,” “voice-based digital assistant,” or “automatic digital assistant,” refer to any information processing system that interprets natural language input in spoken and/or textual form to deduce user intent (e.g., identify a task type that corresponds to the natural language input), and performs actions based on the deduced user intent (e.g., perform a task corresponding to the identified task type). For example, to act on a deduced user intent, the system can perform one or more of the following: identifying a task flow with steps and parameters designed to accomplish the deduced user intent (e.g., identifying a task type), inputting specific requirements from the deduced user intent into the task flow, executing the task flow by invoking programs, methods, services, APIs, or the like (e.g., sending a request to a service provider); and generating output responses to the user in an audible (e.g., speech) and/or visual form. Specifically, once initiated, a digital assistant system is capable of accepting a user request at least partially in the form of a natural language command, request, statement, narrative, and/or inquiry. Typically, the user request seeks either an informational answer or performance of a task by the digital assistant system. A satisfactory response to the user request is generally either provision of the requested informational answer, performance of the requested task, or a combination of the two. For example, a user may ask the digital assistant system a question, such as “Where am I right now?” Based on the user's current location, the digital assistant may answer, “You are in Central Park near the west gate.” The user may also request the performance of a task, for example, by stating “Please invite my friends to my girlfriend's birthday party next week.” In response, the digital assistant may acknowledge the request by generating a voice output, “Yes, right away,” and then send a suitable calendar invite from the user's email address to each of the user's friends listed in the user's electronic address book or contact list. There are numerous other ways of interacting with a digital assistant to request information or performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms (e.g., as text, alerts, music, videos, animations, etc.). As shown inFIG.1, in some implementations, a digital assistant system is implemented according to a client-server model. The digital assistant system includes a client-side portion (e.g.,102aand102b) (hereafter “digital assistant (DA) client102”) executed on a user device (e.g.,104aand104b), and a server-side portion106(hereafter “digital assistant (DA) server106”) executed on a server system108. The DA client102communicates with the DA server106through one or more networks110. The DA client102provides client-side functionalities such as user-facing input and output processing and communications with the DA server106. The DA server106provides server-side functionalities for any number of DA clients102each residing on a respective user device104(also called a client device or electronic device).
In some implementations, the DA server106includes a client-facing I/O interface112, one or more processing modules114, data and models116, an I/O interface to external services118, a photo and tag database130, and a photo-tag module132. The client-facing I/O interface facilitates the client-facing input and output processing for the digital assistant server106. The one or more processing modules114utilize the data and models116to determine the user's intent based on natural language input and perform task execution based on the deduced user intent. Photo and tag database130stores fingerprints of digital photographs, and, optionally, digital photographs themselves, as well as tags associated with the digital photographs. Photo-tag module132creates tags, stores tags in association with photographs and/or fingerprints, automatically tags photographs, and links tags to locations within photographs. In some implementations, the DA server106communicates with external services120(e.g., navigation service(s)122-1, messaging service(s)122-2, information service(s)122-3, calendar service122-4, telephony service122-5, photo service(s)122-6, etc.) through the network(s)110for task completion or information acquisition. The I/O interface to the external services118facilitates such communications. Examples of the user device104include, but are not limited to, a handheld computer, a personal digital assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a cellular telephone, a smartphone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, a game console, a television, a remote control, or a combination of any two or more of these data processing devices or any other suitable data processing devices. More details on the user device104are provided in reference to an exemplary user device104shown inFIG.2. Examples of the communication network(s)110include local area networks (LAN) and wide area networks (WAN), e.g., the Internet. The communication network(s)110may be implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol. The server system108can be implemented on at least one data processing apparatus and/or a distributed network of computers. In some implementations, the server system108also employs various virtual devices and/or services of third party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system108. Although the digital assistant system shown inFIG.1includes both a client-side portion (e.g., the DA client102) and a server-side portion (e.g., the DA server106), in some implementations, a digital assistant system refers only to the server-side portion (e.g., the DA server106). In some implementations, the functions of a digital assistant can be implemented as a standalone application installed on a user device. In addition, the divisions of functionalities between the client and server portions of the digital assistant can vary in different implementations.
For example, in some implementations, the DA client102is a thin-client that provides only user-facing input and output processing functions, and delegates all other functionalities of the digital assistant to the DA server106. In some other implementations, the DA client102is configured to perform or assist one or more functions of the DA server106. FIG.2is a block diagram of a user device104in accordance with some implementations. The user device104includes a memory interface202, one or more processors204, and a peripherals interface206. The various components in the user device104are coupled by one or more communication buses or signal lines. The user device104includes various sensors, subsystems, and peripheral devices that are coupled to the peripherals interface206. The sensors, subsystems, and peripheral devices gather information and/or facilitate various functionalities of the user device104. For example, in some implementations, a motion sensor210(e.g., an accelerometer), a light sensor212, a GPS receiver213, a temperature sensor, and a proximity sensor214are coupled to the peripherals interface206to facilitate orientation, light, and proximity sensing functions. In some implementations, other sensors216, such as a biometric sensor, barometer, and the like, are connected to the peripherals interface206, to facilitate related functionalities. In some implementations, the user device104includes a camera subsystem220coupled to the peripherals interface206. In some implementations, an optical sensor222of the camera subsystem220facilitates camera functions, such as taking photographs and recording video clips. In some implementations, the user device104includes one or more wired and/or wireless communication subsystems224that provide communication functions. The communication subsystems224typically include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. In some implementations, the user device104includes an audio subsystem226coupled to one or more speakers228and one or more microphones230to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In some implementations, the audio subsystem226is coupled to a voice trigger system400. In some implementations, the voice trigger system400and/or the audio subsystem226includes low-power audio circuitry and/or programs (i.e., including hardware and/or software) for receiving and/or analyzing sound inputs, including, for example, one or more analog-to-digital converters, digital signal processors (DSPs), sound detectors, memory buffers, codecs, and the like. In some implementations, the low-power audio circuitry (alone or in addition to other components of the user device104) provides voice (or sound) trigger functionality for one or more aspects of the user device104, such as a voice-based digital assistant or other speech-based service. In some implementations, the low-power audio circuitry provides voice trigger functionality even when other components of the user device104are shut down and/or in a standby mode, such as the processor(s)204, I/O subsystem240, memory250, and the like. The voice trigger system400is described in further detail with respect toFIG.4. In some implementations, an I/O subsystem240is also coupled to the peripherals interface206.
In some implementations, the user device104includes a touch screen246, and the I/O subsystem240includes a touch screen controller242coupled to the touch screen246. When the user device104includes the touch screen246and the touch screen controller242, the touch screen246and the touch screen controller242are typically configured to, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave technologies, proximity sensor arrays, and the like. In some implementations, the user device104includes a display that does not include a touch-sensitive surface. In some implementations, the user device104includes a separate touch-sensitive surface. In some implementations, the user device104includes other input controller(s)244. When the user device104includes the other input controller(s)244, the other input controller(s)244are typically coupled to other input/control devices248, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The memory interface202is coupled to memory250. In some implementations, memory250includes a non-transitory computer readable medium, such as high-speed random access memory and/or non-volatile memory (e.g., one or more magnetic disk storage devices, one or more flash memory devices, one or more optical storage devices, and/or other non-volatile solid-state memory devices). In some implementations, memory250stores an operating system252, a communications module254, a graphical user interface module256, a sensor processing module258, a phone module260, and applications262, or a subset or superset thereof. The operating system252includes instructions for handling basic system services and for performing hardware dependent tasks. The communications module254facilitates communicating with one or more additional devices, one or more computers and/or one or more servers. The graphical user interface module256facilitates graphic user interface processing. The sensor processing module258facilitates sensor-related processing and functions (e.g., processing voice input received with the one or more microphones228). The phone module260facilitates phone-related processes and functions. The application module262facilitates various functionalities of user applications, such as electronic-messaging, web browsing, media processing, navigation, imaging and/or other processes and functions. In some implementations, the user device104stores in memory250one or more software applications270-1and270-2each associated with at least one of the external service providers. As described above, in some implementations, memory250also stores client-side digital assistant instructions (e.g., in a digital assistant client module264) and various user data266(e.g., user-specific vocabulary data, preference data, and/or other data such as the user's electronic address book or contact list, to-do lists, shopping lists, etc.) to provide the client-side functionalities of the digital assistant. In various implementations, the digital assistant client module264is capable of accepting voice input, text input, touch input, and/or gestural input through various user interfaces (e.g., the I/O subsystem244) of the user device104. The digital assistant client module264is also capable of providing output in audio, visual, and/or tactile forms.
For example, output can be provided as voice, sound, alerts, text messages, menus, graphics, videos, animations, vibrations, and/or combinations of two or more of the above. During operation, the digital assistant client module264communicates with the digital assistant server (e.g., the digital assistant server106,FIG.1) using the communication subsystems224. In some implementations, the digital assistant client module264utilizes various sensors, subsystems and peripheral devices to gather additional information from the surrounding environment of the user device104to establish a context associated with a user input. In some implementations, the digital assistant client module264provides the context information or a subset thereof with the user input to the digital assistant server (e.g., the digital assistant server106,FIG.1) to help deduce the user's intent. In some implementations, the context information that can accompany the user input includes sensor information, e.g., lighting, ambient noise, ambient temperature, images or videos of the surrounding environment, etc. In some implementations, the context information also includes the physical state of the device, e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion patterns, cellular signal strength, etc. In some implementations, information related to the software state of the user device104, e.g., running processes, installed programs, past and present network activities, background services, error logs, resource usage, etc., of the user device104is also provided to the digital assistant server (e.g., the digital assistant server106,FIG.1) as context information associated with a user input. In some implementations, the DA client module264selectively provides information (e.g., at least a portion of the user data266) stored on the user device104in response to requests from the digital assistant server. In some implementations, the digital assistant client module264also elicits additional input from the user via a natural language dialogue or other user interfaces upon request by the digital assistant server106(FIG.1). The digital assistant client module264passes the additional input to the digital assistant server106to help the digital assistant server106in intent deduction and/or fulfillment of the user's intent expressed in the user request. In some implementations, memory250may include additional instructions or fewer instructions. Furthermore, various functions of the user device104may be implemented in hardware and/or in firmware, including in one or more signal processing and/or application specific integrated circuits, and the user device104, thus, need not include all modules and applications illustrated inFIG.2. FIG.3Ais a block diagram of an exemplary digital assistant system300(also referred to as the digital assistant) in accordance with some implementations. In some implementations, the digital assistant system300is implemented on a standalone computer system. In some implementations, the digital assistant system300is distributed across multiple computers. In some implementations, some of the modules and functions of the digital assistant are divided into a server portion and a client portion, where the client portion resides on a user device (e.g., the user device104) and communicates with the server portion (e.g., the server system108) through one or more networks, e.g., as shown inFIG.1.
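Returning briefly to the context information that the DA client provides alongside a user input, the following Python sketch assembles an illustrative context payload of the kind described above (sensor information, physical state, and software state). The structure and field names are assumptions made for illustration only; they are not defined by the implementations described here.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ContextInfo:
    # Sensor information from the surrounding environment.
    lighting_lux: Optional[float] = None
    ambient_noise_db: Optional[float] = None
    ambient_temperature_c: Optional[float] = None
    # Physical state of the device.
    orientation: str = "portrait"
    location: Optional[Dict[str, float]] = None    # e.g., {"lat": ..., "lon": ...}
    power_level: Optional[float] = None
    speed_mps: Optional[float] = None
    cellular_signal_strength: Optional[int] = None
    # Software state of the device.
    running_processes: List[str] = field(default_factory=list)
    installed_programs: List[str] = field(default_factory=list)
    background_services: List[str] = field(default_factory=list)

@dataclass
class UserRequest:
    utterance: str
    context: ContextInfo

# Example: a request accompanied by a subset of the available context.
request = UserRequest(
    utterance="Where am I right now?",
    context=ContextInfo(orientation="portrait",
                        location={"lat": 40.7736, "lon": -73.9713},
                        power_level=0.62),
)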
In some implementations, the digital assistant system300is an embodiment of the server system108(and/or the digital assistant server106) shown inFIG.1. In some implementations, the digital assistant system300is implemented in a user device (e.g., the user device104,FIG.1), thereby eliminating the need for a client-server system. It should be noted that the digital assistant system300is only one example of a digital assistant system, and that the digital assistant system300may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components. The various components shown inFIG.3Amay be implemented in hardware, software, or firmware, including one or more signal processing and/or application specific integrated circuits, or a combination thereof. The digital assistant system300includes memory302, one or more processors304, an input/output (I/O) interface306, and a network communications interface308. These components communicate with one another over one or more communication buses or signal lines310. In some implementations, memory302includes a non-transitory computer readable medium, such as high-speed random access memory and/or a non-volatile computer readable storage medium (e.g., one or more magnetic disk storage devices, one or more flash memory devices, one or more optical storage devices, and/or other non-volatile solid-state memory devices). The I/O interface306couples input/output devices316of the digital assistant system300, such as displays, keyboards, touch screens, and microphones, to the user interface module322. The I/O interface306, in conjunction with the user interface module322, receives user inputs (e.g., voice input, keyboard inputs, touch inputs, etc.) and processes them accordingly. In some implementations, when the digital assistant is implemented on a standalone user device, the digital assistant system300includes any of the components and I/O and communication interfaces described with respect to the user device104inFIG.2(e.g., one or more microphones230). In some implementations, the digital assistant system300represents the server portion of a digital assistant implementation, and interacts with the user through a client-side portion residing on a user device (e.g., the user device104shown inFIG.2). In some implementations, the network communications interface308includes wired communication port(s)312and/or wireless transmission and reception circuitry314. The wired communication port(s) receive and send communication signals via one or more wired interfaces, e.g., Ethernet, Universal Serial Bus (USB), FIREWIRE, etc. The wireless circuitry314typically receives and sends RF signals and/or optical signals from/to communications networks and other communications devices. The wireless communications may use any of a plurality of communications standards, protocols and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. The network communications interface308enables communication between the digital assistant system300and networks, such as the Internet, an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices.
In some implementations, the non-transitory computer readable storage medium of memory302stores programs, modules, instructions, and data structures including all or a subset of: an operating system318, a communications module320, a user interface module322, one or more applications324, and a digital assistant module326. The one or more processors304execute these programs, modules, and instructions, and read/write from/to the data structures. The operating system318(e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communications between various hardware, firmware, and software components. The communications module320facilitates communications between the digital assistant system300and other devices over the network communications interface308. For example, the communication module320may communicate with the communications module254of the device104shown inFIG.2. The communications module320also includes various software components for handling data received by the wireless circuitry314and/or wired communications port312. In some implementations, the user interface module322receives commands and/or inputs from a user via the I/O interface306(e.g., from a keyboard, touch screen, and/or microphone), and provides user interface objects on a display. The applications324include programs and/or modules that are configured to be executed by the one or more processors304. For example, if the digital assistant system is implemented on a standalone user device, the applications324may include user applications, such as games, a calendar application, a navigation application, or an email application. If the digital assistant system300is implemented on a server farm, the applications324may include resource management applications, diagnostic applications, or scheduling applications, for example. Memory302also stores the digital assistant module (or the server portion of a digital assistant)326. In some implementations, the digital assistant module326includes the following sub-modules, or a subset or superset thereof: an input/output processing module328, a speech-to-text (STT) processing module330, a natural language processing module332, a dialogue flow processing module334, a task flow processing module336, a service processing module338, and a photo module132. Each of these processing modules has access to one or more of the following data and models of the digital assistant326, or a subset or superset thereof: ontology360, vocabulary index344, user data348, categorization module349, disambiguation module350, task flow models354, service models356, photo tagging module358, search module360, and local tag/photo storage362.
In some implementations, using the processing modules (e.g., the input/output processing module328, the STT processing module330, the natural language processing module332, the dialogue flow processing module334, the task flow processing module336, and/or the service processing module338), data, and models implemented in the digital assistant module326, the digital assistant system300performs at least some of the following: identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully deduce the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining the task flow for fulfilling the deduced intent; and executing the task flow to fulfill the deduced intent. In some implementations, the digital assistant also takes appropriate actions when a satisfactory response was not or could not be provided to the user for various reasons. In some implementations, as discussed below, the digital assistant system300identifies, from a natural language input, a user's intent to tag a digital photograph, and processes the natural language input so as to tag the digital photograph with appropriate information. In some implementations, the digital assistant system300performs other tasks related to photographs as well, such as searching for digital photographs using natural language input, auto-tagging photographs, and the like. As shown inFIG.3B, in some implementations, the I/O processing module328interacts with the user through the I/O devices316inFIG.3Aor with a user device (e.g., a user device104inFIG.1) through the network communications interface308inFIG.3Ato obtain user input (e.g., a speech input) and to provide responses to the user input. The I/O processing module328optionally obtains context information associated with the user input from the user device, along with or shortly after the receipt of the user input. The context information includes user-specific data, vocabulary, and/or preferences relevant to the user input. In some implementations, the context information also includes software and hardware states of the device (e.g., the user device104inFIG.1) at the time the user request is received, and/or information related to the surrounding environment of the user at the time that the user request was received. In some implementations, the I/O processing module328also sends follow-up questions to, and receives answers from, the user regarding the user request. In some implementations, when a user request is received by the I/O processing module328and the user request contains a speech input, the I/O processing module328forwards the speech input to the speech-to-text (STT) processing module330for speech-to-text conversions. In some implementations, the speech-to-text processing module330receives speech input (e.g., a user utterance captured in a voice recording) through the I/O processing module328. In some implementations, the speech-to-text processing module330uses various acoustic and language models to recognize the speech input as a sequence of phonemes, and ultimately, a sequence of words or tokens written in one or more languages. The speech-to-text processing module330is implemented using any suitable speech recognition techniques, acoustic models, and language models, such as Hidden Markov Models, Dynamic Time Warping (DTW)-based speech recognition, and other statistical and/or analytical techniques. 
In some implementations, the speech-to-text processing can be performed at least partially by a third party service or on the user's device. Once the speech-to-text processing module330obtains the result of the speech-to-text processing (e.g., a sequence of words or tokens), it passes the result to the natural language processing module332for intent deduction. The natural language processing module332(“natural language processor”) of the digital assistant326takes the sequence of words or tokens (“token sequence”) generated by the speech-to-text processing module330, and attempts to associate the token sequence with one or more “actionable intents” recognized by the digital assistant. As used herein, an “actionable intent” represents a task that can be performed by the digital assistant326and/or the digital assistant system300(FIG.3A), and has an associated task flow implemented in the task flow models354. The associated task flow is a series of programmed actions and steps that the digital assistant system300takes in order to perform the task. The scope of a digital assistant system's capabilities is dependent on the number and variety of task flows that have been implemented and stored in the task flow models354, or in other words, on the number and variety of “actionable intents” that the digital assistant system300recognizes. The effectiveness of the digital assistant system300, however, is also dependent on the digital assistant system's ability to deduce the correct “actionable intent(s)” from the user request expressed in natural language. In some implementations, in addition to the sequence of words or tokens obtained from the speech-to-text processing module330, the natural language processor332also receives context information associated with the user request (e.g., from the I/O processing module328). The natural language processor332optionally uses the context information to clarify, supplement, and/or further define the information contained in the token sequence received from the speech-to-text processing module330. The context information includes, for example, user preferences, hardware and/or software states of the user device, sensor information collected before, during, or shortly after the user request, prior interactions (e.g., dialogue) between the digital assistant and the user, and the like. In some implementations, the natural language processing is based on an ontology360. The ontology360is a hierarchical structure containing a plurality of nodes, each node representing either an “actionable intent” or a “property” relevant to one or more of the “actionable intents” or other “properties.” As noted above, an “actionable intent” represents a task that the digital assistant system300is capable of performing (e.g., a task that is “actionable” or can be acted on). A “property” represents a parameter associated with an actionable intent or a sub-aspect of another property. A linkage between an actionable intent node and a property node in the ontology360defines how a parameter represented by the property node pertains to the task represented by the actionable intent node. In some implementations, the ontology360is made up of actionable intent nodes and property nodes. Within the ontology360, each actionable intent node is linked to one or more property nodes either directly or through one or more intermediate property nodes. Similarly, each property node is linked to one or more actionable intent nodes either directly or through one or more intermediate property nodes. 
For example, the ontology360shown inFIG.3Cincludes a “restaurant reservation” node, which is an actionable intent node. Property nodes “restaurant,” “date/time” (for the reservation), and “party size” are each directly linked to the “restaurant reservation” node (i.e., the actionable intent node). In addition, property nodes “cuisine,” “price range,” “phone number,” and “location” are sub-nodes of the property node “restaurant,” and are each linked to the “restaurant reservation” node (i.e., the actionable intent node) through the intermediate property node “restaurant.” For another example, the ontology360shown inFIG.3Calso includes a “set reminder” node, which is another actionable intent node. Property nodes “date/time” (for setting the reminder) and “subject” (for the reminder) are each linked to the “set reminder” node. Since the property “date/time” is relevant to both the task of making a restaurant reservation and the task of setting a reminder, the property node “date/time” is linked to both the “restaurant reservation” node and the “set reminder” node in the ontology360. An actionable intent node, along with its linked concept nodes, may be described as a “domain.” In the present discussion, each domain is associated with a respective actionable intent, and refers to the group of nodes (and the relationships therebetween) associated with the particular actionable intent. For example, the ontology360shown inFIG.3Cincludes an example of a restaurant reservation domain362and an example of a reminder domain364within the ontology360. The restaurant reservation domain includes the actionable intent node “restaurant reservation,” property nodes “restaurant,” “date/time,” and “party size,” and sub-property nodes “cuisine,” “price range,” “phone number,” and “location.” The reminder domain364includes the actionable intent node “set reminder,” and property nodes “subject” and “date/time.” In some implementations, the ontology360is made up of many domains. Each domain may share one or more property nodes with one or more other domains. For example, the “date/time” property node may be associated with many other domains (e.g., a scheduling domain, a travel reservation domain, a movie ticket domain, etc.), in addition to the restaurant reservation domain362and the reminder domain364. WhileFIG.3Cillustrates two exemplary domains within the ontology360, the ontology360may include other domains (or actionable intents), such as “initiate a phone call,” “find directions,” “schedule a meeting,” “send a message,” “provide an answer to a question,” “tag a photo,” and so on. For example, a “send a message” domain is associated with a “send a message” actionable intent node, and may further include property nodes such as “recipient(s),” “message type,” and “message body.” The property node “recipient” may be further defined, for example, by the sub-property nodes such as “recipient name” and “message address.” In some implementations, the ontology360includes all the domains (and hence actionable intents) that the digital assistant is capable of understanding and acting upon. In some implementations, the ontology360may be modified, such as by adding or removing domains or nodes, or by modifying relationships between the nodes within the ontology360. In some implementations, nodes associated with multiple related actionable intents may be clustered under a “super domain” in the ontology360.
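Before turning to super-domains, the following Python sketch gives a rough illustration of the ontology structure just described: actionable intent nodes and property nodes linked into the restaurant reservation and set reminder domains, with the “date/time” property node shared by both. The class and field names are illustrative assumptions, not part of the described implementations.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str                          # "actionable_intent" or "property"
    linked: List["Node"] = field(default_factory=list)

    def link(self, other: "Node") -> None:
        # Linkages run both ways: a property pertains to an intent and vice versa.
        self.linked.append(other)
        other.linked.append(self)

# Actionable intent nodes.
restaurant_reservation = Node("restaurant reservation", "actionable_intent")
set_reminder = Node("set reminder", "actionable_intent")

# Property nodes; "date/time" is shared by both domains.
restaurant = Node("restaurant", "property")
date_time = Node("date/time", "property")
party_size = Node("party size", "property")
subject = Node("subject", "property")

# Restaurant reservation domain, with sub-properties linked through "restaurant".
for prop in (restaurant, date_time, party_size):
    restaurant_reservation.link(prop)
for sub in ("cuisine", "price range", "phone number", "location"):
    restaurant.link(Node(sub, "property"))

# Reminder domain.
for prop in (date_time, subject):
    set_reminder.link(prop)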
For example, a “travel” super-domain may include a cluster of property nodes and actionable intent nodes related to travels. The actionable intent nodes related to travels may include “airline reservation,” “hotel reservation,” “car rental,” “get directions,” “find points of interest,” and so on. The actionable intent nodes under the same super domain (e.g., the “travels” super domain) may have many property nodes in common. For example, the actionable intent nodes for “airline reservation,” “hotel reservation,” “car rental,” “get directions,” “find points of interest” may share one or more of the property nodes “start location,” “destination,” “departure date/time,” “arrival date/time,” and “party size.” In some implementations, each node in the ontology360is associated with a set of words and/or phrases that are relevant to the property or actionable intent represented by the node. The respective set of words and/or phrases associated with each node is the so-called “vocabulary” associated with the node. The respective set of words and/or phrases associated with each node can be stored in the vocabulary index344(FIG.3B) in association with the property or actionable intent represented by the node. For example, returning toFIG.3B, the vocabulary associated with the node for the property of “restaurant” may include words such as “food,” “drinks,” “cuisine,” “hungry,” “eat,” “pizza,” “fast food,” “meal,” and so on. For another example, the vocabulary associated with the node for the actionable intent of “initiate a phone call” may include words and phrases such as “call,” “phone,” “dial,” “ring,” “call this number,” “make a call to,” and so on. The vocabulary index344optionally includes words and phrases in different languages. In some implementations, the natural language processor332shown inFIG.3Breceives the token sequence (e.g., a text string) from the speech-to-text processing module330, and determines what nodes are implicated by the words in the token sequence. In some implementations, if a word or phrase in the token sequence is found to be associated with one or more nodes in the ontology360(via the vocabulary index344), the word or phrase will “trigger” or “activate” those nodes. When multiple nodes are “triggered,” based on the quantity and/or relative importance of the activated nodes, the natural language processor332will select one of the actionable intents as the task (or task type) that the user intended the digital assistant to perform. In some implementations, the domain that has the most “triggered” nodes is selected. In some implementations, the domain having the highest confidence value (e.g., based on the relative importance of its various triggered nodes) is selected. In some implementations, the domain is selected based on a combination of the number and the importance of the triggered nodes. In some implementations, additional factors are considered in selecting the node as well, such as whether the digital assistant system300has previously correctly interpreted a similar request from a user. In some implementations, the digital assistant system300also stores names of specific entities in the vocabulary index344, so that when one of these names is detected in the user request, the natural language processor332will be able to recognize that the name refers to a specific instance of a property or sub-property in the ontology. In some implementations, the names of specific entities are names of businesses, restaurants, people, movies, and the like. 
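The triggering behavior described above, in which tokens activate nodes through the vocabulary index344and a domain is selected from the activated nodes, might be sketched as follows. The vocabulary entries, node weights, and scoring rule are illustrative assumptions rather than the contents of the vocabulary index344or the actual selection logic.

```python
# Hypothetical sketch of vocabulary-based node triggering and domain selection.
# Vocabulary entries and node weights are made up for illustration.

VOCABULARY = {
    "restaurant": {"food", "drinks", "cuisine", "hungry", "eat", "pizza", "meal"},
    "initiate a phone call": {"call", "phone", "dial", "ring"},
    "restaurant reservation": {"reservation", "book", "table"},
}

# Which domain each node contributes to, and how important the node is.
NODE_DOMAIN = {
    "restaurant": "restaurant reservation",
    "restaurant reservation": "restaurant reservation",
    "initiate a phone call": "initiate a phone call",
}
NODE_WEIGHT = {"restaurant": 1.0, "restaurant reservation": 2.0, "initiate a phone call": 2.0}

def select_domain(tokens):
    """Trigger nodes whose vocabulary matches a token, then score domains."""
    scores = {}
    for token in tokens:
        for node, words in VOCABULARY.items():
            if token in words:
                domain = NODE_DOMAIN[node]
                scores[domain] = scores.get(domain, 0.0) + NODE_WEIGHT[node]
    # The highest-scoring domain wins; a real system would also use context.
    return max(scores, key=scores.get) if scores else None

print(select_domain("book me a table for sushi".split()))  # restaurant reservation
```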
In some implementations, the digital assistant system300can search and identify specific entity names from other data sources, such as the user's address book or contact list, a movies database, a musicians database, and/or a restaurant database. In some implementations, when the natural language processor332identifies that a word in the token sequence is a name of a specific entity (such as a name in the user's address book or contact list), that word is given additional significance in selecting the actionable intent within the ontology for the user request. For example, when the words “Mr. Santo” are recognized from the user request, and the last name “Santo” is found in the vocabulary index344as one of the contacts in the user's contact list, then it is likely that the user request corresponds to a “send a message” or “initiate a phone call” domain. For another example, when the words “ABC Café” are found in the user request, and the term “ABC Café” is found in the vocabulary index344as the name of a particular restaurant in the user's city, then it is likely that the user request corresponds to a “restaurant reservation” domain. User data348includes user-specific information, such as user-specific vocabulary, user preferences, user address, user's default and secondary languages, user's contact list, and other short-term or long-term information for each user. The natural language processor332can use the user-specific information to supplement the information contained in the user input to further define the user intent. For example, for a user request “invite my friends to my birthday party,” the natural language processor332is able to access user data348to determine who the “friends” are and when and where the “birthday party” would be held, rather than requiring the user to provide such information explicitly in his/her request. In some implementations, natural language processor332includes categorization module349. In some implementations, the categorization module349determines whether each of the one or more terms in a text string (e.g., corresponding to a speech input associated with a digital photograph) is one of an entity, an activity, or a location, as discussed in greater detail below. In some implementations, the categorization module349classifies each term of the one or more terms as one of an entity, an activity, or a location. Once the natural language processor332identifies an actionable intent (or domain) based on the user request, the natural language processor332generates a structured query to represent the identified actionable intent. In some implementations, the structured query includes parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user may say “Make me a dinner reservation at a sushi place at 7.” In this case, the natural language processor332may be able to correctly identify the actionable intent to be “restaurant reservation” based on the user input. According to the ontology, a structured query for a “restaurant reservation” domain may include parameters such as {Cuisine}, {Time}, {Date}, {Party Size}, and the like. Based on the information contained in the user's utterance, the natural language processor332may generate a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {Cuisine=“Sushi”} and {Time=“7 pm”}. 
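A partial structured query of the kind generated in the example above might be sketched as follows. The parameter names and the toy extraction rules are assumptions made only for illustration; a real natural language processor would rely on the ontology, the vocabulary index344, and context information.

```python
import re

# Hypothetical sketch: building a partial structured query for the
# "restaurant reservation" domain from a user request.

PARAMETERS = ("Cuisine", "Time", "Date", "PartySize", "Location")

def partial_structured_query(utterance):
    query = {"domain": "restaurant reservation"}
    # Toy extraction rules; a real natural language processor would use the
    # ontology, vocabulary index, and context information instead.
    cuisine = re.search(r"\b(sushi|pizza|thai|mexican)\b", utterance, re.I)
    time = re.search(r"\bat (\d{1,2})(?::(\d{2}))?\b", utterance)
    if cuisine:
        query["Cuisine"] = cuisine.group(1).capitalize()
    if time:
        query["Time"] = f"{time.group(1)} pm"   # assumes an evening reservation
    return query

print(partial_structured_query("Make me a dinner reservation at a sushi place at 7"))
# {'domain': 'restaurant reservation', 'Cuisine': 'Sushi', 'Time': '7 pm'}
```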
However, in this example, the user's utterance contains insufficient information to complete the structured query associated with the domain. Therefore, other necessary parameters such as {Party Size} and {Date} are not specified in the structured query based on the information currently available. In some implementations, the natural language processor332populates some parameters of the structured query with received context information. For example, if the user requested a sushi restaurant “near me,” the natural language processor332may populate a {location} parameter in the structured query with GPS coordinates from the user device104. In some implementations, the natural language processor332passes the structured query (including any completed parameters) to the task flow processing module336(“task flow processor”). The task flow processor336is configured to perform one or more of: receiving the structured query from the natural language processor332, completing the structured query, and performing the actions required to “complete” the user's ultimate request. In some implementations, the various procedures necessary to complete these tasks are provided in task flow models354. In some implementations, the task flow models354include procedures for obtaining additional information from the user, and task flows for performing actions associated with the actionable intent. As described above, in order to complete a structured query, the task flow processor336may need to initiate additional dialogue with the user in order to obtain additional information, and/or disambiguate potentially ambiguous utterances. When such interactions are necessary, the task flow processor336invokes the dialogue processing module334(“dialogue processor”) to engage in a dialogue with the user. In some implementations, the dialogue processing module334determines how (and/or when) to ask the user for the additional information, and receives and processes the user responses. In some implementations, the questions are provided to and answers are received from the users through the I/O processing module328. For example, the dialogue processing module334presents dialogue output to the user via audio and/or visual output, and receives input from the user via spoken or physical (e.g., touch gesture) responses. Continuing with the example above, when the task flow processor336invokes the dialogue processor334to determine the “party size” and “date” information for the structured query associated with the domain “restaurant reservation,” the dialogue processor334generates questions such as “For how many people?” and “On which day?” to pass to the user. Once answers are received from the user, the dialogue processing module334populates the structured query with the missing information, or passes the information to the task flow processor336to complete the missing information from the structured query. In some cases, the task flow processor336may receive a structured query that has one or more ambiguous properties. For example, a structured query for the “send a message” domain may indicate that the intended recipient is “Bob,” and the user may have multiple contacts named “Bob.” The task flow processor336will request that the dialogue processor334disambiguate this property of the structured query. In turn, the dialogue processor334may ask the user “Which Bob?”, and display (or read) a list of contacts named “Bob” from which the user may choose. 
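The dialogue step that fills in missing parameters such as {Party Size} and {Date} might be sketched as follows; the required-parameter list, the prompts, and the ask callback are illustrative assumptions, not the actual dialogue flow models.

```python
# Hypothetical sketch: the task flow asks for any required parameters that the
# partial structured query is still missing, one question per parameter.

REQUIRED = {"Cuisine", "Time", "Date", "PartySize"}
PROMPTS = {"PartySize": "For how many people?", "Date": "On which day?"}

def complete_query(query, ask):
    """`ask` stands in for the dialogue processor's question/answer exchange."""
    for parameter in sorted(REQUIRED - set(query)):
        prompt = PROMPTS.get(parameter, f"What is the {parameter}?")
        query[parameter] = ask(prompt)
    return query

# Example: canned answers stand in for spoken or typed user responses.
answers = {"For how many people?": "5", "On which day?": "March 12"}
query = {"Cuisine": "Sushi", "Time": "7 pm"}
print(complete_query(query, lambda prompt: answers[prompt]))
```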
In some implementations, dialogue processor334includes disambiguation module350. In some implementations, disambiguation module350disambiguates one or more ambiguous terms (e.g., one or more ambiguous terms in a text string corresponding to a speech input associated with a digital photograph). In some implementations, disambiguation module350identifies that a first term of the one or more terms has multiple candidate meanings, prompts a user for additional information about the first term, receives the additional information from the user in response to the prompt, and identifies the entity, activity, or location associated with the first term in accordance with the additional information. In some implementations, disambiguation module350disambiguates pronouns. In such implementations, disambiguation module350identifies one of the one or more terms as a pronoun and determines a noun to which the pronoun refers. In some implementations, disambiguation module350determines a noun to which the pronoun refers by using a contact list associated with a user of the electronic device. Alternatively, or in addition, disambiguation module350determines a noun to which the pronoun refers as a name of an entity, an activity, or a location identified in a previous speech input associated with a previously tagged digital photograph. Alternatively, or in addition, disambiguation module350determines a noun to which the pronoun refers as a name of a person identified based on a previous speech input associated with a previously tagged digital photograph. In some implementations, disambiguation module350accesses information obtained from one or more sensors (e.g., proximity sensor214, light sensor212, GPS receiver213, temperature sensor215, and motion sensor210) of a handheld electronic device (e.g., user device104) for determining a meaning of one or more of the terms. In some implementations, disambiguation module350identifies two terms each associated with one of an entity, an activity, or a location. For example, a first of the two terms refers to a person, and a second of the two terms refers to a location. In some implementations, disambiguation module350identifies three terms each associated with one of an entity, an activity, or a location. Once the task flow processor336has completed the structured query for an actionable intent, the task flow processor336proceeds to perform the ultimate task associated with the actionable intent. Accordingly, the task flow processor336executes the steps and instructions in the task flow model according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of “restaurant reservation” may include steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party size at a particular time. For example, using a structured query such as: {restaurant reservation, restaurant=ABC Café, date=Mar. 12, 2012, time=7 pm, party size=5}, the task flow processor336may perform the steps of: (1) logging onto a server of the ABC Café or a restaurant reservation system that is configured to accept reservations for multiple restaurants, such as the ABC Café, (2) entering the date, time, and party size information in a form on the website, (3) submitting the form, and (4) making a calendar entry for the reservation in the user's calendar.
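The execution of a completed structured query against a task flow model might be sketched as follows. The step functions are placeholders for the programmed actions described above, not the actual reservation workflow.

```python
# Hypothetical sketch: a task flow model as an ordered list of steps that is
# executed once the structured query for "restaurant reservation" is complete.

def log_in(query):        print(f"logging onto reservation system for {query['restaurant']}")
def enter_details(query): print(f"entering {query['date']} {query['time']}, party of {query['party_size']}")
def submit_form(query):   print("submitting reservation form")
def add_calendar(query):  print("adding calendar entry for the reservation")

TASK_FLOWS = {
    "restaurant reservation": [log_in, enter_details, submit_form, add_calendar],
}

def execute(intent, query):
    for step in TASK_FLOWS[intent]:
        step(query)

execute("restaurant reservation",
        {"restaurant": "ABC Café", "date": "Mar. 12, 2012",
         "time": "7 pm", "party_size": 5})
```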
In another example, described in greater detail below, the task flow processor336executes steps and instructions associated with tagging or searching for digital photographs in response to a voice input, e.g., in conjunction with photo module132. In some implementations, the task flow processor336employs the assistance of a service processing module338(“service processor”) to complete a task requested in the user input or to provide an informational answer requested in the user input. For example, the service processor338can act on behalf of the task flow processor336to make a phone call, set a calendar entry, invoke a map search, invoke or interact with other user applications installed on the user device, and invoke or interact with third party services (e.g., a restaurant reservation portal, a social networking website or service, a banking portal, etc.). In some implementations, the protocols and application programming interfaces (API) required by each service can be specified by a respective service model among the service models356. The service processor338accesses the appropriate service model for a service and generates requests for the service in accordance with the protocols and APIs required by the service according to the service model. For example, if a restaurant has enabled an online reservation service, the restaurant can submit a service model specifying the necessary parameters for making a reservation and the APIs for communicating the values of the necessary parameters to the online reservation service. When requested by the task flow processor336, the service processor338can establish a network connection with the online reservation service using the web address stored in the service models356, and send the necessary parameters of the reservation (e.g., time, date, party size) to the online reservation interface in a format according to the API of the online reservation service. In some implementations, the natural language processor332, dialogue processor334, and task flow processor336are used collectively and iteratively to deduce and define the user's intent, obtain information to further clarify and refine the user intent, and finally generate a response (e.g., provide an output to the user, or complete a task) to fulfill the user's intent. In some implementations, after all of the tasks needed to fulfill the user's request have been performed, the digital assistant326formulates a confirmation response, and sends the response back to the user through the I/O processing module328. If the user request seeks an informational answer, the confirmation response presents the requested information to the user. In some implementations, the digital assistant also requests the user to indicate whether the user is satisfied with the response produced by the digital assistant326. Attention is now directed toFIG.4, which is a block diagram illustrating components of a voice trigger system400, in accordance with some implementations. (The voice trigger system400is not limited to voice, and implementations described herein apply equally to non-voice sounds.) The voice trigger system400is composed of various components, modules, and/or software programs within the electronic device104. In some implementations, the voice trigger system400includes a noise detector402, a sound-type detector404, a trigger sound detector406, a speech-based service408, and an audio subsystem226, each coupled to an audio bus401. In some implementations, more or fewer of these modules are used.
The sound detectors402,404, and406may be referred to as modules, and may include hardware (e.g., circuitry, memory, processors, etc.), software (e.g., programs, software-on-a-chip, firmware, etc.), and/or any combinations thereof for performing the functionality described herein. In some implementations, the sound detectors are communicatively, programmatically, physically, and/or operationally coupled to one another (e.g., via a communications bus), as illustrated inFIG.4by the broken lines. (For ease of illustration,FIG.4shows each sound detector coupled only to adjacent sound detectors. It will be understood that each sound detector can be coupled to any of the other sound detectors as well.) In some implementations, the audio subsystem226includes a codec410, an audio digital signal processor (DSP)412, and a memory buffer414. In some implementations, the audio subsystem226is coupled to one or more microphones230(FIG.2) and one or more speakers228(FIG.2). The audio subsystem226provides sound inputs to the sound detectors402,404,406and the speech-based service408(as well as other components or modules, such as a phone and/or baseband subsystem of a phone) for processing and/or analysis. In some implementations, the audio subsystem226is coupled to an external audio system416that includes at least one microphone418and at least one speaker420. In some implementations, the speech-based service408is a voice-based digital assistant, and corresponds to one or more components or functionalities of the digital assistant system described above with reference toFIGS.1-3C. In some implementations, the speech-based service is a speech-to-text service, a dictation service, or the like. In some implementations, the noise detector402monitors an audio channel to determine whether a sound input from the audio subsystem226satisfies a predetermined condition, such as an amplitude threshold. The audio channel corresponds to a stream of audio information received by one or more sound pickup devices, such as the one or more microphones230(FIG.2). The audio channel refers to the audio information regardless of its state of processing or the particular hardware that is processing and/or transmitting the audio information. For example, the audio channel may refer to analog electrical impulses (and/or the circuits on which they are propagated) from the microphone230, as well as a digitally encoded audio stream resulting from processing of the analog electrical impulses (e.g., by the audio subsystem226and/or any other audio processing system of the electronic device104). In some implementations, the predetermined condition is whether the sound input is above a certain volume for a predetermined amount of time. In some implementations, the noise detector uses time-domain analysis of the sound input, which requires relatively little computational and battery resources as compared to other types of analysis (e.g., as performed by the sound-type detector404, the trigger sound detector406, and/or the speech-based service408). In some implementations, other types of signal processing and/or audio analysis are used, including, for example, frequency-domain analysis. If the noise detector402determines that the sound input satisfies the predetermined condition, it initiates an upstream sound detector, such as the sound-type detector404(e.g., by providing a control signal to initiate one or more processing routines, and/or by providing power to the upstream sound detector).
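A low-cost, time-domain amplitude check of the kind attributed to the noise detector402might be sketched as follows; the frame length, threshold, and minimum duration are arbitrary illustrative values.

```python
# Hypothetical sketch of a low-cost, time-domain noise check: the sound input
# satisfies the condition if its short-term energy stays above a threshold
# for a minimum amount of time. Threshold and durations are arbitrary.

def rms(frame):
    return (sum(sample * sample for sample in frame) / len(frame)) ** 0.5

def noise_condition_met(samples, frame_len=160, threshold=0.02, min_frames=10):
    """True if `min_frames` consecutive frames exceed the amplitude threshold."""
    consecutive = 0
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        if rms(samples[start:start + frame_len]) >= threshold:
            consecutive += 1
            if consecutive >= min_frames:
                return True     # would initiate the next (upstream) detector
        else:
            consecutive = 0
    return False

# Example: quiet input vs. a louder burst (values stand in for microphone samples).
quiet = [0.001] * 4000
loud = [0.05] * 4000
print(noise_condition_met(quiet), noise_condition_met(loud))   # False True
```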
In some implementations, the upstream sound detector is initiated in response to other conditions being satisfied. For example, in some implementations, the upstream sound detector is initiated in response to determining that the device is not being stored in an enclosed space (e.g., based on a light detector detecting a threshold level of light). The sound-type detector404monitors the audio channel to determine whether a sound input corresponds to a certain type of sound, such as sound that is characteristic of a human voice, whistle, clap, etc. The type of sound that the sound-type detector404is configured to recognize will correspond to the particular trigger sound(s) that the voice trigger is configured to recognize. In implementations where the trigger sound is a spoken word or phrase, the sound-type detector404includes a “voice activity detector” (VAD). In some implementations, the sound-type detector404uses frequency-domain analysis of the sound input. For example, the sound-type detector404generates a spectrogram of a received sound input (e.g., using a Fourier transform), and analyzes the spectral components of the sound input to determine whether the sound input is likely to correspond to a particular type or category of sounds (e.g., human speech). Thus, in implementations where the trigger sound is a spoken word or phrase, if the audio channel is picking up ambient sound (e.g., traffic noise) but not human speech, the VAD will not initiate the trigger sound detector406. In some implementations, the sound-type detector404remains active for as long as predetermined conditions of any downstream sound detector (e.g., the noise detector402) are satisfied. For example, in some implementations, the sound-type detector404remains active as long as the sound input includes sound above a predetermined amplitude threshold (as determined by the noise detector402), and is deactivated when the sound drops below the predetermined threshold. In some implementations, once initiated, the sound-type detector404remains active until a condition is met, such as the expiration of a timer (e.g., for 1, 2, 5, or 10 seconds, or any other appropriate duration), the expiration of a certain number of on/off cycles of the sound-type detector404, or the occurrence of an event (e.g., the amplitude of the sound falls below a second threshold, as determined by the noise detector402and/or the sound-type detector404). As mentioned above, if the sound-type detector404determines that the sound input corresponds to a predetermined type of sound, it initiates an upstream sound detector (e.g., by providing a control signal to initiate one or more processing routines, and/or by providing power to the upstream sound detector), such as the trigger sound detector406. The trigger sound detector406is configured to determine whether a sound input includes at least part of certain predetermined content (e.g., at least part of the trigger word, phrase, or sound). In some implementations, the trigger sound detector406compares a representation of the sound input (an “input representation”) to one or more reference representations of the trigger word. If the input representation matches at least one of the one or more reference representations with an acceptable confidence, the trigger sound detector406initiates the speech-based service408(e.g., by providing a control signal to initiate one or more processing routines, and/or by providing power to the upstream sound detector). 
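The frequency-domain check for speech-like sound attributed to the sound-type detector404might be sketched as follows (the representation matching performed by the trigger sound detector406is discussed further in the next paragraph). The voice-band limits and the energy-ratio threshold are illustrative assumptions, not an actual voice activity detector.

```python
import cmath
import math

# Hypothetical sketch of a frequency-domain "is this speech-like?" check:
# compare the energy inside a nominal voice band against the total energy.
# Band limits and the 0.6 ratio are arbitrary illustrative choices.

def dft_magnitudes(frame):
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def looks_like_speech(frame, sample_rate=8000, lo=100, hi=3000, ratio=0.6):
    mags = dft_magnitudes(frame)
    bin_hz = sample_rate / len(frame)
    total = sum(m * m for m in mags) or 1.0
    voice = sum(m * m for k, m in enumerate(mags) if lo <= k * bin_hz <= hi)
    return voice / total >= ratio   # would initiate the trigger sound detector

# Example: a 440 Hz tone falls inside the nominal voice band.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(256)]
print(looks_like_speech(tone))   # True
```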
In some implementations, the input representation and the one or more reference representations are spectrograms (or mathematical representations thereof), which represent how the spectral density of a signal varies with time. In some implementations, the representations are other types of audio signatures or voiceprints. In some implementations, initiating the speech-based service408includes bringing one or more circuits, programs, and/or processors out of a standby mode, and invoking the sound-based service. The sound-based service is then ready to provide more comprehensive speech recognition, speech-to-text processing, and/or natural language processing. In some implementations, the voice-trigger system400includes voice authentication functionality, so that it can determine if a sound input corresponds to a voice of a particular person, such as an owner/user of the device. For example, in some implementations, the sound-type detector404uses a voice printing technique to determine that the sound input was uttered by an authorized user. Voice authentication and voice printing are described in more detail in U.S. patent application Ser. No. 13/053,144, assigned to the assignee of the instant application, which is hereby incorporated by reference in its entirety. In some implementations, voice authentication is included in any of the sound detectors described herein (e.g., the noise detector402, the sound-type detector404, the trigger sound detector406, and/or the speech-based service408). In some implementations, voice authentication is implemented as a separate module from the sound detectors listed above (e.g., as voice authentication module428,FIG.4), and may be operationally positioned after the noise detector402, after the sound-type detector404, after the trigger sound detector406, or at any other appropriate position. In some implementations, the trigger sound detector406remains active for as long as conditions of any downstream sound detector(s) (e.g., the noise detector402and/or the sound-type detector404) are satisfied. For example, in some implementations, the trigger sound detector406remains active as long as the sound input includes sound above a predetermined threshold (as detected by the noise detector402). In some implementations, it remains active as long as the sound input includes sound of a certain type (as detected by the sound-type detector404). In some implementations, it remains active as long as both the foregoing conditions are met. In some implementations, once initiated, the trigger sound detector406remains active until a condition is met, such as the expiration of a timer (e.g., for 1, 2, 5, or 10 seconds, or any other appropriate duration), the expiration of a certain number of on/off cycles of the trigger sound detector406, or the occurrence of an event (e.g., the amplitude of the sound falls below a second threshold). In some implementations, when one sound detector initiates another detector, both sound detectors remain active. However, the sound detectors may be active or inactive at various times, and it is not necessary that all of the downstream (e.g., the lower power and/or sophistication) sound detectors be active (or that their respective conditions are met) in order for upstream sound detectors to be active. 
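Matching an input representation against stored reference representations with a confidence requirement might be sketched as follows; the feature vectors, the cosine-similarity measure, and the 0.8 threshold are assumptions for illustration, not the matching technique itself.

```python
import math

# Hypothetical sketch: the trigger sound detector compares an input
# "representation" (here, just a feature vector) to reference representations
# and fires when the best match exceeds a confidence threshold. Cosine
# similarity and the 0.8 threshold are illustrative choices only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def trigger_detected(input_rep, reference_reps, confidence=0.8):
    best = max(cosine(input_rep, ref) for ref in reference_reps)
    return best >= confidence    # would initiate the speech-based service

references = [[0.9, 0.1, 0.4, 0.2], [0.8, 0.2, 0.5, 0.1]]
print(trigger_detected([0.85, 0.15, 0.45, 0.15], references))  # True
print(trigger_detected([0.1, 0.9, 0.1, 0.9], references))      # False
```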
For example, in some implementations, after the noise detector402and the sound-type detector404determine that their respective conditions are met, and the trigger sound detector406is initiated, one or both of the noise detector402and the sound-type detector404are deactivated and/or enter a standby mode while the trigger sound detector406operates. In other implementations, both the noise detector402and the sound-type detector404(or one or the other) stay active while the trigger sound detector406operates. In various implementations, different combinations of the sound detectors are active at different times, and whether one is active or inactive may depend on the state of other sound detectors, or may be independent of the state of other sound detectors. WhileFIG.4describes three separate sound detectors, each configured to detect different aspects of a sound input, more or fewer sound detectors are used in various implementations of the voice trigger. For example, in some implementations, only the trigger sound detector406is used. In some implementations, the trigger sound detector406is used in conjunction with either the noise detector402or the sound-type detector404. In some implementations, all of the detectors402-406are used. In some implementations, additional sound detectors are included as well. Moreover, different combinations of sound detectors may be used at different times. For example, the particular combination of sound detectors and how they interact may depend on one or more conditions, such as the context or operating state of a device. As a specific example, if a device is plugged in (and thus not relying exclusively on battery power), the trigger sound detector406is active, while the noise detector402and the sound-type detector404remain inactive. In another example, if the device is in a pocket or backpack, all sound detectors are inactive. By cascading sound detectors as described above, where the detectors that require more power are invoked only when necessary by detectors that require lower power, power efficient voice triggering functionality can be provided. As described above, additional power efficiency is achieved by operating one or more of the sound detectors according to a duty cycle. For example, in some implementations, the noise detector402operates according to a duty cycle so that it performs effectively continuous noise detection, even though the noise detector is off for at least part of the time. In some implementations, the noise detector402is on for 10 milliseconds and off for 90 milliseconds. In some implementations, the noise detector402is on for 20 milliseconds and off for 500 milliseconds. Other on and off durations are also possible. In some implementations, if the noise detector402detects a noise during its “on” interval, the noise detector402will remain on in order to further process and/or analyze the sound input. For example, the noise detector402may be configured to initiate an upstream sound detector if it detects sound above a predetermined amplitude for a predetermined amount of time (e.g., 100 milliseconds). Thus, if the noise detector402detects sound above a predetermined amplitude during its 10 millisecond “on” interval, it will not immediately enter the “off” interval. Instead, the noise detector402remains active and continues to process the sound input to determine whether it exceeds the threshold for the full predetermined duration (e.g., 100 milliseconds). 
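The duty-cycle behavior described for the noise detector402, including remaining on to confirm a detection made during an "on" interval, might be sketched as follows. The 10 ms/90 ms schedule and the 100 ms confirmation window mirror the examples above, but the control logic is an assumption.

```python
# Hypothetical sketch of duty-cycled noise detection: the detector samples the
# channel only during "on" intervals, but stays on to confirm a detection for
# the full qualifying duration (here 100 ms, per the example in the text).

def duty_cycled_detect(loud_at_ms, on_ms=10, off_ms=90, confirm_ms=100, total_ms=1000):
    """`loud_at_ms(t)` reports whether the channel is above threshold at time t."""
    t = 0
    while t < total_ms:
        if any(loud_at_ms(t + dt) for dt in range(on_ms)):      # "on" interval
            # Stay active and require the sound to persist before escalating.
            if all(loud_at_ms(t + dt) for dt in range(confirm_ms)):
                return t    # time at which the upstream detector is initiated
            t += confirm_ms
        else:
            t += on_ms + off_ms                                  # enter "off" interval
    return None

# Example: sound that starts at 300 ms and persists is confirmed; a brief click is not.
print(duty_cycled_detect(lambda t: t >= 300))                    # 300
print(duty_cycled_detect(lambda t: 300 <= t < 320))              # None
```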
In some implementations, the sound-type detector404operates according to a duty cycle. In some implementations, the sound-type detector404is on for 20 milliseconds and off for 100 milliseconds. Other on and off durations are also possible. In some implementations, the sound-type detector404is able to determine whether a sound input corresponds to a predetermined type of sound within the “on” interval of its duty cycle. Thus, the sound-type detector404will initiate the trigger sound detector406(or any other upstream sound detector) if the sound-type detector404determines, during its “on” interval, that the sound is of a certain type. Alternatively, in some implementations, if the sound-type detector404detects, during the “on” interval, sound that may correspond to the predetermined type, the detector will not immediately enter the “off” interval. Instead, the sound-type detector404remains active and continues to process the sound input and determine whether it corresponds to the predetermined type of sound. In some implementations, if the sound detector determines that the predetermined type of sound has been detected, it initiates the trigger sound detector406to further process the sound input and determine if the trigger sound has been detected. Similar to the noise detector402and the sound-type detector404, in some implementations, the trigger sound detector406operates according to a duty cycle. In some implementations, the trigger sound detector406is on for 50 milliseconds and off for 50 milliseconds. Other on and off durations are also possible. If the trigger sound detector406detects, during its “on” interval, that there is sound that may correspond to a trigger sound, the detector will not immediately enter the “off” interval. Instead, the trigger sound detector406remains active and continues to process the sound input and determine whether it includes the trigger sound. In some implementations, if such a sound is detected, the trigger sound detector406remains active to process the audio for a predetermined duration, such as 1, 2, 5, or 10 seconds, or any other appropriate duration. In some implementations, the duration is selected based on the length of the particular trigger word or sound that it is configured to detect. For example, if the trigger phrase is “Hey, SIRI,” the trigger word detector is operated for about 2 seconds to determine whether the sound input includes that phrase. In some implementations, some of the sound detectors are operated according to a duty cycle, while others operate continuously when active. For example, in some implementations, only the first sound detector is operated according to a duty cycle (e.g., the noise detector402inFIG.4), and upstream sound detectors are operated continuously once they are initiated. In some other implementations, the noise detector402and the sound-type detector404are operated according to a duty cycle, while the trigger sound detector406is operated continuously. Whether a particular sound detector is operated continuously or according to a duty cycle depends on one or more conditions, such as the context or operating state of a device. In some implementations, if a device is plugged in and not relying exclusively on battery power, all of the sound detectors operate continuously once they are initiated. 
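Whether each detector runs continuously or on a duty cycle, depending on the device's power state, might be sketched as follows; the on/off durations mirror the examples above, while the policy function itself is an assumption.

```python
# Hypothetical sketch: choose per-detector operation (continuous vs. a
# duty-cycle schedule) based on whether the device is on external power.
# The on/off durations mirror the examples in the text; the policy is assumed.

DUTY_CYCLES_MS = {                      # (on, off) while on battery
    "noise_detector": (10, 90),
    "sound_type_detector": (20, 100),
    "trigger_sound_detector": (50, 50),
}

def detector_schedule(detector, plugged_in):
    if plugged_in:
        return "continuous"
    on_ms, off_ms = DUTY_CYCLES_MS[detector]
    return f"duty cycle: {on_ms} ms on / {off_ms} ms off"

for name in DUTY_CYCLES_MS:
    print(name, "->", detector_schedule(name, plugged_in=False))
print("trigger_sound_detector ->", detector_schedule("trigger_sound_detector", True))
```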
In other implementations, the noise detector402(or any of the sound detectors) operates according to a duty cycle if the device is in a pocket or backpack (e.g., as determined by sensor and/or microphone signals), but operates continuously when it is determined that the device is likely not being stored. In some implementations, whether a particular sound detector is operated continuously or according to a duty cycle depends on the battery charge level of the device. For example, the noise detector402operates continuously when the battery charge is above 50%, and operates according to a duty cycle when the battery charge is below 50%. In some implementations, the voice trigger includes noise, echo, and/or sound cancellation functionality (referred to collectively as noise cancellation). In some implementations, noise cancellation is performed by the audio subsystem226(e.g., by the audio DSP412). Noise cancellation reduces or removes unwanted noise or sounds from the sound input prior to it being processed by the sound detectors. In some cases, the unwanted noise is background noise from the user's environment, such as a fan or the clicking from a keyboard. In some implementations, the unwanted noise is any sound above, below, or at predetermined amplitudes or frequencies. For example, in some implementations, sound above the typical human vocal range (e.g., 3,000 Hz) is filtered out or removed from the signal. In some implementations, multiple microphones (e.g., the microphones230) are used to help determine what components of received sound should be reduced and/or removed. For example, in some implementations, the audio subsystem226uses beam forming techniques to identify sounds or portions of sound inputs that appear to originate from a single point in space (e.g., a user's mouth). The audio subsystem226then focuses on this sound by removing from the sound input sounds that are received equally by all microphones (e.g., ambient sound that does not appear to originate from any particular direction). In some implementations, the DSP412is configured to cancel or remove from the sound input sounds that are being output by the device on which the digital assistant is operating. For example, if the audio subsystem226is outputting music, radio, a podcast, a voice output, or any other audio content (e.g., via the speaker228), the DSP412removes any of the outputted sound that was picked up by a microphone and included in the sound input. Thus, the sound input is free of the outputted audio (or at least contains less of the outputted audio). Accordingly, the sound input that is provided to the sound detectors will be cleaner, and the triggers more accurate. Aspects of noise cancellation are described in more detail in U.S. Pat. No. 7,272,224, assigned to the assignee of the instant application, which is hereby incorporated by reference in its entirety. In some implementations, different sound detectors require that the sound input be filtered and/or preprocessed in different ways. For example, in some implementations, the noise detector402is configured to analyze time-domain audio signal between 60 and 20,000 Hz, and the sound-type detector is configured to perform frequency-domain analysis of audio between 60 and 3,000 Hz. Thus, in some implementations, the audio DSP412(and/or other audio DSPs of the device104) preprocesses received audio according to the respective needs of the sound detectors. 
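The per-detector band limits in the example above, whether applied by the audio DSP412or, as noted in the next paragraph, by the detectors themselves, might be sketched as follows. The first-order high-pass and low-pass filters are crude illustrative stand-ins for real DSP filtering.

```python
import math

# Hypothetical sketch: band-limit the signal differently per detector, as in
# the 60-20,000 Hz vs. 60-3,000 Hz example. The first-order filters below are
# crude stand-ins for real DSP filtering.

def high_pass(samples, cutoff_hz, sample_rate=44100):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / sample_rate)
    out, prev_x = [0.0], samples[0]
    for x in samples[1:]:
        out.append(alpha * (out[-1] + x - prev_x))
        prev_x = x
    return out

def low_pass(samples, cutoff_hz, sample_rate=44100):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = (1.0 / sample_rate) / (rc + 1.0 / sample_rate)
    out = [samples[0]]
    for x in samples[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

def preprocess_for(detector, samples):
    bands = {"noise_detector": (60, 20000), "sound_type_detector": (60, 3000)}
    lo, hi = bands[detector]
    return low_pass(high_pass(samples, lo), hi)

# Example: the same raw frame is shaped differently for each detector; a 5 kHz
# component survives the wider band but is attenuated by the narrower one.
raw = [math.sin(2 * math.pi * 5000 * t / 44100) for t in range(512)]
print(max(map(abs, preprocess_for("noise_detector", raw))) >
      max(map(abs, preprocess_for("sound_type_detector", raw))))   # True
```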
In some implementations, on the other hand, the sound detectors are configured to filter and/or preprocess the audio from the audio subsystem226according to their specific needs. In such cases, the audio DSP412may still perform noise cancellation prior to providing the sound input to the sound detectors. In some implementations, the context of the electronic device is used to help determine whether and how to operate the voice trigger. For example, it may be unlikely that users will invoke a speech-based service, such as a voice-based digital assistant, when the device is stored in their pocket, purse, or backpack. Also, it may be unlikely that users will invoke a speech-based service when they are at a loud rock concert. For some users, it is unlikely that they will invoke a speech-based service at certain times of the day (e.g., late at night). On the other hand, there are also contexts in which it is more likely that a user will invoke a speech-based service using a voice trigger. For example, some users will be more likely to use a voice trigger when they are driving, when they are alone, when they are at work, or the like. Various techniques are used to determine the context of a device. In various implementations, the device uses information from any one or more of the following components or information sources to determine the context of a device: GPS receivers, light sensors, microphones, proximity sensors, orientation sensors, inertial sensors, cameras, communications circuitry and/or antennas, charging and/or power circuitry, switch positions, temperature sensors, compasses, accelerometers, calendars, user preferences, etc. The context of the device can then be used to adjust how and whether the voice trigger operates. For example, in certain contexts, the voice trigger will be deactivated (or operated in a different mode) as long as that context is maintained. For example, in some implementations, the voice trigger is deactivated when the phone is in a predetermined orientation (e.g., lying face-down on a surface), during predetermined time periods (e.g., between 10:00 PM and 8:00 AM), when the phone is in a “silent” or a “do not disturb” mode (e.g., based on a switch position, mode setting, or user preference), when the device is in a substantially enclosed space (e.g., a pocket, bag, purse, drawer, or glove box), when the device is near other devices that have a voice trigger and/or speech-based services (e.g., based on proximity sensors, acoustic/wireless/infrared communications), and the like. In some implementations, instead of being deactivated, the voice trigger system400is operated in a low-power mode (e.g., by operating the noise detector402according to a duty cycle with a 10 millisecond “on” interval and a 5 second “off” interval). In some implementations, an audio channel is monitored more infrequently when the voice trigger system400is operated in a low-power mode. In some implementations, a voice trigger uses a different sound detector or combination of sound detectors when it is in a low-power mode than when it is in a normal mode. (The voice trigger may be capable of numerous different modes or operating states, each of which may use a different amount of power, and different implementations will use them according to their specific designs.) On the other hand, when the device is in some other contexts, the voice trigger will be activated (or operated in a different mode) so long as that context is maintained. 
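Before turning to the specific context examples that follow, a context policy of the kind just described might be sketched as follows; the rule set and the mode names ("off", "low_power", "normal") are illustrative assumptions.

```python
from datetime import time

# Hypothetical sketch of a context policy for the voice trigger. The rules and
# mode names are illustrative; a real policy would weigh many more signals.

def voice_trigger_mode(context):
    if context.get("face_down") or context.get("do_not_disturb"):
        return "off"
    if context.get("enclosed_space"):          # e.g., pocket, bag, drawer
        return "off"
    quiet_start, quiet_end = time(22, 0), time(8, 0)
    now = context.get("local_time", time(12, 0))
    if now >= quiet_start or now < quiet_end:  # between 10:00 PM and 8:00 AM
        return "low_power"                     # e.g., a long duty-cycle "off" interval
    return "normal"

print(voice_trigger_mode({"local_time": time(23, 30)}))                    # low_power
print(voice_trigger_mode({"face_down": True, "local_time": time(14, 0)}))  # off
print(voice_trigger_mode({"local_time": time(14, 0)}))                     # normal
```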
For example, in some implementations, the voice trigger remains active while the device is plugged into a power source, when the phone is in a predetermined orientation (e.g., lying face-up on a surface), during predetermined time periods (e.g., between 8:00 AM and 10:00 PM), when the device is travelling and/or in a car (e.g., based on GPS signals, BLUETOOTH connection or docking with a vehicle, etc.), and the like. Aspects of detecting when a device is in a vehicle are described in more detail in U.S. Provisional Patent Application No. 61/657,744, assigned to the assignee of the instant application, which is hereby incorporated by reference in its entirety. Several specific examples of how to determine certain contexts are provided below. In various embodiments, different techniques and/or information sources are used to detect these and other contexts. As noted above, whether or not the voice trigger system400is active (e.g., listening) can depend on the physical orientation of a device. In some implementations, the voice trigger is active when the device is placed “face-up” on a surface (e.g., with the display and/or touchscreen surface visible), and/or is inactive when it is “face-down.” This provides a user with an easy way to activate and/or deactivate the voice trigger without requiring manipulation of settings menus, switches, or buttons. In some implementations, the device detects whether it is face-up or face-down on a surface using light sensors (e.g., based on the difference in incident light on a front and a back face of the device104), proximity sensors, magnetic sensors, accelerometers, gyroscopes, tilt sensors, cameras, and the like. In some implementations, other operating modes, settings, parameters, or preferences are affected by the orientation and/or position of the device. In some implementations, the particular trigger sound, word, or phrase that the voice trigger listens for depends on the orientation and/or position of the device. For example, in some implementations, the voice trigger listens for a first trigger word, phrase, or sound when the device is in one orientation (e.g., lying face-up on a surface), and a different trigger word, phrase, or sound when the device is in another orientation (e.g., lying face-down). In some implementations, the trigger phrase for a face-down orientation is longer and/or more complex than for a face-up orientation. Thus, a user can place a device face-down when they are around other people or in a noisy environment so that the voice trigger can still be operational while also reducing false accepts, which may be more frequent for shorter or simpler trigger words. As a specific example, a face-up trigger phrase may be “Hey, SIRI,” while a face-down trigger phrase may be “Hey, SIRI, this is Andrew, please wake up.” The longer trigger phrase also provides a larger voice sample for the sound detectors and/or voice authenticators to process and/or analyze, thus increasing the accuracy of the voice trigger and decreasing false accepts. In some implementations, the device104detects whether it is in a vehicle (e.g., a car). A voice trigger is particularly beneficial for invoking a speech-based service when the user is in a vehicle, as it helps reduce the physical interactions that are necessary to operate the device and/or the speech-based service. Indeed, one of the benefits of a voice-based digital assistant is that it can be used to perform tasks where looking at and touching a device would be impractical or unsafe.
Thus, the voice trigger may be used when the device is in a vehicle so that the user does not have to touch the device in order to invoke the digital assistant. In some implementations, the device determines that it is in a vehicle by detecting that it has been connected to and/or paired with a vehicle, such as through BLUETOOTH communications (or other wireless communications) or through a docking connector or cable. In some implementations, the device determines that it is in a vehicle by determining the device's location and/or speed (e.g., using GPS receivers, accelerometers, and/or gyroscopes). If it is determined that the device is likely in a vehicle, because it is travelling above 20 miles per hour and is determined to be travelling along a road, for example, then the voice trigger remains active and/or in a high-power or more sensitive state. In some implementations, the device detects whether the device is stored (e.g., in a pocket, purse, bag, a drawer, or the like) by determining whether it is in a substantially enclosed space. In some implementations, the device uses light sensors (e.g., dedicated ambient light sensors and/or cameras) to determine that it is stored. For example, in some implementations, the device is likely being stored if light sensors detect little or no light. In some implementations, the time of day and/or location of the device are also considered. For example, if the light sensors detect low light levels when high light levels would be expected (e.g., during the day), the device may be in storage and the voice trigger system400not needed. Thus, the voice trigger system400will be placed in a low-power or standby state. In some implementations, the difference in light detected by sensors located on opposite faces of a device can be used to determine its position, and hence whether or not it is stored. Specifically, users are likely to attempt to activate a voice trigger when the device is resting on a table or surface rather than when it is being stored in a pocket or bag. But when a device is lying face-down (or face-up) on a surface such as a table or desk, one surface of the device will be occluded so that little or no light reaches that surface, while the other surface will be exposed to ambient light. Thus, if light sensors on the front and back face of a device detect significantly different light levels, the device determines that it is not being stored. On the other hand, if light sensors on opposite faces detect the same or similar light levels, the device determines that it is being stored in a substantially enclosed space. Also, if the light sensors both detect a low light level during the daytime (or when the device would expect the phone to be in a bright environment), the device determines with greater confidence that it is being stored. In some implementations, other techniques are used (instead of or in addition to light sensors) to determine whether the device is stored. For example, in some implementations, the device emits one or more sounds (e.g., tones, clicks, pings, etc.) from a speaker or transducer (e.g., speaker228), and monitors one or more microphones or transducers (e.g., microphone230) to detect echoes of the emitted sound(s). (In some implementations, the device emits inaudible signals, such as sound outside of the human hearing range.) From the echoes, the device determines characteristics of the surrounding environment.
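The light-sensor heuristic described above might be sketched as follows (the echo-based technique continues in the next paragraph); the lux thresholds are arbitrary illustrative values.

```python
# Hypothetical sketch of the "is the device stored?" heuristic: similar low
# readings on both faces suggest an enclosed space, while a large difference
# suggests the device is lying on a surface. Thresholds are arbitrary.

def likely_stored(front_lux, back_lux, is_daytime, low_lux=5.0, similar_ratio=2.0):
    both_dark = front_lux < low_lux and back_lux < low_lux
    similar = max(front_lux, back_lux) <= similar_ratio * max(min(front_lux, back_lux), 0.1)
    if both_dark and is_daytime:
        return True           # low light when bright light is expected: higher confidence
    return both_dark and similar

print(likely_stored(front_lux=1.0, back_lux=0.8, is_daytime=True))    # True (pocket/bag)
print(likely_stored(front_lux=300.0, back_lux=0.5, is_daytime=True))  # False (face-down on desk)
```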
For example, a relatively large environment (e.g., a room or a vehicle) will reflect the sound differently than a relatively small, enclosed environment (e.g., a pocket, purse, bag, a drawer, or the like). In some implementations, the voice trigger system400operates differently if it is near other devices (such as other devices that have voice triggers and/or speech-based services) than if it is not near other devices. This may be useful, for example, to shut down or decrease the sensitivity of the voice trigger system400when many devices are close together so that if one person utters a trigger word, other surrounding devices are not triggered as well. In some implementations, a device determines proximity to other devices using RFID, near-field communications, infrared/acoustic signals, or the like. As noted above, voice triggers are particularly useful when a device is being operated in a hands-free mode, such as when the user is driving. In such cases, users often use external audio systems, such as wired or wireless headsets, watches with speakers and/or microphones, a vehicle's built-in microphones and speakers, etc., to free themselves from having to hold a device near their face to make a call or dictate text inputs. For example, wireless headsets and vehicle audio systems may connect to an electronic device using BLUETOOTH communications, or any other appropriate wireless communication. However, it may be inefficient for a voice trigger to monitor audio received via a wireless audio accessory because of the power required to maintain an open audio channel with the wireless accessory. In particular, a wireless headset may hold enough charge in its battery to provide a few hours of continuous talk-time, and it is therefore preferable to reserve the battery for when the headset is needed for actual communication, instead of using it to simply monitor ambient audio and wait for a possible trigger sound. Moreover, wired external headset accessories may require significantly more power than on-board microphones alone, and keeping the headset microphone active will deplete the device's battery charge. This is especially true considering that the ambient audio received by the wireless or wired headset will typically consist mostly of silence or irrelevant sounds. Thus, in some implementations, the voice trigger system400monitors audio from the microphone230on the device even when the device is coupled to an external microphone (wired or wireless). Then, when the voice trigger detects the trigger word, the device initializes an active audio link with the external microphone in order to receive subsequent sound inputs (such as a command to a voice-based digital assistant) via the external microphone rather than the on-device microphone230. When certain conditions are met, though, an active communication link can be maintained between an external audio system416(which may be communicatively coupled to the device104via wires or wirelessly) and the device so that the voice trigger system400can listen for a trigger sound via the external audio system416instead of (or in addition to) the on-device microphone230. For example, in some implementations, characteristics of the motion of the electronic device and/or the external audio system416(e.g., as determined by accelerometers, gyroscopes, etc. on the respective devices) are used to determine whether the voice trigger system400should monitor ambient sound using the on-device microphone230or an external microphone418.
Specifically, the difference between the motion of the device and the external audio system416provides information about whether the external audio system416is actually in use. For example, if both the device and a wireless headset are moving (or not moving) substantially identically, it may be determined that the headset is not in use or is not being worn. This may occur, for example, because both devices are near to each other and idle (e.g., sitting on a table or stored in a pocket, bag, purse, drawer, etc.). Accordingly, under these conditions, the voice trigger system400monitors the on-device microphone, because it is unlikely that the headset is actually being used. If there is a difference in motion between the wireless headset and the device, however, it is determined that the headset is being worn by a user. These conditions may occur, for example, because the device has been set down (e.g., on a surface or in a bag), while the headset is being worn on the user's head (which will likely move at least a small amount, even when the wearer is relatively still). Under these conditions, because it is likely that the headset is being worn, the voice trigger system400maintains an active communication link and monitors the microphone418of the headset instead of (or in addition to) the on-device microphone230. And because this technique focuses on the difference in the motion of the device and the headset, motion that is common to both devices can be canceled out. This may be useful, for example, when a user is using a headset in a moving vehicle, where the device (e.g., a cellular phone) is resting in a cup holder, empty seat, or in the user's pocket, and the headset is worn on the user's head. Once the motion that is common to both devices is cancelled out (e.g., the vehicle's motion), the relative motion of the headset as compared to the device (if any) can be determined in order to determine whether the headset is likely in use (or, whether the headset is not being worn). While the above discussion refers to wireless headsets, similar techniques are applied to wired headsets as well. Because people's voices vary greatly, it may be necessary or beneficial to tune a voice trigger to improve its accuracy in recognizing the voice of a particular user. Also, people's voices may change over time, for example, because of illnesses, natural voice changes relating to aging or hormonal changes, and the like. Thus, in some implementations, the voice trigger system400is able to adapt its voice and/or sound recognition profiles for a particular user or group of users. As described above, sound detectors (e.g., the sound-type detector404and/or the trigger sound detector406) may be configured to compare a representation of a sound input (e.g., the sound or utterance provided by a user) to one or more reference representations. For example, if an input representation matches the reference representation to a predetermined confidence level, the sound detector will determine that the sound input corresponds to a predetermined type of sound (e.g., the sound-type detector404), or that the sound input includes predetermined content (e.g., the trigger sound detector406). In order to tune the voice trigger system400, in some implementations, the device adjusts the reference representation to which the input representation is compared. 
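Returning to the headset example, canceling motion common to the device and the headset before deciding which microphone to monitor might be sketched as follows; the variance threshold is an arbitrary assumption. The discussion of adjusting the reference representation continues in the next paragraph.

```python
# Hypothetical sketch: subtract motion common to the device and the headset
# (e.g., the vehicle's motion), then decide whether the headset is likely
# being worn based on how much relative motion remains. Threshold is arbitrary.

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def monitor_headset_mic(device_accel, headset_accel, threshold=0.05):
    # Per-sample difference cancels whatever motion the two share.
    relative = [h - d for d, h in zip(device_accel, headset_accel)]
    return variance(relative) > threshold   # residual motion: headset likely worn

# Example: both idle on a table (monitor the on-device mic) vs. the headset on
# a moving head while the device rests in a cup holder (monitor the headset mic).
idle = [0.01, 0.02, 0.01, 0.02, 0.01, 0.02]
worn = [0.01 + w for w in (0.3, -0.2, 0.4, -0.3, 0.2, -0.4)]
print(monitor_headset_mic(idle, idle))   # False -> use on-device microphone
print(monitor_headset_mic(idle, worn))   # True  -> use headset microphone
```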
In some implementations, the reference representation is adjusted (or created) as part of a voice enrollment or “training” procedure, where a user outputs the trigger sound several times so that the device can adjust (or create) the reference representation. The device can then create a reference representation using that person's actual voice. In some implementations, the device uses trigger sounds that are received under normal use conditions to adjust the reference representation. For example, after a successful voice triggering event (e.g., where the sound input was found to satisfy all of the triggering criteria) the device will use information from the sound input to adjust and/or tune the reference representation. In some implementations, only sound inputs that were determined to satisfy all or some of the triggering criteria with a certain confidence level are used to adjust the reference representation. Thus, when the voice trigger is less confident that a sound input corresponds to or includes a trigger sound, that voice input may be ignored for the purposes of adjusting the reference representation. On the other hand, in some implementations, sound inputs that satisfied the voice trigger system400with a lower confidence are used to adjust the reference representation. In some implementations, the device104iteratively adjusts the reference representation (using these or other techniques) as more and more sound inputs are received so that slight changes in a user's voice over time can be accommodated. For example, in some implementations, the device104(and/or associated devices or services) adjusts the reference representation after each successful triggering event. In some implementations, the device104analyzes the sound input associated with each successful triggering event and determines if the reference representations should be adjusted based on that input (e.g., if certain conditions are met), and only adjusts the reference representation if it is appropriate to do so. In some implementations, the device104maintains a moving average of the reference representation over time. In some implementations, the voice trigger system400detects sounds that do not satisfy one or more of the triggering criteria (e.g., as determined by one or more of the sound detectors), but that may actually be attempts by an authorized user to do so. For example, voice trigger system400may be configured to respond to a trigger phrase such as “Hey, SIRI”, but if a user's voice has changed (e.g., due to sickness, age, accent/inflection changes, etc.), the voice trigger system400may not recognize the user's attempt to activate the device. (This may also occur when the voice trigger system400has not been properly tuned for that user's particular voice, such as when the voice trigger system400is set to default conditions and/or the user has not performed an initialization or training procedure to customize the voice trigger system400for his or her voice.) If the voice trigger system400does not respond to the user's first attempt to activate the voice trigger, the user is likely to repeat the trigger phrase. The device detects that these repeated sound inputs are similar to one another, and/or that they are similar to the trigger phrase (though not similar enough to cause the voice trigger system400to activate the speech-based service). If such conditions are met, the device determines that the sound inputs correspond to valid attempts to activate the voice trigger system400.
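A moving-average style of adjustment gated on the confidence of a triggering event, as described earlier in this passage, might be sketched as follows; the 0.9 confidence gate and 0.1 mixing weight are arbitrary choices. The handling of repeated near-miss attempts continues in the next paragraph.

```python
# Hypothetical sketch: after a sufficiently confident triggering event, fold
# the new input representation into the stored reference as a moving average.
# The 0.9 confidence gate and 0.1 mixing weight are arbitrary choices.

def adapt_reference(reference, input_rep, confidence, gate=0.9, weight=0.1):
    if confidence < gate:
        return reference                     # low-confidence inputs are ignored
    return [(1 - weight) * r + weight * x for r, x in zip(reference, input_rep)]

reference = [0.50, 0.20, 0.30]
reference = adapt_reference(reference, [0.60, 0.25, 0.20], confidence=0.95)
reference = adapt_reference(reference, [0.90, 0.90, 0.90], confidence=0.40)  # ignored
print([round(r, 3) for r in reference])      # [0.51, 0.205, 0.29]
```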
Accordingly, in some implementations, the voice trigger system400uses those received sound inputs to adjust one or more aspects of the voice trigger system400so that similar utterances by the user will be accepted as valid triggers in the future. In some implementations, these sound inputs are used to adapt the voice trigger system400only if certain conditions or combinations of conditions are met. For example, in some implementations, the sound inputs are used to adapt the voice trigger system400when a predetermined number of sound inputs are received in succession (e.g., 2, 3, 4, 5, or any other appropriate number), when the sound inputs are sufficiently similar to the reference representation, when the sound inputs are sufficiently similar to each other, when the sound inputs are close together (e.g., when they are received within a predetermined time period and/or at or near a predetermined interval), and/or any combination of these or other conditions. In some cases, the voice trigger system400may detect one or more sound inputs that do not satisfy one or more of the triggering criteria, followed by a manual initiation of the speech-based service (e.g., by pressing a button or icon). In some implementations, the voice trigger system400determines that, because the speech-based service was initiated shortly after the sound inputs were received, the sound inputs actually corresponded to failed voice triggering attempts. Accordingly, the voice trigger system400uses those received sound inputs to adjust one or more aspects of the voice trigger system400so that utterances by the user will be accepted as valid triggers in the future, as described above. While the adaptation techniques described above refer to adjusting a reference representation, other aspects of the trigger sound detecting techniques may be adjusted in the same or similar manner in addition to or instead of adjusting the reference representation. For example, in some implementations, the device adjusts how sound inputs are filtered and/or what filters are applied to sound inputs, such as to focus on and/or eliminate certain frequencies or ranges of frequencies of a sound input. In some implementations, the device adjusts an algorithm that is used to compare the input representation with the reference representation. For example, in some implementations, one or more terms of a mathematical function used to determine the difference between an input representation and a reference representation are changed, added, or removed, or a different mathematical function is substituted. In some implementations, adaptation techniques such as those described above require more resources than the voice trigger system400is able to or is configured to provide. In particular, the sound detectors may not have, or have access to, the amount or the types of processors, data, or memory that are necessary to perform the iterative adaptation of a reference representation and/or a sound detection algorithm (or any other appropriate aspect of the voice trigger system400). Thus, in some implementations, one or more of the above-described adaptation techniques are performed by a more powerful processor, such as an application processor (e.g., the processor(s)204), or by a different device (e.g., the server system108). However, the voice trigger system400is designed to operate even when the application processor is in a standby mode.
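As a rough illustration of the repeated-near-miss conditions described above, the sketch below flags a run of sub-threshold sound inputs that arrive close together in time and are sufficiently similar to one another. The counts, the time window, and the similarity measure are assumptions chosen for the example, not values from this description.

    import time
    import numpy as np

    class NearMissTracker:
        def __init__(self, min_repeats=3, window_s=10.0, similarity_floor=0.7):
            self.min_repeats = min_repeats
            self.window_s = window_s
            self.similarity_floor = similarity_floor
            self.recent = []  # list of (timestamp, feature_vector)

        @staticmethod
        def _similarity(a, b):
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0

        def observe(self, features, now=None):
            # Record a sound input that failed the trigger criteria; return True if
            # the recent inputs look like repeated attempts that should drive adaptation.
            now = time.time() if now is None else now
            self.recent = [(t, f) for t, f in self.recent if now - t <= self.window_s]
            self.recent.append((now, features))
            if len(self.recent) < self.min_repeats:
                return False
            feats = [f for _, f in self.recent]
            pairwise = [self._similarity(feats[i], feats[i + 1]) for i in range(len(feats) - 1)]
            return min(pairwise) >= self.similarity_floor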
Thus, the sound inputs which are to be used to adapt the voice trigger system400may be received when the application processor is not active and cannot process them. Accordingly, in some implementations, the sound input is stored by the device so that it can be further processed and/or analyzed after it is received. In some implementations, the sound input is stored in the memory buffer414of the audio subsystem226. In some implementations, the sound input is stored in system memory (e.g., memory250,FIG.2) using direct memory access (DMA) techniques (including, for example, using a DMA engine so that data can be copied or moved without requiring the application processor to be initiated). The stored sound input is then provided to or accessed by the application processor (or the server system108, or another appropriate device) once it is initiated so that the application processor can execute one or more of the adaptation techniques described above. FIGS.5-7are flow diagrams representing methods for operating a voice trigger, according to certain implementations. The methods are, optionally, governed by instructions that are stored in a computer memory or non-transitory computer readable storage medium (e.g., memory250of client device104, memory302associated with the digital assistant system300) and that are executed by one or more processors of one or more computer systems of a digital assistant system, including, but not limited to, the server system108, and/or the user device104a. The computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. In various implementations, some operations in each method may be combined and/or the order of some operations may be changed from the order shown in the figures. Also, in some implementations, operations shown in separate figures and/or discussed in association with separate methods may be combined to form other methods, and operations shown in the same figure and/or discussed in association with the same method may be separated into different methods. Moreover, in some implementations, one or more operations in the methods are performed by modules of the digital assistant system300and/or an electronic device (e.g., the user device104), including, for example, the natural language processing module332, the dialogue flow processing module334, the audio subsystem226, the noise detector402, the sound-type detector404, the trigger sound detector406, the speech-based service408, and/or any sub modules thereof. FIG.5illustrates a method500of operating a voice trigger system (e.g., the voice trigger system400,FIG.4), according to some implementations. In some implementations, the method500is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors (e.g., the electronic device104). The electronic device receives a sound input (502).
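The buffering arrangement described above, capture now and analyze once the application processor wakes, can be sketched as a simple bounded buffer. This is only a schematic analogue of the memory buffer414or a DMA-filled region; the class and its size limit are assumptions made for the example.

    from collections import deque

    class SoundInputBuffer:
        def __init__(self, max_frames=256):
            # Oldest frames are discarded automatically once the buffer is full.
            self.frames = deque(maxlen=max_frames)

        def write(self, frame):
            # Called from the always-on audio path, even while the application
            # processor remains in standby.
            self.frames.append(frame)

        def drain(self):
            # Called once the application processor (or a server) is ready to run
            # the heavier adaptation or speech-processing steps described above.
            stored = list(self.frames)
            self.frames.clear()
            return stored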
The sound input may correspond to a spoken utterance (e.g., a word, phrase, or sentence), a human generated sound (e.g., whistle, tongue click, finger snap, clap, etc.), or any other sound (e.g., an electronically generated chirp, a mechanical noise maker, etc.). In some implementations, the electronic device receives the sound input via the audio subsystem226(including, for example, the codec410, audio DSP412, and buffer414, as well as the microphones230and418, described in reference toFIG.4). In some implementations, the electronic device determines whether the sound input satisfies a predetermined condition (504). In some implementations, the electronic device applies time-domain analysis to the sound input to determine whether the sound input satisfies the predetermined condition. For example, the electronic device analyzes the sound input over a time period in order to determine whether the sound amplitude reaches a predetermined level. In some implementations, the threshold is satisfied if the amplitude (e.g., the volume) of the sound input meets and/or exceeds a predetermined threshold. In some implementations, it is satisfied if the sound input meets and/or exceeds a predetermined threshold for a predetermined amount of time. As discussed in more detail below, in some implementations, determining whether the sound input satisfies the predetermined condition (504) is performed by a third sound detector (e.g., the noise detector402). (The third sound detector is used in this case to differentiate the sound detector from other sound detectors (e.g., the first and second sound detectors that are discussed below), and does not necessarily indicate any operational position or order of the sound detectors.) The electronic device determines whether the sound input corresponds to a predetermined type of sound (506). As noted above, sounds are categorized as different “types” based on certain identifiable characteristics of the sounds. Determining whether the sound input corresponds to a predetermined type includes determining whether the sound input includes or exhibits the characteristics of a particular type. In some implementations, the predetermined type of sound is a human voice. In such implementations, determining whether the sound input corresponds to a human voice includes determining whether the sound input includes frequencies characteristic of a human voice (508). As discussed in more detail below, in some implementations, determining whether the sound input corresponds to a predetermined type of sound (506) is performed by a first sound detector (e.g., the sound-type detector404). Upon a determination that the sound input corresponds to the predetermined type of sound, the electronic device determines whether the sound input includes predetermined content (510). In some implementations, the predetermined content corresponds to one or more predetermined phonemes (512). In some implementations, the one or more predetermined phonemes constitute at least one word. In some implementations, the predetermined content is a sound (e.g., a whistle, click, or clap). In some implementations, as discussed below, determining whether the sound input includes predetermined content (510) is performed by a second sound detector (e.g., the trigger sound detector406). Upon a determination that the sound input includes the predetermined content, the electronic device initiates a speech-based service (514). 
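A condensed Python sketch of the cascade in method500follows: a cheap amplitude check (step 504), a voice-likeness check (steps 506/508), a content check (steps 510/512), and only then initiation of the speech-based service (step 514). The thresholds, the sampling rate, and the placeholder content matcher are assumptions; a real trigger sound detector would compare against phoneme or word models rather than delegate to a callback.

    import numpy as np

    SAMPLE_RATE = 16000  # assumed sampling rate for the example

    def satisfies_condition(samples, amp_threshold=0.02):          # step (504)
        return float(np.abs(samples).mean()) > amp_threshold

    def is_voice_like(samples, band=(85.0, 3000.0), ratio=0.5):    # steps (506)/(508)
        # Fraction of spectral energy inside a typical human-voice band.
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
        in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
        total = spectrum.sum() or 1.0
        return in_band / total > ratio

    def contains_trigger(samples, match_trigger):                  # steps (510)/(512)
        # Placeholder: delegate to whatever phoneme/content matcher is available.
        return match_trigger(samples)

    def handle_sound_input(samples, match_trigger, initiate_service):
        if not satisfies_condition(samples):
            return False
        if not is_voice_like(samples):
            return False
        if not contains_trigger(samples, match_trigger):
            return False
        initiate_service()                                         # step (514)
        return True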
In some implementations, the speech-based service is a voice-based digital assistant, as described in detail above. In some implementations, the speech-based service is a dictation service in which speech inputs are converted into text and included in and/or displayed in a text input field (e.g., of an email, text message, word processing or note-taking application, etc.). In implementations where the speech-based service is a voice-based digital assistant, once the voice-based digital assistant is initiated, a prompt is issued to the user (e.g., a sound or a speech prompt) indicating that the user may provide a voice input and/or command to the digital assistant. In some implementations, initiating the voice-based digital assistant includes activating an application processor (e.g., the processor(s)204,FIG.2), initiating one or more programs or modules (e.g., the digital assistant client module264,FIG.2), and/or establishing a connection to remote servers or devices (e.g., the digital assistant server106,FIG.1). In some implementations, the electronic device determines whether the sound input corresponds to a voice of a particular user (516). For example, one or more voice authentication techniques are applied to the sound input to determine whether it corresponds to the voice of an authorized user of the device. Voice authentication techniques are described in greater detail above. In some implementations, voice authentication is performed by one of the sound detectors (e.g., the trigger sound detector406). In some implementations, voice authentication is performed by a dedicated voice authentication module (including any appropriate hardware and/or software). In some implementations, the sound-based service is initiated in response to a determination that the sound input includes the predetermined content and the sound input corresponds to the voice of the particular user. Thus, for example, the sound-based service (e.g., a voice-based digital assistant) will only be initiated when the trigger word or phrase is spoken by an authorized user. This reduces the possibility that the service can be invoked by an unauthorized user, and may be particularly useful when multiple electronic devices are in close proximity, as one user's utterance of a trigger sound will not activate another user's voice trigger. In some implementations, where the speech-based service is a voice-based digital assistant, in response to determining that the sound input includes the predetermined content but does not correspond to the voice of the particular user, the voice-based digital assistant is initiated in a limited access mode. In some implementations, the limited access mode allows the digital assistant to access only a subset of the data, services, and/or functionality that the digital assistant can otherwise provide. In some implementations, the limited access mode corresponds to a write-only mode (e.g., so that an unauthorized user of the digital assistant cannot access data from calendars, task lists, contacts, photographs, emails, text messages, etc.). In some implementations, the limited access mode corresponds to a sandboxed instance of a speech-based service, so that the speech-based service will not read from or write to a user's data, such as user data266on the device104(FIG.2), or on any other device (e.g., user data348,FIG.3A, which may be stored on a remote server, such as the server system108,FIG.1). 
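The gating of the service on both content and speaker identity, with a limited mode as the fallback, might look like the following sketch. The authenticate() callable, the assistant object with start() and prompt() methods, and the mode names are assumptions introduced only for illustration.

    def initiate_speech_service(sound_input, has_trigger_content, authenticate, assistant):
        if not has_trigger_content(sound_input):
            return None
        user = authenticate(sound_input)  # returns a user identifier, or None
        if user is not None:
            assistant.start(mode="full", user=user)
            assistant.prompt("What can I help you with, {}?".format(user))
        else:
            # Unrecognized speaker: limited (e.g., sandboxed) mode with no access
            # to calendars, contacts, messages, photographs, and the like.
            assistant.start(mode="limited")
            assistant.prompt("How can I help?")
        return user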
In some implementations, in response to a determination that the sound input includes the predetermined content and the sound input corresponds to the voice of the particular user, the voice-based digital assistant outputs a prompt including a name of the particular user. For example, when a particular user is identified via voice authentication, the voice-based digital assistant may output a prompt such as “What can I help you with, Peter?”, instead of a more generic prompt such as a tone, beep, or non-personalized voice prompt. As noted above, in some implementations, a first sound detector determines whether the sound input corresponds to a predetermined type of sound (at step506), and a second sound detector determines whether the sound input includes the predetermined content (at step510). In some implementations, the first sound detector consumes less power while operating than the second sound detector, for example, because the first sound detector uses a less processor-intensive technique than the second sound detector. In some implementations, the first sound detector is the sound-type detector404, and the second sound detector is the trigger sound detector406, both of which are discussed above with respect toFIG.4. In some implementations, when they are operating, the first and/or the second sound detector periodically monitors an audio channel according to a duty cycle, as described above with reference toFIG.4. In some implementations, the first and/or the second sound detector performs frequency-domain analysis of the sound input. For example, these sound detectors perform a Laplace, Z-, or Fourier transform to generate a frequency spectrum or to determine the spectral density of the sound input or a portion thereof. In some implementations, the first sound detector is a voice-activity detector that is configured to determine whether the sound input includes frequencies that are characteristic of a human voice (or other features, aspects, or properties of the sound input that are characteristic of a human voice). In some implementations, the second sound detector is off or inactive until the first sound detector detects a sound input of the predetermined type. Accordingly, in some implementations, the method500includes initiating the second sound detector in response to determining that the sound input corresponds to the predetermined type. (In other implementations, the second sound detector is initiated in response to other conditions, or is continuously operated regardless of a determination from the first sound detector.) In some implementations, initiating the second sound detector includes activating hardware and/or software (including, for example, circuits, processors, programs, memory, etc.). In some implementations, the second sound detector is operated (e.g., is active and is monitoring an audio channel) for at least a predetermined amount of time after it is initiated. For example, when the first sound detector determines that the sound input corresponds to a predetermined type (e.g., includes a human voice), the second sound detector is operated in order to determine if the sound input also includes the predetermined content (e.g., the trigger word). In some implementations, the predetermined amount of time corresponds to a duration of the predetermined content.
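The duty-cycled monitoring mentioned above can be approximated by the sketch below, in which a detector runs its check only during a short on-interval of each cycle and otherwise sleeps. The 10 ms on-time, the 100 ms cycle, and the check() callable are assumptions for the example and are not figures taken from this description.

    import time

    def duty_cycled_monitor(check, on_s=0.010, cycle_s=0.100, max_cycles=1000):
        # Run check() during the on-portion of each cycle; stop when it fires.
        for _ in range(max_cycles):
            deadline = time.monotonic() + on_s
            while time.monotonic() < deadline:
                if check():              # e.g., an amplitude or voice-likeness test
                    return True
            time.sleep(cycle_s - on_s)   # idle (lower power) for the rest of the cycle
        return False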
Thus, if the predetermined content is the phrase “Hey, SIRI,” the predetermined amount of time will be long enough to determine if that phrase was uttered (e.g., 1 or 2 seconds, or any other appropriate duration). If the predetermined content is longer, such as the phrase “Hey, SIRI, please wake up and help me out,” the predetermined time will be longer (e.g., 5 seconds, or another appropriate duration). In some implementations, the second sound detector operates as long as the first sound detector detects sound corresponding to the predetermined type. In such implementations, for example, as long as the first sound detector detects human speech in a sound input, the second sound detector will process the sound input to determine if it includes the predetermined content. As noted above, in some implementations, a third sound detector (e.g., the noise detector402) determines whether the sound input satisfies a predetermined condition (at step504). In some implementations, the third sound detector consumes less power while operating than the first sound detector. In some implementations, the third sound detector periodically monitors an audio channel according to a duty cycle, as discussed above with respect toFIG.4. Also, in some implementations, the third sound detector performs time-domain analysis of the sound input. In some implementations, the third sound detector consumes less power than the first sound detector because time-domain analysis is less processor intensive than the frequency-domain analysis applied by the first sound detector. Similar to the discussion above with respect to initiating the second sound detector (e.g., a trigger sound detector406) in response to a determination by the first sound detector (e.g., the sound-type detector404), in some implementations, the first sound detector is initiated in response to a determination by the third sound detector (e.g., the noise detector402). For example, in some implementations, the sound-type detector404is initiated in response to a determination by the noise detector402that the sound input satisfies a predetermined condition (e.g., is above a certain volume for a sufficient duration). In some implementations, initiating the first sound detector includes activating hardware and/or software (including, for example, circuits, processors, programs, memory, etc.). In other implementations, the first sound detector is initiated in response to other conditions, or is continuously operated. In some implementations, the device stores at least a portion of the sound input in memory (518). In some implementations, the memory is the buffer414of the audio subsystem226(FIG.4). The stored sound input allows non-real-time processing of the sound input by the device. For example, in some implementations, one or more of the sound detectors read and/or receive the stored sound input in order to process the stored sound input. This may be particularly useful where an upstream sound detector (e.g., the trigger sound detector406) is not initiated until part-way through receipt of a sound input by the audio subsystem226. In some implementations, the stored portion of the sound input is provided to the speech-based service once the speech-based service is initiated (520). Thus, the speech-based service can transcribe, process, or otherwise operate on the stored portion of the sound input even if the speech-based service is not fully operational until after that portion of sound input has been received.
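As a small worked example of sizing the second sound detector's active window to the duration of the predetermined content, the sketch below derives a window from the length of the trigger phrase. The per-word estimate and the padding term are assumptions; the description only requires that a longer phrase yield a longer window.

    def listening_window_seconds(trigger_phrase, seconds_per_word=0.4, padding=0.6):
        # Enough time for the phrase to be uttered, plus a little slack.
        return len(trigger_phrase.split()) * seconds_per_word + padding

    short_window = listening_window_seconds("Hey, SIRI")                                # about 1.4 s
    long_window = listening_window_seconds("Hey, SIRI, please wake up and help me out") # about 4.2 s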
In some implementations, the stored portion of the sound input is provided to an adaptation module of the electronic device. In various implementations, steps (516)-(520) are performed at different positions within the method500. For example, in some implementations, one or more of steps (516)-(520) are performed between steps (502) and (504), between steps (510) and (514), or at any other appropriate position. FIG.6illustrates a method600of operating a voice trigger system (e.g., the voice trigger system400,FIG.4), according to some implementations. In some implementations, the method600is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors (e.g., the electronic device104). The electronic device determines whether it is in a predetermined orientation (602). In some implementations, the electronic device detects its orientation using light sensors (including cameras), microphones, proximity sensors, magnetic sensors, accelerometers, gyroscopes, tilt sensors, and the like. For example, the electronic device determines whether it is resting face-down or face-up on a surface by comparing the amount or brightness of light incident on a sensor of a front-facing camera and the amount or brightness of light incident on a sensor of a rear-facing camera. If the amount and/or brightness detected by the front-facing camera is sufficiently greater than that detected by the rear-facing camera, the electronic device will determine that it is facing up. On the other hand, if the amount and/or brightness detected by the rear-facing camera is sufficiently greater than that of the front-facing camera, the device will determine that it is facing down. Upon a determination that the electronic device is in the predetermined orientation, the electronic device activates a predetermined mode of a voice trigger (604). In some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing down, and the predetermined mode is a standby mode (606). For example, in some implementations, if a smartphone or tablet is placed on a table or desk so that the screen is facing down, the voice trigger is placed in a standby mode (e.g., turned off) to prevent inadvertent activation of the voice trigger. On the other hand, in some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing up, and the predetermined mode is a listening mode (608). Thus, for example, if a smartphone or tablet is placed on a table or desk so that the screen is facing up, the voice trigger is placed in a listening mode so that it can respond to the user when it detects the trigger. FIG.7illustrates a method700of operating a voice trigger (e.g., the voice trigger system400,FIG.4), according to some implementations. In some implementations, the method700is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors (e.g., the electronic device104). The electronic device operates a voice trigger (e.g., the voice trigger system400) in a first mode (702). In some implementations, the first mode is a normal listening mode. The electronic device determines whether it is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded (704). 
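The face-up/face-down heuristic of method600can be summarized by the following sketch, which compares brightness readings from the front and rear camera sensors and picks a voice trigger mode. How the brightness values are obtained, and the margin used to call the comparison decisive, are assumptions made for the example.

    def choose_trigger_mode(front_brightness, rear_brightness, margin=1.5):
        # Returns 'listening', 'standby', or None when the orientation is ambiguous.
        if front_brightness > rear_brightness * margin:
            return "listening"   # screen substantially horizontal and facing up (step 608)
        if rear_brightness > front_brightness * margin:
            return "standby"     # screen substantially horizontal and facing down (step 606)
        return None              # not clearly face-up or face-down; leave the mode unchanged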
In some implementations, a substantially enclosed space includes a pocket, purse, bag, drawer, glovebox, briefcase, or the like. As described above, in some implementations, a device detects that a microphone is occluded by emitting one or more sounds (e.g., tones, clicks, pings, etc.) from a speaker or transducer, and monitoring one or more microphones or transducers to detect echoes of the emitted sound(s). For example, a relatively large environment (e.g., a room or a vehicle) will reflect the sound differently than a relatively small, substantially enclosed environment (e.g., a purse or pocket). Thus, if the device detects that the microphone (or the speaker that emitted the sounds) is occluded based on the echoes (or lack thereof), the device determines that it is in a substantially enclosed space. In some implementations, the device detects that a microphone is occluded by detecting that the microphone is picking up a sound characteristic of an enclosed space. For example, when a device is in a pocket, the microphone may detect a characteristic rustling noise due to the microphone coming into contact or close proximity with the fabric of the pocket. In some implementations, a device detects that a camera is occluded based on the level of light received by a sensor, or by determining whether it can achieve a focused image. For example, if a camera sensor detects a low level of light during a time when a high level of light would be expected (e.g., during daylight hours), then the device determines that the camera is occluded, and that the device is in a substantially enclosed space. As another example, the camera may attempt to achieve an in-focus image on its sensor. Usually, this will be difficult if the camera is in an extremely dark place (e.g., a pocket or backpack), or if it is too close to the object on which it is attempting to focus (e.g., the inside of a purse or backpack). Thus, if the camera is unable to achieve an in-focus image, the device determines that it is in a substantially enclosed space. Upon a determination that the electronic device is in a substantially enclosed space, the electronic device switches the voice trigger to a second mode (706). In some implementations, the second mode is a standby mode (708). In some implementations, when in the standby mode, the voice trigger system400will continue to monitor ambient audio, but will not respond to received sounds regardless of whether they would otherwise trigger the voice trigger system400. In some implementations, in the standby mode, the voice trigger system400is deactivated, and does not process audio to detect trigger sounds. In some implementations, the second mode includes operating one or more sound detectors of the voice trigger system400according to a different duty cycle than the first mode. In some implementations, the second mode includes operating a different combination of sound detectors than the first mode. In some implementations, the second mode corresponds to a more sensitive monitoring mode, so that the voice trigger system400can detect and respond to a trigger sound even though it is in a substantially enclosed space. In some implementations, once the voice trigger is switched to the second mode, the device periodically determines whether the electronic device is still in a substantially enclosed space by detecting whether one or more of a microphone and a camera of the electronic device is occluded (e.g., using any of the techniques described above with respect to step (704)).
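The occlusion cues of method700can be combined as in the sketch below: a weak echo response to an emitted test sound, low light at the camera when light would be expected, or an inability to focus all suggest a substantially enclosed space. The thresholds, the way the cues are combined, and the voice_trigger object with a writable mode attribute are assumptions made for the example.

    def in_enclosed_space(echo_energy, camera_lux, can_focus, daytime,
                          echo_floor=0.1, lux_floor=5.0):
        mic_occluded = echo_energy < echo_floor                          # little or no echo returned
        camera_occluded = (daytime and camera_lux < lux_floor) or not can_focus
        return mic_occluded or camera_occluded

    def update_trigger_mode(voice_trigger, enclosed):
        # Switch to the second (e.g., standby) mode while enclosed; return to the
        # first mode once the device is taken back out (step 706 and the periodic
        # re-check described above).
        voice_trigger.mode = "second" if enclosed else "first"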
If the device remains in a substantially enclosed space, the voice trigger system400will be kept in the second mode. In some implementations, if the device is removed from a substantially enclosed space, the electronic device will return the voice trigger to the first mode. In accordance with some implementations,FIG.8shows a functional block diagram of an electronic device800configured in accordance with the principles of the invention as described above. The functional blocks of the device may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. It is understood by persons of skill in the art that the functional blocks described inFIG.8may be combined or separated into sub-blocks to implement the principles of the invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein. As shown inFIG.8, the electronic device800includes a sound receiving unit802configured to receive sound input. The electronic device800also includes a processing unit806coupled to the sound receiving unit802. In some implementations, the processing unit806includes a noise detecting unit808, a sound type detecting unit810, a trigger sound detecting unit812, a service initiating unit814, and a voice authenticating unit816. In some implementations, the noise detecting unit808corresponds to the noise detector402, discussed above, and is configured to perform any operations described above with reference to the noise detector402. In some implementations, the sound type detecting unit810corresponds to the sound-type detector404, discussed above, and is configured to perform any operations described above with reference to the sound-type detector404. In some implementations, the trigger sound detecting unit812corresponds to the trigger sound detector406, discussed above, and is configured to perform any operations described above with reference to the trigger sound detector406. In some implementations, the voice authenticating unit816corresponds to the voice authentication module428, discussed above, and is configured to perform any operations described above with reference to the voice authentication module428. The processing unit806is configured to: determine whether at least a portion of the sound input corresponds to a predetermined type of sound (e.g., with the sound type detecting unit810); upon a determination that at least a portion of the sound input corresponds to the predetermined type, determine whether the sound input includes predetermined content (e.g., with the trigger sound detecting unit812); and upon a determination that the sound input includes the predetermined content, initiate a speech-based service (e.g., with the service initiating unit814). In some implementations, the processing unit806is also configured to, prior to determining whether the sound input corresponds to a predetermined type of sound, determine whether the sound input satisfies a predetermined condition (e.g., with the noise detecting unit808). In some implementations, the processing unit806is also configured to determine whether the sound input corresponds to a voice of a particular user (e.g., with the voice authenticating unit816). In accordance with some implementations,FIG.9shows a functional block diagram of an electronic device900configured in accordance with the principles of the invention as described above.
The functional blocks of the device may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. It is understood by persons of skill in the art that the functional blocks described inFIG.9may be combined or separated into sub-blocks to implement the principles of the invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein. As shown inFIG.9, the electronic device900includes a voice trigger unit902. The voice trigger unit902can be operated in various different modes. In a first mode, the voice trigger unit receives sound inputs and determines if they satisfy certain criteria (e.g., a listening mode). In a second mode, the voice trigger unit902does not receive and/or does not process sound inputs (e.g., a standby mode). The electronic device900also includes a processing unit906coupled to the voice trigger unit902. In some implementations, the processing unit906includes an environment detecting unit908, which may include and/or interface with one or more sensors (e.g., including a microphone, a camera, an accelerometer, a gyroscope, etc.) and a mode switching unit910. In some implementations, the processing unit906is configured to: determine whether the electronic device is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded (e.g., with the environment detecting unit908); and upon a determination that the electronic device is in a substantially enclosed space, switching the voice trigger to a second mode (e.g., with the mode switching unit910). In some implementations, the processing unit is configured to: determine whether the electronic device is in a predetermined orientation (e.g., with the environment detecting unit908); and upon a determination that the electronic device is in the predetermined orientation, activate a predetermined mode of a voice trigger (e.g., with the mode switching unit910). The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosed implementations to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles and practical applications of the disclosed ideas, to thereby enable others skilled in the art to best utilize them with various modifications as are suited to the particular use contemplated. It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first sound detector could be termed a second sound detector, and, similarly, a second sound detector could be termed a first sound detector, without changing the meaning of the description, so long as all occurrences of the “first sound detector” are renamed consistently and all occurrences of the “second sound detector” are renamed consistently. The first sound detector and the second sound detector are both sound detectors, but they are not the same sound detector. 
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “upon a determination that” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
129,355
11862187
DETAILED DESCRIPTION Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide an understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another. For example, a first electronic device could be termed a second electronic device, and, similarly, a second electronic device could be termed a first electronic device, without departing from the scope of the various described embodiments. The first electronic device and the second electronic device are both electronic devices, but they are not the same electronic device. The terminology used in the description of the various embodiments described herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context. A system is provided for extracting a sound source and frequency data from a mixed audio source. Many uses of audio content benefit from the ability to isolate one or more sound sources (e.g., a vocal track or instrumental source) from audio content. For example, isolating a sound source is used in karaoke applications and in lyric determination applications. Similarly, there are benefits to estimating a fundamental frequency of a single sound source from audio content. For example, it is possible to determine when other audio content (e.g., cover songs) is related to the audio content by matching the fundamental frequencies of the audio content.
Dependencies between the tasks of sound source isolation and frequency determination allow improved performance when the tasks are performed jointly. For example, jointly performed tasks are accomplished using a model that is trained to both isolate the sound source and determine one or more frequencies within the model. The weights of the model are trained to reflect the impact that the individual tasks have on each other. For example, instead of creating a model that is trained (e.g., optimized) to individually isolate sound source and a separate model that is trained (e.g., optimized) to individually determine frequencies, a joint model is created where these two tasks are optimized together. A neural network model is trained to simultaneously isolate a sound source and determine frequencies over time of the sound source. In some embodiments, the neural network model comprises an Artificial Neural Network (ANN). FIG.1is a block diagram illustrating a media content delivery system100, in accordance with some embodiments. The media content delivery system100includes one or more electronic devices102(e.g., electronic device102-1to electronic device102-m, where m is an integer greater than one), one or more media content servers104, and/or one or more content distribution networks (CDNs)106. The one or more media content servers104are associated with (e.g., at least partially compose) a media-providing service. The one or more CDNs106store and/or provide one or more content items (e.g., to electronic devices102). In some embodiments, the CDNs106are included in the media content servers104. One or more networks112communicably couple the components of the media content delivery system100. In some embodiments, the one or more networks112include public communication networks, private communication networks, or a combination of both public and private communication networks. For example, the one or more networks112can be any network (or combination of networks) such as the Internet, other wide area networks (WAN), local area networks (LAN), virtual private networks (VPN), metropolitan area networks (MAN), peer-to-peer networks, and/or ad-hoc connections. In some embodiments, an electronic device102is associated with one or more users. In some embodiments, an electronic device102is a personal computer, mobile electronic device, wearable computing device, laptop computer, tablet computer, mobile phone, feature phone, smart phone, digital media player, a speaker, television (TV), digital versatile disk (DVD) player, and/or any other electronic device capable of presenting media content (e.g., controlling playback of media items, such as music tracks, videos, etc.). Electronic devices102may connect to each other wirelessly and/or through a wired connection (e.g., directly through an interface, such as an HDMI interface). In some embodiments, an electronic device102is a headless client. In some embodiments, electronic devices102-1and102-mare the same type of device (e.g., electronic device102-1and electronic device102-mare both speakers). Alternatively, electronic device102-1and electronic device102-minclude two or more different types of devices. In some embodiments, electronic devices102-1and102-msend and receive media-control information through network(s)112. For example, electronic devices102-1and102-msend media control requests (e.g., requests to play music, movies, videos, or other media items, or playlists thereof) to media content server104through network(s)112. 
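To make the idea of a jointly optimized model concrete, the sketch below defines a single scalar objective that sums a source-separation term and a frequency-estimation term, so one set of weights is trained with respect to both tasks at once. The particular loss forms, the pitch-salience target, and the equal weighting are assumptions for illustration; the embodiments described here realize the model with neural networks (e.g., U-Nets).

    import numpy as np

    def separation_loss(predicted_source, target_source):
        # Mean squared error between predicted and reference source spectrograms.
        diff = np.asarray(predicted_source, dtype=float) - np.asarray(target_source, dtype=float)
        return float(np.mean(diff ** 2))

    def frequency_loss(predicted_pitch, target_pitch, eps=1e-9):
        # Binary cross-entropy over a time-frequency pitch-salience map.
        p = np.clip(np.asarray(predicted_pitch, dtype=float), eps, 1.0 - eps)
        t = np.asarray(target_pitch, dtype=float)
        return float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))

    def joint_loss(pred_source, tgt_source, pred_pitch, tgt_pitch,
                   w_source=0.5, w_pitch=0.5):
        # A single objective couples the two tasks, so gradients from one task
        # influence weights that also serve the other.
        return (w_source * separation_loss(pred_source, tgt_source)
                + w_pitch * frequency_loss(pred_pitch, tgt_pitch))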
Additionally, electronic devices102-1and102-m, in some embodiments, also send indications of media content items to media content server104through network(s)112. In some embodiments, the media content items are uploaded to electronic devices102-1and102-mbefore the electronic devices forward the media content items to media content server104. In some embodiments, electronic device102-1communicates directly with electronic device102-m(e.g., as illustrated by the dotted-line arrow), or any other electronic device102. As illustrated inFIG.1, electronic device102-1is able to communicate directly (e.g., through a wired connection and/or through a short-range wireless signal, such as those associated with personal-area-network (e.g., BLUETOOTH/BLE) communication technologies, radio-frequency-based near-field communication technologies, infrared communication technologies, etc.) with electronic device102-m. In some embodiments, electronic device102-1communicates with electronic device102-mthrough network(s)112. In some embodiments, electronic device102-1uses the direct connection with electronic device102-mto stream content (e.g., data for media items) for playback on the electronic device102-m. In some embodiments, electronic device102-1and/or electronic device102-minclude a media application222(FIG.2) that allows a respective user of the respective electronic device to upload (e.g., to media content server104), browse, request (e.g., for playback at the electronic device102), and/or present media content (e.g., control playback of music tracks, videos, etc.). In some embodiments, one or more media content items are stored locally by an electronic device102(e.g., in memory212of the electronic device102,FIG.2). In some embodiments, one or more media content items are received by an electronic device102in a data stream (e.g., from the CDN106and/or from the media content server104). The electronic device(s)102are capable of receiving media content (e.g., from the CDN106) and presenting the received media content. For example, electronic device102-1may be a component of a network-connected audio/video system (e.g., a home entertainment system, a radio/alarm clock with a digital display, or an infotainment system of a vehicle). In some embodiments, the CDN106sends media content to the electronic device(s)102. In some embodiments, the CDN106stores and provides media content (e.g., media content requested by the media application222of electronic device102) to electronic device102via the network(s)112. For example, content (also referred to herein as “media items,” “media content items,” and “content items”) is received, stored, and/or served by the CDN106. In some embodiments, content includes audio (e.g., music, spoken word, podcasts, etc.), video (e.g., short-form videos, music videos, television shows, movies, clips, previews, etc.), text (e.g., articles, blog posts, emails, etc.), image data (e.g., image files, photographs, drawings, renderings, etc.), games (e.g., 2- or 3-dimensional graphics-based computer games, etc.), or any combination of content types (e.g., web pages that include any combination of the foregoing types of content or other content not explicitly listed). In some embodiments, content includes one or more audio media items (also referred to herein as “audio items,” “tracks,” and/or “audio tracks”). In some embodiments, media content server104receives media requests (e.g., commands) from electronic devices102. 
In some embodiments, media content server104and/or CDN106stores one or more playlists (e.g., information indicating a set of media content items). For example, a playlist is a set of media content items defined by a user and/or defined by an editor associated with a media-providing service. The description of the media content server104as a “server” is intended as a functional description of the devices, systems, processor cores, and/or other components that provide the functionality attributed to the media content server104. It will be understood that the media content server104may be a single server computer, or may be multiple server computers. Moreover, the media content server104may be coupled to CDN106and/or other servers and/or server systems, or other devices, such as other client devices, databases, content delivery networks (e.g., peer-to-peer networks), network caches, and the like. In some embodiments, the media content server104is implemented by multiple computing devices working together to perform the actions of a server system (e.g., cloud computing). FIG.2is a block diagram illustrating an electronic device102(e.g., electronic device102-1and/or electronic device102-m,FIG.1), in accordance with some embodiments. The electronic device102includes one or more central processing units (CPU(s), i.e., processors or cores)202, one or more network (or other communications) interfaces210, memory212, and one or more communication buses214for interconnecting these components. The communication buses214optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. In some embodiments, the electronic device102includes a user interface204, including output device(s)206and/or input device(s)208. In some embodiments, the input devices208include a keyboard, mouse, or track pad. Alternatively, or in addition, in some embodiments, the user interface204includes a display device that includes a touch-sensitive surface, in which case the display device is a touch-sensitive display. In electronic devices that have a touch-sensitive display, a physical keyboard is optional (e.g., a soft keyboard may be displayed when keyboard entry is needed). In some embodiments, the output devices (e.g., output device(s)206) include a speaker252(e.g., speakerphone device) and/or an audio jack250(or other physical output connection port) for connecting to speakers, earphones, headphones, or other external listening devices. Furthermore, some electronic devices102use a microphone254and voice recognition device to supplement or replace the keyboard. Optionally, the electronic device102includes an audio input device (e.g., a microphone) to capture audio (e.g., speech from a user). Optionally, the electronic device102includes a location-detection device240, such as a global navigation satellite system (GNSS) (e.g., GPS (global positioning system), GLONASS, Galileo, BeiDou) or other geo-location receiver, and/or location-detection software for determining the location of the electronic device102(e.g., module for finding a position of the electronic device102using trilateration of measured signal strengths for nearby devices). In some embodiments, the one or more network interfaces210include wireless and/or wired interfaces for receiving data from and/or transmitting data to other electronic devices102, a media content server104, a CDN106, and/or other devices or systems. 
In some embodiments, data communications are carried out using any of a variety of custom or standard wireless protocols (e.g., NFC, RFID, IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth, ISA100.11a, WirelessHART, MiWi, etc.). Furthermore, in some embodiments, data communications are carried out using any of a variety of custom or standard wired protocols (e.g., USB, Firewire, Ethernet, etc.). For example, the one or more network interfaces210include a wireless interface260for enabling wireless data communications with other electronic devices102, and/or other wireless (e.g., Bluetooth-compatible) devices (e.g., for streaming audio data to the electronic device102of an automobile). Furthermore, in some embodiments, the wireless interface260(or a different communications interface of the one or more network interfaces210) enables data communications with other WLAN-compatible devices (e.g., electronic device(s)102) and/or the media content server104(via the one or more network(s)112,FIG.1). In some embodiments, electronic device102includes one or more sensors including, but not limited to, accelerometers, gyroscopes, compasses, magnetometers, light sensors, near field communication transceivers, barometers, humidity sensors, temperature sensors, proximity sensors, range finders, and/or other sensors/devices for sensing and measuring various environmental conditions. Memory212includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory212may optionally include one or more storage devices remotely located from the CPU(s)202. Memory212, or alternately, the non-volatile solid-state memory devices within memory212, includes a non-transitory computer-readable storage medium. In some embodiments, memory212or the non-transitory computer-readable storage medium of memory212stores the following programs, modules, and data structures, or a subset or superset thereof:
an operating system216that includes procedures for handling various basic system services and for performing hardware-dependent tasks;
network communication module(s)218for connecting the electronic device102to other computing devices (e.g., other electronic device(s)102, and/or media content server104) via the one or more network interface(s)210(wired or wireless) connected to one or more network(s)112;
a user interface module220that receives commands and/or inputs from a user via the user interface204(e.g., from the input devices208) and provides outputs for playback and/or display on the user interface204(e.g., the output devices206);
a media application222(e.g., an application for accessing a media-providing service of a media content provider associated with media content server104) for uploading, browsing, receiving, processing, presenting, and/or requesting playback of media (e.g., media items). In some embodiments, media application222includes a media player, a streaming media application, and/or any other appropriate application or component of an application. In some embodiments, media application222is used to monitor, store, and/or transmit (e.g., to media content server104) data associated with user behavior.
In some embodiments, media application222also includes the following modules (or sets of instructions), or a subset or superset thereof:
a media content selection module224for selecting one or more media content items and/or sending, to the media content server, an indication of the selected media content item(s);
a media content browsing module226for providing controls and/or user interfaces enabling a user to navigate, select for playback, and otherwise control or interact with media content, whether the media content is stored or played locally or remotely;
a content items module228for processing uploaded media items and storing media items for playback and/or for forwarding to the media content server;
a sound source determination module230for separating a sound source from mixture audio (e.g., that includes vocal and non-vocal portions); and
a frequency determination module232for tracking and/or determining one or more pitches (e.g., frequencies) of the mixture audio; and
other applications236, such as applications for word processing, calendaring, mapping, weather, stocks, time keeping, virtual digital assistant, presenting, number crunching (spreadsheets), drawing, instant messaging, e-mail, telephony, video conferencing, photo management, video management, a digital music player, a digital video player, 2D gaming, 3D (e.g., virtual reality) gaming, electronic book reader, and/or workout support.
FIG.3is a block diagram illustrating a media content server104, in accordance with some embodiments. The media content server104typically includes one or more central processing units/cores (CPUs)302, one or more network interfaces304, memory306, and one or more communication buses308for interconnecting these components. Memory306includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid-state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory306optionally includes one or more storage devices remotely located from one or more CPUs302. Memory306, or, alternatively, the non-volatile solid-state memory device(s) within memory306, includes a non-transitory computer-readable storage medium.
In some embodiments, memory306, or the non-transitory computer-readable storage medium of memory306, stores the following programs, modules and data structures, or a subset or superset thereof:
an operating system310that includes procedures for handling various basic system services and for performing hardware-dependent tasks;
a network communication module312that is used for connecting the media content server104to other computing devices via one or more network interfaces304(wired or wireless) connected to one or more networks112;
one or more server application modules314including, but not limited to, one or more of:
    a neural network module316for training and/or storing a neural network, the neural network module316including, but not limited to, one or more of:
        a training module318for training the neural network (e.g., using training data);
        a sound source determination module320for isolating a sound source from mixture audio (e.g., that includes vocal and non-vocal portions); and
        a frequency determination module326for determining frequency data associated with the isolated sound source;
    a media request processing module322for processing requests for media content and facilitating access to requested media items by electronic devices (e.g., the electronic device102) including, optionally, streaming media content to such devices;
one or more server data module(s)330for handling the storage of and/or access to media items and/or metadata relating to the media items; in some embodiments, the one or more server data module(s)330include:
    a media content database332for storing media items; and
    a metadata database334for storing metadata relating to the media items.
In some embodiments, the media content server104includes web or Hypertext Transfer Protocol (HTTP) servers, File Transfer Protocol (FTP) servers, as well as web pages and applications implemented using Common Gateway Interface (CGI) script, PHP Hyper-text Preprocessor (PHP), Active Server Pages (ASP), Hyper Text Markup Language (HTML), Extensible Markup Language (XML), Java, JavaScript, Asynchronous JavaScript and XML (AJAX), XHP, Javelin, Wireless Universal Resource File (WURFL), and the like. In some embodiments, the sound source determination module230and the frequency determination module232are jointly trained (e.g., within a common model, such as model509,FIG.5B). Each of the above identified modules stored in memory212and306corresponds to a set of instructions for performing a function described herein. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory212and306optionally store a subset or superset of the respective modules and data structures identified above. Furthermore, memory212and306optionally store additional modules and data structures not described above. In some embodiments, memory212stores a subset or superset of the respective modules and data structures described with regard to memory306. In some embodiments, memory306stores a subset or superset of the respective modules and data structures described with regard to memory212.
AlthoughFIG.3illustrates the media content server104in accordance with some embodiments,FIG.3is intended more as a functional description of the various features that may be present in one or more media content servers than as a structural schematic of the embodiments described herein. In practice, and as recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some items shown separately inFIG.3could be implemented on single servers and single items could be implemented by one or more servers. In some embodiments, media content database332and/or metadata database334are stored on devices (e.g., CDN106) that are accessed by media content server104. The actual number of servers used to implement the media content server104, and how features are allocated among them, will vary from one implementation to another and, optionally, depends in part on the amount of data traffic that the server system handles during peak usage periods as well as during average usage periods. FIGS.4,5A, and5Billustrate three different approaches for jointly performing sound source isolation and estimation of frequency data associated with the isolated sound source.FIG.4illustrates a first model400for determining a sound source representation and a frequency representation, where the frequency representation is determined using the sound source representation as an input.FIG.5Aillustrates a second model500for determining a frequency representation and a sound source representation, where the sound source representation is determined using the frequency representation and a mixture audio representation as inputs.FIG.5Billustrates a third model509for determining two sound source representations and two frequency representations. FIG.4is a block diagram illustrating a model400for a “Source to Pitch” approach to jointly determining a sound source representation403and a frequency representation405, in accordance with some embodiments. For example, model400receives mixture audio401, which is a representation of an audio item (e.g., a representation of mixed audio) that includes multiple portions (e.g., lead vocal, backup vocal, guitar, bass, piano, and drum portions). The model400separates (e.g., using a neural network) a first sound source portion (e.g., a vocal portion) from the audio item to generate a sound source representation403. The model400uses the separated sound source portion from the audio item to determine (e.g., using a neural network) frequencies that are present in the separated sound source portion. The model outputs the determined frequency data as frequency representation405. In some embodiments, the model400comprises two neural networks (e.g., each neural network comprises a U-Net). For example, model400includes neural network402and neural network404. In some embodiments, the system uses the neural network(s) to determine, from mixture audio401(e.g., a mixture of vocal and non-vocal content), a sound source representation403. For example, the sound source representation403includes a vocal track that has been separated from the non-vocal (e.g., instrumental) portions of the mixture audio401. In some embodiments, the mixture audio401is stored in media content database332. In some embodiments, the mixture audio is stored as content items228. In some embodiments, the neural network processes content as it is added to the database and/or in response to a request to process a particular media content item. 
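By way of illustration only, the following Python sketch shows one way the "Source to Pitch" arrangement of FIG.4 could be wired up: a first network maps the mixture spectrogram to a sound source representation, and a second network maps that representation to a frequency representation. The spectrogram shape, the tiny encoder/decoder standing in for each U-Net, and the activations are assumptions made for this sketch, not the architecture claimed in the text.

```python
# Minimal sketch of the "Source to Pitch" model of FIG. 4 (model 400), assuming
# spectrogram inputs of shape (freq_bins, time_frames, 1). The small encoder/decoder
# below stands in for the U-Nets described in the text; sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

def small_unet(name, out_channels=1):
    inp = layers.Input(shape=(64, 96, 1))
    e1 = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)  # encode (downsample)
    e2 = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(e1)
    d1 = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(e2)  # decode (upsample)
    d1 = layers.Concatenate()([d1, e1])                                            # U-Net style skip connection
    out = layers.Conv2DTranspose(out_channels, 3, strides=2, padding="same",
                                 activation="sigmoid")(d1)
    return Model(inp, out, name=name)

source_net = small_unet("source_net")   # plays the role of network 402: mixture -> source representation
pitch_net = small_unet("pitch_net")     # plays the role of network 404: source representation -> frequencies

mixture = layers.Input(shape=(64, 96, 1), name="mixture_audio")
source_rep = source_net(mixture)        # analogous to sound source representation 403
freq_rep = pitch_net(source_rep)        # analogous to frequency representation 405
source_to_pitch = Model(mixture, [source_rep, freq_rep], name="source_to_pitch")
source_to_pitch.summary()
```

In this arrangement the two networks could be trained separately, consistent with the text's note that network 402 is optimized apart from network 404.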
In some embodiments, the model400uses the output (e.g., sound source representation403) from the first neural network402as an input for a second neural network404. In some embodiments, the second neural network404determines a frequency representation405. In some embodiments, frequency representation405represents one or more pitches that are included in the mixture audio401. Examples of a sound source representation and a frequency representation are shown inFIG.6. In some embodiments, each neural network (e.g., neural network402and neural network404) in the model400performs decoding and encoding in a U-net. For example, decoding includes downsampling (e.g., by performing convolutions) the input to the neural network and encoding includes upsampling the downsampled result to generate the output of the neural network. In some embodiments, the model400first determines (e.g., using neural network402) sound source representation403from the mixture audio. For example, the model400separates source audio (e.g., a vocal track) from the mixture audio. In some embodiments, determining the source (e.g., vocals) is performed separately from determining frequencies. For example, the neural network402is trained separately from the neural network404. The second neural network404receives, as an input, the output of the first neural network402. For example, the sound source representation403output by the first neural network is fed to the second neural network. In some embodiments, the neural networks (e.g., the weights for each neural network) are trained separately. For example, the optimization for network402is performed separately from the optimization for network404. FIG.5Ais a block diagram illustrating a model500for a “Pitch to Source” approach to jointly determining a frequency representation503and a sound source representation505. In some embodiments, model500includes two neural networks: a first neural network502and a second neural network504. In some embodiments, the two neural networks are optimized jointly (e.g., together). For example, the weights (e.g., and outputs) for each neural network are calculated and/or updated simultaneously (e.g., during training of the neural network). In some embodiments, the first neural network502receives mixture audio501(e.g., a representation of mixture audio501, such as FFT602(FIG.6)) as an input. For example, mixture audio501includes vocal and non-vocal portions. The first neural network outputs a frequency (e.g., pitch) representation503of the mixture audio501. In some embodiments, the frequency representation503is fed (e.g., as an input) to the second neural network504. In some embodiments, mixture audio501is also fed (e.g., as an input) to the second neural network504. For example, frequency representation503and mixture audio501are provided over separate channels as inputs to the second neural network504. The second neural network504uses the frequency representation503input and the mixture audio501input to generate (e.g., and output) a sound source representation505. As explained above, the weights of neural network504are trained simultaneously with the weights of neural network502. In some embodiments, frequency representation503represents one or more pitches that are present in the mixture audio501and sound source representation505represents sound sources that have been separated from the mixture audio (e.g., vocals that have been extracted from mixture audio501). 
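A corresponding sketch of the "Pitch to Source" arrangement of FIG.5A is shown below. The frequency representation and the mixture are supplied to the second network over separate input channels, and a single compile step optimizes both networks jointly. Layer sizes, activations, and the spectrogram shape are assumptions for illustration.

```python
# Minimal sketch of the "Pitch to Source" model of FIG. 5A (model 500), assuming
# (64, 96, 1) spectrogram inputs. The frequency representation and the mixture are
# stacked as two channels for the second network; one optimizer updates both networks.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

mixture = layers.Input(shape=(64, 96, 1), name="mixture_audio")            # 501

# First network (502): mixture -> frequency (pitch) representation 503.
f = conv_block(mixture, 16)
freq_rep = layers.Conv2D(1, 1, activation="sigmoid", name="frequency_rep")(f)

# Second network (504): [frequency rep, mixture] over separate channels -> source rep 505.
stacked = layers.Concatenate(axis=-1)([freq_rep, mixture])
s = conv_block(stacked, 16)
source_rep = layers.Conv2D(1, 1, activation="sigmoid", name="source_rep")(s)

model_500 = Model(mixture, [freq_rep, source_rep], name="pitch_to_source")
model_500.compile(optimizer="adam", loss="mse")  # weights of both networks are updated together
```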
FIG.5Bis a block diagram illustrating a model509for a “Source to Pitch to Source to Pitch” approach to jointly determining a frequency representation and a sound source representation. The model509includes two iterations of sound source and frequency determinations. In some embodiments, the first iteration (e.g., including neural network511and neural network513) uses the mixed audio to calculate a pitch output (e.g., first frequency representation514) and a sound source separation output (e.g., first sound source representation512). For example, the first iteration first performs a separation using neural network511to extract first sound source representation512. Then, pitch tracking is performed using neural network513on the first sound source representation512(e.g., as an input to neural network513). In some embodiments, the second iteration performs, using neural network515, a second sound source separation to output second sound source representation516. For example, the second iteration uses the already determined first sound source representation512as an input to the neural network515. The separated sound sources (e.g., first sound source representation512) are further refined using neural network515to generate a cleaner version of separated sound sources (e.g., second sound source representation516). In some embodiments, the first sound source representation includes noise from the mixture audio (e.g., the first sound source representation is not a completely clean version of the separated sound source track), and the second sound source representation is generated by removing at least a portion of the noise in the first sound source representation. For example, the second sound source representation is a cleaner version of the first sound source representation. In some embodiments, the neural network515uses the first sound source representation512and the first frequency representation514as inputs to generate (e.g., output) the second sound source representation516. In some embodiments, the second sound source representation516is fed as an input to neural network517and a second frequency representation518is output. In some embodiments, the second sound source representation516and the second frequency representation518are cleaner versions of a separated sound source and pitch tracking, respectively, than the first sound source representation512and the first frequency representation514. In some embodiments, the neural networks511,513,515, and517are simultaneously (e.g., jointly) optimized. For example, each neural network includes a set of weights. The set of weights for the neural networks are jointly determined during training of the model509. In some embodiments, the weights for each neural network are distinct. The neural network511is optimized to output a first sound source representation512that the model will also use for pitch tracking (to determine first frequency representation514) and that will be used for the second iteration (e.g., to generate the second sound source representation and the second frequency representation). By training the model509(e.g., the plurality of neural networks in the model) simultaneously, the outputs of model509(e.g., second sound source representation516and second frequency representation518) are optimized. 
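The two-iteration cascade of FIG.5B can be sketched the same way: four networks in one model, so a single optimizer determines all four sets of weights jointly. The tiny convolutional stand-ins, shapes, and loss below are illustrative assumptions.

```python
# Sketch of the "Source to Pitch to Source to Pitch" cascade of FIG. 5B (model 509).
# Each net() is a tiny stand-in for one of the U-Nets described in the text.
import tensorflow as tf
from tensorflow.keras import layers, Model

def net(name, in_ch):
    inp = layers.Input(shape=(64, 96, in_ch))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inp, out, name=name)

source_net_1 = net("source_net_511", in_ch=1)  # mixture -> first sound source rep 512
pitch_net_1 = net("pitch_net_513", in_ch=1)    # first source rep -> first frequency rep 514
source_net_2 = net("source_net_515", in_ch=2)  # [source rep 512, freq rep 514] -> cleaner source rep 516
pitch_net_2 = net("pitch_net_517", in_ch=1)    # second source rep -> second frequency rep 518

mixture = layers.Input(shape=(64, 96, 1), name="mixture_audio_510")
src1 = source_net_1(mixture)
pit1 = pitch_net_1(src1)
src2 = source_net_2(layers.Concatenate(axis=-1)([src1, pit1]))  # two inputs over separate channels
pit2 = pitch_net_2(src2)

model_509 = Model(mixture, [src1, pit1, src2, pit2])
model_509.compile(optimizer="adam", loss="mse")  # joint optimization of all four networks
```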
For example, joint learning optimizes both source separation (e.g., to generate sound source representations) and pitch tracking (e.g., to generate frequency representations) because information about the pitch and sound sources are learned at the same time, and this information can be used to generate better (e.g., more accurate) sound source representation(s) and/or frequency representation(s). In some embodiments, each of the representations (e.g., sound source representations and frequency representations) corresponds to a matrix (e.g., that can be illustrated by a fast Fourier transform diagram, as described with reference toFIG.6). In some embodiments, network515receives the matrices (e.g., over different channels) as two separate inputs. For example, neural network515receive a matrix representing first sound source representation512over a first channel and receives a matrix representing first frequency representation514over a second channel. In some embodiments, more than two iterations are performed. For example, a third sound source representation and/or a third frequency representation are determined using additional neural networks. In some embodiments, the order of the neural networks is changed. For example, a first frequency representation514is used as an input for a neural network determining a first sound source representation512(e.g., determining, using a first neural network, a frequency representation before determining, using a second neural network, a sound source representation). In some embodiments, the model509(e.g., and/or model(s)400or500) is repeatedly retrained with additional data. For example, a first training set of data is used to train model509. Mixture audio510is then classified (e.g., to determine second sound source representation516and second frequency representation518) using the trained model509. In some embodiments, the model509is retrained (e.g., to adjust the weights of the neural networks in the model) using a second training set of data. In some embodiments, the second training set of data comprises data provided by a user. For example, a user determined (e.g., by the electronic device) to have good pitch control (e.g., based on prior data and/or performances by the user) sings (e.g., while performing karaoke) an audio content item. The frequencies of the user's voice are recorded and stored (e.g., by the electronic device102and/or server system104) as frequency data associated with the audio content item. The stored frequency data is used in the second training set of data (e.g., to update the weights of the neural network). FIG.6illustrates representations of a media content item. In some embodiments, a media content item600is represented by a mixture audio matrix (“Ym”). For example, the mixture audio matrix is transformed into a fast Fourier transform (FFT) spectrogram (e.g., mixture audio representation602). The mixture audio representation602represents, over a period of time, a distribution of frequencies and amplitudes of audio signals for the mixture audio (e.g., including vocal and non-vocal sources). In some embodiments, the non-vocal sources comprise instruments. The vocal representation604(“YV”) is generated, from mixture audio representation602, by separating audio, from the mixture audio, that corresponds to a vocal source. The separated audio that corresponds to a vocal source is illustrated by an FFT spectrogram shown in vocal representation604. 
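The kind of mixture-audio matrix ("Ym") illustrated in FIG.6 can be produced with a short-time Fourier transform. The sketch below uses librosa with common default-style parameters; the file name, FFT size, and hop length are assumptions, not values taken from the text.

```python
# Sketch of turning a mixture waveform into a magnitude spectrogram matrix like Ym in FIG. 6.
import numpy as np
import librosa

waveform, sr = librosa.load("mixture.wav", sr=16000, mono=True)  # hypothetical input file
stft = librosa.stft(waveform, n_fft=1024, hop_length=256)        # complex STFT over windowed segments
mixture_rep = np.abs(stft)                                       # magnitude spectrogram: (freq_bins, time_frames)
print(mixture_rep.shape)
```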
In some embodiments, frequency representation606is generated from mixture audio representation602. In some embodiments, frequency representation606corresponds to pitches of vocal sources represented in vocal representation604. For example, frequency representation606provides a likelihood that a particular frequency (or frequencies) is dominant at respective points in time. Frequency representation606illustrates an amplitude and/or volume of pitch values over time. To represent a plurality of frequencies in the frequency representation, within the matrix for the frequency representation606, for a respective time, more than one value in the matrix is greater than zero. In some embodiments, vocal representation604and frequency representation606are generated using model500(FIG.5A) or model509(FIG.5B). For example, vocal representation604corresponds to second sound source representation516and frequency representation606corresponds to second frequency representation518, as generated using model509. In some embodiments, frequency representation606includes a plurality of dominant frequencies (e.g., each dominant frequency corresponding to a distinct vocal source). For example, mixture audio600includes a plurality of distinct vocal sources (e.g., multiple vocalists). Frequency representation606illustrates at least3distinct sources of the pitches. FIGS.7A-7Bare flow diagrams illustrating a method700for identifying a first sequence of characters based on a generated probability matrix, in accordance with some embodiments. Method700may be performed (702) at a first electronic device (e.g., server104and/or electronic device102-1, the electronic device having one or more processors and memory storing instructions for execution by the one or more processors. In some embodiments, the method700is performed by executing instructions stored in the memory (e.g., memory306,FIG.3and/or memory212,FIG.2) of the electronic device. In some embodiments, the method700is performed by a combination of the server system (e.g., including media content server104and CDN106) and an electronic device (e.g., a client device). In some embodiments, the server system provides tracks (e.g., media items) for playback to the electronic device(s)102of the media content delivery system100. Referring now toFIG.7A, in performing the method700, the electronic device receives (704) a first audio content item that includes a plurality of sound sources. In some embodiments, the plurality of sound sources includes one or more vocal sources and/or one or more instrumental sources. The electronic device generates (706) a representation (e.g., a magnitude spectrogram) of the first audio content item. For example, as shown inFIG.6, the representation of mixture audio (Ym)602illustrates a magnitude spectrogram of the first audio content item600. In some embodiments, the representation of the first audio content item is generated by an optical spectrometer, a bank of band-pass filters, by Fourier transform, or by a wavelet transform. The electronic device determines (708), from the representation of the first audio content item, a representation of an isolated sound source, and frequency data associated with the isolated sound source. In some embodiments, the isolated sound source is a sound source of the plurality of sound sources included in the first audio content item. For example, as shown inFIG.6, a representation of the isolated sound source (e.g., vocals) is represented by Yv604. The frequency data is represented by Sv606. 
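Because the frequency representation is a matrix in which more than one entry per time step may be greater than zero, reading dominant pitches out of it amounts to thresholding each column. The sketch below assumes a likelihood matrix of shape (pitch bins, time frames); the threshold and the bin-to-Hz mapping are illustrative assumptions.

```python
# Sketch of extracting dominant pitches per frame from a frequency representation like Sv (FIG. 6).
import numpy as np

def dominant_pitches(freq_rep, bin_frequencies_hz, threshold=0.5):
    """Return, per time frame, the frequencies whose likelihood exceeds the threshold."""
    pitches = []
    for t in range(freq_rep.shape[1]):
        active = np.where(freq_rep[:, t] > threshold)[0]      # several bins may be active at once
        pitches.append(bin_frequencies_hz[active].tolist())   # e.g., multiple simultaneous vocalists
    return pitches

rng = np.random.default_rng(0)
freq_rep = rng.random((72, 10))                    # random stand-in for a model output: 72 bins x 10 frames
bin_freqs = 55.0 * 2 ** (np.arange(72) / 12.0)     # assumed semitone-spaced bins starting at A1
print(dominant_pitches(freq_rep, bin_freqs)[:2])
```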
The determining includes using a neural network to jointly determine the representation of the isolated sound source and the frequency data associated with the isolated sound source. For example, as shown inFIGS.5A-5B, models500and509include one or more neural networks used to determine the one or more sound source representations and the one or more frequency representations. In some embodiments, the isolated sound source comprises (710) a vocal source. For example, the electronic device separates a vocal track from the mixed audio item. In some embodiments, the isolated sound source comprises (712) an instrumental (e.g., a non-vocal, drums, guitar, bass, etc.) source. For example, the electronic device separates an instrumental source from a vocal source of the mixture audio. In some embodiments, the neural network comprises (714) a plurality of U-nets. For example, as shown inFIGS.4,5A and5B, each neural network corresponds to a U-net, including encoding and decoding stages. In some embodiments, the neural network comprises (716) a first source network, a first pitch network, a second source network, and a second pitch network. The second source network is fed a concatenation of an output of the first source network with an output of the first pitch network, and the output of the second source network is fed to the second pitch network. For example, model509shown inFIG.5Billustrates that the first source network511outputs first sound source representation512(e.g., the output of the first source network). The first pitch network513outputs first frequency representation514. These outputs (e.g., first sound source representation512and first frequency representation514) are fed as inputs to the second source network515. The output of the second source network515(e.g., second sound source representation516) is fed to the second pitch network517to generate the second frequency representation518. In some embodiments, the neural network comprises (718) a pitch network and a source network, and an output of the pitch network is fed to the source network. For example,FIG.5Aillustrates a neural network model500having a first (e.g., pitch) network502and a second (e.g., source) network504that is fed an output (e.g., frequency representation503) from the first source network. In some embodiments, the first electronic device determines (720) that a portion of a second audio content item matches the first audio content item by determining frequency data associated with (e.g., for) a representation of the second audio content item and comparing the frequency data associated with (e.g., of) the second audio content item with the frequency data of the first audio content item. For example, the first electronic device receives a second audio content item (e.g., distinct from mixture audio510), and uses the model509to determine one or more frequency representations (e.g., and/or one or more sound source representations) for the second audio content item. In some embodiments, the second audio content item (e.g., and/or third audio content item) is received from content items228or media content database332. In some embodiments, the second audio content item is provided by a user (e.g., uploaded to the electronic device). The electronic device compares the frequency representation(s) determined for the second audio content item with the frequency representation(s) determined for the first audio content item. 
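One simple way to compare the frequency data of two content items, as described above, is to compare their per-frame dominant pitches over a window and count how often they agree. The tolerance and decision threshold in the sketch below are illustrative assumptions rather than values from the text.

```python
# Sketch of comparing frequency data from two audio content items (e.g., steps 720/722).
import numpy as np

def pitch_match_score(freqs_a_hz, freqs_b_hz, tolerance_semitones=1.0):
    """freqs_*_hz: per-frame dominant pitch in Hz (0 for unvoiced frames), equal length."""
    a = np.asarray(freqs_a_hz, dtype=float)
    b = np.asarray(freqs_b_hz, dtype=float)
    voiced = (a > 0) & (b > 0)
    if not voiced.any():
        return 0.0
    semitone_diff = 12.0 * np.abs(np.log2(a[voiced] / b[voiced]))
    return float(np.mean(semitone_diff <= tolerance_semitones))

score = pitch_match_score([220.0, 247.0, 0.0, 262.0], [222.0, 245.0, 0.0, 330.0])
is_match = score >= 0.6   # assumed decision threshold over the predefined time period
print(score, is_match)
```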
For example, two media content items are identified as matching when the items share one or more pitches (e.g., over a predefined time period). Without matching a vocal representation, instrumental cover songs (e.g., a cello playing a song without singing lyrics) are identified as matching the original song that also included vocals (e.g., instead of purely instrumentals). In some embodiments, the first electronic device determines (722) that a portion of a third audio content item matches the first audio content item by determining a representation of the isolated sound source for the third audio content item and comparing the representation of the isolated sound source for the third audio content item with the representation of the isolated sound source of the first audio content item. For example, the first electronic device receives a third audio content item (e.g., distinct from mixture audio510), and uses the model509to determine one or more sound source representations and one or more frequency representations for the third audio content item. The electronic device compares the sound source representation(s) and the frequency representation(s) determined for the third audio content item with the sound source representation(s) and frequency representation(s) determined for the first audio content item. The electronic device determines that the first audio content item and the third audio content item are related in accordance with a determination that at least a portion of the sound source representation(s) of the first and third audio content items match and/or at least a portion of the frequency representation(s) of the first and third audio content items match, enabling the electronic device to identify the third audio content item as a cover song that includes a different sound source (e.g., a different artist than the first audio content item). In some embodiments, the electronic device determines (e.g., classifies) the first audio content item corresponds to a particular genre based on the sound source representation and/or frequency representation. In some embodiments, the electronic device aligns the frequency representation (e.g., Sv,FIG.6) with playback of the first audio content item. For example, the electronic device displays a pitch tracking tool to provide a user with pitch information for respective portions of the audio content item. The pitch tracking tool enables a user to sing along (e.g., in a karaoke setting) with playback of the first audio content item and receive feedback on how the user's vocal input compares with the determined frequencies (e.g., pitches) of the first audio content item (e.g., as determined by the neural network). For example, the frequency representation (as determined by the neural network) corresponds to a target pitch that the user should attempt to match while singing along. In some embodiments, generating the representation of the first audio content item comprises determining a first set of weights for a source network of a source to pitch network, feeding a pitch network of the source to pitch network an output of the source network of the source to pitch network, and determining a second set of weights for the pitch network of the source to pitch network. For example, a “Source to Pitch” network is shown in model400inFIG.4. The output of the source network of the source to pitch network (e.g., network402) is used as an input to the pitch network of the source to pitch network (e.g., network404). 
In some embodiments, the source network is the same as the first source network. In some embodiments, the neural network model is trained (e.g., before determining the representation of the isolated sound source and frequency data). For example, training the neural net includes generating a first set of weights corresponding to the isolated sound source, generating a second set of weights corresponding to the frequency data, and using the first set of weights and the second set of weights as input to a second source representation model. In some embodiments, the first set of weights, second set of weights, third set of weights, and fourth set of weights are determined concurrently. In some embodiments, the sets of weights are optimized. In some embodiments, the neural network is retrained using additional (e.g., different) training data. AlthoughFIGS.7A-7Billustrate a number of logical stages in a particular order, stages which are not order dependent may be reordered and other stages may be combined or broken out. Some reordering or other groupings not specifically mentioned will be apparent to those of ordinary skill in the art, so the ordering and groupings presented herein are not exhaustive. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.
11862188
DETAILED DESCRIPTION As noted above, our method for cough detection makes use of an audio feature set (or representation) can be described as a multidimensional vector or embedding, e.g., a 512 or 1024 dimensional vector, which in some sense represents non-semantic, paralinguistic representation of speech.FIG.1shows the manner in which this feature set is obtained. In particular, a speech data set consisting of a plurality of speech audio clips is obtained, for example the AudioSet mentioned previously. A self-supervised triplet loss model may be trained in a self-supervised manner on this speech set and configured to generate an audio feature set14(multidimensional vector, e.g., vector of dimension 512 or 1024), which is a general representation of non-semantic, paralinguistic speech. As noted above, one possible example of this collection of speech samples10is known as AudioSet. Additional, and/or alternative sets of speech samples may be used, and could include tens of thousands or more speech samples from a plurality of people of different ages and speaking different languages, or all the same language, e.g., English. Once the feature set14is obtained as perFIG.1, it is then used in a cough detection process or methodology which is outlined inFIG.2.FIGS.6and7provide more details on specific embodiments of the methodology ofFIG.2. Referring toFIG.2, our method provides for detecting a cough in an audio stream20. This audio stream20will typically be in the form of a digital sound recording, e.g., captured by the microphone of a device such as a smartphone, or intelligent home assistant, personal computer, etc. This audio stream is provided to a computer system which includes executable code stored in memory that performs certain processing steps, indicated at blocks22,26,30and34. In particular, at block22there is a pre-processing step performed. Basically, this step converts the audio stream20into an input audio sequence in the form of a plurality of time-separated audio segments, e.g., segments of 1 second duration, possibly with some overlap between the segments. The pre-processing step can include sub-steps such as computing a frequency spectrum for the audio segments, providing Mel-spectrum scaling or conversion to Mel-spectrographs (described below) or other steps depending on the implementation. The result of the pre-processing step is the plurality of time separated segments24, e.g., N such segments, with the value of N being dependent on the length or duration of the audio stream. N can vary from 1 to a thousand, 10,000 or even more, for example where the duration of the audio stream is on the order of hours or even days. At step26, there is a step of generating an embedding for each of the segments of the input audio sequence using the audio feature set learned in a self-supervised triplet loss manner from a plurality of speech audio clips from a speech dataset (i.e., the feature set14ofFIG.1). The manner of generating this embedding is described inFIGS.6and7and described in more detail below. Generally speaking, a TRILL embedding model is applied to input segments and the result is a matrix of embeddings281 . . . N, e.g., each of dimension 512 or 1024, where N is the number of time-separated audio segments as explained above. Non-semantic aspects of the speech signal (e.g., speaker identity, language, and emotional state) generally change more slowly than the phonetic and lexical aspects that are used to convey meaning. 
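The pre-processing and embedding steps (blocks 22 and 26) can be sketched as framing the recording into overlapping one-second segments and running each segment through an embedding model. The TF Hub handle and call signature shown below refer to one publicly released TRILL module and are assumptions that may differ from the model actually used; the overlap value is likewise illustrative.

```python
# Sketch of step 22 (framing) and step 26 (one embedding per segment).
import numpy as np
import tensorflow_hub as hub

def frame_audio(waveform, sample_rate=16000, segment_s=1.0, overlap_s=0.2):
    """Split a mono waveform into overlapping 1-second segments (step 22)."""
    seg = int(segment_s * sample_rate)
    hop = int((segment_s - overlap_s) * sample_rate)
    starts = range(0, max(len(waveform) - seg, 0) + 1, hop)
    return np.stack([waveform[s:s + seg] for s in starts])  # shape (N, seg)

segments = frame_audio(np.random.randn(16000 * 5).astype(np.float32))  # 5 s of dummy audio

# Assumed TRILL module handle and signature; the real deployment may differ.
trill = hub.load("https://tfhub.dev/google/nonsemantic-speech-benchmark/trill/3")
embeddings = np.stack([
    np.mean(trill(samples=s, sample_rate=16000)["embedding"].numpy(), axis=0)
    for s in segments
])  # one fixed-length embedding (e.g., 512-d) per segment
print(segments.shape, embeddings.shape)
```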
Therefore, a good representation for non-semantic downstream tasks may be expected to be considerably more stable in time. To take advantage of this intuition, temporal proximity may be utilized as a self-supervision signal. More formally, consider a large, unlabeled speech collection represented as a sequence of spectrogram context windows X = x_1, x_2, . . . , x_N, where each x_i ∈ ℝ^(F×T). A map g may be learned, g: ℝ^(F×T) → ℝ^d, from spectrogram context windows to d-dimensional space such that ∥g(x_i) − g(x_j)∥ ≤ ∥g(x_i) − g(x_k)∥ when |i−j| ≤ |i−k|. Such a relationship may be expressed as a learning objective using triplet loss based metric learning as follows. First, a large collection of example triplets of the form z = (x_i, x_j, x_k) (the so-called anchor, positive, and negative examples) may be sampled from X, where |i−j| ≤ τ and |i−k| > τ for some suitably chosen time scale τ. The loss incurred by each triplet may be determined as:

$$\mathcal{L}(z) = \sum_{i=1}^{N} \Big[ \lVert g(x_i) - g(x_j) \rVert_2^2 - \lVert g(x_i) - g(x_k) \rVert_2^2 + \delta \Big]_+ \qquad \text{(Eqn. 1)}$$

where ∥·∥₂² is the squared L2 norm, [·]₊ is a standard hinge loss, and δ is a nonnegative margin hyperparameter. The standard within-batch, semi-hard negative mining technique may be applied. The TRILL model may be trained on the subset of AudioSet training set clips possessing the speech label. The time scale τ may be set to 10 seconds, the maximum duration of each AudioSet clip. This makes the training task primarily a same-clip/different-clip discrimination. Also, for example, (i) log Mel spectrogram context windows with F=64 Mel bands and T=96 frames representing 0.96 seconds of input audio (STFT computed with 25 ms windows with step 10 ms) may be taken as input; and (ii) a variant of the standard ResNet-50 architecture followed by a d=512 dimensional embedding layer may be employed. Since the ResNet's final average pooling operation may destroy the sub-second temporal structure, representations defined by earlier convolutional blocks may be additionally considered. Once these embeddings 28 are obtained, they are supplied to a cough detection inference model (e.g., fully connected layers of a neural network trained to recognize coughs), which then generates a probability Pi (cough) for each of the i=1 . . . N audio segments, indicated at 32. At step 34, these cough probabilities, along with other information, are used to generate cough metrics for the N audio segments, which describe things such as the duration of a cough episode, the type of cough, and a characterization of the cough. The cough metrics can consist of metrics for each particular cough that was detected, as well as metrics for cough episodes, e.g., discrete time periods where a person is coughing at some minimum rate. In one embodiment of the method, the method of detecting coughs of FIG. 2 takes into consideration the possibility that it is desirable to only analyze coughs of a particular individual, and thus to be able to detect that a cough came from a particular individual (referred to as the "user" here), for example where the audio stream is a recording of sounds in an environment in which there is more than one person present and the purpose of the cough detection is to detect coughs (and perhaps classify or characterize the coughs) of a particular person, here the user, and to disregard other coughs or coughing sounds from other persons who may happen to be present while the recording is made. A cough identification enrollment 40 and verification 42 process shown in FIG. 3 is used in this situation.
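Eqn. 1 above can be read directly as code: a hinge (triplet) loss over anchor, positive, and negative embeddings. The margin value in the sketch below is an illustrative assumption, and semi-hard negative mining is omitted for brevity.

```python
# Direct reading of Eqn. 1 as a triplet hinge loss in TensorFlow.
import tensorflow as tf

def triplet_hinge_loss(anchor, positive, negative, margin=0.1):
    """anchor/positive/negative: (batch, d) embeddings g(x_i), g(x_j), g(x_k)."""
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)   # ||g(x_i) - g(x_j)||_2^2
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)   # ||g(x_i) - g(x_k)||_2^2
    return tf.reduce_sum(tf.maximum(d_pos - d_neg + margin, 0.0))  # hinge [.]_+ summed over the batch

a = tf.random.normal((8, 512))
p = a + 0.01 * tf.random.normal((8, 512))  # temporally close context windows (positives)
n = tf.random.normal((8, 512))             # context windows from farther away (negatives)
print(float(triplet_hinge_loss(a, p, n)))
```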
The verification process42assumes that there is a known user that has been enrolled in some form of procedural calibration where they are instructed to cough a few times. The enrollment process40results in an “anchor” TRILL embedding cluster which serves as the basis for determining whether future coughs originated from the user or some other source. The theory behind the procedure ofFIG.3works due to the assumption that coughs from the same person sound more similar than coughs from different people. Since TRILL embeddings summarize sound properties, it is also assumed that TRILL cough embeddings from the same person are more similar to each other than TRILL cough embeddings from different people. The similarity metric section below summarizes how the similarity of two embeddings can be measured. Much of the acoustic properties of a cough are specific to an individual's unique vocal chords. In fact, prior research shows that the last ˜100 ms of a cough, often called the ‘voiced region’ is unique to an individual while the ‘explosive region’ at the cough onset is less unique to a person. While the procedure ofFIG.3describes performing cough-id verification using TRILL embeddings, the task can be done fairly intuitively by simply looking at side-by-side audio spectrograms of a cough from the same person (FIG.8A) and different people (FIG.8B). In these spectrograms, the x axis represents time and they axis represents frequency (from low to high). The spectrograms ofFIGS.8A and8Bare known as “Mel spectrograms”, which are known methods in signal and acoustic processing for representing a sound signal. To create such spectrograms, a digitally represented audio signal is mapped from the time domain to the frequency domain using the fast Fourier transform; this is performed on overlapping windowed segments of the audio signal. The y-axis (frequency) is converted to a log scale and the color dimension (amplitude) to decibels to form the spectrogram. The y-axis (frequency) is mapped onto the Mel scale to form the Mel spectrogram. The Mel scale is a perceptual scale of pitches judged by listeners to be equal in distance from one another. The reference point between this scale and normal frequency measurement is defined by assigning a perceptual pitch of 1000 Mels to a 1000 Hz tone, 40 dB above the listener's threshold. Above about 500 Hz, increasingly large intervals are judged by listeners to produce equal pitch increments. As a result, four octaves on the hertz scale above 500 Hz are judged to comprise about two octaves on the Mel scale. The voiced region of the cough is not always visible, but when it is it shows as a stack of horizontal ‘bars’ in the upper frequencies near the cough offset. Because this region is based on vocal cord resonance properties it is typically the case that this pattern is similar for all of an individual's coughs regardless of the volume or duration or cause of the cough. As mentioned above, the procedure ofFIG.3includes an enrollment process40and a verification process42. The initial calibration or “enrollment” process40includes a step in which the user is instructed to generate an audio stream50in order to conduct a calibration procedure. In this “enrollment” audio stream50, the user is instructed to cough n times, and the coughs are recorded, e.g., in the smartphone using the audio recording app. n is typically a value between 5 and 10. At step52a TRILL embedding for each detected cough is generated using the audio feature set (see step26ofFIGS.2and7). 
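The Mel spectrograms of FIGS.8A-8B can be produced with standard signal-processing tooling: an STFT on overlapping windows, magnitude mapped onto the Mel scale, and amplitude converted to decibels. The window sizes and file name in the sketch below are common defaults and assumptions rather than values taken from the text.

```python
# Sketch of building a Mel spectrogram like those shown in FIGS. 8A-8B.
import numpy as np
import librosa

waveform, sr = librosa.load("cough.wav", sr=16000, mono=True)     # hypothetical cough recording
mel_power = librosa.feature.melspectrogram(
    y=waveform, sr=sr, n_fft=400, hop_length=160, n_mels=64)       # Mel-scaled magnitude spectrum
mel_db = librosa.power_to_db(mel_power, ref=np.max)                # amplitude converted to decibels
print(mel_db.shape)   # (n_mels, time_frames): frequency on the y axis, time on the x axis
```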
At step 54, a similarity or distance is determined between each pairwise combination of the n coughs. This results in "n choose 2" distances (one per pair of coughs); we call this set the intra-enrollment distances. At step 56, standard statistics are computed from the intra-enrollment distances, which may look like the box-whisker plot 300 shown in FIG. 4, where the boundaries 302, 304 of the box along the axis 306 represent the range of distances which are computed and the solid line 308 represents some average or median of the distances. Since the intra-enrollment distances are all from the same person, it is assumed that the coughs sound similar, the embeddings for each cough do not differ substantially from each other, and therefore the intra-enrollment distances are relatively low. Also, at step 45, a verification threshold is automatically chosen based on the intra-enrollment distances. The logic for choosing the threshold can vary, but for simplicity, this threshold may generally be chosen to be just greater than the highest value of the intra-enrollment distances in the box-whisker plot, indicated at 310. In the example of FIG. 4 it would be set at, say, 3.1. At step 58, the n enrollment TRILL embeddings are stored for future reference, as well as the automatically selected verification threshold. The verification process 42 requires enrollment (procedure 40) to have been completed and is triggered whenever a cough is detected in an audio stream, step 60. At step 62, the distance is measured between the newly detected cough TRILL embedding (vector) and all of the n enrollment cough embeddings, resulting in n distances. At step 64, the median distance from this set is selected (or computed), which represents the distance between the user's enrollment coughs and the newly inferred, unverified cough. At step 66, a test is performed: if this inferred cough distance is less than the verification threshold (computed in the enrollment process 40 at step 56), branch 68 is taken and at step 70 it is determined that the cough originated from the user; otherwise, at step 72, it is determined that the cough originated from another, unverified source (e.g., a different person in the room where the audio recording was made). If the cough originated from another, unverified source, the cough statistics, characterization, or identification steps may be disregarded, for example. The verification threshold allows the verification to be binary (either the cough is from the user or it is not). The confidence in the classification can be determined from the magnitude of the inferred cough distance. As the inferred cough distance approaches 0, the classification increases in confidence. Conversely, as the inferred cough distance approaches infinity, the confidence approaches 0. We recognize there are several potential issues with the procedure of FIG. 3. It is possible for a user's cough acoustics to change over time, perhaps due to an illness, aging, or a change in the room acoustics. This means that the enrollment procedure 40 of FIG. 3 will likely need to happen periodically or be re-triggered if inferred coughs are nearly always exceeding the verification threshold. There are many ways an app (e.g., one resident on a smartphone which is used for the cough detection method) could determine if enrollment needs to be redone, some smarter than others. For example, there could be a pop-up that is shown when a cough is detected (with some probability) asking the user: "did you just cough?".
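The enrollment and verification logic just described can be sketched compactly: pairwise L2 distances among the n enrollment embeddings set the threshold, and a new cough is attributed to the user when its median distance to the enrollment set falls below that threshold. The 512-d random vectors below are stand-ins for TRILL cough embeddings, and taking the threshold at the maximum intra-enrollment distance is a simplifying assumption.

```python
# Sketch of enrollment (40) and verification (42) from FIG. 3.
import itertools
import numpy as np

def enroll(enrollment_embeddings):
    """Return (enrollment set, verification threshold) from n enrollment cough embeddings."""
    emb = np.asarray(enrollment_embeddings)
    intra = [np.linalg.norm(a - b)                      # the "n choose 2" intra-enrollment distances
             for a, b in itertools.combinations(emb, 2)]
    threshold = max(intra)                              # chosen at the top of the intra-enrollment range
    return emb, threshold

def verify(new_cough_embedding, enrollment, threshold):
    """True if the newly detected cough is attributed to the enrolled user."""
    dists = np.linalg.norm(enrollment - new_cough_embedding, axis=1)  # n distances
    inferred_distance = np.median(dists)                              # step 64
    return inferred_distance < threshold, float(inferred_distance)    # step 66

rng = np.random.default_rng(1)
user_center = rng.normal(size=512)
user_coughs = user_center + 0.1 * rng.normal(size=(6, 512))  # stand-ins for the user's enrollment coughs
enrollment, thr = enroll(user_coughs)

new_user_cough = user_center + 0.1 * rng.normal(size=512)
other_person = rng.normal(size=512)
print(verify(new_user_cough, enrollment, thr))   # expected: attributed to the user
print(verify(other_person, enrollment, thr))     # expected: attributed to an unverified source
```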
If the user's answer disagrees with the cough-id verification algorithm some number of times, the enrollment could be retriggered. A significant component to the procedure ofFIG.3is the task of measuring the similarity between two TRILL cough embeddings, which we have called the “distance” in this discussion. Since the embeddings are fixed in length (e.g. 512), standard vector norm mathematics can be used (i.e., L2, L1, L∞, etc.). The most straightforward metric, L2or Euclidean Distance, is used and defined below (where p and q are TRILL embedding vectors with length n). Learned Similarity Metric L2distance gives equal weight to the n entries in the embedding, however it may be the case that some subset of the indices in the TRILL embedding are especially useful for the cough-id task, while others may be better suited for perhaps the cough detection task. If this were the case, a weighted distance metric which associates higher weight to the TRILL embedding indices that are useful for the tasks would be ideal. This weighted distance metric could be learned from some cough-id dataset to best minimize the distance between same coughs and maximize the distance between different coughs and would likely make it easier to choose an optimal verification threshold. FIG.5illustrates one possible environment in which the present disclosure is practiced. The user80has a smartphone82(or tablet, laptop, or other computing machine equipped with a microphone and processing unit) which serves to record sounds and generate an audio stream used in the methods ofFIGS.2and3. The smartphone includes the audio feature set ofFIG.1, an embedding model for generating embeddings based on coughs detected from the user80, a cough detection inference model, pre-processing code, post-processing code, e.g., generating cough metrics, cough episode metrics, and characterization of the coughs or cough episodes, and code for reporting the cough or cough metrics e.g. to the user, to a primary care physician, or to some external entity, while preserving patient privacy, confidentiality and in accordance with all applicable standards, e.g., HIPAA. The code resident on the smartphone82can optionally include the code implementing the enrollment and verification procedures ofFIG.3, including prompts for the user. Example 1 FIG.6is a flow chart showing an example of the implementation of the method ofFIG.2. A device82records an audio stream; the device can take the form of any piece of equipment or computer which includes a microphone and generates a recording, such as a smartphone, intelligent home assistant, etc. The audio stream is subject to pre-processing steps22which include sub-steps100,104and106. At step100the audio stream is converted to 16 kHz mono PCM stream, which is shown in box20including a signal102indicative of a cough. At step104, create model input, a log-Mel spectrogram is created (106), ranging from 125 to 7.5 kHz with PCEN (per-channel energy normalization). This log-Mel spectrogram106is similar to the spectrograms shown inFIG.8and described previously. At step108, this spectrogram106is framed as 1 second segments, with 200 ms overlap, represented as spectra S1, S2, S3 . . . . (110). As step26an embedding is created for each of the segments using the audio features set fromFIG.1(see the description ofFIG.7below) and the embedding subject to cough detection model inference using a TFLite model file. 
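The L2 distance referred to above (the formula itself appears to have been dropped from the text) is the standard Euclidean distance, sqrt of the sum of squared per-index differences between the two embedding vectors p and q of length n. The sketch below shows it alongside the weighted variant suggested for the cough-id task; the uniform weights are placeholders, since the text proposes learning them from a cough-id dataset.

```python
# The plain and weighted L2 distances between two TRILL embedding vectors.
import numpy as np

def l2_distance(p, q):
    """Standard Euclidean distance between two embedding vectors of length n."""
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def weighted_l2_distance(p, q, w):
    """Weighted distance: larger w[i] makes index i count more toward the distance."""
    p, q, w = np.asarray(p), np.asarray(q), np.asarray(w)
    return float(np.sqrt(np.sum(w * (p - q) ** 2)))

p, q = np.random.randn(512), np.random.randn(512)
w = np.ones(512)                   # uniform weights reduce the weighted metric to plain L2
print(l2_distance(p, q), weighted_l2_distance(p, q, w))
```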
This model produces probabilities of a cough occurring in each segment, shown as P(cough)=0 for spectra S1 and S2, P (cough)=0.5 for spectrum S3, etc. as shown inFIG.6. One or more post-processing steps shown at34are performed including detecting cough episodes at step120and updating or generating cough metrics122. An example of a cough episode metric is shown at121and includes start and end times, density: 1 (density is the number of coughs detected in a 1 second audio segment) and score: 0.98; here the “score” is the probability produced by the cough inference model. A cough episode is defined as high scoring cough activity for more than 500 ms. An example of the cumulative cough metrics is shown at123, such as metrics which store accumulated statistics for a session for display and analysis, updated with each new cough episode that is detected. FIG.7is another example of the processing operations that perform the method ofFIG.2. The initial pre-processing steps22are basically the pre-processing steps22ofFIG.6but broken down into individual, discrete modules. Step26is the step of generating the embedding for the audio segments (in the form of log-Mel spectrogram frames) and basically consists of the step of applying a TRILL embedding model “trill_embedding_tflite_model” to the log-Mel spectrogram frame to generate a TRILL embedding, in this case a vector of numbers of dimension 512×1. TFlite is a tool packaged with Tensorflow that optimizes a model (typically a neural network) for on-device inference. The conversion process from a tensorflow model file- ->TFlite model file typically involves optimizing the neural network operations for the hardware of interest (for example a smartphone CPU, or an embedded DSP, or a server GPU). The conversion also allows the user to apply other various tricks to speed up the inference time, or reduce the amount of power needed (often at the cost of some model accuracy). The resulting TFLite model is typically a much smaller file size (a few megabytes) and suitable for packaging within an app that is resident on a portable computer, e.g., smart phone. In this example, the trill_embedding_tflite_model can be similar to MobileNet in some aspects, and may be configured as a sequence of convolution layers in a convolutional neural network. Once this embedding is created, a cough detection inference model30may be applied to the embeddings28and the output is the generation of a cough detection inference matrix32of probabilities of a cough (P cough) for each of the audio segments. The cough detection inference model30in this example is a neural network trained to identify coughs, indicated at “fcn_detector_tflite_model”. In some embodiments, it may include 4 fully connected ‘dense’ layers where each layer is half the length of the previous layer, and the final output is the cough ‘score’ or probability that coughing is happening. fcn_detector_tflite_modelInput: size=512 (TRILL embedding size)Layer 1: size=256Layer 2: size=128Layer 3: size=64Layer 4: size=32Output: size=1 (probability of coughing between 0 and 1)The number of layers and layer sizes may vary. The post-processing steps34are shown inFIG.7as consisting of sub-step200(unpack inference results),202(generate cough episode metrics) which consists of metrics for the latest cough episode (121) and metrics for all of the cough episodes (123). Examples of these metrics are shown inFIG.5. 
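The detector head sizes listed above (input 512, dense layers 256, 128, 64, 32, output 1) translate directly into a small Keras model. The sizes come from the text; the ReLU and sigmoid activations and the training configuration are not specified there, so the choices below are assumptions.

```python
# The fully connected cough detector head, sized as described in the text.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

fcn_detector = Sequential([
    layers.Input(shape=(512,)),             # TRILL embedding size
    layers.Dense(256, activation="relu"),   # Layer 1
    layers.Dense(128, activation="relu"),   # Layer 2
    layers.Dense(64, activation="relu"),    # Layer 3
    layers.Dense(32, activation="relu"),    # Layer 4
    layers.Dense(1, activation="sigmoid"),  # output: probability of coughing between 0 and 1
], name="fcn_detector")
fcn_detector.compile(optimizer="adam", loss="binary_crossentropy")
fcn_detector.summary()
```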
Examples of such metrics include the number of cough episodes per audio segment, b) number of cough episodes in the input audio stream data sequence; c) duration of the cough episode(s) per segment; and d) duration of the cough episode(s) in the input audio stream data sequence. The metrics which are computed in the post-processing could include performing a cough-type classification of one or more cough episodes that is detected. Such classification could be, for example, wet cough, dry cough, or cough associated with a particular type of medical condition, e.g., respiratory tract infection, emphysema, etc. Such classifications could be done with the aid of the cough inference detection model or alternatively a second neural network which is trained to characterize or distinguish between wet and dry coughs, coughs associated with particular medical conditions, etc. Example 2 The method described above in Example 1 is used on an audio stream recorded by a smartphone. A user initiates the recording via an app resident on the phone, and the app includes an instruction set that prompts the user to go through the enrollment process ofFIG.3. After the enrollment, the user initiates the recording and goes about their daily business (or, if at night, goes to bed). The user maintains their phone on with the recording proceeding for say 4 or 8 hours. The app includes a feature to turn off the recording. The methodology ofFIGS.2,6and7proceeds during the background while the recording is made, or, alternatively is initiated at the end of the recording. After the app generates all the cough metrics (step34,FIG.2) the user is prompted with a message such as: “Where would you like to have the cough metrics sent?” The user is provided with an option to select their primary care provider, and the audio stream portions that recorded coughs, along with the cough metrics, are sent via a secure link to an electronic medical records system maintained by the primary care provider, where the cough metrics and the actual sound segments of the coughs are available to the provider to help provide care for the patient, while preserving privacy and confidentiality of the information sent to the provider. Example 3 A user has an intelligent home assistant, which includes speech recognition capability, and a speaker that allows the assistant to converse with the user. The following dialog between the user and the assistant proceeds along the following lines: User: “Assistant, I would like to make a recording of my coughs for my doctor.” Assistant: “OK. First, we need to go through an enrollment process. Please cough 5 times.” User: [Coughs 5 times; Assistant records sounds of coughs and performs the enrollment process ofFIG.3]. Assistant: “Thank you. I have now completed the enrollment process. I am ready to start the recording. When would you like to start it and how long do you want me to record?” User: “Start Now. Please record my sounds for the next 5 hours.” Assistant: “OK. I am recording your sounds and will stop recording after 5 hours. What would you like me to do with the recording and cough metrics that I generate based on the recording?” User: “Please connect to the [“System X”, an electronic medical records system used by the user's primary care provider] and upload the recording and cough metrics for my Doctor, Bob Carlson. Assistant. “OK.” [Recording by the Assistant starts.] The user proceeds to go about their business and the Assistant records sounds for the next 5 hours. 
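The episode detection and metric generation of step 34 (Example 1) can be sketched as grouping consecutive high-scoring segments into episodes of more than 500 ms and reporting start, end, and score for each. The 0.5 score threshold, the 1-second window with 800 ms hop, and the use of the count of high-scoring segments as a stand-in for the density metric are assumptions for illustration.

```python
# Sketch of post-processing step 34: cough episode detection from per-segment probabilities.
import numpy as np

def detect_episodes(p_cough, hop_s=0.8, window_s=1.0, score_thr=0.5, min_len_s=0.5):
    """p_cough: per-segment cough probabilities; returns a list of episode metric dicts."""
    episodes, start = [], None
    for i, p in enumerate(list(p_cough) + [0.0]):        # sentinel closes a trailing run
        if p >= score_thr and start is None:
            start = i
        elif p < score_thr and start is not None:
            t0, t1 = start * hop_s, (i - 1) * hop_s + window_s
            if t1 - t0 > min_len_s:                      # "more than 500 ms" of high-scoring activity
                episodes.append({"start_s": t0, "end_s": t1,
                                 "score": float(np.max(p_cough[start:i])),
                                 "segments": int(i - start)})   # stand-in for the density metric
            start = None
    return episodes

print(detect_episodes([0.0, 0.0, 0.5, 0.98, 0.1, 0.0]))
```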
The cough verification process ofFIG.3identifies those sounds which are coughs of the user and ignores all other sounds, including coughs of other persons (such as the user's domestic partner or children). Either during or immediately after the end of the recording the Assistant generates the cough metrics, establishes a secure link to the “System X” and the pathway to the electronic medical records for the User (or to a server that maintains such records), and uploads the portions of the audio stream that contain cough episodes as well as all the cough metrics which were calculated. Other Possible Non-Semantic, Paralinguistic Uses The methods of this disclosure can also be used to detect and characterize other types of non-speech vocal sounds, such snoring, wheezing, determining whether the speaker is wearing a mask or not, and still others. The methodology for detecting or characterizing these other non-speech vocal sounds is basically the same as described above for coughs, and uses the same TRILL audio feature set obtained perFIG.1. Instead of a cough detection inference model, the method uses a model trained to recognize the specific non-semantic/paralinguistic sound for this application, such as snoring or wheezing for example. The TRILL audio feature set used in the cough detection work of this document is a general-purpose representation of non-semantic speech. A linear model on the TRILL representation appears to outperform the best baseline model, which is a fusion of many models, despite TRILL being trained only on a completely different dataset. Fine tuning the TRILL model on mask data appears to improve accuracy by 3.6% on the Unweighted Average Recall score. Further to the descriptions above, a user may be provided with controls allowing the user to make an election as to both if and when systems, programs, or features described herein may enable collection of user information (a user's preferences, health information, recordings or statistics/metrics of cough or other non-semantic data, or a user's current location). In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over what information is collected about the user, how that information is used, and what information is provided to the user. Fast TRILL (FRILL) Learned speech representations can improve performance on tasks with limited labeled data. However, due to their size and complexity, learned representations have limited utility in mobile settings where run-time performance can be a significant bottleneck. A class of lightweight non-semantic speech embedding models may be utilized in such situations, that run efficiently on mobile devices based on the TRILL speech embedding. Novel architectural modifications may be combined with existing speed-up techniques to create embedding models that are fast enough to run in real-time on a mobile device, and that exhibit minimal performance degradation on a benchmark of non-semantic speech tasks. 
For example, FRILL can be 32× faster on a Pixel 1 smartphone and yet comprise 40% the size of TRILL, with an average decrease in accuracy of only 2%. FRILL is anon-semantic embedding of a high quality that is designed for use on mobile devices. The representations described as part of FRILL can be useful for mobile health tasks such as, for example, detection of non-speech human sounds, and detection of face-masked speech. Many of the tasks in the non-semantic speech (NOSS) benchmark, such as keyword detection and speaker identification, have natural mobile computing applications (e.g. verifying a user and triggering a voice assistant). On a mobile device, a non-semantic speech embedding could be used as input features for several real-time audio detection tasks, considerably reducing the cost of running models simultaneously. Such an embedding could enable mobile devices to listen for additional events such as non-speech health sounds (e.g. coughing, sneezing) with minimal impact on battery performance. This is desirable as real-time analysis of mobile audio streams has shown to be useful for tracking respiratory symptoms. However, TRILL is based on a modified version of ResNet50, which is expensive to compute on mobile devices. Accordingly, in some aspects, TRILL may be distilled to a student model including a truncated MobileNet architecture, and two large dense layers (TRILL-Distilled). TRILL-Distilled can exhibit minimal performance degradation on most NOSS tasks. Due to the size of its final dense layers, TRILL-Distilled may contain over 26M parameters, which may still be too large to run in real-time on many devices. This performance gap may be addressed by creating non-semantic speech embeddings that are fast and small enough to run in real-time on mobile devices. To do this, knowledge distillation can be used to train efficient student models based on MobileNetV3 to mimic the TRILL representation. A combination of novel architectural modifications and existing speed-up techniques such as low-rank matrix approximation, and weight quantization may be applied to further optimize student embeddings. Finally, in addition to the NOSS benchmark, a quality of these embeddings on two privacy-sensitive, health-sensing tasks: human sounds classification and face-mask speech detection may be evaluated. Accordingly, in some aspects, (i) a class of non-semantic embedding models may be generated that are fast enough to run in real-time on a mobile device. One example model, FRILL, can demonstrate performance improvements, such as 32× faster and 40% the size of TRILL, with an average decrease in accuracy of only 2% over 7 diverse datasets. FRILL can also demonstrate performance improvements, such as 2.5× faster and 35% the size of TRILL-Distilled; (ii) an impact of performance optimization techniques like quantization-aware training, model compression, and architecture reductions on the latency, accuracy, and size of embedding models may be evaluated; and (iii) on-device representations may be bench-marked on two mobile-health tasks: a public dataset of human sounds, and detecting face-masked speech. The FRILL Student-Model Architecture The student models map log Mel-spectrograms to an embedding vector and are trained to mimic the TRILL representation described herein. In some embodiments, the student model architecture may include two components: a MobileNetV3 variant followed by a fully-connected bottleneck layer. 
The MobileNetV3 variant extracts rich information from inputted log Mel-spectrograms, and the bottleneck layer ensures a fixed embedding size. To explore the tradeoff between the performance and latency of the student models, a set of hyperparameters may be used as described below. FRILL Architecture: MobileNet Size MobileNetV3 is available in two sizes: small and large. The small variant may be targeted toward resource-constrained applications and contains fewer inverted residual blocks and convolutional channels. In addition to these sizes, a truncated version of MobileNetV3Small may be adapted herein, named MobileNetV3Tiny, comprising the following modifications: (a) two of the eleven inverted residual blocks (blocks 6 and 11) from MobileNetV3Small may be removed. The choice of these blocks is based on the fact that these are duplicates of a preceding block; and (b) the number of channels in the final convolutional layer may be reduced from 1024 to 512. FRILL Architecture: MobileNet Width MobileNet architectures feature a width multiplier α which modifies the number of channels in the convolutional layers within each inverted residual block. This hyperparameter is generally used to exchange model latency for performance. FIG.9illustrates a table900with example values of hyperparameters to reduce size and latency, in accordance with example embodiments. In the first row, the entry under the first column indicates a name of the hyperparameter, such as “MV3Size” corresponding to a description “MobileNetV3 size” indicated in the entry under the second column, and with values “tiny, small, large,” indicated in the entry under the third column. Additional rows indicate additional hyperparameters. FRILL Architecture: Global Average Pooling MobileNetV3 produces a set of two-dimensional feature maps at its output. When global average pooling (GAP) is disabled, these feature maps are flattened, concatenated, and passed to the bottleneck layer to produce an embedding. This concatenated vector is generally large, resulting in a sizable kernel in the bottleneck layer. GAP discards temporal information within an input audio window, which is less important for learning a non-semantic speech representation because non-lexical aspects of the speech signal (e.g. emotion, speaker identity) are more stable in time compared to lexical information. Accordingly, GAP may be used to reduce the size of the bottleneck layer kernel by taking the global average of all “pixels” in each output feature map, thus reducing the size of the bottleneck input. FRILL Architecture: Bottleneck Layer Compression A significant portion of the student model weights is located in the kernel matrix of the bottleneck layer. To reduce the footprint of this layer, a compression operator based on Singular Value Decomposition (SVD) may be applied. The compression operator may learn a low-rank approximation of the bottleneck weight matrix W. Generally, low-rank approximations may be learned during training, as opposed to post-training. Formally, this operator uses SVD to generate matrices U and V such that the Frobenius norm of W − UV^T can be minimized. The compressed kernel replaces a matrix of m×n weights with k(m+n) weights, where k is a hyperparameter that specifies the inner dimension of U and V, which may be fixed at k=100.
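A minimal numpy sketch of this factorization and of the resulting parameter count is shown below. The matrix sizes are illustrative rather than taken from the actual model, and a randomly initialized kernel is used only to make the example runnable; a trained kernel would typically be much closer to low-rank.

```python
import numpy as np

# Hedged illustration of the bottleneck compression described above:
# approximate an m x n kernel W with U (m x k) and V (n x k) so that W ~ U V^T.
m, n, k = 4096, 2048, 100                       # illustrative sizes, k fixed at 100
W = np.random.randn(m, n).astype(np.float32)    # stand-in for a trained kernel

# Truncated SVD: keep only the k largest singular values.
U_full, s, Vt_full = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :k] * s[:k]                       # fold singular values into U
V = Vt_full[:k, :].T                            # V has shape (n, k)

print("original parameters:  ", m * n)          # m*n weights
print("compressed parameters:", k * (m + n))    # k(m+n) weights
print("relative Frobenius error:",
      np.linalg.norm(W - U @ V.T) / np.linalg.norm(W))
```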
A convex combination of original and compressed kernels may be used during training to produce the following layer output: y = x(λW + (1 − λ)UV^T) + b (Eqn. 2), where b is the bias vector in the bottleneck layer, x is the input vector, and λ is a scalar that is set to one at the beginning of training, and linearly decreases to zero over the first ten training epochs. Varying λ helps the optimizer transition to learning the weights of the compressed matrices. At inference time, λ may be set to zero and the original kernel may be discarded. FRILL Architecture: Bottleneck Layer Quantization Quantization aims to reduce model footprint and latency by reducing the numerical precision of model weights. Instead of using post-training quantization which may cause performance degradation, Quantization-Aware Training (QAT) may be used. QAT is a procedure that gradually quantizes model weights during training. In some embodiments, a Tensorflow implementation of QAT may be utilized to quantize the bottleneck layer kernel from 32-bit floating point to 8-bits. Experiments An effect of each hyperparameter in the table ofFIG.9on the representation quality, latency, and size of student embedding models may be determined. For each of 144 combinations of hyperparameters, the TRILL embedding may be distilled to a student network, the student embedding may be benchmarked by training simple classifiers to solve NOSS tasks and health tasks using embeddings as input features, and inference latency may be measured on a Pixel 1 smartphone. The distillation dataset, student network training procedure, NOSS benchmarking, and latency benchmarking procedures are as described in the following sections. Distillation Dataset To build a dataset for distillation, a 0.96-second audio context may be randomly sampled from each Audioset speech clip and a log-magnitude Mel spectrogram may be computed using a Short-Time Fourier Transform (STFT) window size and window stride of 25 ms and 10 ms respectively. In some experiments, 64 Mel bins may be computed. Using each spectrogram, the layer19 output of the TRILL model may be computed. Each pair, {log Mel spectrogram, layer19}, may be stored as a single observation for distillation training. Student Model Training FIG.10illustrates an example training phase of a student model architecture, in accordance with example embodiments. A diagram of the training setup is shown inFIG.10. Knowledge distillation for non-semantic speech embeddings is illustrated. Student models may be trained to map input Log Mel-spectrograms1005to the layer19 representation1010produced by a teacher model, TRILL1015. Because the layer19 vector is much larger (12288 d) than the student embeddings (2048 d), an equal-length fully-connected layer1020may be appended to the output of the student model. This fully-connected layer1020enables computation of a mean-squared-error (MSE) loss1025against layer191010. To train student models, a batch size of 128 and an initial learning rate of 1e−4 with an Adam optimizer may be used. In some embodiments, an exponential learning rate schedule may be used, with learning rates decreasing by a factor of 0.95 every 5,000 training steps. Each model may train for 50 epochs, or approximately 350,000 training steps. The dashed line shows the student model's output.
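A hedged sketch of this distillation setup is shown below. It assumes a dataset of (log Mel-spectrogram, layer19) pairs and any Keras student model that outputs a 2048-d embedding; the stand-in student and the commented-out dataset call are placeholders for the actual pipeline rather than the trained models themselves.

```python
import tensorflow as tf

# Stand-in for the MobileNetV3-based student sketched earlier; any Keras model
# mapping a (96, 64, 1) spectrogram to a 2048-d embedding would fit here.
student = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 64, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2048),
], name="student_stand_in")

def build_distillation_model(student_model, teacher_dim=12288):
    """Student plus an extra fully-connected head matching the teacher's layer19 size."""
    spectrogram = tf.keras.Input(shape=(96, 64, 1))
    embedding = student_model(spectrogram)
    # Appended layer so the 2048-d student output can be compared against the
    # 12288-d layer19 target; it is only used during distillation training.
    projected = tf.keras.layers.Dense(teacher_dim, name="distillation_head")(embedding)
    return tf.keras.Model(spectrogram, projected)

distill_model = build_distillation_model(student)

# Adam with an initial learning rate of 1e-4, decayed by 0.95 every 5,000 steps.
learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-4, decay_steps=5000, decay_rate=0.95)
distill_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mse")

# distillation_dataset would yield (spectrogram, teacher_layer19) batches of 128:
# distill_model.fit(distillation_dataset, epochs=50)
```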
As previously described, one or more student hyperparameters1030may be used to train the MobileNetV3 model, such as a width multiplier α, and global average pooling (GAP) to reduce the size of the kernel of bottleneck layer1035by taking the global average of all “pixels” in each output feature map. Also, for example, a compression operator based on Singular Value Decomposition (SVD) may be applied to learn a low-rank approximation of the bottleneck weight matrix. As another example, Quantization-Aware Training (QAT) may be used to gradually quantize model weights during training. NOSS Benchmark Analysis To evaluate the quality of the student embeddings, a set of simple classifiers may be trained using embeddings as input features to solve each classification task in the NOSS benchmark. For each dataset in NOSS, a logistic regression, random forest, and linear discriminant analysis classifier may be trained using the SciKit-Learn library. Embeddings for each utterance may be averaged in time to produce a single feature vector. For tasks that contain multiple observations per speaker (SpeechCommands, CREMA-D, SAVEE), a set of classifiers using L2 speaker normalization may be trained. Best test accuracy across combinations of downstream classifiers and normalization techniques may be determined. For example, accuracies on Dementia-Bank, one of the datasets included in the original NOSS benchmark, were all within 1% of each other. Mobile Health-Sensing Tasks In addition to tasks in the NOSS benchmark, TRILL, TRILL-Distilled, and each of the student models may be evaluated on a human sounds classification task and a face-mask speech detection task. The human sounds task is derived from the ESC-50 dataset, which contains 5-second sound clips from 50 classes. The human sounds subset of this dataset constitutes 10 of the 50 classes and includes labels such as ‘coughing’, ‘sneezing’, and ‘breathing’. Similar to NOSS, a set of simple classifiers may be trained using input features from each student model and test accuracy may be reported on the best model. The first four published folds of ESC-50 may be used for training, and the fifth fold may be used for testing. The objective of the mask speech task is to detect whether 1-second speech clips are from masked or unmasked speakers. The dataset contains around 19,000 masked and 18,000 unmasked speech examples. The performance of the models described herein may be evaluated as an indicator of their suitability for mobile health tasks. Run-Time Analysis The TensorFlow Lite (TFLite) framework enables execution of machine learning models on mobile and edge devices. To measure the run-time performance of the student embeddings in their intended environment, each model may be converted to TFLite's flatbuffer file format for 32-bit floating-point execution, and inference latency (single-threaded, CPU execution) may be benchmarked on the Pixel 1 smartphone. Conversion to the flatbuffer format does not affect the quality of the representations. Latency measurements for TRILL and TRILL-Distilled may also be recorded for reference. Results Because student embeddings are evaluated on 7 datasets, it may be challenging to naturally rank models based on their “quality”.
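The downstream evaluation just described can be sketched as follows, assuming pre-computed per-frame embeddings; the arrays and labels are hypothetical placeholders, and the per-speaker L2 normalization variant is omitted for brevity. The per-task accuracies it yields are the quantities aggregated in the results that follow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hedged sketch of the downstream evaluation described above: train simple
# classifiers on time-averaged embeddings and keep the best test accuracy.
# The embeddings and labels below are hypothetical placeholders.
def time_average(per_frame_embeddings):
    """Collapse a (num_frames, dim) embedding sequence to a single vector."""
    return per_frame_embeddings.mean(axis=0)

rng = np.random.default_rng(0)
train_X = np.stack([time_average(rng.normal(size=(100, 2048))) for _ in range(200)])
train_y = rng.integers(0, 2, size=200)
test_X = np.stack([time_average(rng.normal(size=(100, 2048))) for _ in range(50)])
test_y = rng.integers(0, 2, size=50)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "lda": LinearDiscriminantAnalysis(),
}
accuracies = {}
for name, clf in classifiers.items():
    clf.fit(train_X, train_y)
    accuracies[name] = clf.score(test_X, test_y)

print("best downstream classifier:", max(accuracies, key=accuracies.get),
      "accuracy:", round(max(accuracies.values()), 3))
```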
Thus, an Aggregate Embedding Quality score may be determined by computing the performance difference between a student model and TRILL for each task, and averaging across tasks: Aggregate Embedding Quality_m = (1/|D|) Σ_{d∈D} (A_md − T_d) (Eqn. 3), where m indicates the student model, d indicates the dataset, A_md is the accuracy of student model m on dataset d, and T_d is the accuracy of TRILL on dataset d∈D. This score is indicative of an average deviation from TRILL's performance across all NOSS tasks and mobile health tasks. To understand the impact each hyperparameter in the table ofFIG.9has on the student models, a multivariate linear regression may be performed to model aggregate quality, latency, and size using model hyperparameters as predictors. Each regression target may be standardized in order to produce regression weights on the same order of magnitude while preserving relative importance. FIG.11illustrates a bar chart1100with magnitude of regression weights, in accordance with example embodiments. Linear regression weight magnitudes for predicting model quality, latency, and size are illustrated along the vertical axis. The weights indicate the expected impact of changing the input hyperparameter. A higher weight magnitude indicates a greater expected impact. The horizontal axis shows comparative bar graphs for aggregate embedding quality, model size, and Pixel 1 latency, for each of the student hyperparameters1030such as MV3Size, MV3Width, GAP, Compression, and QAT, as described with reference toFIG.10. FIG.12is a table1200illustrating NOSS benchmark and mobile health task accuracies for three representative frontier models, in accordance with example embodiments. Comparisons are shown with respect to TRILL (in the first row) and TRILL-Distilled (in the second row). The three representative frontier models are shown as Small_2.0_GAP (FRILL) (in the third row), Small_0.5_QAT (in the fourth row), and Tiny_0.5_Comp_GAP (in the fifth row). Test Performance on the NOSS Benchmark and Mobile Health Tasks is shown. Observations Architecture reduction techniques appear to have a smaller impact on performance and latency. For example, reducing MobileNetV3 size via α, by removing residual blocks, and by pooling early in the network had a smaller effect than QAT and bottleneck compression (seeFIG.11). This suggests that the TRILL-Distilled MobileNet part of the architecture may be over-parameterized relative to the representation quality achievable through the bottleneck. QAT appears to reduce model size the most, and latency the least. For example, QAT reduces overall model size the most and Pixel 1 latency the least (seeFIG.11). It decreases embedding quality by only half as much as compression, and is present in ⅛ of the best models. Bottleneck compression appears to reduce embedding performance the most. This suggests that TRILL-Distilled's last bottleneck layer may be a highly performance-sensitive part of the model. Quality/Latency Tradeoff FIG.13illustrates embedding quality and latency trade-off, in accordance with example embodiments. The horizontal axis represents an inference latency measured in milliseconds (ms), and the vertical axis represents an aggregate embedding quality, a difference in accuracy from TRILL's performance, averaged across benchmark datasets. To illustrate the latency and quality tradeoff in the presently described cohort of models (for example, models referenced inFIG.12), a “quality” frontier plot1300may be generated. Plot1300is a sample of model performances and latencies on the quality/latency tradeoff curve.
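A hedged numpy sketch of the aggregate embedding quality score of Eqn. 3, together with a simple quality/latency frontier selection of the kind described next, is shown below. All model names, accuracies, and latencies are made-up placeholders, not reported results.

```python
import numpy as np

# Hedged sketch of Eqn. 3 and of a simple quality/latency frontier selection.
# All model names, accuracies (A_md), latencies, and TRILL accuracies (T_d)
# below are made-up placeholders, not reported results.
datasets = ["task_%d" % i for i in range(7)]                  # 7 benchmark tasks
trill_accuracy = {d: 0.80 for d in datasets}                   # T_d

models = {
    # name: (per-dataset accuracies A_md, latency in ms)
    "tiny_0.5":      ({d: 0.74 for d in datasets}, 0.9),
    "small_1.0":     ({d: 0.77 for d in datasets}, 4.0),
    "small_2.0_gap": ({d: 0.78 for d in datasets}, 8.5),
    "large_1.0":     ({d: 0.79 for d in datasets}, 40.0),
}

def aggregate_embedding_quality(student_acc, trill_acc):
    """Eqn. 3: signed per-dataset accuracy difference from TRILL, averaged."""
    return float(np.mean([student_acc[d] - trill_acc[d] for d in trill_acc]))

quality = {name: aggregate_embedding_quality(acc, trill_accuracy)
           for name, (acc, _) in models.items()}

def best_model_under_latency(max_latency_ms):
    """Frontier rule: best aggregate quality among models at or under a latency."""
    candidates = [n for n, (_, lat) in models.items() if lat <= max_latency_ms]
    return max(candidates, key=quality.get) if candidates else None

for budget_ms in (1.0, 5.0, 10.0, 50.0):
    print(budget_ms, "ms ->", best_model_under_latency(budget_ms))
```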
For each latency value l, the model with the best aggregate embedding quality with a latency less than or equal to l may be selected. This frontier, shown inFIG.13, features 8 student models of various qualities and latencies. As illustrated, FRILL (fast TRILL) has an aggregate embedding quality score of 0.0169, indicating an average deviation from TRILL quality of 1.69% with respect to the datasets in this study. FRILL has an inference latency of 8.5 ms on a Pixel 1 smartphone, and is only 38.5 megabytes in the TFLite file format. After eliminating models with better and faster alternatives, 8 “frontier” models may be reviewed. The fastest model appears to run at 0.9 ms, which is 300× faster than TRILL and 25× faster than TRILL-Distilled. FRILL appears to run at 8.5 ms, which is about 32× faster than TRILL and 2.5× faster than TRILL-Distilled. FRILL also appears to be roughly 40% the size of TRILL and TRILL-Distilled. The plot1300is steep on both sides of the frontier. This may mean that with minimal latency costs, much better performance may be achieved on one end, and vice versa on the other. This supports the choice of experiment hyperparameters. Though there is a frontier model with an aggregate embedding quality higher than FRILL, it comes at the cost of a significant bump in latency. Various embodiments describe an efficient non-semantic speech embedding model, trained via knowledge distillation, that is fast enough to be run in real-time on a mobile device. Latency and size reduction techniques are described, and their impact on model quality is quantified. The performance/latency tradeoff curve for the 144 trained models is analyzed, and size, latency, and performance numbers are reported for representative models. In particular, FRILL appears to exhibit a 32× inference speedup and 60% size reduction, with an average decrease in accuracy of less than 2% over 7 different datasets, as compared to the TRILL model. FRILL appears to be 2.5× faster and 35% the size of TRILL-Distilled. The effectiveness of the embeddings on two new mobile health tasks is evaluated. These new tasks in particular benefit from the on-device nature of the embeddings, since performing computations locally can improve both the privacy and latency of resulting models. Training Machine Learning Methods for Generating Inferences/Predictions FIG.14shows diagram1400illustrating a training phase1402and an inference phase1404of trained machine learning model(s)1432, in accordance with example embodiments. Some machine learning techniques involve training one or more machine learning algorithms on an input set of training data to recognize patterns in the training data and provide output inferences and/or predictions about (patterns in the) training data. The resulting trained machine learning algorithm can be termed a trained machine learning model. For example,FIG.14shows training phase1402where one or more machine learning algorithms1420are being trained on training data1410to become trained machine learning model1432. Then, during inference phase1404, trained machine learning model1432can receive input data1430and one or more inference/prediction requests1440(perhaps as part of input data1430) and responsively provide as an output one or more inferences and/or predictions1450. As such, trained machine learning model(s)1432can include one or more models of one or more machine learning algorithms1420.
Machine learning algorithm(s)1420may include, but are not limited to: an artificial neural network (e.g., convolutional neural networks, a recurrent neural network, a Bayesian network, a hidden Markov model, a Markov decision process, a logistic regression function, a support vector machine, a suitable statistical machine learning algorithm, and/or a heuristic machine learning system). Machine learning algorithm(s)1420may be supervised or unsupervised, and may implement any suitable combination of online and offline learning. In some examples, machine learning algorithm(s)1420and/or trained machine learning model(s)1432can be accelerated using on-device coprocessors, such as graphic processing units (GPUs), tensor processing units (TPUs), digital signal processors (DSPs), and/or application specific integrated circuits (ASICs). Such on-device coprocessors can be used to speed up machine learning algorithm(s)1420and/or trained machine learning model(s)1432. In some examples, trained machine learning model(s)1432can be trained, resident, and executed to provide inferences on a particular computing device, and/or otherwise can make inferences for the particular computing device. During training phase1402, machine learning algorithm(s)1420can be trained by providing at least training data1410as training input using unsupervised, supervised, semi-supervised, and/or reinforcement learning techniques. Training data1410can include a plurality of speech audio clips from a speech dataset. Unsupervised learning involves providing a portion (or all) of training data1410to machine learning algorithm(s)1420and machine learning algorithm(s)1420determining one or more output inferences based on the provided portion (or all) of training data1410. Supervised learning involves providing a portion of training data1410to machine learning algorithm(s)1420, with machine learning algorithm(s)1420determining one or more output inferences based on the provided portion of training data1410, and the output inference(s) are either accepted or corrected based on correct results associated with training data1410. In some examples, supervised learning of machine learning algorithm(s)1420can be governed by a set of rules and/or a set of labels for the training input, and the set of rules and/or set of labels may be used to correct inferences of machine learning algorithm(s)1420. Semi-supervised learning involves having correct results for part, but not all, of training data1410. During semi-supervised learning, supervised learning is used for a portion of training data1410having correct results, and unsupervised learning is used for a portion of training data1410not having correct results. Reinforcement learning involves machine learning algorithm(s)1420receiving a reward signal regarding a prior inference, where the reward signal can be a numerical value. During reinforcement learning, machine learning algorithm(s)1420can output an inference and receive a reward signal in response, where machine learning algorithm(s)1420are configured to try to maximize the numerical value of the reward signal. In some examples, reinforcement learning also utilizes a value function that provides a numerical value representing an expected total of the numerical values provided by the reward signal over time. In some examples, machine learning algorithm(s)1420and/or trained machine learning model(s)1432can be trained using other machine learning techniques, including but not limited to, incremental learning and curriculum learning. 
In some examples, machine learning algorithm(s)1420and/or trained machine learning model(s)1432can use transfer learning techniques. For example, transfer learning techniques can involve trained machine learning model(s)1432being pre-trained on one set of data and additionally trained using training data1410. More particularly, machine learning algorithm(s)1420can be pre-trained on data from one or more computing devices and a resulting trained machine learning model provided to a particular computing device, where the particular computing device is intended to execute the trained machine learning model during inference phase1404. Then, during training phase1402, the pre-trained machine learning model can be additionally trained using training data1410, where training data1410can be derived from kernel and non-kernel data of the particular computing device. This further training of the machine learning algorithm(s)1420and/or the pre-trained machine learning model using training data1410of the particular computing device's data can be performed using either supervised or unsupervised learning. Once machine learning algorithm(s)1420and/or the pre-trained machine learning model has been trained on at least training data1410, training phase1402can be completed. The trained resulting machine learning model can be utilized as at least one of trained machine learning model(s)1432. In particular, once training phase1402has been completed, trained machine learning model(s)1432can be provided to a computing device, if not already on the computing device. Inference phase1404can begin after trained machine learning model(s)1432are provided to the particular computing device. During inference phase1404, trained machine learning model(s)1432can receive input data1430and generate and output one or more corresponding inferences and/or predictions1450about input data1430. As such, input data1430can be used as an input to trained machine learning model(s)1432for providing corresponding inference(s) and/or prediction(s)1450to kernel components and non-kernel components. For example, trained machine learning model(s)1432can generate inference(s) and/or prediction(s)1450in response to one or more inference/prediction requests1440. In some examples, trained machine learning model(s)1432can be executed by a portion of other software. For example, trained machine learning model(s)1432can be executed by an inference or prediction daemon to be readily available to provide inferences and/or predictions upon request. Input data1430can include data from the particular computing device executing trained machine learning model(s)1432and/or input data from one or more computing devices other than the particular computing device. Input data1430can include an audio stream to generate an input audio sequence comprising a plurality of time-separated audio segments. Inference(s) and/or prediction(s)1450can include output cough metrics for each of cough episodes detected in the input audio sequence, and/or other output data produced by trained machine learning model(s)1432operating on input data1430(and training data1410). In some examples, trained machine learning model(s)1432can use output inference(s) and/or prediction(s)1450as input feedback1460. Trained machine learning model(s)1432can also rely on past inferences as inputs for generating new inferences. In some examples, a single computing device (“CD_SOLO”) can include the trained version of the machine learning model, perhaps after training the machine learning model. 
Then, computing device CD_SOLO can receive requests to detect a cough in an audio stream, and use the trained version of the machine learning model to generate cough metrics for each cough episode detected in the input audio sequence. In some examples, two or more computing devices, such as a first client device (“CD_CLI”) and a server device (“CD_SRV”) can be used to provide the output; e.g., a first computing device CD_CLI can generate and send requests to detect a cough in an audio stream to a second computing device CD_SRV. Then, CD_SRV can use the trained version of the machine learning model, to generate cough metrics for each cough episode detected in the input audio sequence. Then, upon reception of responses to the requests, CD_CLI can provide the requested output via one or more control interfaces (e.g., using a user interface and/or a display, a printed copy, an electronic communication, etc.). Example Data Network FIG.15depicts a distributed computing architecture1500, in accordance with example embodiments. Distributed computing architecture1500includes server devices1508,1510that are configured to communicate, via network1506, with programmable devices1504a,1504b,1504c,1504d,1504e. Network1506may correspond to a local area network (LAN), a wide area network (WAN), a WLAN, a WWAN, a corporate intranet, the public Internet, or any other type of network configured to provide a communications path between networked computing devices. Network1506may also correspond to a combination of one or more LANs, WANs, corporate intranets, and/or the public Internet. AlthoughFIG.15only shows five programmable devices, distributed application architectures may serve tens, hundreds, or thousands of programmable devices. Moreover, programmable devices1504a,1504b,1504c,1504d,1504e(or any additional programmable devices) may be any sort of computing device, such as a mobile computing device, desktop computer, wearable computing device, head-mountable device (HMD), network terminal, a mobile computing device, and so on. In some examples, such as illustrated by programmable devices1504a,1504b,1504c,1504e, programmable devices can be directly connected to network1506. In other examples, such as illustrated by programmable device1504d, programmable devices can be indirectly connected to network1506via an associated computing device, such as programmable device1504c. In this example, programmable device1504ccan act as an associated computing device to pass electronic communications between programmable device1504dand network1506. In other examples, such as illustrated by programmable device1504e, a computing device can be part of and/or inside a vehicle, such as a car, a truck, a bus, a boat or ship, an airplane, etc. In other examples not shown inFIG.15, a programmable device can be both directly and indirectly connected to network1506. Server devices1508,1510can be configured to perform one or more services, as requested by programmable devices1504a-1504e. For example, server device1508and/or1510can provide content to programmable devices1504a-1504e. The content can include, but is not limited to, web pages, hypertext, scripts, binary data such as compiled software, images, audio, and/or video. The content can include compressed and/or uncompressed content. The content can be encrypted and/or unencrypted. Other types of content are possible as well. 
As another example, server devices1508and/or1510can provide programmable devices1504a-1504ewith access to software for database, search, computation, graphical, audio, video, World Wide Web/Internet utilization, and/or other functions. Many other examples of server devices are possible as well. Computing Device Architecture FIG.16is a block diagram of an example computing device1600, in accordance with example embodiments. In particular, computing device1600shown inFIG.16can be configured to perform at least one function of and/or related to neural network1000, and/or methods1800, and/or1900. Computing device1600may include a user interface module1601, a network communications module1602, one or more processors1603, data storage1604, one or more cameras1618, one or more sensors1620, and power system1622, all of which may be linked together via a system bus, network, or other connection mechanism1605. User interface module1601can be operable to send data to and/or receive data from external user input/output devices. For example, user interface module1601can be configured to send and/or receive data to and/or from user input devices such as a touch screen, a computer mouse, a keyboard, a keypad, a touch pad, a trackball, a joystick, a voice recognition module, and/or other similar devices. User interface module1601can also be configured to provide output to user display devices, such as one or more cathode ray tubes (CRT), liquid crystal displays, light emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices, either now known or later developed. User interface module1601can also be configured to generate audible outputs, with devices such as a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices. User interface module1601can further be configured with one or more haptic devices that can generate haptic outputs, such as vibrations and/or other outputs detectable by touch and/or physical contact with computing device1600. In some examples, user interface module1601can be used to provide a graphical user interface (GUI) for utilizing computing device1600. Network communications module1602can include one or more devices that provide one or more wireless interfaces1607and/or one or more wireline interfaces1608that are configurable to communicate via a network. Wireless interface(s)1607can include one or more wireless transmitters, receivers, and/or transceivers, such as a Bluetooth™ transceiver, a Zigbee® transceiver, a Wi-Fi™ transceiver, a WiMAX™ transceiver, an LTE™ transceiver, and/or other type of wireless transceiver configurable to communicate via a wireless network. Wireline interface(s)1608can include one or more wireline transmitters, receivers, and/or transceivers, such as an Ethernet transceiver, a Universal Serial Bus (USB) transceiver, or similar transceiver configurable to communicate via a twisted pair wire, a coaxial cable, a fiber-optic link, or a similar physical connection to a wireline network. In some examples, network communications module1602can be configured to provide reliable, secured, and/or authenticated communications. 
For each communication described herein, information for facilitating reliable communications (e.g., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation headers and/or footers, size/time information, and transmission verification information such as cyclic redundancy check (CRC) and/or parity check values). Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, Data Encryption Standard (DES), Advanced Encryption Standard (AES), a Rivest-Shamir-Adelman (RSA) algorithm, a Diffie-Hellman algorithm, a secure sockets protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), and/or Digital Signature Algorithm (DSA). Other cryptographic protocols and/or algorithms can be used as well or in addition to those listed herein to secure (and then decrypt/decode) communications. One or more processors1603can include one or more general purpose processors, and/or one or more special purpose processors (e.g., digital signal processors, tensor processing units (TPUs), graphics processing units (GPUs), application specific integrated circuits, etc.). One or more processors1603can be configured to execute computer-readable instructions1606that are contained in data storage1604and/or other instructions as described herein. Data storage1604can include one or more non-transitory computer-readable storage media that can be read and/or accessed by at least one of one or more processors1603. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with at least one of one or more processors1603. In some examples, data storage1604can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other examples, data storage1604can be implemented using two or more physical devices. Data storage1604can include computer-readable instructions1606and perhaps additional data. In some examples, data storage1604can include storage required to perform at least part of the herein-described methods, scenarios, and techniques and/or at least part of the functionality of the herein-described devices and networks. In some examples, data storage1604can include storage for a trained neural network model1612(e.g., a model of trained convolutional neural networks such as convolutional neural networks140). In particular of these examples, computer-readable instructions1606can include instructions that, when executed by processor(s)1603, enable computing device1600to provide for some or all of the functionality of trained neural network model1612. In some examples, computing device1600can include one or more cameras1618. Camera(s)1618can include one or more image capture devices, such as still and/or video cameras, equipped to capture light and record the captured light in one or more images; that is, camera(s)1618can generate image(s) of captured light. The one or more images can be one or more still images and/or one or more images utilized in video imagery. Camera(s)1618can capture light and/or electromagnetic radiation emitted as visible light, infrared radiation, ultraviolet light, and/or as one or more other frequencies of light. 
In some examples, computing device1600can include one or more sensors1620. Sensors1620can be configured to measure conditions within computing device1600and/or conditions in an environment of computing device1600and provide data about these conditions. For example, sensors1620can include one or more of: (i) sensors for obtaining data about computing device1600, such as, but not limited to, a thermometer for measuring a temperature of computing device1600, a battery sensor for measuring power of one or more batteries of power system1622, and/or other sensors measuring conditions of computing device1600; (ii) an identification sensor to identify other objects and/or devices, such as, but not limited to, a Radio Frequency Identification (RFID) reader, proximity sensor, one-dimensional barcode reader, two-dimensional barcode (e.g., Quick Response (QR) code) reader, and a laser tracker, where the identification sensors can be configured to read identifiers, such as RFID tags, barcodes, QR codes, and/or other devices and/or object configured to be read and provide at least identifying information; (iii) sensors to measure locations and/or movements of computing device1600, such as, but not limited to, a tilt sensor, a gyroscope, an accelerometer, a Doppler sensor, a GPS device, a sonar sensor, a radar device, a laser-displacement sensor, and a compass; (iv) an environmental sensor to obtain data indicative of an environment of computing device1600, such as, but not limited to, an infrared sensor, an optical sensor, a light sensor, a biosensor, a capacitive sensor, a touch sensor, a temperature sensor, a wireless sensor, a radio sensor, a movement sensor, a microphone, a sound sensor, an ultrasound sensor and/or a smoke sensor; and/or (v) a force sensor to measure one or more forces (e.g., inertial forces and/or G-forces) acting about computing device1600, such as, but not limited to one or more sensors that measure: forces in one or more dimensions, torque, ground force, friction, and/or a zero moment point (ZMP) sensor that identifies ZMPs and/or locations of the ZMPs. Many other examples of sensors1620are possible as well. Power system1622can include one or more batteries1624and/or one or more external power interfaces1626for providing electrical power to computing device1600. Each battery of the one or more batteries1624can, when electrically coupled to the computing device1600, act as a source of stored electrical power for computing device1600. One or more batteries1624of power system1622can be configured to be portable. Some or all of one or more batteries1624can be readily removable from computing device1600. In other examples, some or all of one or more batteries1624can be internal to computing device1600, and so may not be readily removable from computing device1600. Some or all of one or more batteries1624can be rechargeable. For example, a rechargeable battery can be recharged via a wired connection between the battery and another power supply, such as by one or more power supplies that are external to computing device1600and connected to computing device1600via the one or more external power interfaces. In other examples, some or all of one or more batteries1624can be non-rechargeable batteries. One or more external power interfaces1626of power system1622can include one or more wired-power interfaces, such as a USB cable and/or a power cord, that enable wired electrical power connections to one or more power supplies that are external to computing device1600. 
One or more external power interfaces1626can include one or more wireless power interfaces, such as a Qi wireless charger, that enable wireless electrical power connections, such as via a Qi wireless charger, to one or more external power supplies. Once an electrical power connection is established to an external power source using one or more external power interfaces1626, computing device1600can draw electrical power from the external power source the established electrical power connection. In some examples, power system1622can include related sensors, such as battery sensors associated with one or more batteries or other types of electrical power sensors. Cloud-Based Servers FIG.17depicts a network1506of computing clusters1709a,1709b,1709carranged as a cloud-based server system in accordance with an example embodiment. Computing clusters1709a,1709b, and1709ccan be cloud-based devices that store program logic and/or data of cloud-based applications and/or services; e.g., perform at least one function of and/or related to neural networks1000, and/or methods1800, and/or1900. In some embodiments, computing clusters1709a,1709b, and1709ccan be a single computing device residing in a single computing center. In other embodiments, computing clusters1709a,1709b, and1709ccan include multiple computing devices in a single computing center, or even multiple computing devices located in multiple computing centers located in diverse geographic locations. For example,FIG.17depicts each of computing clusters1709a,1709b, and1709cresiding in different physical locations. In some embodiments, data and services at computing clusters1709a,1709b,1709ccan be encoded as computer readable information stored in non-transitory, tangible computer readable media (or computer readable storage media) and accessible by other computing devices. In some embodiments, computing clusters1709a,1709b,1709ccan be stored on a single disk drive or other tangible storage media, or can be implemented on multiple disk drives or other tangible storage media located at one or more diverse geographic locations. InFIG.17, functionality of neural networks1000, and/or a computing device can be distributed among computing clusters1709a,1709b,1709c. Computing cluster1709acan include one or more computing devices1700a, cluster storage arrays1710a, and cluster routers1711aconnected by a local cluster network1712a. Similarly, computing cluster1709bcan include one or more computing devices1700b, cluster storage arrays1710b, and cluster routers1711bconnected by a local cluster network1712b. Likewise, computing cluster1709ccan include one or more computing devices1700c, cluster storage arrays1710c, and cluster routers1711cconnected by a local cluster network1712c. In some embodiments, each of computing clusters1709a,1709b, and1709ccan have an equal number of computing devices, an equal number of cluster storage arrays, and an equal number of cluster routers. In other embodiments, however, each computing cluster can have different numbers of computing devices, different numbers of cluster storage arrays, and different numbers of cluster routers. The number of computing devices, cluster storage arrays, and cluster routers in each computing cluster can depend on the computing task or tasks assigned to each computing cluster. In computing cluster1709a, for example, computing devices1700acan be configured to perform various computing tasks of convolutional neural network, and/or a computing device. 
In one embodiment, the various functionalities of a convolutional neural network, and/or a computing device can be distributed among one or more of computing devices1700a,1700b, and1700c. Computing devices1700band1700cin respective computing clusters1709band1709ccan be configured similarly to computing devices1700ain computing cluster1709a. On the other hand, in some embodiments, computing devices1700a,1700b, and1700ccan be configured to perform different functions. In some embodiments, computing tasks and stored data associated with a convolutional neural networks, and/or a computing device can be distributed across computing devices1700a,1700b, and1700cbased at least in part on the processing requirements of convolutional neural networks, and/or a computing device, the processing capabilities of computing devices1700a,1700b,1700c, the latency of the network links between the computing devices in each computing cluster and between the computing clusters themselves, and/or other factors that can contribute to the cost, speed, fault-tolerance, resiliency, efficiency, and/or other design goals of the overall system architecture. Cluster storage arrays1710a,1710b,1710cof computing clusters1709a,1709b, and1709ccan be data storage arrays that include disk array controllers configured to manage read and write access to groups of hard disk drives. The disk array controllers, alone or in conjunction with their respective computing devices, can also be configured to manage backup or redundant copies of the data stored in the cluster storage arrays to protect against disk drive or other cluster storage array failures and/or network failures that prevent one or more computing devices from accessing one or more cluster storage arrays. Similar to the manner in which the functions of convolutional neural networks, and/or a computing device can be distributed across computing devices1700a,1700b,1700cof computing clusters1709a,1709b,1709c, various active portions and/or backup portions of these components can be distributed across cluster storage arrays1710a,1710b,1710c. For example, some cluster storage arrays can be configured to store one portion of the data of a convolutional neural network, and/or a computing device, while other cluster storage arrays can store other portion(s) of data of a convolutional neural network, and/or a computing device. Also, for example, some cluster storage arrays can be configured to store the data of a first convolutional neural network, while other cluster storage arrays can store the data of a second and/or third convolutional neural network. Additionally, some cluster storage arrays can be configured to store backup versions of data stored in other cluster storage arrays. Cluster routers1711a,1711b,1711cin computing clusters1709a,1709b, and1709ccan include networking equipment configured to provide internal and external communications for the computing clusters. For example, cluster routers1711ain computing cluster1709acan include one or more internet switching and routing devices configured to provide (i) local area network communications between computing devices1700aand cluster storage arrays1710avia local cluster network1712a, and (ii) wide area network communications between computing cluster1709aand computing clusters1709band1709cvia wide area network link1713ato network1506. 
Cluster routers1711band1711ccan include network equipment similar to cluster routers1711a, and cluster routers1711band1711ccan perform similar networking functions for computing clusters1709band1709bthat cluster routers1711aperform for computing cluster1709a. In some embodiments, the configuration of cluster routers1711a,1711b,1711ccan be based at least in part on the data communication requirements of the computing devices and cluster storage arrays, the data communications capabilities of the network equipment in cluster routers1711a,1711b,1711c, the latency and throughput of local cluster networks1712a,1712b,1712c, the latency, throughput, and cost of wide area network links1713a,1713b,1713c, and/or other factors that can contribute to the cost, speed, fault-tolerance, resiliency, efficiency and/or other design criteria of the moderation system architecture. Example Methods of Operation FIG.18illustrates flow chart1800of operations related to detecting a cough in an audio stream. The operations may be executed by and/or used with any of computing devices1600, or other ones of the preceding example embodiments. Block1810involves performing one or more pre-processing steps on the audio stream to generate an input audio sequence comprising a plurality of time-separated audio segments. Block1820involves generating an embedding for each of the segments of the input audio sequence using an audio feature set generated by a self-supervised triplet loss embedding model, the embedding model having been trained to learn the audio feature set in a self-supervised triplet loss manner from a plurality of speech audio clips from a speech dataset. Block1830involves providing the embedding for each of the segments to a model performing cough detection inference, the model generating a probability that each of the segments of the input audio sequence includes a cough episode. Block1840involves generating cough metrics for each of the cough episodes detected in the input audio sequence. Some embodiments involve instructing a user generating the audio stream to conduct a calibration procedure in which the user is instructed to cough N times. Such embodiments also involve computing an embedding for each detected cough using the audio feature set. Such embodiments further involve computing a similarity or the equivalent between each pairwise combination of the N coughs. Such embodiments additionally involve determining a verification threshold for the model performing cough detection inference based on the computed similarities. Some embodiments involve characterizing the cough based on the cough metrics. In some embodiments, the cough metrics may include at least one of: a) a number of cough episodes per segment, b) a number of cough episodes in the input audio sequence; c) a duration of the cough episode(s) per segment; or d) a duration of the cough episode(s) in the input audio sequence. Some embodiments involve performing a cough-type classification of one or more cough episodes detected in the input data. Some embodiments involve training the self-supervised triplet loss embedding model to learn the audio feature set in the self-supervised triplet loss manner from the plurality of speech audio clips from the speech dataset, and responsively generate the audio feature set in the form of a multidimensional vector. 
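The calibration procedure described above can be sketched as follows. Cosine similarity, the fixed margin, and the random enrollment embeddings are assumptions used only to make the example concrete; any embedding produced by the audio feature set could be substituted.

```python
import itertools
import numpy as np

# Hedged sketch of the calibration embodiment described above: embed the
# user's N enrollment coughs, compare every pairwise combination, and derive
# a verification threshold. The embeddings and margin below are placeholders.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verification_threshold(cough_embeddings, margin=0.05):
    """Return a similarity threshold from pairwise enrollment similarities."""
    similarities = [cosine_similarity(a, b)
                    for a, b in itertools.combinations(cough_embeddings, 2)]
    # One simple choice: a bit below the least-similar enrollment pair.
    return min(similarities) - margin

def is_users_cough(candidate_embedding, cough_embeddings, threshold):
    """Accept a detected cough if it is similar enough to the enrollment set."""
    best = max(cosine_similarity(candidate_embedding, e) for e in cough_embeddings)
    return best >= threshold

# Example with N = 5 made-up enrollment embeddings (e.g., TRILL-style vectors).
rng = np.random.default_rng(0)
enrollment = [rng.normal(size=512) for _ in range(5)]
threshold = verification_threshold(enrollment)
print("verification threshold:", round(threshold, 3))
print("accept?", is_users_cough(rng.normal(size=512), enrollment, threshold))
```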
In some embodiments, the generating of the embedding involves applying the self-supervised triplet loss embedding model by utilizing temporal proximity in the speech data as a self-supervision signal. In some embodiments, the generating of the embedding involves applying the self-supervised triplet loss embedding model by applying knowledge distillation to the embedding model, and wherein the embedding model is further configured based on one or more of: (i) varying a number of filters in each layer of the model, (ii) reducing a size of a bottleneck layer kernel by computing a global average over pixels in each output feature map, (iii) applying a compression operator to a bottleneck layer, wherein the compression operator is based on a Singular Value Decomposition (SVD) that is configured to learn a low-rank approximation of a weight matrix associated with the bottleneck layer, or (iv) applying Quantization-Aware Training (QAT) that is configured to gradually reduce a numerical precision of weights associated with a bottleneck layer during training. FIG.19illustrates flow chart1900of operations related to detecting a non-semantic, paralinguistic event in an audio stream. The operations may be executed by and/or used with any of computing devices1600, or other ones of the preceding example embodiments. Block1910involves performing one or more pre-processing steps on the audio stream to generate an input audio sequence comprising a plurality of time-separated audio segments. Block1920involves generating an embedding for each of the segments of the input audio sequence using an audio feature set generated by a self-supervised triplet loss embedding model, the embedding model having been trained to learn the audio feature set in a self-supervised triplet loss manner from a plurality of speech audio clips from a speech dataset. Block1930involves providing the embedding for each of the segments to a model performing inference to detect the non-semantic, paralinguistic event, the model generating a probability that each of the segments of the input audio sequence includes such an event. Some embodiments involve generating metrics for each of the non-semantic paralinguistic events detected in the input audio sequence. In some embodiments, the non-semantic, paralinguistic event involves a determination of whether the audio stream contains speech from a person wearing a mask. In some embodiments, the non-semantic, paralinguistic event includes one or more of snoring, wheezing, or a hiccup. Some embodiments involve training the self-supervised triplet loss embedding model to learn the audio feature set in the self-supervised triplet loss manner from the plurality of speech audio clips from the speech dataset, and responsively generate the audio feature set in the form of a multidimensional vector. In some embodiments, the generating of the embedding involves applying the self-supervised triplet loss embedding model by utilizing temporal proximity in the speech data as a self-supervision signal.
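A hedged sketch tying together blocks 1910-1930 above is shown below: the audio stream is split into time-separated segments, each segment is embedded, and an inference model produces a per-segment event probability. Both the embedding function and the inference model here are random stand-ins for the trained models described above, and the segment length and threshold are assumptions.

```python
import numpy as np

# Hedged sketch of blocks 1910-1930: segment the stream, embed each segment,
# and score each segment with an inference model. Both models below are random
# stand-ins for the trained embedding and inference models described above.
SAMPLE_RATE = 16000          # assumed sampling rate
SEGMENT_SECONDS = 0.96       # assumed segment length

def segment_stream(audio, sample_rate=SAMPLE_RATE, segment_seconds=SEGMENT_SECONDS):
    """Block 1910: split the stream into time-separated segments."""
    hop = int(sample_rate * segment_seconds)
    return [audio[i:i + hop] for i in range(0, len(audio) - hop + 1, hop)]

def embed_segment(segment):
    """Block 1920 stand-in: would call the triplet-loss embedding model."""
    seed = int(np.abs(segment).sum() * 1000) % (2**32)
    return np.random.default_rng(seed).normal(size=2048)

def event_probability(embedding):
    """Block 1930 stand-in: would call the trained inference model."""
    return float(1.0 / (1.0 + np.exp(-embedding.mean() * 50)))

audio_stream = np.random.default_rng(1).normal(size=SAMPLE_RATE * 10)  # 10 s of noise
probabilities = [event_probability(embed_segment(s)) for s in segment_stream(audio_stream)]
flagged = [i for i, p in enumerate(probabilities) if p > 0.5]
print("per-segment probabilities:", [round(p, 2) for p in probabilities])
print("segments flagged as containing the event:", flagged)
```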
In some embodiments, the generating of the embedding involves applying the self-supervised triplet loss embedding model by applying knowledge distillation to the embedding model, and wherein the embedding model is further configured based on one or more of: (i) varying a number of filters in each layer of the model, (ii) reducing a size of a bottleneck layer kernel by computing a global average over pixels in each output feature map, (iii) applying a compression operator to a bottleneck layer, wherein the compression operator is based on a Singular Value Decomposition (SVD) that is configured to learn a low-rank approximation of a weight matrix associated with the bottleneck layer, or (iv) applying Quantization-Aware Training (QAT) that is configured to gradually reduce a numerical precision of weights associated with a bottleneck layer during training. The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The above detailed description describes various features and functions of the disclosed systems, devices, and methods with reference to the accompanying figures. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, figures, and claims are not meant to be limiting. Other embodiments can be utilized, and other changes can be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein. With respect to any or all of the ladder diagrams, scenarios, and flow charts in the figures and as discussed herein, each block and/or communication may represent a processing of information and/or a transmission of information in accordance with example embodiments. Alternative embodiments are included within the scope of these example embodiments. In these alternative embodiments, for example, functions described as blocks, transmissions, communications, requests, responses, and/or messages may be executed out of order from that shown or discussed, including substantially concurrent or in reverse order, depending on the functionality involved. Further, more or fewer blocks and/or functions may be used with any of the ladder diagrams, scenarios, and flow charts discussed herein, and these ladder diagrams, scenarios, and flow charts may be combined with one another, in part or in whole. A block that represents a processing of information may correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique.
Alternatively or additionally, a block that represents a processing of information may correspond to a module, a segment, or a portion of program code (including related data). The program code may include one or more instructions executable by a processor for implementing specific logical functions or actions in the method or technique. The program code and/or related data may be stored on any type of computer readable medium such as a storage device including a disk or hard drive or other storage medium. The computer readable medium may also include non-transitory computer readable media such as non-transitory computer-readable media that stores data for short periods of time like register memory, processor cache, and random access memory (RAM). The computer readable media may also include non-transitory computer readable media that stores program code and/or data for longer periods of time, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, compact-disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. A computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device. Moreover, a block that represents one or more information transmissions may correspond to information transmissions between software and/or hardware modules in the same physical device. However, other information transmissions may be between software modules and/or hardware modules in different physical devices. While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are provided for explanatory purposes and are not intended to be limiting, with the true scope being indicated by the following claims.
11862189
V. DETAILED DESCRIPTION Devices and methods that use a multi-stage target sound detector to reduce power consumption are disclosed. Because an always-on sound detection system that continually scans audio input to detect audio events results in relatively large power consumption, battery life is reduced when such a system is implemented in a power-constrained environment, such as a mobile device. Although power consumption can be reduced by reducing the number of audio events that the sound detection system is configured to detect, reducing the number of audio events reduces the utility of the sound detection system. As described herein, a multi-stage target sound detector supports detection of a relatively large number of target sounds of interest using relatively low power for always-on operation. The multi-stage target sound detector includes a first stage that supports binary classification of audio data between all target sounds of interest (as a group) and non-target sounds. The multi-stage target sound detector includes a second stage to perform further analysis and to categorize the audio data as including a particular one or more of the target sounds of interest. The binary classification of the first stage has low complexity and a small memory footprint, enabling low power consumption for sound event detection in an always-on operating state. The second stage includes a more powerful target sound classifier to distinguish between target sounds and to reduce or eliminate false positives (e.g., inaccurate detections of target sound) that may be generated by the first stage. In some implementations, in response to detecting one or more of the target sounds of interest in the audio data, the second stage is activated (e.g., from a sleep state) to enable more powerful processing of the audio data. Upon completion of processing the audio data at the second stage, the second stage may return to a low-power state. By using the low-complexity binary classification of the first stage for always-on operation and selectively activating the more powerful target sound classifier of the second stage, the target sound detector enables high-performance target sound classification with reduced average power consumption for always-on operation. In some implementations, a multiple-stage environmental scene detector includes an always-on first stage that detects whether or not an environmental scene change has occurred and also includes a more powerful second stage that is selectively activated when the first stage detects a change in the environment. In some examples, the first stage includes a binary classifier configured to detect whether audio data represents an environmental scene change without identifying any particular environmental scene. In other examples, a hierarchical scene change detector includes a classifier configured to detect a relatively small number of broad classes in the first stage (e.g., indoors, outdoors, and in vehicle), and a more powerful classifier in the second stage is configured to detect a larger number of more specific environmental scenes (e.g., in a car, on a train, at home, in an office, etc.). As a result, high-performance environmental scene detection may be provided with reduced average power consumption for always-on operation in a similar manner as for the multi-stage target sound detection. In some implementations, the target sound detector adjusts operation based on its environment.
For example, when the target sound detector is in the user's house, the target sound detector may use trained data associated with household sounds, such as a dog barking or a doorbell. When the target sound detector is in a vehicle, such as a car, the target sound detector may use trained data associated with vehicle sounds, such as glass breaking or a siren. A variety of techniques can be used to determine the environment, such as using an audio scene detector, a camera, location data (e.g., from a satellite-based positioning system), or combinations of techniques. In some examples, the first stage of the target sound detector activates a camera or other component to determine the environment, and the second stage of the target sound detector is “tuned” for more accurate detection of target sounds associated with the detected environment. Using the camera or other component for environment detection enables enhanced target sound detection, and maintaining the camera or other component in a low-power state until activated by the first stage of the target sound detector enables reduced power consumption. Unless expressly limited by its context, the term “producing” is used to indicate any of its ordinary meanings, such as calculating, generating, and/or providing. Unless expressly limited by its context, the term “providing” is used to indicate any of its ordinary meanings, such as calculating, generating, and/or producing. Unless expressly limited by its context, the term “coupled” is used to indicate a direct or indirect electrical or physical connection. If the connection is indirect, there may be other blocks or components between the structures being “coupled”. For example, a loudspeaker may be acoustically coupled to a nearby wall via an intervening medium (e.g., air) that enables propagation of waves (e.g., sound) from the loudspeaker to the wall (or vice-versa). The term “configuration” may be used in reference to a method, apparatus, device, system, or any combination thereof, as indicated by its particular context. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “based on” (as in “A is based on B”) is used to indicate any of its ordinary meanings, including the cases (i) “based on at least” (e.g., “A is based on at least B”) and, if appropriate in the particular context, (ii) “equal to” (e.g., “A is equal to B”). In case (i), in which “A is based on B” includes “A is based on at least B”, this may include the configuration in which A is coupled to B. Similarly, the term “in response to” is used to indicate any of its ordinary meanings, including “in response to at least.” The term “at least one” is used to indicate any of its ordinary meanings, including “one or more”. The term “at least two” is used to indicate any of its ordinary meanings, including “two or more”. The terms “apparatus” and “device” are used generically and interchangeably unless otherwise indicated by the particular context. Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa).
The terms “method,” “process,” “procedure,” and “technique” are used generically and interchangeably unless otherwise indicated by the particular context. The terms “element” and “module” may be used to indicate a portion of a greater configuration. The term “packet” may correspond to a unit of data that includes a header portion and a payload portion. Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within the portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion. As used herein, the term “communication device” refers to an electronic device that may be used for voice and/or data communication over a wireless communication network. Examples of communication devices include smart speakers, speaker bars, cellular phones, personal digital assistants (PDAs), handheld devices, headsets, wireless modems, laptop computers, personal computers, etc. FIG.1depicts a system100that includes a device102that is configured to receive an input sound and process the input sound with a multi-stage target sound detector120to detect the presence or absence of one or more target sounds in the input sound. The device102includes one or more microphones, represented as a microphone112, and one or more processors160. The one or more processors160include the target sound detector120and a buffer130configured to store audio data132. The target sound detector120includes a first stage140and a second stage150. In some implementations, the device102can include a wireless speaker and voice command device with an integrated assistant application (e.g., a “smart speaker” device or home automation system), a portable communication device (e.g., a “smart phone” or headset), or a vehicle system, as illustrative, non-limiting examples. The microphone112is configured to generate an audio signal114responsive to the received input sound. For example, the input sound can include target sound106, non-target sound107, or both. The audio signal114is provided to the buffer130and is stored as the audio data132. In an illustrative example, the buffer130corresponds to a pulse-code modulation (PCM) buffer and the audio data132corresponds to PCM data. The audio data132at the buffer130is accessible to the first stage140and to the second stage150of the target sound detector120for processing, as described further herein. The target sound detector120is configured to process the audio data132to determine whether the audio signal114is indicative of one or more target sounds of interest. For example, the target sound detector120is configured to detect each of a set of target sounds104, including an alarm191, a doorbell192, a siren193, glass breaking194, a baby crying195, a door opening or closing196, and a dog barking197, that may be in the target sound106. It should be understood that the target sounds191-197included in the set of target sounds104are provided as illustrative examples; in other implementations, the set of target sounds104can include fewer, more, or different sounds. The target sound detector120is further configured to detect that the non-target sound107, originating from one or more other sound sources (represented as a non-target sound source108), does not include any of the target sounds191-197. The first stage140of the target sound detector120includes a binary target sound classifier144configured to process the audio data132.
In some implementations, the binary target sound classifier144includes a neural network. In some examples, the binary target sound classifier144includes at least one of a Bayesian classifier or a Gaussian Mixture Model (GMM) classifier, as illustrative, non-limiting examples. In some implementations, the binary target sound classifier144is trained to generate one of two outputs: either a first output (e.g., 1) indicating that the audio data132being classified contains one or more of the target sounds191-197, or a second output (e.g., 0) indicating that the audio data132does not contain any of the target sounds191-197. In an illustrative example, the binary target sound classifier144is not trained to distinguish between each of the target sounds191-197, enabling a reduced processing load and smaller memory footprint. The first stage140is configured to activate the second stage150in response to detection of a target sound. To illustrate, the binary target sound classifier144is configured to generate a signal142(also referred to as an “activation signal”142) to activate the second stage150in response to detecting the presence of any of the multiple target sounds104in the audio data132and to refrain from generating the signal142in response to detecting that none of the multiple target sounds104are in the audio data132. In a particular aspect, the signal142is a binary signal including a first value (e.g., the first output) and a second value (e.g., the second output), and generating the signal142corresponds to generating the binary signal having the first value (e.g., a logical 1). In this aspect, refraining from generating the signal142corresponds to generating the binary signal having the second value (e.g., a logical 0). In some implementations, the second stage150is configured to be activated, responsive to the signal142, to process the audio data132, such as described further with reference toFIG.2. In an illustrative example, a specific bit of a control register represents the presence or absence of the activation signal142and a control circuit within or coupled to the second stage150is configured to read the specific bit. A “1” value of the bit indicates the signal142and causes the second stage150to activate, and a “0” value of the bit indicates absence of the signal142and that the second stage150can de-activate upon completion of processing a current portion of the audio data132. In other implementations, the activation signal142is instead implemented as a digital or analog signal on a bus or a control line, an interrupt flag at an interrupt controller, or an optical or mechanical signal, as illustrative, non-limiting examples. The second stage150is configured to receive the audio data132from the buffer130in response to the detection of the target sound106. In an example, the second stage150is configured to process one or more portions (e.g., frames) of the audio data132that include the target sound106. For example, the buffer130can buffer a series of frames of the audio signal114as the audio data132so that, upon the activation signal142being generated, the second stage150can process the buffered series of frames and generate a detector output152that indicates, for each of the multiple target sounds104, the presence or absence of that target sound in the audio data132. When deactivated, the second stage150does not process the audio data132and consumes less power than when activated.
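As a rough illustration of the control-register convention just described, the following Python sketch models the activation signal142as a single bit that the first stage sets or clears and that control logic coupled to the second stage reads; the bit position, the register interface, and the function names are assumptions chosen only for illustration, not details taken from the patent.

ACTIVATION_BIT = 0x01  # hypothetical bit position standing in for the activation signal 142

def first_stage_update(register: int, any_target_sound_detected: bool) -> int:
    # Binary classifier result: set the bit when any target sound (as a group) is detected,
    # clear it when none of the target sounds are present in the current audio portion.
    if any_target_sound_detected:
        return register | ACTIVATION_BIT
    return register & ~ACTIVATION_BIT

def second_stage_control(register: int) -> str:
    # Control logic coupled to the second stage: a 1 activates processing of the buffered
    # audio data, and a 0 allows the stage to deactivate after finishing the current portion.
    return "activate" if register & ACTIVATION_BIT else "deactivate_when_done"

An interrupt flag or a dedicated control line could take the place of the register bit without changing this overall flow.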
For example, deactivation of the second stage150can include gating an input buffer to the second stage150to prevent the audio data132from being input to the second stage150, gating a clock signal to prevent circuit switching within the second stage150, or both, to reduce dynamic power consumption. As another example, deactivation of the second stage150can include reducing a power supply to the second stage150to reduce static power consumption without losing the state of the circuit elements, removing power from at least a portion of the second stage150, or a combination thereof. In some implementations, the target sound detector120, the buffer130, the first stage140, the second stage150, or any combination thereof, are implemented using dedicated circuitry or hardware. In some implementations, the target sound detector120, the buffer130, the first stage140, the second stage150, or any combination thereof, are implemented via execution of firmware or software. To illustrate, the device102can include a memory configured to store instructions and the one or more processors160are configured to execute the instructions to implement one or more of the target sound detector120, the buffer130, the first stage140, and the second stage150. Because the processing operations of the binary target sound classifier144are less complex as compared to the processing operations performed by the second stage150, always-on processing of the audio data132at the first stage140uses significantly less power than processing the audio data132at the second stage150. As a result, processing resources are conserved, and overall power consumption is reduced. In some implementations, the first stage140is also configured to activate one or more other components of the device102. In an illustrative example, the first stage140activates a camera that is used to detect an environment of the device102(e.g., at home, outdoors, in a car, etc.), and the second stage150may be operated to focus on target sounds associated with the detected environment, such as described further with reference toFIG.6. FIG.2depicts an example200of the device102in which the binary target sound classifier144includes a neural network212, and the binary target sound classifier144and the buffer130are included in a low-power domain203, such as an always-on low power domain of the one or more processors160. The second stage150is in another power domain205, such as an on-demand power domain. In some implementations, the first stage140of the target sound detector120(e.g., the binary target sound classifier144) and the buffer130are configured to operate in an always-on mode, and the second stage150of the target sound detector120is configured to operate in an on-demand mode. The power domain205includes the second stage150of the target sound detector120, a sound context application240, and activation circuitry230. The activation circuitry230is responsive to the activation signal142(e.g., a wakeup interrupt signal) to selectively activate one or more components of the power domain205, such as the second stage150. To illustrate, in some implementations, the activation circuitry230is configured to transition the second stage150from a low-power state232to an active state234responsive to receiving the signal142. For example, the activation circuitry230may include or be coupled to power management circuitry, clock circuitry, head switch or foot switch circuitry, buffer control circuitry, or any combination thereof.
The activation circuitry230may be configured to initiate powering-on of the second stage150, such as by selectively applying or raising a voltage of a power supply of the second stage150, of the power domain205, or both. As another example, the activation circuitry230may be configured to selectively gate or un-gate a clock signal to the second stage150, such as to prevent circuit operation without removing a power supply. The second stage150includes a multiple target sound classifier210configured to generate a detector output152that indicates, for each of the multiple target sounds104, the presence or absence of that target sound in the audio data132. The multiple target sounds correspond to multiple classes290of sound events, the multiple classes290of sound events including at least two of: alarm291, doorbell292, siren293, glass breaking294, baby crying295, door opening or closing296, or dog barking297. It should be understood that the sound event classes291-297are provided as illustrative examples. In other examples, the multiple classes290include fewer, more, or different sound events. For example, in an implementation in which the device102is implemented in a vehicle (e.g., a car), the multiple classes290include sound events more commonly encountered in a vehicle, such as one or more of a vehicle door opening or closing, road noise, window opening or closing, radio, braking, hand brake engaging or disengaging, windshield wipers, turn signal, or engine revving, as illustrative, non-limiting examples. Although a single set of sound event classes (e.g., the multiple classes290) is depicted, in other implementations the multiple target sound classifier210is configured to select from among multiple sets of sound event classes based on the environment of the device102(e.g., one set of target sounds when the device102is at home, and another set of target sounds when the device102is in a vehicle), as described further with reference toFIG.6. In some implementations, the multiple target sound classifier210performs “faster than real-time” processing of the audio data132. In an illustrative, non-limiting example, the buffer130is sized to store approximately two seconds of audio data in a circular buffer configuration in which the oldest audio data in the buffer130is replaced by the most recently received audio data. The first stage140may be configured to periodically process sequentially received, 20 millisecond (ms) segments (e.g., frames) of the audio data132in a real-time manner (e.g., the binary target sound classifier144processes one 20 ms segment every 20 ms) and with low power consumption. However, when the second stage150is activated, the multiple target sound classifier210processes the buffered audio data132at a faster rate and higher power consumption to more quickly process the buffered audio data132to generate the detector output152. In some implementations, the detector output152includes multiple values, such as a bit or multi-bit value for each target sound, indicating detection (or likelihood of detection) of that target sound.
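The buffering and timing arrangement described above can be sketched in Python as follows; the 16 kHz sample rate, the exact buffer length, and the classifier callables are assumptions used only to make the example concrete, not values stated in the patent.

from collections import deque

SAMPLE_RATE = 16_000                 # assumed sampling rate
FRAME_SAMPLES = SAMPLE_RATE // 50    # one 20 ms frame = 320 samples at 16 kHz
BUFFER_FRAMES = 100                  # 100 frames * 20 ms = approximately two seconds

audio_buffer = deque(maxlen=BUFFER_FRAMES)   # circular: the oldest frame is overwritten first

def on_new_frame(frame, binary_classifier, multiple_target_classifier):
    # First stage keeps pace with real time: one cheap binary decision per 20 ms frame.
    audio_buffer.append(frame)
    if binary_classifier(frame):
        # Second stage, once activated, drains the buffered series of frames in a batch
        # (faster than real time) and produces the per-class detector output.
        buffered_frames = list(audio_buffer)
        return [multiple_target_classifier(f) for f in buffered_frames]
    return None   # second stage remains in its low-power state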
In an illustrative example, the detector output152includes a seven-bit value, with a first bit corresponding to detection or non-detection of sound classified as an alarm291, a second bit corresponding to detection or non-detection of sound classified as a doorbell292, a third bit corresponding to detection or non-detection of sound classified as a siren293, a fourth bit corresponding to detection or non-detection of sound classified as glass breaking294, a fifth bit corresponding to detection or non-detection of sound classified as a baby crying295, a sixth bit corresponding to detection or non-detection of sound classified as a door opening or closing296, and a seventh bit corresponding to detection or non-detection of sound classified as a dog barking297. The detector output152generated by the second stage150is provided to a sound context application240. The sound context application240may be configured to perform one or more operations based on the detection of one or more target sounds. To illustrate, in an implementation in which the device102is in a home automation system, the sound context application240may generate a user interface signal242to alert a user of one or more detected sound events. For example, the user interface signal242may cause an output device250(e.g., a display screen or a loudspeaker of a speech interface device) to alert the user that a barking dog and breaking glass have been detected at a back door of the building. In another example, when the user is not within the building, the user interface signal242may cause the output device250(e.g., a transmitter coupled to a wireless network, such as a cellular network or wireless local area network) to transmit the alert to the user's phone or smart watch. In another implementation in which the device102is in a vehicle (e.g., an automobile), the sound context application240may generate the user interface signal242to warn an operator of the vehicle, via the output device250(e.g., a display screen or voice interface), that a siren has been detected via an external microphone while the vehicle is in motion. If the vehicle is turned off and the operator has exited the vehicle, the sound context application240may generate the user interface signal242to warn an owner of the vehicle, via the output device250(e.g., wireless transmission to the owner's phone or smart watch), that a crying baby has been detected via an interior microphone of the vehicle. In another implementation in which the device102is integrated in or coupled to an audio playback device, such as headphones or a headset, the sound context application240may generate the user interface signal242to warn a user of the playback device, via the output device250(e.g., a display screen or loudspeaker), that a siren has been detected, or may pass the siren through for playback at a loudspeaker of the headphones or headset, as illustrative examples. Although the activation circuitry230is illustrated as distinct from the second stage150in the power domain205, in other implementations the activation circuitry230can be included in the second stage150. Although in some implementations the output device250is implemented as a user interface component of the device102, such as a display screen or a loudspeaker, in other implementations the output device250can be a user interface device that is remote from and coupled to the device102.
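A small Python sketch of the seven-bit detector output layout described earlier in this example follows; the bit ordering mirrors the text, while the function names and the dictionary interface are assumptions introduced only for illustration.

CLASS_BITS = ["alarm", "doorbell", "siren", "glass_breaking",
              "baby_crying", "door_open_or_close", "dog_barking"]   # bit 0 .. bit 6

def encode_detector_output(detections: dict) -> int:
    # Pack the per-class presence flags into a single seven-bit value, one bit per class.
    value = 0
    for bit, name in enumerate(CLASS_BITS):
        if detections.get(name, False):
            value |= 1 << bit
    return value

def decode_detector_output(value: int) -> dict:
    # Recover the per-class presence flags from the packed value.
    return {name: bool(value & (1 << bit)) for bit, name in enumerate(CLASS_BITS)}

# Example: a dog barking together with breaking glass.
packed = encode_detector_output({"dog_barking": True, "glass_breaking": True})
assert decode_detector_output(packed)["dog_barking"]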
Although the multiple target sound classifier210is configured to detect and distinguish between sound events corresponding to the seven classes291-297, in other implementations the multiple target sound classifier210can be configured to detect any other sound event in place of, or in addition to, any one or more of the seven classes291-297, and the multiple target sound classifier210can be configured to classify sound events according to any other number of classes. FIG.3depicts an implementation300in which the device102includes the buffer130and the target sound detector120and also includes an audio scene detector302. The audio scene detector302includes an audio scene change detector304and an audio scene classifier308. The audio scene change detector304is configured to process the audio data132and to generate a scene change signal306in response to detection of an audio scene change. In some implementations, the audio scene change detector304is implemented in a first stage of the audio scene detector302(e.g., a low-power, always-on processing stage) and the audio scene classifier308is implemented in a second stage of the audio scene detector302(e.g., a more powerful, high-performance processing stage) that is activated by the scene change signal306in a similar manner as the multiple target sound classifier210ofFIG.2is activated by the activation signal142. Unlike target sound detection, an audio environment is always present, and efficiency of operation of the audio scene detector302is enhanced in the first stage by detecting changes in the audio environment without incurring the computational penalty associated with identifying the exact audio environment. In some implementations, the audio scene change detector304is configured to detect a change in an audio scene based on detecting changes in at least one of noise statistics310or non-stationary sound statistics312. As an example, the audio scene change detector304processes the audio data132to determine the noise statistics310(e.g., an average spectral energy distribution of audio frames that are identified as containing noise) and the non-stationary sound statistics312(e.g., an average spectral energy distribution of audio frames that are identified as containing non-stationary sound), time-averaged over a relatively large time window (e.g., 3-5 seconds). Changes between audio scenes are detected based on determining a change in the noise statistics310, the non-stationary sound statistics312, or both. For example, noise and sound characteristics of an office environment are sufficiently distinct from the noise and sound characteristics within a moving automobile that a change from the office environment to the vehicle environment can be detected, and in some implementations the change is detected without identifying the noise and sound characteristics as corresponding to either of the office environment or the vehicle environment. In response to detecting an audio scene change, the audio scene change detector generates and sends the scene change signal306to the audio scene classifier308. The audio scene classifier308is configured to receive the audio data132from the buffer130in response to the detection of the audio scene change. In some implementations, the audio scene classifier308is a more powerful, higher-complexity processing component than the audio scene change detector304and is configured to classify the audio data132as corresponding to a particular one of multiple audio scene classes330. 
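One way to picture the statistics-based change detection described above is the following Python sketch, which tracks a single time-averaged spectral-energy statistic and flags a change when it drifts far from the previous reference; the window length, the distance measure, and the threshold are assumptions, and a fuller implementation would track the noise statistics310and the non-stationary sound statistics312separately.

import numpy as np

class SceneChangeSketch:
    def __init__(self, window_frames: int = 200, threshold: float = 0.5):
        # roughly 4 seconds of 20 ms frames (assumed window); the threshold is illustrative
        self.window_frames = window_frames
        self.threshold = threshold
        self.reference = None
        self.history = []

    def update(self, spectral_energy) -> bool:
        # Return True when the averaged statistics suggest an audio scene change.
        self.history.append(np.asarray(spectral_energy, dtype=float))
        self.history = self.history[-self.window_frames:]
        if len(self.history) < self.window_frames:
            return False
        current = np.mean(self.history, axis=0)
        if self.reference is None:
            self.reference = current
            return False
        distance = np.linalg.norm(current - self.reference) / (np.linalg.norm(self.reference) + 1e-9)
        if distance > self.threshold:
            self.reference = current   # adopt the new environment as the reference
            return True
        return False

When a change is flagged in this manner, the scene change signal306activates the more powerful audio scene classifier308, which then assigns one of the audio scene classes330.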
In one example, the multiple audio scene classes330include at home332, in an office334, in a restaurant336, in a car338, on a train340, on a street342, indoors344, and outdoors346. A scene detector output352is generated by the audio scene detector302and presents an indication of the detected audio scene, which may be provided to the sound context application240ofFIG.2. For example, the sound context application240can adjust operation of the device102based on the detected audio scene, such as changing a graphical user interface (GUI) at a display screen to present top-level menu items associated with the environment. To illustrate, navigation and communication items (e.g., hands-free dialing) may be presented when the detected environment is in a car, camera and audio recording items may be presented when the detected environment is outdoors, and note-taking and contacts items may be presented when the detected environment is in an office, as illustrative, non-limiting examples. Although the multiple audio scene classes330are described as including eight classes332-346, in other implementations the multiple audio scene classes330may include at least two of at home332, in an office334, in a restaurant336, in a car338, on a train340, on a street342, indoors344, or outdoors346. In other implementations, one or more of the classes330may be omitted, one or more other classes may be used in place of, or in addition to, the classes332-346, or any combination thereof. FIG.4depicts an implementation400of the audio scene change detector304in which the audio scene change detector304includes a scene transition classifier414that is trained using audio data corresponding to transitions between scenes. For example, the scene transition classifier414can be trained on captured audio data for office-to-street transitions, car-to-outdoor transitions, restaurant-to-street transitions, etc. In some implementations, the scene transition classifier414provides more robust change detection using a smaller model than the implementation of the audio scene change detector304described with reference toFIG.3. FIG.5depicts an implementation500in which audio scene detector302corresponds to a hierarchical detector such that the audio scene change detector304classifies the audio data132using a reduced set of audio scenes as compared to the audio scene classifier308. To illustrate, the audio scene change detector304includes a hierarchical model change detector514that is configured to detect the audio scene change based on detecting changes between audio scene classes of a reduced set of classes530. For example, the reduced set of classes530includes an “In Vehicle” class502, the indoors class344, and the outdoors class346. In some implementations, one or more (or all) of the reduced set of classes530includes or spans multiple classes used by the audio scene classifier308. To illustrate, the “In Vehicle” class502is used to classify audio scenes that the audio scene classifier308distinguishes as either “in a car” or “on a train.” In some implementations, one or more (or all) of the reduced set of classes530form a subset of the classes330used by the audio scene classifier308, such as the indoors class344and the outdoors class346. In some examples, the reduced set of classes530is configured to include two or three of the most likely encountered audio scenes for improved probability of detecting audio scene changes. 
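The hierarchical arrangement ofFIG.5can be sketched as a mapping from the reduced set of coarse classes to the finer classes of the second stage; the grouping below is an assumption that follows the examples given in the text rather than a definition from the patent.

COARSE_TO_FINE = {
    "in vehicle": ["in a car", "on a train"],
    "indoors":    ["at home", "in an office", "in a restaurant", "indoors"],
    "outdoors":   ["on a street", "outdoors"],
}

def coarse_scene_changed(previous_coarse: str, current_coarse: str) -> bool:
    # The always-on first stage only reacts to transitions between the coarse classes.
    return previous_coarse != current_coarse

def candidate_fine_scenes(current_coarse: str) -> list:
    # The activated second stage narrows its classification to the fine scenes
    # grouped under the detected coarse class; unknown input falls back to all scenes.
    all_scenes = [scene for group in COARSE_TO_FINE.values() for scene in group]
    return COARSE_TO_FINE.get(current_coarse, all_scenes)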
The reduced set of classes530includes a reduced number of classes as compared to the classes330of the audio scene classifier308. To illustrate, a first count of the audio scene classes of the reduced set of classes530(three) is less than a second count of the audio scene classes330(eight). Although the reduced set of classes530is described as including three classes, in other implementations the reduced set of classes530may include any number of classes (e.g., at least two classes, such as two, three, four, or more classes) that is fewer than the number of classes supported by the audio scene classifier308. Because the hierarchical model change detector514performs detection from among a smaller set of classes as compared to the audio scene classifier308, the audio scene change detector304can detect scene changes with reduced complexity and power consumption as compared to the more powerful audio scene classifier308. Transitions between environments that are not detected by the hierarchical model change detector514may be unlikely to occur, such as transitioning directly from “at home” to “in a restaurant” (e.g., both in the “indoors” class344) without an intervening transition to a vehicle or an outdoors environment. AlthoughFIGS.3-5correspond to various implementations in which the audio scene detector302and the target sound detector120are both included in the device102, in other implementations the audio scene detector302can be implemented in a device that does not include a target sound detector. In an illustrative example, the device102includes the buffer130and the audio scene detector302and omits the first stage140, the second stage150, or both, of the target sound detector120. FIG.6depicts a particular example600in which the device102includes a scene detector606configured to detect an environment based on at least one of a camera, a location detection system, or an audio scene detector. The device102includes one or more sensors602that generate data usable by the scene detector606in determining the environment608. The one or more sensors602include one or more cameras and one or more sensors of a location detection system, illustrated as a camera620and a global positioning system (GPS) receiver624, respectively. The camera620can include any type of image capture device and can support or include still image or video capture, visible, infrared, or ultraviolet spectrums, depth sensing (e.g., structured light, time-of-flight), any other image capture technique, or any combination thereof. The first stage140is configured to activate one or more of the sensors602from a low-power state in response to the detection of a target sound by the first stage140. For example, the signal142can be provided to the camera620and to the GPS receiver624. The camera620and the GPS receiver624are responsive to the signal142to transition from a low-power state (e.g., when not in use by another application of the device102) to an active state. The scene detector606includes the audio scene detector302and is configured to detect the environment608based on at least one of the camera620, the GPS receiver624, or the audio scene detector302. As a first example, the scene detector606is configured to generate a first estimate of the environment608of the device102at least partially based on an input signal622(e.g., image data) from the camera620.
To illustrate, the scene detector606may be configured to process the input signal622to generate a first classification of the environment608, such as at home, in an office, in a restaurant, in a car, on a train, on a street, outdoors, or indoors, based on visual features. As a second example, the scene detector606is configured to generate a second estimate of the environment608at least partially based on location information626from the GPS receiver624. To illustrate, the scene detector606may search map data using the location information626to determine whether the location corresponds to a user's home, the user's office, a restaurant, a train route, a street, an outdoor location, or an indoor location. The scene detector606may be configured to determine a speed of travel of the device102based on the location information626to determine whether the device102is traveling in a car or airplane. In some implementations, the scene detector606is configured to determine the environment608based on the first estimate, the second estimate, the scene detector output352of the audio scene detector302, and respective confidence levels associated with the first estimate, the second estimate, and the scene detector output352. An indication of the environment608is provided to the target sound detector120, and operation of the multiple target sound classifier210is at least partially based on the classification of the environment608by the scene detector606. AlthoughFIG.6depicts the device102including the camera620, the GPS receiver624, and the audio scene detector302, in other implementations one or more of the camera620, the GPS receiver624, or the audio scene detector302is omitted, one or more other sensors is added, or any combination thereof. For example, the audio scene detector302may be omitted or replaced with one or more other audio scene detectors. In other examples, the scene detector606determines the environment608solely based on the image data622from the camera620, solely based on the location information626from the GPS receiver624, or solely based on a scene detection from an audio scene detector. Although the one or more sensors602, the audio scene detector302, and the scene detector606are activated responsive to the signal142, in other implementations the scene detector606, the audio scene detector302, one or more of the sensors602, or any combination thereof, may be activated or deactivated independently of the signal142. As a non-limiting example, in a non-power-constrained environment, such as in a vehicle or a home appliance, the one or more sensors602, the audio scene detector302, and the scene detector606may maintain an active state even though no target sound activity is detected. FIG.7depicts an example700in which the multiple target sound classifier210is adjusted to focus on one or more particular classes702, of the multiple classes290of sound events, that correspond to the environment608. In the example700, the environment608is detected as “in a car,” and the multiple target sound classifier210is adjusted to give more focus to identifying target sound in the audio data132as one of the classes of the multiple classes290that are more commonly encountered in a car: siren293, glass breaking294, baby crying295, or door opening or closing296, and to give less focus to identifying target sound as one of the classes less commonly encountered in a car: alarm291, doorbell292, or dog barking297.
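A rough Python sketch of this environment-dependent focusing follows; the per-class weights and the way they bias the classifier scores are assumptions chosen only to illustrate the idea of emphasizing classes that match the detected environment.

IN_CAR_FOCUS = {                 # illustrative weights; higher means more focus
    "siren": 1.5, "glass_breaking": 1.5, "baby_crying": 1.5, "door_open_or_close": 1.5,
    "alarm": 0.5, "doorbell": 0.5, "dog_barking": 0.5,
}

def focused_scores(raw_scores: dict, environment: str) -> dict:
    # Bias the per-class scores of the multiple target sound classifier toward the
    # classes that are more commonly encountered in the detected environment.
    weights = IN_CAR_FOCUS if environment == "in a car" else {}
    return {name: score * weights.get(name, 1.0) for name, score in raw_scores.items()}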
As a result, target sound detection can be performed more accurately than in implementations in which no environmental information is used to focus the target sound detection. FIG.8depicts an example800in which the multiple target sound classifier210is configured to select a particular set of sound event classes that correspond to the environment608from among multiple sets of sound event classes. A first set of trained data802includes a first set of sound event classes812associated with a first environment (e.g., at home). A second set of trained data804includes a second set of sound event classes814associated with a second environment (e.g., in a car), and one or more additional sets of trained data, up to an Nth set of trained data808, include an Nth set of sound event classes818associated with an Nth environment (e.g., in an office), where N is an integer greater than one. In a non-limiting example, each of the sets of trained data802-808corresponds to one of the classes330(e.g., N=8). In some implementations, one or more of the sets of trained data802-808corresponds to a default set of trained data to be used when the environment is undetermined. As an example, the multiple classes290ofFIG.2may be used as a default set of trained data. In an illustrative implementation, the first set of sound event classes812corresponds to “at home” and the second set of sound event classes814corresponds to “in a car.” The first set of sound event classes812includes sound events more commonly encountered in a home, such as one or more of a fire alarm, a baby crying, a dog barking, a doorbell, a door opening or closing, and breaking glass, as illustrative, non-limiting examples. The second set of sound event classes814includes sound events more commonly encountered in a car, such as one or more of a car door opening or closing, road noise, window opening or closing, radio, braking, hand brake engaging or disengaging, windshield wipers, turn signal, or engine revving, as illustrative, non-limiting examples. In response to the environment608being detected as “at home,” the multiple target sound classifier210selects the first set of sound event classes812to classify the audio data132based on the sound event classes of that particular set (i.e., the first set of sound event classes812). In response to the environment608being detected as “in a car,” the multiple target sound classifier210selects the second set of sound event classes814to classify the audio data132based on the sound event classes of that particular set (i.e., the second set of sound event classes814). As a result, a larger overall number of target sounds can be detected by using different sets of sound events for each environment, without increasing the overall processing and memory usage for performing target sound classification for any particular environment. In addition, by using the first stage140to activate the sensors602, the scene detector606, or both, power consumption is reduced as compared to always-on operation of the sensors602and the scene detector606. Although the example800describes the multiple target sound classifier210as selecting one of the sets of sound event classes812-818based on the environment608, in some implementations each of the sets of trained data802-808also includes trained data for the binary target sound classifier144to detect the presence or absence, as a group, of the target sounds that are associated with a particular environment.
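A minimal Python sketch of this per-environment selection is shown below, assuming dictionary-based sets whose contents follow the examples in the text; in the arrangement ofFIG.8each set would correspond to trained classifier data rather than a plain list of labels.

SOUND_CLASSES_BY_ENVIRONMENT = {
    "at home":  ["fire alarm", "baby crying", "dog barking", "doorbell",
                 "door opening or closing", "breaking glass"],
    "in a car": ["car door opening or closing", "road noise", "window opening or closing",
                 "radio", "braking", "hand brake", "windshield wipers",
                 "turn signal", "engine revving"],
}
DEFAULT_CLASSES = ["alarm", "doorbell", "siren", "glass breaking",
                   "baby crying", "door opening or closing", "dog barking"]

def select_sound_event_classes(environment: str) -> list:
    # Use the set matching the detected environment; fall back to the default set
    # when the environment is undetermined.
    return SOUND_CLASSES_BY_ENVIRONMENT.get(environment, DEFAULT_CLASSES)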
In an example, the target sound detector120is configured to select, from among the sets of trained data802-808, a particular set of trained data that corresponds to the detected environment608of the device102, and to process the audio data132based on the particular set of trained data. FIG.9depicts an implementation900of the device102as an integrated circuit902that includes the one or more processors160. The integrated circuit902also includes a sensor signal input910, such as one or more first bus interfaces, to enable the audio signal114to be received from the microphone112. For example, the sensor signal input910receives the audio signal114from the microphone112and provides the audio signal114to the buffer130. The integrated circuit902also includes a data output912, such as a second bus interface, to enable sending of the detector output152(e.g., to a display device, a memory, or a transmitter, as illustrative, non-limiting examples). For example, the target sound detector120provides the detector output152to the data output912and the data output912sends the detector output152. The integrated circuit902enables implementation of multi-stage target sound detection as a component in a system that includes one or more microphones, such as a vehicle as depicted inFIG.10or11, a virtual reality or augmented reality headset as depicted inFIG.12, a wearable electronic device as depicted inFIG.13, a voice-controlled speaker system as depicted inFIG.14, or a wireless communication device as depicted inFIG.16. FIG.10depicts an implementation1000in which the device102corresponds to, or is integrated within, a vehicle1002, illustrated as a car. In some implementations, multi-stage target sound detection can be performed based on an audio signal received from interior microphones, such as for a baby crying in the car, based on an audio signal received from external microphones (e.g., the microphone112) such as for a siren, or both. The detector output152ofFIG.1can be provided to a display screen of the vehicle1002, to a mobile device of a user, or both. For example, the output device250includes a display screen that displays a notification indicating that a target sound (e.g., a siren) is detected outside the vehicle1002. As another example, the output device250includes a transmitter that transmits a notification to a mobile device indicating that a target sound (e.g., a baby's cry) is detected in the vehicle1002. FIG.11depicts another implementation1100in which the device102corresponds to or is integrated within a vehicle1102, illustrated as a manned or unmanned aerial device (e.g., a package delivery drone). Multi-stage target sound detection can be performed based on an audio signal received from one or more microphones (e.g., the microphone112) of the vehicle1102, such as for opening or closing of a door. For example, the output device250includes a transmitter that transmits a notification to a control device indicating that a target sound (e.g., opening or closing of a door) is detected by the vehicle1102. FIG.12depicts an implementation1200in which the device102is a portable electronic device that corresponds to a virtual reality, augmented reality, or mixed reality headset1202. The one or more processors160and the microphone112are integrated into the headset1202. Multi-stage target sound detection can be performed based on an audio signal received from the microphone112of the headset1202. 
A visual interface device, such as the output device250, is positioned in front of the user's eyes to enable display of augmented reality or virtual reality images or scenes to the user while the headset1202is worn. In a particular example, the output device250is configured to display a notification indicating that a target sound (e.g., a fire alarm or a doorbell) is detected external to the headset1202. FIG.13depicts an implementation1300in which the device102is a portable electronic device that corresponds to a wearable electronic device1302, illustrated as a “smart watch.” The one or more processors160and the microphone112are integrated into the wearable electronic device1302. Multi-stage target sound detection can be performed based on an audio signal received from the microphone112of the wearable electronic device1302. The wearable electronic device1302includes a display screen, such as the output device250, that is configured to display a notification indicating that a target sound is detected by the wearable electronic device1302. In a particular example, the output device250includes a haptic device that provides a haptic notification (e.g., vibrates) in response to detection of a target sound. The haptic notification can cause a user to look at the wearable electronic device1302to see a displayed notification indicating that the target sound is detected. The wearable electronic device1302can thus alert a user with a hearing impairment or a user wearing a headset that the target sound is detected. FIG.14is an illustrative example of a wireless speaker and voice activated device1400. The wireless speaker and voice activated device1400can have wireless network connectivity and is configured to execute an assistant operation. The one or more processors160, the microphone112, and one or more cameras, such as the camera620, are included in the wireless speaker and voice activated device1400. The camera620is configured to be activated responsive to the integrated assistant application1402, such as in response to a user instruction to initiate a video conference. The camera620is further configured to be activated responsive to detection, by the binary target sound classifier144in the target sound detector120, of the presence of any of multiple target sounds in the audio data from the microphone112, such as to function as a surveillance camera in response to detection of a target sound. The wireless speaker and voice activated device1400also includes a speaker1404. During operation, in response to receiving a verbal command, the wireless speaker and voice activated device1400can execute assistant operations, such as via execution of an integrated assistant application1402. The assistant operations can include adjusting a temperature, playing music, turning on lights, initiating a video conference, etc. For example, the assistant operations are performed responsive to receiving a command after a keyword (e.g., “hello assistant”). Multi-stage target sound detection can be performed based on an audio signal received from the microphone112of the wireless speaker and voice activated device1400. In some implementations, the integrated assistant application1402is activated in response to detection, by the binary target sound classifier144in the target sound detector120, of the presence of any of multiple target sounds in the audio data from the microphone112.
An indication of the identified target sound (e.g., the detector output152) is provided to the integrated assistant application1402, and the integrated assistant application1402causes the wireless speaker and voice activated device1400to provide a notification, such as to play out an audible speech notification via the speaker1404or to transmit a notification to a mobile device, indicating that a target sound (e.g., opening or closing of a door) is detected by the wireless speaker and voice activated device1400. Referring toFIG.15, a particular implementation of a method1500of multi-stage target sound detection is shown. In a particular aspect, one or more operations of the method1500are performed by at least one of the binary target sound classifier144, the target sound detector120, the buffer130, the one or more processors160, the device102, the system100ofFIG.1, the activation signal unit204, the multiple target sound classifier210, the activation circuitry230, the sound context application240, the output device250, the system200ofFIG.2, the audio scene detector302, the audio scene change detector304, the audio scene classifier308ofFIG.3, the scene transition classifier414ofFIG.4, the hierarchical model change detector514ofFIG.5, the scene detector606ofFIG.6, or a combination thereof. The method1500includes storing audio data in a buffer, at1502. For example, the buffer130ofFIG.1stores the audio data132, as described with reference toFIG.1. In a particular aspect, the audio data132corresponds to the audio signal114received from the microphone112ofFIG.1. The method1500also includes processing the audio data in the buffer using a binary target sound classifier in a first stage of a target sound detector, at1504. For example, the binary target sound classifier144ofFIG.1processes the audio data132that is stored in the buffer130, as described with reference toFIG.1. The binary target sound classifier144is in the first stage140of the target sound detector120ofFIG.1. The method1500further includes activating a second stage of the target sound detector in response to detection of a target sound by the first stage, at1506. For example, the first stage140ofFIG.1activates the second stage150of the target sound detector120in response to detection of the target sound106by the first stage140, as described with reference toFIG.1. In some implementations, the binary target sound classifier and the buffer operate in an always-on mode, and activating the second stage includes sending a signal from the first stage to the second stage and transitioning the second stage from a low-power state to an active state responsive to receiving the signal at the second stage, such as described with reference toFIG.2. The method1500includes processing the audio data from the buffer using a multiple target sound classifier in the second stage, at1508. For example, the multiple target sound classifier210ofFIG.2processes the audio data132from the buffer130in the second stage150, as described with reference toFIG.2. The multiple target sound classifier may process the audio data based on multiple target sounds that correspond to multiple classes of sound events, such as the classes290or one or more of the sets of sound event classes812-818, as illustrative, non-limiting examples. The method1500can also include generating a detector output that indicates, for each of multiple target sounds, the presence or absence of that target sound in the audio data, such as the detector output152.
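The activation step at1506can be pictured with a small state-transition sketch in Python; the state names mirror the low-power state232and active state234ofFIG.2, while the function shape and the processing callable are assumptions for illustration only.

class SecondStageState:
    LOW_POWER = "low_power"
    ACTIVE = "active"

def run_second_stage(state: str, signal_received: bool, process_buffered_audio) -> str:
    # Transition to the active state when the activation signal is received, process the
    # buffered audio data, and return to the low-power state once processing completes.
    if state == SecondStageState.LOW_POWER and signal_received:
        state = SecondStageState.ACTIVE
    if state == SecondStageState.ACTIVE:
        process_buffered_audio()
        state = SecondStageState.LOW_POWER
    return state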
In some implementations, the method1500also includes processing the audio data at an audio scene change detector, such as the audio scene change detector304ofFIG.3. In such implementations, in response to detecting an audio scene change, the method1500includes activating an audio scene classifier, such as the audio scene classifier308, and processing the audio data from the buffer using the audio scene classifier. The method1500may include classifying, at the audio scene classifier, the audio data according to multiple audio scene classes, such as the classes330. In an illustrative example, the multiple audio scene classes include at least two of: at home, in an office, in a restaurant, in a car, on a train, on a street, indoors, or outdoors. Detecting the audio scene change may be based on detecting changes in at least one of noise statistics or non-stationary sound statistics, such as described with reference to the audio scene change detector304ofFIG.3. Alternatively, or in addition, detecting the audio scene change may be performed using a classifier trained using audio data corresponding to transitions between scenes, such as the scene transition classifier414ofFIG.4. Alternatively, or in addition, the method1500can include detecting the audio scene change based on detecting changes between audio scene classes in a first set of audio scene classes (e.g., the reduced set of classes530ofFIG.5) and classifying the audio data according to a second set of audio scene classes (e.g., the classes330ofFIG.3), where a first count of the audio scene classes (e.g., 3) in the first set of audio scene classes is less than a second count of audio scene classes (e.g., 8) in the second set of audio scene classes. Because the processing operations of the binary target sound classifier are less complex as compared to the processing operations performed by the second stage, processing the audio data at the binary target sound classifier consumes less power than processing the audio data at the second stage. By selectively activating the second stage in response to detection of a target sound by the first stage, the method1500enables processing resources to be conserved and overall power consumption to be reduced. The method1500ofFIG.15may be implemented by a field-programmable gate array (FPGA) device, an application-specific integrated circuit (ASIC), a processing unit such as a central processing unit (CPU), a DSP, a controller, another hardware device, firmware device, or any combination thereof. As an example, the method1500ofFIG.15may be performed by a processor that executes instructions, such as described with reference toFIG.16. Referring toFIG.16, a block diagram of a particular illustrative implementation of a device is depicted and generally designated1600. In various implementations, the device1600may have more or fewer components than illustrated inFIG.16. In an illustrative implementation, the device1600may correspond to the device102. In an illustrative implementation, the device1600may perform one or more operations described with reference toFIGS.1-15. In a particular implementation, the device1600includes a processor1606(e.g., a central processing unit (CPU)). The device1600may include one or more additional processors1610(e.g., one or more DSPs). The processors1610may include a speech and music coder-decoder (CODEC)1608, the target sound detector120, the sound context application240, the activation circuitry230, the audio scene detector302, or a combination thereof.
The speech and music codec1608may include a voice coder (“vocoder”) encoder1636, a vocoder decoder1638, or both. The device1600may include a memory1686and a CODEC1634. The memory1686may include instructions1656that are executable by the one or more additional processors1610(or the processor1606) to implement the functionality described with reference to the target sound detector120, the sound context application240, the activation circuitry230, the audio scene detector302, or any combination thereof. The memory1686may include the buffer130. The device1600may include a wireless controller1640coupled, via a transceiver1650, to an antenna1652. The device1600may include a display1628coupled to a display controller1626. A speaker1692and the microphone112may be coupled to the CODEC1634. The CODEC1634may include a digital-to-analog converter1602and an analog-to-digital converter1604. In a particular implementation, the CODEC1634may receive analog signals from the microphone112, convert the analog signals to digital signals using the analog-to-digital converter1604, and provide the digital signals to the speech and music codec1608. The speech and music codec1608may process the digital signals, and the digital signals may further be processed by one or more of the target sound detector120and the audio scene detector302. In a particular implementation, the speech and music codec1608may provide digital signals to the CODEC1634. The CODEC1634may convert the digital signals to analog signals using the digital-to-analog converter1602and may provide the analog signals to the speaker1692. In a particular implementation, the device1600may be included in a system-in-package or system-on-chip device1622. In a particular implementation, the memory1686, the processor1606, the processors1610, the display controller1626, the CODEC1634, and the wireless controller1640are included in a system-in-package or system-on-chip device1622. In a particular implementation, an input device1630and a power supply1644are coupled to the system-on-chip device1622. Moreover, in a particular implementation, as illustrated inFIG.16, the display1628, the input device1630, the speaker1692, the microphone112, the antenna1652, and the power supply1644are external to the system-on-chip device1622. In a particular implementation, each of the display1628, the input device1630, the speaker1692, the microphone112, the antenna1652, and the power supply1644may be coupled to a component of the system-on-chip device1622, such as an interface or a controller. The device1600may include a smart speaker, a speaker bar, a mobile communication device, a smart phone, a cellular phone, a laptop computer, a computer, a tablet, a personal digital assistant, a display device, a television, a gaming console, a music player, a radio, a digital video player, a digital video disc (DVD) player, a tuner, a camera, a navigation device, a vehicle, a headset, an augmented reality headset, a virtual reality headset, an aerial vehicle, or any combination thereof. In conjunction with the described implementations, an apparatus to process an audio signal representing input sound includes means for detecting a target sound. The means for detecting the target sound includes a first stage and a second stage. The first stage includes means for generating a binary target sound classification of audio data and for activating the second stage in response to classifying the audio data as including the target sound.
For example, the means for detecting the target sound can correspond to the target sound detector120, the one or more processors160, the one or more processors1610, one or more other circuits or components configured to detect a target sound, or any combination thereof. The means for generating the binary target sound classification and for activating the second stage can correspond to the binary target sound classifier144, one or more other circuits or components configured to generate a binary target sound classification and to activate the second stage, or any combination thereof. The apparatus also includes means for buffering the audio data and for providing the audio data to the second stage in response to the classification of the audio data as including the target sound. For example, the means for buffering the audio data and for providing the audio data to the second stage can correspond to the buffer130, the one or more processors160, the one or more processors1610, one or more other circuits or components configured to buffer audio data and to provide the audio data to the second stage in response to the classification of the audio data as including the target sound, or any combination thereof. In some implementations, the apparatus further includes means for detecting an audio scene, the means for detecting the audio scene including means for detecting an audio scene change in the audio data and means for classifying the audio data as a particular audio scene in response to detection of the audio scene change. For example, the means for detecting an audio scene can correspond to the audio scene detector302, the one or more processors160, the one or more processors1610, one or more other circuits or components configured to detect an audio scene, or any combination thereof. The means for detecting an audio scene change in the audio data can correspond to the audio scene change detector304, the scene transition classifier414, the hierarchical model change detector514, one or more other circuits or components configured to detect an audio scene change in the audio data, or any combination thereof. The means for classifying the audio data as a particular audio scene in response to detection of the audio scene change can correspond to the audio scene classifier308, one or more other circuits or components configured to classify the audio data as a particular audio scene in response to detection of the audio scene change, or any combination thereof. In some implementations, a non-transitory computer-readable medium (e.g., the memory1686) includes instructions (e.g., the instructions1656) that, when executed by one or more processors (e.g., the one or more processors1610or the processor1606), cause the one or more processors to perform operations to store audio data in a buffer (e.g., the buffer130) and to process the audio data in the buffer using a binary target sound classifier (e.g., the binary target sound classifier144) in a first stage of a target sound detector (e.g., the first stage140of the target sound detector120). The instructions, when executed by the one or more processors, also cause the one or more processors to activate a second stage of the target sound detector (e.g., the second stage150) in response to detection of a target sound by the first stage and to process the audio data from the buffer using a multiple target sound classifier (e.g., the multiple target sound classifier210) in the second stage.
Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software executed by a processor, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or processor executable instructions depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions are not to be interpreted as causing a departure from the scope of the present disclosure. The steps of a method or algorithm described in connection with the implementations disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of non-transient storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal. The previous description of the disclosed implementations is provided to enable a person skilled in the art to make or use the disclosed implementations. Various modifications to these implementations will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other implementations without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the implementations shown herein and is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
11862190
PREFERRED MODE FOR CARRYING OUT THE INVENTION Hereinafter, an embodiment of the present invention will be described with reference to the drawings as appropriate. [Service Outline] FIG.1is an image diagram showing an outline of a sales support service (hereinafter referred to as “this service”) that can be realized by an information processing system including a management server1of an information processing device according to an embodiment of the present invention. First, the outline of this service to which the information processing system inFIG.2described later is applied, will be described with reference toFIG.1. This service provides information for supporting telephone sales (hereinafter referred to as “sales support information”) to a person (hereinafter referred to as “user”) who conducts sales business (hereinafter referred to as “telephone sales”) using calling means such as a telephone. Here, the concept of the “call” is not limited to the exchange of speech by a general telephone, but includes the exchange of speech and silence through communication. The “speech” means a sound (voice) that a person utters through a vocal organ, and typically includes, for example, voices and the like exchanged between a user U and a call destination C through a telephone. In addition, the “speech” in the present specification includes various sounds that may be generated in connection with a call, for example, an on-hold tone, ambient noise, and the like. As shown inFIG.1, in this service, analysis software (first invention) and call hardware (second invention) are used. By using the analysis software, it is possible to analyze and assess the contents of the telephone sales of the user, and by using the call hardware, telephone sales by the user becomes possible. As a result, it is possible to increase profits and reduce costs both qualitatively and quantitatively. [First Invention] (Analysis Software) In this service, information on a call between the user and a person whom the user calls or receives a call from (hereinafter referred to as a “call destination”) is stored as call information and made into big data. The call information made into big data is subjected to analysis by AI (artificial intelligence), and sales support information is generated based on the result of the analysis. By using the analysis software in this service, all incoming and outgoing logs can be stored in a cloud (the management server1inFIG.2) and saved, so that a huge amount of call information can be stored as big data. As a result, the big data can be utilized for processing using AI (artificial intelligence). Specifically, for example, AI (artificial intelligence) can feed back a user's call in real time, so that the quality of telephone sales can be improved, and the contract rate can be improved. In addition, in the course of business, a person who manages users (hereinafter referred to as a “manager”), such as a person who is in a managerial position or a supervisor, can monitor the operating status of all the users, who are managed, in real time, and can therefore provide accurate instructions and training with “good points” and “bad points”. Since the history of telephoning is automatically created, it is possible to easily access the call information in which customer data and contract information are linked. Therefore, this service can be linked with customer relationship management (CRM). 
Since this service can be linked with a database or the like separately managed in a company, collective management of in-house systems can be realized. Call information made into big data can also be tagged with keywords. That is, by using speech recognition, when the appearance of a pre-registered keyword has been detected, the detected keyword and information on the location of its appearance can be appended. Moreover, it is possible to analyze the ratio between the speaking time of the user and that of the call destination (Talk:Listen ratio), an overlapping count, a silence count, a speech speed (hereinafter referred to as a “speaking speed”), speech recognition results, an automatically summarized document, and the like. It is also possible to analyze the contents of a call. Since the contents of a call can be transcribed, the user can focus on the conversation with the call destination without inputting or taking notes. Fillers (e.g., stammering sounds such as “uh” and “um”) in sentences of speech recognition results can be identified and removed. As a result, the readability of the speech recognition results can be improved. A specific example in which fillers in sentences of speech recognition results are identified and removed will be described later with reference toFIG.10. As a result, the user using this service can solve the following existing problems by utilizing the sales support information. That is, the problems solved by utilizing the sales support information are as follows: “it is unknown how to conduct telephone sales because know-how for telephone sales has not been accumulated”, “the cause of a missed order (hereinafter referred to as a ‘lost order’) is not investigated”, “it is not possible to transmit nuance or personality to a call destination in detail”, and “it is troublesome to call a customer while checking customer information”. The manager can solve the following existing problems by utilizing the sales support information. That is, the problems solved by utilizing the sales support information are as follows: “it is not possible to identify by whom and why a lost order has occurred”, “there is no way for other users to efficiently learn the conversation skills of a user with excellent sales performance”, and “when trouble occurs, it is difficult to check past call records”. Further, according to this service, since operations such as changing various settings are easy, the problem that “changing the incoming call setting in the absence of the user or outside business hours is troublesome” can also be easily solved. The sales support information provided to the user of this service is “visualized” by a dashboard function using graphs and specific numerical values. Thus, it is possible to analyze all users' calls. Specifically, for example, although not shown in the drawings, it is possible to compare the performance of each salesperson (user) in charge, to compare one's own numerical values with those of a call in which a business negotiation succeeded, and to see which indicators diverge in comparison with another salesperson (user) who has similar business negotiation strategies. This allows users to cooperate with each other or engage in friendly competition to improve the productivity of the entire organization. As described above, according to this service, when training users who are managed, the manager (not shown) can train the users inexpensively and efficiently by utilizing the sales support information.
In addition, the user can utilize the support information in real time in a call with a call destination. As a result, it is possible to improve the contract rate while reducing the cost of training the user (salesperson). (System Configuration) The configuration of the information processing system that realizes the provision of this service shown inFIG.1will be described.FIG.2shows the configuration of the information processing system including the management server1of the information processing device according to the embodiment of the present invention. The information processing system shown inFIG.2includes the management server1, a dedicated communication device2, a user terminal3, a speech server (PBX/Private Branch eXchanger)4, and a call destination terminal5. The management server1, the dedicated communication device2, the user terminal3, and the speech server (PBX)4are connected to each other via a predetermined network N such as the Internet. The speech server (PBX)4is connected to the call destination terminal5via a telephone network T. (Management Server) The management server1is an information processing device managed by a service provider (not shown). The management server1executes various processes for realizing this service while appropriately communicating with the dedicated communication device2, the user terminal3, and the speech server (PBX)4. Specifically, the management server1detects sections where speech exists (hereinafter referred to as “speaking sections”) VS1to VSn (n is an integer value of 1 or more) from call information recorded in a call between the user U and the call destination C, and extracts speech information VI1to VIm (m is an integer value of 1 or more) for the speaking sections VS1to VSn, respectively. For each of the extracted speech information VI1to VIm, voice, an on-hold tone, and other noises are discriminated. A specific method for discriminating these is not limited. For example, it may be discriminated by machine learning or deep learning using a signal processing technique or AI (artificial intelligence). Hereinafter, when it is not necessary to distinguish between the speaking sections VS1to VSv, these sections are collectively referred to as a “speaking section VS”. Further, when it is not necessary to distinguish between the speech information VI1to VIm, these are collectively referred to as “speech information VI”. The management server1performs analysis based on elements E1to Ep (p is an integer value of 1 or more) based on the extracted speech information VI, and generates sales support information based on the result of the analysis. Hereinafter, when it is not necessary to distinguish the elements E1to Ep, these elements are collectively referred to as an “element E”. Note that the content of the element E is not limited. For example, when analysis is performed using information on “on-hold tone” as an element E, the extracted speech information VI is analyzed for the duration and count of on-hold tones. When analysis is performed using information on “locations where only the user U is speaking” as an element E, the extracted speech information VI is analyzed for the duration, the count, or the contents of the locations where the user U is speaking. When analysis is performed using information on “locations where only the call destination C is speaking” as an element E, the extracted speech information VI is analyzed for the duration, the count, or the contents of the locations where the call destination C is speaking. 
When analysis is performed using information on “locations where overlapping occurs” as an element E, the extracted speech information VI is analyzed for the duration, the count, or the contents of the locations where the speaking of the user U and that of the call destination C simultaneously occur (overlap). When analysis is performed using information on “locations where silence occurs” as an element E, the extracted speech information VI is analyzed for the duration and count of the locations where neither the user U nor the call destination C is speaking (silent locations). The management server1presents the generated sales support information to the user U. The management server1simply executes control for transmitting the sales support information to the user terminal3. Then, the user terminal3outputs acquired sales support information, and the user recognizes the sales support information. In this sense, in the present specification, the management server1can present generated sales support information to the user U. (Dedicated Communication Device) The dedicated communication device2controls making calls from the user U to the call destination C, and receiving calls from the call destination C to the user U. The dedicated communication device2may include an independent housing, or some or all of the functions may be mounted on the user terminal3(e.g., the PC drawn inFIG.2) described later. The dedicated communication device2may be mounted on a headset of the user U (e.g., the headset drawn inFIG.2). The aspect of the dedicated communication device2will be described later in the description of a second invention. (User Terminal) The user terminal3is an information processing device operated by the user U to conduct telephone sales, and is composed of, for example, a personal computer, a smartphone, a tablet, or the like. The user terminal3displays sales support information generated by the management server1. As a result, the user U can utilize the sales support information displayed on the user terminal3in his/her own telephone sales. Various application programs (hereinafter referred to as an “app”) for receiving the provision of this service are installed in the user terminal3. In the following description, unless otherwise specified, “the user U operates the user terminal3” means that the user U activates apps installed in the user terminal3to perform various operations. (Speech Server (PBX)) The speech server4functions as an exchange that enables calls between the dedicated communication device2and the call destination terminal5by connecting the network N and the telephone network T to each other. When the call destination C calls the user U, the speech server4transmits a message indicating this (hereinafter referred to as an “incoming call notification message”) to an app of the dedicated communication device2. The speech server4transmits an incoming call notification message to a code snippet (hereinafter referred to as “beacon”) incorporated in a website and a software development kit (SDK). (Call Destination Terminal) The call destination terminal5is an information processing terminal operated when the call destination C calls the user U, and is composed of, for example, a smartphone, a fixed phone, or the like. 
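Returning to the processing in the management server1described above: the detection of the speaking sections VS, the extraction of the speech information VI, and the element-based analysis of overlapping and silent locations can be illustrated with the following Python sketch. This is a minimal, non-authoritative illustration only; it assumes that each party's audio is available as a separate array of samples, it uses a simple short-time energy threshold where the actual service may instead use signal processing or machine learning (AI) to discriminate voice, on-hold tones, and other noises, and the frame length, threshold, and function names are assumptions introduced for this example.

import numpy as np

def detect_speaking_sections(samples, sample_rate, frame_ms=30, energy_threshold=1e-4):
    """Illustrative detection of speaking sections VS as (start_sec, end_sec) intervals.

    A frame is treated as "speaking" when its mean energy exceeds a threshold;
    a learned voice / on-hold tone / noise discriminator could replace this test.
    """
    frame_len = max(1, int(sample_rate * frame_ms / 1000))
    n_frames = len(samples) // frame_len
    sections, start = [], None
    for i in range(n_frames):
        frame = np.asarray(samples[i * frame_len:(i + 1) * frame_len], dtype=np.float64)
        t = i * frame_ms / 1000.0
        if float(np.mean(frame ** 2)) > energy_threshold:
            if start is None:
                start = t
        elif start is not None:
            sections.append((start, t))
            start = None
    if start is not None:
        sections.append((start, n_frames * frame_ms / 1000.0))
    return sections

def extract_speech_information(samples, sample_rate, sections):
    """Slice the raw audio into pieces of speech information VI, one per speaking section."""
    return [samples[int(s * sample_rate):int(e * sample_rate)] for s, e in sections]

def overlapping_locations(user_sections, dest_sections):
    """Element E: locations where the user U and the call destination C speak simultaneously."""
    overlaps = []
    for us, ue in user_sections:
        for ds, de in dest_sections:
            start, end = max(us, ds), min(ue, de)
            if start < end:
                overlaps.append((start, end))
    return overlaps

def silent_locations(user_sections, dest_sections, call_duration, min_len=1.0):
    """Element E: locations of at least min_len seconds where neither party speaks."""
    spoken = sorted(list(user_sections) + list(dest_sections))
    silences, cursor = [], 0.0
    for start, end in spoken:
        if start - cursor >= min_len:
            silences.append((cursor, start))
        cursor = max(cursor, end)
    if call_duration - cursor >= min_len:
        silences.append((cursor, call_duration))
    return silences

def talk_listen_ratio(user_sections, dest_sections):
    """Talk:Listen ratio as percentages, e.g. (63, 37)."""
    talk = sum(e - s for s, e in user_sections)
    listen = sum(e - s for s, e in dest_sections)
    total = talk + listen
    if total == 0:
        return (0, 0)
    pct = round(100 * talk / total)
    return (pct, 100 - pct)

The counts and durations obtained from such intervals correspond to the element E analyses described above, and the Talk:Listen ratio computed here reappears in the telephoning assessment ofFIG.5described later.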
Since the information processing system including the management server1has the above-described configuration, in the course of business, when training users who are managed, the manager can train the users inexpensively and efficiently by utilizing the sales support information. The user can utilize the support information in real time in a call with the call destination. As a result, it is possible to improve the contract rate while reducing the cost of training the user (salesperson). (Hardware Configuration) FIG.3is a block diagram showing an example of the hardware configuration of the management server1constituting the information processing system inFIG.2. The management server1includes a CPU (central processing unit)11, a ROM (read only memory)12, a RAM (random access memory)13, a bus14, an input/output interface15, an output unit16, an input unit17, a storage unit18, a communication unit19, and a drive20. The CPU11executes various processes according to a program recorded in the ROM12or a program loaded from the storage unit18into the RAM13. In the RAM13, data required for the CPU11to perform various processes is also stored as appropriate. The CPU11, the ROM12and the RAM13are connected to each other via the bus14. The input/output interface15is also connected to the bus14. The output unit16, the input unit17, the storage unit18, the communication unit19, and the drive20are connected to the input/output interface15. The output unit16is composed of a liquid crystal display or the like, and displays various images. The input unit17is composed of various hardware buttons and the like, and inputs various information according to an instruction operation of an operator. The storage unit18is composed of a DRAM (dynamic random access memory) or the like, and stores various data. The communication unit19controls communication with other devices (the dedicated communication device2, the user terminal3, and the speech server (PBX)4) via the network N including the Internet. The drive20is provided as necessary. A removable medium30composed of a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is appropriately mounted in the drive20. A program read from the removable medium30by the drive20is installed in the storage unit18as necessary. The removable medium30can store various data stored in the storage unit18in the same manner as the storage unit18. Although not shown, in the information processing system inFIG.2, the dedicated communication device2, the user terminal3, the speech server (PBX)4, and the call destination terminal5each also have the hardware configuration shown inFIG.3. In this regard, however, when the dedicated communication device2, the user terminal3, and the call destination terminal5each are composed of a smartphone or a tablet, touch panels are provided as the output unit16and the input unit17. This collaboration between various hardware and software of the management server1inFIG.1enables the management server1to perform various processes such as sales support processing. As a result, a service provider (not shown) can provide the above-described service to the user U. The sales support processing refers to a process of generating and presenting sales support information to the user U. Hereinafter, functional components for executing the sales support processing will be described. 
(Functional Components) (Management Server) FIG.4is a functional block diagram showing functional components for executing sales support processing among the functional components of the information processing system including the management server1inFIG.3. As shown inFIG.4, in the CPU11in the management server1, when the execution of sales support processing is controlled, an acquiring unit101, an extracting unit102, an analyzing unit103, a generating unit104, and a presenting unit105function. The acquiring unit101acquires information recorded in a call between a user and a call destination as call information. Specifically, the acquiring unit101acquires information recorded in a call between the user U and the call destination C as call information. The call information acquired by the acquiring unit101is stored and managed in a call database181. The extracting unit102detects speaking sections in which speech exists from the acquired call information, and extracts speech information for each speaking section. Specifically, the extracting unit102detects the speaking sections VS1to VSn from the call information acquired by the acquiring unit101, and extracts the speech information VI1to VIm from the speaking sections VS1to VSv, respectively. The analyzing unit103performs analysis based on one or more elements, based on the extracted one or more pieces of the speech information. Specifically, the analyzing unit103performs analysis based on the elements E1to Ep, based on the speech information VI1to VIm extracted by the extracting unit102. As described above, the analyzing unit103can perform analysis using information on “on-hold tone”, “locations where only the user U is speaking”, “locations where only the call destination C is speaking”, “locations where overlapping occurs”, “locations where silence occurs”, and the like as elements E. For example, when performing analysis using information on “on-hold tone” as an element E, the speech of the user U and the speech of the call destination C included in speech information VI are distinguished from on-hold tones, and the count and duration of locations where a call is put on hold are identified. Further, for example, the analyzing unit103can determine the degree of emotion of the user U and the call destination C based on the elements E1to Ep, and can add the determination result to the analysis result. In this case, by including a video relay server (not shown) in addition to the speech server (PBX)4in the configuration of the information processing system, it is also possible to determine the degree of emotion of the user U and the call destination C from a captured moving image. Further, for example, the analyzing unit103may improve the accuracy of analysis by considering search results including fluctuation in analysis candidates in the analysis. The results of analysis by the analyzing unit103are stored and managed in an analysis result database182. The generating unit104generates support information that supports calls of the user based on the results of analysis. Specifically, the generating unit104generates sales support information based on the results of the analysis by the analyzing unit103. The details of the sales support information generated by the generating unit104will be described later with reference to a specific example shown inFIG.5. The presenting unit105presents the generated support information to the user. 
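As a purely illustrative sketch, and not the patent's implementation, the five functional blocks above can be pictured as a simple pipeline. The class and method names below are assumptions introduced for this example; in practice the units would wrap the call database181, the analysis result database182, and the transmission to the user terminal3.

class SalesSupportPipeline:
    """Sketch of the FIG.4 functional blocks chained in order."""

    def __init__(self, acquiring_unit, extracting_unit, analyzing_unit,
                 generating_unit, presenting_unit):
        self.acquiring_unit = acquiring_unit      # stores call information (call database 181)
        self.extracting_unit = extracting_unit    # detects speaking sections VS, extracts VI
        self.analyzing_unit = analyzing_unit      # analyzes VI by elements E (database 182)
        self.generating_unit = generating_unit    # turns analysis results into support info
        self.presenting_unit = presenting_unit    # transmits support info to the user terminal

    def run(self, raw_call):
        call_info = self.acquiring_unit.acquire(raw_call)
        speech_info = self.extracting_unit.extract(call_info)
        analysis_results = self.analyzing_unit.analyze(speech_info)
        support_info = self.generating_unit.generate(analysis_results)
        self.presenting_unit.present(support_info)
        return support_info

In the actual configuration, the present step corresponds to controlling the transmission of the generated information to the user terminal3, as described next.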
Specifically, the presenting unit105controls the transmission of the sales support information generated by the generating unit104to the user terminal3. (Dedicated Communication Device) When the management server1controls execution of the sales support processing, a speech input/output unit201and a control unit202function in the dedicated communication device2. The speech input/output unit201inputs and outputs speech. The control unit202controls various functions of the dedicated communication device2. Specifically, for example, the control unit202controls input/output of speech and communication in the dedicated communication device2. (User Terminal) When the management server1controls the execution of the sales support processing, an app control unit301functions in the user terminal3. The app control unit301controls the installation, activation, and termination of various apps in the user terminal3. Specifically, for example, the app control unit301controls the installation, activation, and termination of a web app311, a desktop app312, and a mobile app313. In this regard, the web app311is an app used through the network N. The desktop app312is an app that operates in the desktop environment of the user terminal3, and operates by being installed in the user terminal3. The mobile app313is an app designed to operate on smartphones, tablets, or other mobile terminals. (Speech Server (PBX)) When the management server1controls the execution of the sales support processing, a communication forwarding unit401and a control unit402function in the speech server (PBX)4. The communication forwarding unit401forwards communication information transmitted from the dedicated communication device2to the call destination terminal5, and forwards communication information transmitted from the call destination terminal5to the dedicated communication device2. The control unit402controls the forwarding of communication information by the communication forwarding unit401. The information processing system that includes the management server1, the dedicated communication device2, the user terminal3, and the speech server (PBX)4with the above-described functional components can execute the above-described sales support processing. As a result, in the course of business, when training the user who is managed, the manager can train the user inexpensively and efficiently by utilizing the sales support information. In addition, the user can utilize the support information in real time in a call with the call destination. As a result, it is possible to improve the contract rate while reducing the cost of training the user (salesperson). Specific Example A specific example of the sales support information generated by the management server1will be described with reference toFIGS.5to7.FIG.5shows a specific example of sales support information generated by the management server1. The sales support information shown inFIG.5is displayed on the user terminal3so as to be visible to the user U who performs telephone sales. As shown inFIG.5, the sales support information is composed of display areas F1and F2. The display area F1can display a search button for performing normal search or fuzzy search (fluctuation) and the history of the most recent call with the call destination C by each of users U1to Ur (r is an integer value of 1 or more). This makes it possible to search from various angles, and to easily confirm which user talked to which call destination C, when (year/month/day/hour/minute/second), about what, and how.
In the example shown inFIG.5, it is understood that the user U2conducts telephone sales to a person in charge “OO” of “OO Corporation” at “10:04” on “Oct. 25, 2018”, and the duration of the call is “1:56 (1 minute 56 seconds)”. It is understood that the user U3conducts telephone sales to a person in charge “OO” of “OO Co., Ltd.” at “09:03” on “Oct. 25, 2018”, and the duration of the call is “2:12 (2 minutes and 12 seconds)”. It is understood that the user U4conducts telephone sales to a person in charge “OO” of “OO Clinic” at “08:57” on “Oct. 25, 2018”, and the duration of the call is “2:02 (2 minutes and 2 seconds)”. Other examples of the history of the most recent call with the call destination C by each of users U1to Ur are as shown in the display area F1ofFIG.5. The display area F2displays registered telephoning memo items, a button B51displayed as “add telephoning memo” (hereinafter referred to as “telephoning memo addition button B51”), the results of analyzing call information based on a plurality of elements E (hereinafter referred to as “call analysis results”), and comments including information related to the call information. Here, “telephoning memo” refers to a brief memo created using pre-registered items after the end of a call. The telephoning memo can be registered in association with the call information. As a result, it is easy to manage call information, so that it is possible to easily perform after-the-fact check. The function of registering a telephoning memo is hereinafter referred to as a “telephoning memo function”. The telephoning memo function is not a function to register the content of a memo inputted as free words, but rather a function to register preset standardized sentences in addition to the content of a memo inputted as free words. Therefore, since the user U can immediately register one or more telephoning memos after the end of a call, the time cost required for registering the telephoning memos can be minimized. As a result, it is possible to avoid the occurrence of a situation such as “I couldn't leave a memo because I didn't have time”. For example, if the content of a call is that “an appointment was successfully acquired”, the user U selects and registers an item such as “appointment successfully acquired” from preset telephoning memo items. The telephoning memo function may be intended for managers. In other words, only managers may perform the setting and registration of telephoning memos. In this case, it can be utilized as a reliable telephoning memo reviewed by a manager. Alternatively, the telephoning memo function may be a function that can be used by people other than a manager. That is, even people other than a manager can register telephoning memos. In this case, a person in charge can register a telephoning memo as a memo created at the end of a call where the memory of the content of the call is clearest. In the example ofFIG.5, as registered telephoning memo items, an icon indicating “appointment successfully acquired” and an icon indicating “other company's service X being used” are displayed. In this case, a telephoning memo “appointment successfully acquired” and a telephoning memo “other company's service X being used” are registered in the call information. The telephoning memo addition button B51is a button that is pressed when an additional telephoning memo is registered in the call information. 
When the telephoning memo addition button B51is pressed, an operation screen for selecting and registering a telephoning memo (hereinafter referred to as “telephoning memo selection registration screen”) is displayed. A registered telephoning memo can be deleted (unregistered) by performing a predetermined operation. A specific example of the telephoning memo selection registration screen will be described later with reference toFIG.6. In the “call analysis results”, a graph in which the call information is visualized (hereinafter referred to as a “speech graph”), an assessment of telephoning, an assessment of speech, the speaking speed, the time and the number of times a given keyword appeared during the call, and comments from another user U and from AI (artificial intelligence) are displayed. In the speech graph, the call information between the user U1and the call destination C (person in charge OO of OO sports) is visualized between 15:25 on Oct. 25, 2018 and 15:27 on Oct. 25, 2018. The speech graph is a graph in which the horizontal axis represents call time, the vertical axis (upper) represents the output amount of the speech of the user U1, and the vertical axis (lower) represents the output amount of the speech of the call destination C. A solid line L1represents the speech of the user U1, and a dotted line L2represents the speech of the call destination C. From the solid line L1and the dotted line L2, it is understood that basically, while the user U1speaks, the call destination C does not speak (listening silently), and while the call destination C speaks, the user U1does not speak (listening silently). Here, the location indicated by Z3is a state in which both simultaneously speak (overlapping), and it appears that the user U1began to speak before the call destination C had finished speaking. The locations indicated by Z1and Z2are periods during which both parties are not speaking (periods of silence). The locations indicated by P1and P2are locations where a given keyword appeared. In the speech graph, as shown inFIG.5, various buttons displayed as “playback”, “stop”, “comments”, “playback speed”, and “download” are arranged. Since the buttons displayed as “playback”, “stop”, and “playback speed” are arranged, playback and stopping of the call, and changing of the playback speed, can be freely performed. In addition, the button displayed as “comments” is arranged so that the user can view comments related to the call and write his/her own. Further, since the button displayed as “download” is arranged, the call information can be freely downloaded and saved. Further, although not shown, it is also possible to jump to a “bookmark” and play it back. The assessment of telephoning (the “telephoning assessment” inFIG.5) is indicated by “total score”, “Talk:Listen ratio”, “silence count”, “overlapping count”, and “keyword count”. In the example shown inFIG.5, it is understood that the total score is “4.7”, the Talk:Listen ratio is “63(%):37(%)”, the silence count is “2 (Z1and Z2in the speech graph)”, the overlapping count is “1 (Z3in the speech graph)”, and the keyword count is “2 (P1and P2in the speech graph)”. As a result, the user U1can check, for example, whether he/she talked too much or whether the explanation was insufficient, from the numerical values displayed in the “Talk:Listen ratio”.
In addition, from the numerical value displayed in the “silence count”, the user U1can infer, for example, that his/her conversation skill was inexperienced, the possibility that he/she has made the call destination C feel uneasy or uncomfortable, etc. From the numerical value displayed in the “overlapping count”, the user U1can check, for example, the possibility that he/she has made the call destination C feel uncomfortable by interrupting the call destination C before the call destination C has finished speaking. From the “keyword count”, the user U1can check, for example, whether the name of a new product, a merit or risk for the call destination C, etc. have been properly communicated to the call destination C. The assessment of speech (“speech assessment” inFIG.5) is indicated by “basic frequency (user)”, “basic frequency (call destination)”, “inflection strength (user)”, and “inflection strength (call destination)”. In the example shown inFIG.5, it is understood that the basic frequency (user) is “246.35 Hz”, the basic frequency (call destination) is “86.94 Hz”, the inflection strength (user) is “0.3”, and the inflection strength (call destination) is “0.1”. As a result, the user U1can check, for example, whether he/she talked calmly, whether he/she did not unnecessarily excite the call destination C, and whether he/she took care to calm the excited call destination C, by comparing the numerical values of the “basic frequency” and the “inflection strength” of the user U1with those of the call destination C. The “speaking speed” is indicated by the number of letters (or the number of words) uttered within one second for each of the user U1and the call destination C. In the example shown inFIG.5, it is understood that the speaking speed of the user U1was “10.30 letters/second” and the speaking speed of the call destination C was “6.08 letters/second”. That is, it is understood that the user U1spoke at a much higher speed. As a result, the user U1can check whether he/she spoke too fast and too much and whether he/she made the call destination C speak calmly. The “keyword appearance” is indicated by the time and the number of times a given keyword appeared for each of the user U1and the call destination C. At this time, even if the result of speech recognition is incorrect because the speech is unclear, or the output is incorrect because it is a word that is not commonly used, such as an internal company term, a keyword can be detected by recognizing a phoneme sequence similar to a given keyword. The algorithm used for recognition of similar phoneme sequences is not limited. Specifically, for example, matching can be performed by a method using a Levenshtein distance (modified distance). In the example shown inFIG.5, it is understood that the time at which the keyword appeared is the time (P1) at which “1:23 (1 minute 23 seconds)” has elapsed after the start of the call and the time (P2) at which “1:36 (1 minute 36 seconds)” has elapsed after the start of the call. As a result, the user U1can check, for example, whether he/she has properly conveyed the name of a new product as a keyword, or whether he/she has been able to impress the name of the new product by making the call destination C speak the name of the new product. As described above, in the “comments”, comments including information related to the call information between the user U1and the call destination C are displayed. 
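Returning to the keyword detection described above, which tolerates misrecognized or uncommon words (such as internal company terms) by matching similar phoneme sequences using a Levenshtein distance: the following is a minimal sketch of that idea. The recognized input is treated here as a plain character or phoneme sequence, and the fixed window size and allowed distance are assumptions for this example rather than values taken from the description.

def levenshtein(a, b):
    """Classic edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def find_keyword(recognized, keyword, max_distance=1):
    """Return indices where `keyword` approximately appears in `recognized`.

    `recognized` can be a character string or a phoneme sequence; allowing a
    small edit distance tolerates slightly misrecognized keywords.
    """
    hits = []
    k = len(keyword)
    for i in range(len(recognized) - k + 1):
        if levenshtein(recognized[i:i + k], keyword) <= max_distance:
            hits.append(i)
    return hits

Turning back to the “comments” in the call analysis results mentioned above: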
Specifically, a comment from another user U and a comment from AI (artificial intelligence) are displayed. This makes it possible not only to share information and know-how among the users U, but also to easily acquire accurate sales support information based on analysis results. In the example shown inFIG.5, at 22:58 (hour:min) on Oct. 27, 2018, a comment stating “With an internal transfer, the person in charge seems to have replaced OO from OO in the 1:00 location” has been posted. This comment was presented to the user U1as sales support information because it was found from the analysis result of the speech information that the call destination C was replaced when exactly one minute has elapsed from the start of the call, and that the reason therefor is an internal transfer. At 23:00 (hour:min) on Oct. 27, 2018, a comment stating “According to 2:35-3:00, they are currently using other company's service, but are dissatisfied with the service, and thus they are considering introducing our service. According to 5:00, the maximum number of users is expected to be 1300” has been posted. This comment was presented to the user U1as sales support information because it was found from the analysis result of the speech information that the call destination C was considering changing the currently used other company's service to the service of the user U1's company because they are dissatisfied with the currently used service, and that the maximum number of users is expected to be 1300. In this manner, the user U1can easily check the sales support information shown inFIG.5by operating the user terminal3. Therefore, the user U1can perform self-coaching by utilizing the support information in real time or after the fact in a call with the call destination C. In addition, in the course of business, when training the user U1, the manager can train the user U1inexpensively and efficiently by utilizing the sales support information. Thus, it is possible to improve the contract rate while reducing the cost of training the user U1(salesperson). FIG.6shows a specific example of the telephoning memo selection registration screen. When a call is completed, or when the telephoning memo addition button B51inFIG.5is pressed, for example, the “telephoning memo selection registration screen” as shown inFIG.6is displayed. The user U can select a corresponding item from one or more items displayed on the telephoning memo selection registration screen and register it in the call information. The telephoning memo selection registration screen is composed of display areas F3and F4. In the display area F3, each of preset items is displayed together with a check box T31. In the example ofFIG.6, the following items are displayed: appointment successfully acquired, absence of the person in charge, callback, resignation of the person in charge, continuous follow-up, no needs, other company's service X being used, and other company's service Y being used. Among the items exemplified inFIG.6, the “appointment successfully acquired” is an item that can be registered as a telephoning memo when an appointment of the call destination C is acquired, as described above. The “absence of the person in charge” is an item that can be registered as a telephoning memo when the person in charge of the call destination C is absent. The “callback” is an item that can be registered as a telephoning memo when the call destination C wants to call back because of the absence of the person in charge or the like. 
The “resignation of the person in charge” is an item that can be registered as a telephoning memo when the person in charge of the call destination C has resigned. The “continuous follow-up” is an item that can be registered as a telephoning memo when it is determined that continuous follow-up with the call destination C is necessary. The “no needs” is an item that can be registered as a telephoning memo when it is determined that there are no needs for the call destination C. The “other company's service X is being used” is an item that can be registered as a telephoning memo when it is found that the call destination C uses the service X that has already been provided by a competitor. The “other company's service Y being used” is an item that can be registered as a telephoning memo when it is found that the call destination C uses the service Y that has already been provided by a competitor. The user U can register an item as a telephoning memo in the call information only by performing an operation of selecting a check box T31(check) displayed together with each item. As described above, the item registered as a telephoning memo is displayed as an icon in the display area F2of the call information shown inFIG.5. In the example shown inFIG.6, the check boxes of “appointment successfully acquired” and “other company's service X being used” are selected (checked). Therefore, as shown inFIG.5, icons displayed as “appointment successfully acquired” and “other company's service X being used” are displayed in a predetermined area (the display area F2in the example ofFIG.5) of the call information. In the display area F4, a button B41displayed as “+add item” and a button B42displayed as “register” are displayed. When the button B41displayed as “+add item” is pressed, an operation screen (hereinafter referred to as “telephoning memo setting screen”) on which the setting of adding a new item can be performed, in addition to the items displayed in the display area F3, is displayed. When a new item is set in advance on the telephoning memo setting screen, the setting content is reflected on the telephoning memo selection registration screen. A specific example of the telephoning memo setting screen will be described later with reference toFIG.7. FIG.7shows a specific example of the telephoning memo setting screen. The telephoning memo setting screen is composed of display areas F5and F6. In the display area F5, the guidance message “If the telephoning memo function is set, a telephoning memo can be registered after the call is over” and a check box T51to enable/disable the telephoning memo are displayed. The user U can specify whether to display a telephoning memo in the call information by pressing the check box T51. Specifically, if the check box T51is selected (checked), the user U has decided to “display” a telephoning memo in the call information. On the other hand, when the check box T51is not selected (checked), the user U has decided not to display a telephoning memo in the call information. In the example ofFIG.7, since the check box T51is selected (checked), the user U decides to “display” a telephoning memo in the call information. In the display area F6, an input field R1for inputting the content of each item to be set, and check boxes T61for setting whether to select (check) it as positive telephoning are displayed. Here, an item not set in the list of telephoning memos can be additionally set by inputting free words in the input field R1. 
Further, when the check box T61is selected (checked), the item will be recorded as exemplary telephoning. An item recorded as exemplary telephoning can be utilized in various analyses. In the example shown inFIG.7, the check box T61indicating appointment successfully acquired is selected (checked) from among the set items of appointment successfully acquired, absence of the person in charge, callback, resignation of the person in charge, continuous follow-up, no needs, other company's service X being used, and other company's service Y being used. Therefore, the telephoning memo “appointment successfully acquired” is recorded as exemplary telephoning, and can be utilized in various analyses. [Second Invention] (Communication Hardware) The communication hardware (e.g., the dedicated communication device2inFIG.2) used by the user U to use this service can be substituted by existing communication hardware (e.g., a personal computer, a smartphone, a tablet). Here, since the user terminal3is composed of a personal computer, a smartphone, a tablet, or the like, the user terminal3can encompass the functions of the dedicated communication device2. That is, since this service can be utilized using existing communication hardware, the user U can enjoy the following merits, for example. That is, according to the communication hardware used in this service, by substituting an existing smartphone or the like, this service can be used only after a setting work of several minutes. This eliminates construction costs, maintenance costs, leasing costs, costs required for various equipment, and the like. In addition, all calls can be recorded and analyzed, and the call history can be checked. In addition, according to the communication hardware used in this service, since an excellent carrier in Japan can be used, an inexpensive communication fee and a simple fee system can be utilized. This can greatly reduce communication costs, particularly in a sales department where there are many opportunities to make calls. A telephone number starting with “(Tokyo) 03”, “050”, “0120”, “0800”, or the like can be freely acquired. In addition, it is possible to realize high-quality and stable calls. In addition, a single telephone number can be used to make calls in Japan and overseas. Further, even when an existing smartphone is used, for example, it is possible to make and receive calls using a telephone number starting with “(Tokyo) 03”. It is suitable for sales departments who often go out because it can be used from outside as well as in-house. Moreover, by sharing the same telephone number among a plurality of users U, telephoning by a team is possible. Since this service uses a cloud (the management server1inFIG.2), addition or deletion of members can be easily performed. This makes it possible to flexibly cope with organizational changes and internal transfers. In addition, it is possible to easily set an interactive voice response (IVR) and automatic call forwarding. The communication path when this service is provided is not limited. For example, in addition to a communication path that connects to a cloud on the Internet via an internal LAN (local area network), a communication path that connects to a cloud via a data communication network provided by a telecommunications company can be employed. This makes it possible to avoid network congestion, and to cooperate with a Web app connected through a separate path via the management server1. 
It is also possible to determine network congestion and automatically switch the network path used in this service. The specific configuration of the communication hardware used in this service is not limited. Any device may be used as long as it is equipped with a subscriber identity module (SIM), which is a module for recognizing subscribers, and equipment (modem, antenna, etc.) necessary for communication. For this reason, an existing communication device such as a smartphone may be used, or dedicated hardware may be used. If dedicated hardware is used, the headset used by the user U may be equipped with dedicated hardware including power supply means (e.g., a lithium ion battery). (Processing Flow) With reference toFIGS.8and9, the flow of processing of an information processing system including the communication hardware (e.g., the dedicated communication device2inFIG.2) according to the second invention will be described.FIGS.8and9are diagrams showing a flow of processing of the information processing system including the dedicated communication device2.FIG.8shows a flow of processing of the information processing system when the user U calls the call destination C. When the user U calls the call destination C, the following processing is executed in the information processing system. That is, in step S31-1, the user terminal3activates various apps. Specifically, the user terminal3activates the web app311, the desktop app312, and the mobile app313. In step S31-2, the user terminal3transmits an outgoing call request to the speech server (PBX)4. Specifically, the “outgoing call” button or a telephone number displayed on the screen of the user terminal3is pressed. More specifically, an app installed in the user terminal3transmits an outgoing call request. In step S41-1, the speech server (PBX)4receives the outgoing call request from the user terminal3. In step S41-2, the speech server (PBX)4makes an outgoing call (call) to the call destination terminal5. Along with this, in step S21-1, the dedicated communication device2makes a ringing indicating that an outgoing call (call) is being made by the speech server (PBX)4. Then, in step S31-3, the user terminal3displays information indicating that the outgoing call (call) is being made by the speech server (PBX)4. Here, the information displayed on the user terminal3is not limited. For example, the text “calling” may be displayed on the user terminal3. In step S51-1, the call destination terminal5responds to the outgoing call (call) of the speech server (PBX)4. In step S51-2, the call destination terminal5is ready to allow communication. Accordingly, in step S41-3, the speech server (PBX)4transmits information (hereinafter referred to as “response event”) indicating that a response is made by the call destination terminal5to the user terminal3. Then, in step S21-2, the dedicated communication device2is ready to allow communication. This allows the user U and the call destination C to talk. When the dedicated communication device2is ready to allow communication, in step S31-4, the user terminal3receives the response event and displays information indicating that a call is in progress. Here, the information displayed on the user terminal3is not limited. For example, the text “responding” may be displayed on the user terminal3. In step S41-4, the speech server (PBX)4forwards call information to the management server1. In step S11-1, the management server1acquires the transmitted call information.
In step S11-2, the management server1detects speaking sections VS1to VSn from the acquired call information. In step S11-3, the management server1extracts speech information VI1to VIm from the detected speaking sections VS1to VSv, respectively. In step S11-4, the management server1performs analysis based on elements E1to Ep based on the extracted speech information VI1to VIm. As described above, the analyzing unit103can perform analysis using information on “on-hold tone”, “locations where only the user U is speaking”, “locations where only the call destination C is speaking”, “locations where overlapping occurs”, “locations where silence occurs”, and the like as elements E. In step S11-5, the management server1generates sales support information based on the results of the analysis. In step S11-6, the management server1transmits the generated sales support information to the user terminal3. In step S31-5, the user terminal3displays the sales support information transmitted from the management server1. Thus, the processing of the information processing system when the user U calls the call destination C is completed. By executing each of the above processes in the information processing system, it is possible to improve the contract rate while reducing the cost of training the user (salesperson). FIG.9shows a flow of processing of the information processing system when the user U receives a call from the call destination C. When the user U receives a call from the call destination C, the following processing is executed in the information processing system. That is, in step S32-1, the user terminal3activates various apps. Specifically, the user terminal3activates the web app311, the desktop app312, and the mobile app313. In step S52-1, the call destination terminal5makes an outgoing call to the speech server (PBX)4. In step S42-1, the speech server (PBX)4receives the outgoing call from the call destination terminal5as an incoming event. In step S42-2, the speech server (PBX)4transmits the incoming event to the user terminal3. Specifically, the speech server (PBX)4transmits an incoming event to an app installed in the user terminal3. Accordingly, in step S22-1, the dedicated communication device2makes a ringing indicating that the incoming event is being transmitted by the speech server (PBX)4. Then, in step S32-2, the user terminal3displays information indicating that the incoming event is being transmitted by the speech server (PBX)4. Here, the information displayed on the user terminal3is not limited. For example, the text “receiving” may be displayed on the user terminal3. In step S32-3, the user terminal3receives a response operation by the user U. The response operation is, for example, an operation in which the user U presses a button displayed as “answer the telephone” on the screen of the user terminal3. In step S32-4, the user terminal3transmits a response request to the speech server (PBX)4. In step S42-3, the speech server (PBX)4receives the transmitted response request. In step S42-4, the speech server (PBX)4establishes speech communication. As a result, in step S22-2, the dedicated communication device2is ready to allow communication. In step S52-2, the call destination terminal5is ready to allow communication. Then, in step S32-5, the user terminal3displays information indicating that a call is in progress. Here, the information displayed on the user terminal3is not limited. For example, the text “talking” may be displayed on the user terminal3. 
In step S42-5, the speech server (PBX)4forwards call information to the management server1. In step S12-1, the management server1acquires the transmitted call information. In step S12-2, the management server1detects speaking sections VS1to VSn from the acquired call information. In step S12-3, the management server1extracts speech information VI1to VIm from the detected speaking sections VS1to VSv, respectively. In step S12-4, the management server1performs analysis based on elements E1to Ep based on the extracted speech information VI1to VIm. As described above, the analyzing unit103can perform analysis using information on “on-hold tone”, “locations where only the user U is speaking”, “locations where only the call destination C is speaking”, “locations where overlapping occurs”, “locations where silence occurs”, and the like as elements E. In step S12-5, the management server1generates sales support information based on the results of the analysis. In step S12-6, the management server1transmits the generated sales support information to the user terminal3. In step S32-6, the user terminal3displays the sales support information transmitted from the management server1. Thus, the processing of the information processing system when the user U receives a call from the call destination C is completed. By executing each of the above processes in the information processing system, it is possible to improve the contract rate while reducing the cost of training the user (salesperson). Specific Examples FIG.10shows a specific example in which fillers in sentences in speech recognition results are identified and removed. The speech recognition results are transcribed into text, and so-called fillers f indicating stuttering are removed. Specifically, for example, as shown in the upper part ofFIG.10, if the speech recognition result is “Yes, hello, thank you for calling”, then “Yes” is identified as a filler f1. If the speech recognition results are “Uh, hello, um, my name is □□ from um OO. Thank you for your help”, the “Uh” and the two “um” are respectively identified as fillers f2to f4. The letters respectively identified as fillers f1to f4are deleted. As a result, as shown in the lower part of FIG.10, the sentence “Hello, thank you for calling” is displayed from which the filler f1has been deleted. In addition, the sentences “Hello, my name is □□ from OO. Thank you for your help” are displayed from which the fillers f2to f4have been deleted. As shown in the upper part ofFIG.10, when a tab G1displayed as “speech recognition” is selected, the speech recognition results from which the fillers f have not been deleted are displayed. On the other hand, as shown in the lower part ofFIG.10, when a tab G2displayed as “speech recognition results (excluding fillers)” is selected, the results with the fillers f deleted are displayed. While an embodiment of the present invention has been described above, the present invention is not limited to the above-described embodiment, and modifications, improvements, and the like within the scope of achieving the object of the present invention are included in the present invention. Further, for example, in the embodiment described above, the speech server4and the call destination terminal5are connected to each other via the telephone network T, but the present invention is not limited thereto. That is, the speech server4and the call destination terminal5may be connected to each other via any other communication means such as the Internet. 
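Returning briefly, before continuing with further modifications, to the filler removal illustrated inFIG.10: the following is a minimal sketch of deleting listed fillers from a speech recognition result. The English filler inventory here is a placeholder assumption; the actual service maintains its own dictionary (and, as inFIG.10, operates on transcripts that are originally Japanese), and context-dependent fillers such as the leading “Yes” inFIG.10would require more than a fixed word list.

import re

# Placeholder English filler inventory; the service's own dictionary would differ.
FILLERS = ("uh", "um", "er", "ah")

def remove_fillers(transcript, fillers=FILLERS):
    """Delete listed fillers from a speech recognition result and tidy spacing."""
    pattern = r"\b(?:" + "|".join(re.escape(f) for f in fillers) + r")\b[,]?\s*"
    cleaned = re.sub(pattern, "", transcript, flags=re.IGNORECASE)
    # Collapse doubled spaces left behind and re-capitalize the sentence start.
    cleaned = re.sub(r"\s{2,}", " ", cleaned).strip()
    return cleaned[:1].upper() + cleaned[1:] if cleaned else cleaned

# e.g. remove_fillers("Uh, hello, um, my name is B from um C. Thank you for your help")
#      -> "Hello, my name is B from C. Thank you for your help"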
Further, for example, althoughFIG.2shows only one user U, one user terminal3, one speech server (PBX)4, one call destination C, and one call destination terminal5, this is only an example, and there can be more than one of any of them. Further, for example, in the above-described embodiment, this service can be used in the user terminal3by activating various apps installed in the user terminal3, but the present invention is not limited thereto. This service may be made available by accessing a predetermined website and performing a predetermined login operation without installing apps. Further, for example, in the above-described embodiment, as elements E for analyzing speech information, information on “on-hold tone”, “locations where only the user U is speaking”, “locations where only the call destination C is speaking”, “locations where overlapping occurs”, and “locations where silence occurs” is adopted, but these are merely examples, and analysis based on an element E other than these can be performed. Further, for example, in the above-described embodiment, the call information includes only speech information, but in addition to the speech server (PBX)4, a video relay server (not shown) may be included in the configuration of the information processing system. As a result, speech information and image information based on a captured moving image can be linked and managed as call information. In this case, by further providing the management server1with an image analysis function, analysis based on not only speech information but also image information can be performed. Further, for example, in the above-described embodiment, the communication method between the user terminal3and the speech server (PBX)4is not limited. However, when using any port of TCP/UDP as the speech communication method, it may be regarded as an unauthorized communication, and blocked by a firewall or the like in an organization, causing the speech communication to fail. For this reason, the same communication method (443/TCP) as that of Web browsing may be adopted, for example. This enables the risk of being blocked by a firewall or the like in an organization to be reduced. Further, for example, the history of calls made with the call destination C shown in the display area F1inFIG.5may be arranged such that the most recent call is displayed at the top as in the present embodiment, or may be arranged in any other manner. For example, it may be arranged in order of the internal ID (not shown) of the users U1to Ur, or it may be arranged in order of their sales performance from the top. By arranging it in order of sales performance from the top, many users U can easily see and use it as a reference for their own telephone sales. Further, for example, the elements E shown as items in the column of “telephoning assessment” in the display area F2inFIG.5are “total score”, “Talk:Listen ratio”, “silence count”, “overlapping count”, and “keyword count”, but are not limited thereto. Analysis based on an element E other than these five elements E may be performed. In addition, for example, the elements E shown as items in the column of “speech assessment” in the display area F2inFIG.5are “basic frequency (user)”, “basic frequency (call destination)”, “inflection strength (user)”, and “inflection strength (call destination)”, but are not limited thereto. Analysis based on an element E other than these four elements E may be performed. 
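For illustration, three of the elements E listed above (the Talk:Listen ratio, the silence count, and the overlapping count) could be computed from per-speaker speaking intervals as in the following sketch. The interval representation, the thresholds, and the example values are assumptions, not the patented computation.

    def total_duration(intervals):
        return sum(end - start for start, end in intervals)

    def overlap_count(user_intervals, dest_intervals, min_overlap=0.3):
        # Count pairs of intervals that overlap by at least min_overlap seconds.
        count = 0
        for us, ue in user_intervals:
            for ds, de in dest_intervals:
                if min(ue, de) - max(us, ds) >= min_overlap:
                    count += 1
        return count

    def silence_count(user_intervals, dest_intervals, call_end, min_gap=2.0):
        # Count gaps of at least min_gap seconds during which nobody is speaking.
        edges = sorted(user_intervals + dest_intervals)
        count, cursor = 0, 0.0
        for start, end in edges:
            if start - cursor >= min_gap:
                count += 1
            cursor = max(cursor, end)
        if call_end - cursor >= min_gap:
            count += 1
        return count

    user = [(0.0, 4.0), (9.0, 12.0)]      # seconds where only the user U speaks (illustrative)
    dest = [(3.8, 8.0), (14.5, 16.0)]     # seconds where the call destination C speaks (illustrative)
    talk_listen = total_duration(user) / max(total_duration(dest), 1e-9)
    print(round(talk_listen, 2), overlap_count(user, dest), silence_count(user, dest, call_end=20.0))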
According to this service to which the present invention can be applied, the following functions can be implemented in addition to the functions described above. That is, telephoning time, fluctuation of telephoning time, speed, and the like are measured on a website or software, and it is possible to check whether the environment has sufficient quality for voice calls by one click. Alternatively, it is possible to use an engine that actually plays back speech for confirmation, compares it with normal speech in terms of interruption, fluctuation, sound quality, and the like, and performs analysis. This makes it possible to quantitatively quantify the readiness of the communication environment. The following functions to set the details of this service can be implemented: various setting functions for managing the user U, a function for setting automatic forwarding, a setting function for managing a plurality of users U as a group, a function for setting an answering machine, a function for setting telephone numbers, a function for setting rules for numeric values (scores) used for analysis, a function for setting prefix numbers such as non-notification setting of a telephone number, a function for setting a keyword inFIG.5, a function for setting sounds such as on-hold tone, a function for setting a telephoning memo inFIG.5, a function for setting rules for incoming calls, a function for linking with websites of other companies providing services related to CRM, a function for setting business hours, a function for setting an automatic voice response, and a setting function for linking with in-house services. Specifically, for example, according to the function for setting rules for numeric values (scores) used for analysis, it is possible to change the speaking speed depending on the industry to which the call destination C belongs. As an incoming/outgoing call function using the user terminal3, it is possible to make an incoming/outgoing call using a widget or an app, or to easily make an outgoing call by clicking a telephone number displayed on a web page in a website. An incoming/outgoing call screen having a user interface (UI) that can be used in conjunction with various systems (e.g., an in-house system) may be provided. FIG.11shows a specific example of the incoming/outgoing call screen having a UI that can be used in conjunction with various systems. As shown in the upper part ofFIG.11, for example, a widget W labeled “Phone” can be displayed on an app screen or a part of a web page. When the widget W is pressed, the display of the widget W may be changed to a mode in which a telephone call can be made, as shown in the lower part ofFIG.11. The hardware configuration of the management server1shown inFIG.3is merely an example for achieving the object of the present invention, and the present invention is not limited thereto. The functional block diagram shown inFIG.4is merely an example, and the present invention is not limited thereto. That is, it suffices that the information processing system is provided with a function capable of executing the above-described series of processes as a whole, and what functional blocks are used for realizing this function is not limited to the example inFIG.4. The location of the functional blocks is not limited toFIG.4, and any location may be possible. One functional block may consist of hardware alone, software alone, or a combination thereof. 
When the processing of each function block is executed by software, a program constituting the software is installed on a computer or the like from a network or a recording medium. The computer may be embedded in dedicated hardware. The computer may be a computer capable of performing various functions by installing various programs, such as a general-purpose smartphone or a personal computer, in addition to a server. The recording medium including such a program is not only composed of a removable medium that is separated from the device main body in order to provide the program to each user, but is also composed of a recording medium or the like that is provided to each user in a state of being incorporated in advance in the device main body. In the present specification, the step of describing the program recorded on the recording medium includes not only processing performed in time series in accordance with the order, but also processing performed in parallel or individually, which is not necessarily performed in time series. In the present specification, the term “system” means an overall device composed of a plurality of devices, a plurality of means, and the like. In summary, it is sufficient that the information processing device to which the present invention is applied has the following configuration, and various embodiments may be employed. That is, the information processing device (for example, the management server1inFIG.4) to which the present invention is applied supports a user (e.g., the user U inFIG.2) who calls a call destination (e.g., the call destination C inFIG.2). The information processing device includes: an acquiring unit (e.g., the acquiring unit101inFIG.4) that acquires information recorded during a call between the user and the call destination as call information; an extracting unit (e.g., the extracting unit102inFIG.4) that detects speaking sections (e.g., speaking sections VS1to VSn) in which speech exists, from the acquired call information and extracts speech information (e.g., VI1to VIm) for each speaking section; an analyzing unit (e.g., the analyzing unit103inFIG.4) that performs analysis based on one or more elements (e.g., E1to Ep) based on the extracted one or more pieces of the speech information; a generating unit (e.g., the generating unit104inFIG.4) that generates support information (e.g., sales support information) that supports the call of the user based on a result of the analysis; and a presenting unit (e.g., the presenting unit105inFIG.4) that presents the generated support information to the user. As a result, in the course of business, when training the user U who is managed, the manager can train the user U inexpensively and efficiently by utilizing the sales support information. In addition, the user U can utilize the support information in real time during the call with the call destination. FIG.12shows a specific example of a case where the support information is utilized in real time. As shown inFIG.12, the speaking of a customer and the user U (salesperson) can be sequentially displayed as text. This enables support information to be checked in real time. AI (artificial intelligence) and the superior of the user U (salesperson) can provide advice to the user U (salesperson) in real time. Specifically, for example, when advice such as “It is better to increase the speaking speed” is posted, the content is displayed in real time. 
Further, on the same screen as the screen on which the contents of the speaking of the customer and the user U (salesperson) are displayed, the user U (salesperson) can also post a message to their superior, for example. Specifically, for example, when a message such as “The customer has had a lot of trouble with us in the past. Please give me some advice” is input in an input field R2and posted, the content is displayed in real time. As a result, it can assist in achieving more efficient sales activities while considering objective indicators. The one or more elements may include information on on-hold tones. This makes it possible to clarify the count and duration of locations where the call is put on hold, so that it is possible to check insufficient understanding of the user U and to infer the possibility that the user U has given stress to the call destination C. The one or more elements may include information on a sound signal. Specifically, for example, the information on a sound signal may include locations where only the user is speaking, locations where only the call destination is speaking (e.g., “Talk:Listen ratio” inFIG.5), locations where overlapping occurs (e.g., “overlapping count” inFIG.5), locations where silence occurs (e.g., “silence count” inFIG.5), frequency (e.g., the “basic frequency (user)”, “basic frequency (call destination)” inFIG.5), or inflection (e.g., “inflection strength (user)”, and “inflection strength (call destination)” inFIG.5). As a result, the user U can check whether he/she talked too much and whether the explanation was insufficient. In addition, the user U can infer the immaturity of his/her conversation skill, the possibility of making the call destination C feel uneasy, or the possibility of making the call destination C feel uncomfortable. In addition, the user U can check the possibility of whether the user U may have caused the call destination C to feel uncomfortable due to interrupting the call destination C before the call destination C had finished speaking. Further, the user U can check whether the name of a new product, a merit or risk for the call destination C, and the like have been properly communicated to the call destination C. As a result, the user U can check, for example, whether he/she talked calmly, whether he/she did not unnecessarily excite the call destination C, and whether he/she took care to calm the excited call destination C. The one or more elements may further include information on letters in the speaking section (e.g., “speaking speed” inFIG.5). As a result, the user U can check whether he/she spoke too fast and too much and whether he/she made the call destination C speak calmly. The support information may include at least one (e.g., “comments” inFIG.5) of the following: a speaking style of the user, a content spoken by the call destination, or advice for the user. Thus, the user U can utilize the support information in real time during the call with the call destination C. In addition, in the course of business, when training the user U who is managed, the manager can train the user U inexpensively and efficiently by utilizing the sales support information. As a result, it is possible to improve the contract rate while reducing the cost of training the user U. When at least a part of a speech recognition result includes an error, the user can perform an operation of correcting it on the screen. FIG.13shows a specific example of a correction function of a speech recognition result. 
As shown in FIG. 13, the actual speech may be "Uh, hello, I'm Nagata from Revcom Support" while the speech recognition result is "Uh, hello, I'm Shinagara from Business Support", for example, which includes some errors. In this case, the user performs input operations for correcting the speech recognition result to the actual speech content by pressing a playback button B102, a button B103 for copying to a clipboard, and an edit button B104. Thus, the errors of the speech recognition result can be corrected. The corrected speech recognition result is used as learning data to ensure that the next speech recognition is performed correctly. This can prevent the same misrecognition from being repeated. As a result, the accuracy of speech recognition can be improved.
EXPLANATION OF REFERENCE NUMERALS
1: management server, 2: dedicated communication device, 3: user terminal, 4: speech server (PBX), 5: call destination terminal, 11: CPU, 12: ROM, 13: RAM, 14: bus, 15: input/output interface, 16: output unit, 17: input unit, 18: storage unit, 19: communication unit, 20: drive, 30: removable media, 101: acquiring unit, 102: extracting unit, 103: analyzing unit, 104: generating unit, 105: presenting unit, 181: call database, 182: analysis result database, 201: speech input/output unit, 202: control unit, 301: app control unit, 311: web app, 312: desktop app, 313: mobile app, 401: communication forwarding unit, 402: control unit, N: network, T: telephone network, U, U1 to Ur: user, C: call destination, S: each step of processing executed in the information processing system, F: each display area, L1: solid line (speech by the user), L2: dotted line (speech by the call destination), Z1, Z2: location where silence occurs, Z3: location where overlapping occurs, P1, P2: location where a keyword appears, T: each check box, B: each button, R: input field, G: tab, W: widget.
DETAILED DESCRIPTION In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown by way of illustrations specific embodiments or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made without departing from the present disclosure. Embodiments may be practiced as methods, systems or devices. Accordingly, embodiments may take the form of a hardware implementation, an entirely software implementation, or an implementation combining software and hardware aspects. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and their equivalents. The example aspects described herein address a voice separation task, whose domain is often considered from a time-frequency perspective, as the translation of a mixed spectrogram into vocal and instrumental spectrograms. By using this framework, the technology exploits to advantage some advances in image-to-image translation—especially in regard to the reproduction of fine-grained details—for use in blind source separation for music. The decomposition of a music audio signal into its vocal and backing track components is analogous to image-to-image translation, where a mixed spectrogram is transformed into its constituent sources. According to an example aspect herein, a U-Net architecture—initially developed for medical imaging—is employed for the task of source separation, given its proven capacity for recreating the fine, low-level detail required for high-quality audio reproduction. At least some example embodiments herein, through both quantitative evaluation and subjective assessment, demonstrate that they achieve state-of-the-art performance. An example aspect described herein adapts a U-Net architecture to the task of vocal separation. That architecture was introduced originally in biomedical imaging, to improve precision and localization of microscopic images of neuronal structures. The architecture builds upon a fully convolutional network (see, e.g., Reference [14]) and, in one example, may be similar to the deconvolutional network (see, e.g., Reference [19]). In a deconvolutional network, a stack of convolutional layers—where each layer halves the size of an image but doubles the number of channels—encodes the image into a small and deep representation. That encoding is then decoded to the original size of the image by a stack of upsampling layers. In the reproduction of a natural image, displacements by just one pixel are usually not perceived as major distortions. In the frequency domain, however, even a minor linear shift in a spectrogram may have significant effects on perception. This is particularly relevant in music signals, because of the logarithmic perception of frequency. Moreover, a shift in the time dimension can become audible as jitter and other artifacts. Therefore, it can be useful that a reproduction preserves a high level of detail. According to an example aspect herein, the U-Net architecture herein adds additional skip connections between layers at the same hierarchical level in the encoder and decoder. This enables low-level information to flow directly from the high-resolution input to the high-resolution output. The neural network architecture described herein, according to one example embodiment, can predict vocal and instrumental components of an input signal indirectly. 
In one example embodiment herein, an output of a final decoder layer is a soft mask that is multiplied element-wise with a mixed spectrogram to obtain a final estimate. Also in one example embodiment herein, two separate models are trained for the extraction of instrumental and vocal components, respectively, of a signal, to allow for more divergent training schemes for the two models in the future. In one example embodiment herein, the neural network model operates exclusively on the magnitude of audio spectrograms. The audio signal for an individual (vocal/instrumental) component is rendered by constructing a spectrogram, wherein the output magnitude is given by applying a mask predicted by the U-Net to the magnitude of the original spectrum, while the output phase is that of the original spectrum, unaltered. Experimental results presented below indicate that such a simple methodology proves effective.
Dataset
According to an example aspect herein, the model architecture can employ training data available in the form of a triplet (original signal, vocal component, instrumental component). However, in the event that vast amounts of unmixed multi-track recordings are not available, an alternative strategy according to an example aspect herein can be employed to mine for matching or candidate pairs of tracks, to obtain training data. For example, it is not uncommon for artists to release instrumental versions of tracks along with the original mix. In accordance with one example aspect herein, pairs of (original, instrumental) tracks are retrieved from a large commercial music database. Candidates are found by examining associated metadata for tracks with, in one example embodiment, matching duration and artist information, where the track title (fuzzily) matches except for the string "Instrumental" occurring in exactly one title in the pair. The pool of tracks is pruned by excluding exact content matches. In one example, such procedures are performed according to the technique described in Reference [10], which is incorporated by reference herein in its entirety, as if set forth fully herein. The approach provides a large source of X (mixed) and Yi (instrumental) magnitude spectrogram pairs. A vocal magnitude spectrogram Yv is obtained from their half-wave rectified difference. In one example, a final dataset included approximately 20,000 track pairs, resulting in almost two months' worth of continuous audio, which is perhaps the largest training data set ever applied to musical source separation. Table A below shows the relative distribution of frequent genres in the dataset, obtained from catalog metadata.
TABLE A
Training data genre distribution

Genre            Percentage
Pop                   26.0%
Rap                   21.3%
Dance & House         14.2%
Electronica            7.4%
R&B                    3.9%
Rock                   3.6%
Alternative            3.1%
Children's             2.5%
Metal                  2.5%
Latin                  2.3%
Indie Rock             2.2%
Other                 10.9%

Selection of Matching Recordings
The manner in which candidate recording pairs are formed using a method according to an example embodiment herein will now be described, with reference to the flow diagram of FIG. 2. The method (procedure) 200 commences at step 202. According to one example embodiment herein, in step 204 a search is performed over a set of tracks (e.g., a set of ten million commercially recorded tracks) stored in one or more databases to determine tracks that match (step 206), such as one or more matching pairs of tracks (A, B).
Each track may include, for example, information representing instrumental and vocal activity (if any), and an associated string of metadata which can be arranged in a table of a database. For example, as shown in the example table depicted inFIG.1, the metadata for each track (e.g., track1, track2 . . . track-n) can include various types of identifying information, such as, by example and without limitation, the track title100, artist name102, track duration104, the track type106(e.g., whether the track is “instrumental” or “original”, arranged by columns in the table. In one example embodiment herein, step204includes evaluating the metadata for each track to match (in step206) all tracks that meet predetermined criteria. For example, in the example embodiment herein, the matching of step206is performed based on the metadata identifying information (i.e., track titles, artist names, track durations etc.) about the tracks, to match and identify all tracks (A, B) determined to meet the following criteria:tracks A and B are recorded by a same artist;the term “instrumental” does not appear in the title (or type) of track A;the term “instrumental” does appear in the title (or type) of track B;the titles of tracks A and B are fuzzy matches; andthe track durations of tracks A and B differ by less than a predetermined time value (e.g., 10 seconds). According to one example embodiment herein, the fuzzy matching is performed on track titles by first formatting them to a standardized format, by, for example, latinizing non-ASCII characters, removing parenthesized text, and then converting the result to lower-case text. In one example, this process yields about 164 k instrumental tracks, although this example is non-limiting. Also in one example embodiment herein, the method may provide a 1:n, n:n, or many-to-many mapping, in that an original song version may match to several different instrumentals in step206, and vice versa. Thus, although described herein in terms of an example case where tracks A and B can be matched, the invention is not so limited, and it is within the scope of the invention for more than two tracks to be matched together in step206, and for more than two or a series of tracks to be matched in step206. For example, multiple pairs or multiples series of tracks can be matched in that step. In step208, matching versions of a track, such as a pair of tracks (A, B) that were matched in step206, are marked or otherwise designated (e.g., in a memory) as being either “instrumental” or “original”, based on whether or not the term “instrumental” appears in the metadata associated with those tracks. In the present example wherein the metadata of track A does not indicate that it is an instrumental, and where the metadata of track B does indicate that track B is an instrumental, then the matching tracks (A, B) are marked as “(original, instrumental)”. In one example embodiment herein, at least some of the results of step206can be evaluated manually (or automatically) to check for quality in step210, since it may occur that some tracks were matched that should not have been matched. In general, such undesired matching can be a result of one or more errors, such as, for example, instrumental tracks appearing on multiple albums (such as compilations or movie soundtracks, where the explicit description of the track as “instrumental” may be warranted by the context). Pairs that are suspected of being incorrectly matched can be identified using a procedure according to an example aspect herein. 
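Before turning to the fingerprint-based pruning described next, the candidate test just listed can be sketched as follows. The title normalization (latinizing non-ASCII characters, removing parenthesized text, lower-casing) follows the description above; the dictionary schema, the difflib-based fuzzy match, and the 0.9 similarity cutoff are illustrative assumptions.

    import re
    import unicodedata
    from difflib import SequenceMatcher

    def normalize_title(title):
        latin = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode("ascii")
        latin = re.sub(r"\([^)]*\)", "", latin)     # drop parenthesized text
        return latin.lower().strip()

    def is_candidate_pair(track_a, track_b, max_duration_diff=10.0, min_ratio=0.9):
        # track_a / track_b are dicts with 'title', 'artist', 'duration' keys (illustrative schema).
        if track_a["artist"] != track_b["artist"]:
            return False
        a_inst = "instrumental" in track_a["title"].lower()
        b_inst = "instrumental" in track_b["title"].lower()
        if a_inst == b_inst:                         # "instrumental" must appear in exactly one title
            return False
        if abs(track_a["duration"] - track_b["duration"]) >= max_duration_diff:
            return False
        ta = normalize_title(re.sub(r"instrumental", "", track_a["title"], flags=re.I))
        tb = normalize_title(re.sub(r"instrumental", "", track_b["title"], flags=re.I))
        return SequenceMatcher(None, ta, tb).ratio() >= min_ratio

    a = {"title": "Night", "artist": "Some Artist", "duration": 215.0}
    b = {"title": "Night (Instrumental)", "artist": "Some Artist", "duration": 214.2}
    print(is_candidate_pair(a, b))  # True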
For example an audio fingerprinting algorithm can be used to remove suspect pairs from the candidate set. In one example embodiment, that step is performed using an open-source fingerprinting algorithm, and the procedure described in Reference [34], can be employed although in other embodiments other types of algorithms can be employed. Reference [34] is hereby incorporated by reference in its entirety, as if set forth fully herein. In one example embodiment, step210is performed according to procedure300illustrated inFIG.3. Referring now toFIG.3, for each matched track A and B a code sequence is computed using, in one example, a fingerprinting algorithm (step302). Any suitable type of known fingerprinting algorithm for generating a code sequence based on a track can be employed. Next, in step304the code sequences for the respective, matched tracks A and B are compared using, in one example embodiment herein, a Jaccard similarity. If sequences are determined based on the Jaccard similarity to overlap within a predetermined range of acceptability (“Yes” in step306), then the corresponding tracks are identified as being correctly matched in step308. The predetermined range of acceptability can be defined by upper and lower boundaries of acceptability. If, on the other hand, the comparison performed in step304results in a determination that the code sequences do not overlap within the predetermined range of acceptability (“No” in step306), then in step310the tracks are determined to be matched incorrectly, and thus at least one of them is removed from the results (step312), and only those that remain are deemed to be correctly matched (step308). A determination of “No” in step306may be a result of, for example, the codes not overlapping enough (e.g., owing to an erroneous fuzzy metadata match), or the codes overlapping too much (i.e., beyond the predetermined range of acceptability), which may occur in cases where, for example, the tracks are identical (e.g., the tracks are both instrumental or both vocal). The performance of step312may result in the removal of both tracks A and B, in certain situations. However, in the case for a 1:n, n:n, or many-to-many matching in earlier step206, then only those tracks B which were determined to be matched with track A incorrectly are removed in step312. In one example embodiment herein, step312is performed so that each original track is linked to only one non-redundant, instrumental track. The result of the performance of step312in that embodiment is that only pair(s) of tracks A, B deemed to match within the predetermined range of acceptability remain (step308). In one sample case where 10 million commercially available tracks are evaluated using the procedures200and300, the processes yielded roughly 24,000 tracks, or 12,000 original-instrumental pairs, totaling about 1500 hours of audio track durations. 24,000 strongly labeled tracks were obtained for use as a training dataset. Estimation of Vocal Activity Before describing how matches tracks A, B are employed for training according to an example aspect herein, the manner in which vocal or non-vocal activity can be separated from a track and/or predicted, according to an example aspect herein, will first be described.FIG.4is a flow diagram of a procedure400according to an example embodiment herein, andFIG.6ashows a block diagram of an example embodiment of a neural network system600for performing the procedure400. 
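A minimal sketch of the pruning check in steps 304 to 312 above: the fingerprint code sequences of a matched pair are compared with the Jaccard similarity, and the pair is kept only if the similarity lies inside a predetermined acceptance window. The window bounds and the toy code sequences are illustrative assumptions, and the fingerprinting algorithm itself is not reproduced here.

    def jaccard(codes_a, codes_b):
        a, b = set(codes_a), set(codes_b)
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    def correctly_matched(codes_a, codes_b, lower=0.1, upper=0.9):
        # Too little overlap suggests an erroneous fuzzy-metadata match; too much
        # overlap suggests the two tracks are effectively identical.
        return lower <= jaccard(codes_a, codes_b) <= upper

    print(correctly_matched([1, 2, 3, 4, 5, 6], [2, 3, 4, 7, 8, 9]))  # True (overlap 3/9 = 0.33)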
For purposes of the following description, TOand TIare employed to denote tracks, in particular, an “original” (“mixed”) track and an “instrumental” track, respectively, that are available, and it is assumed that it is desired to obtain the vocal and/or instrumental component of a provided “original” (“mixed”) track (also referred to as a “mixed original signal”). Generally, the procedure400according to the present example aspect of the present application includes computing a Time-Frequency Representation (TFR) for the tracks TOand TI, using a TFR obtainer602, to yield corresponding TFRs XOand XI, respectively, in the frequency domain (step402), wherein the TFRs XOand XIeach are a spectrogram of 2D coefficients, having frequency and phase content, and then performing steps404to410as will be described below. It should be noted that, although described herein in the context of steps402to405being performed together for both types of tracks TOand TI(i.e., an “original” track and an “instrumental” track), the scope of the invention herein is not so limited, and in other example embodiments herein, those steps402to405may be performed separately for each separate type of track. In other example embodiments, steps402to405are performed for the “original” (“mixed”) track, such as, for example, in a case where it is desired to predict or isolate the instrumental or vocal component of the track, and steps402to405are performed separately for the instrumental track, for use in training (to be described below) to enable the prediction/isolation to occur. In one example, step402is performed according to the procedures described in Reference [39], which is incorporated by reference herein in its entirety, as if set forth fully herein. At step404, the pair of TFRs (XO, XI) obtained in step402undergoes a conversion (by polar coordinate converter604) to polar coordinates including magnitude and phase components, representing a frequency intensity at different points in time. The conversion produces corresponding spectrogram components (ZO, ZI), wherein the components (ZO, ZI) are a version of the pair of TFRs (XO, XI) that has been converted in step404into a magnitude and phase representation of the pair of TFRs, and define intensity of frequency at different points in time. The magnitude is the absolute value of a complex number, and the phase is the angle of the complex number. In step405, patches are extracted from the spectrogram components (ZO, ZI) using patch extractor606. In one example embodiment herein, step405results in slices of the spectrograms from step404(by way of polar coordinate converter604) being obtained along a time axis, wherein the slices are fixed sized images (such as, e.g., 512 bins and 128 frames), according to one non-limiting and non-exclusive example embodiment herein. Patches obtained based on the magnitude of components (ZO, ZI) (wherein such patches also are hereinafter referred to as “magnitude patches (MPO,MPI)” or “magnitude spectrogram patches (MPO,MPI)”)). In one example, step405is performed according to the procedures described in the Reference [38], which is incorporated by reference herein in its entirety, as if set forth fully herein. In a next step406, the magnitude patch)(MPO) (e.g., the original mix spectrogram magnitude) obtained in step405is applied to a pre-trained network architecture500, wherein, according to one example aspect herein, the network architecture is a U-Net architecture (also referred to herein as “U-Net architecture500” or “U-Net500”). 
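A minimal sketch of steps 402 to 405 above: compute the TFR with an STFT, convert it to magnitude and phase, and slice the magnitude into fixed-size patches. librosa is assumed for the STFT; the 1024-sample window and 768-sample hop are taken from the training description later in the text, and truncating the 513 STFT bins to 512 is an illustrative choice.

    import numpy as np
    import librosa

    def magnitude_phase(audio, n_fft=1024, hop=768):
        tfr = librosa.stft(audio, n_fft=n_fft, hop_length=hop)  # complex TFR (X), shape (513, frames)
        return np.abs(tfr), np.angle(tfr)                       # polar form (Z): magnitude and phase

    def extract_patches(magnitude, bins=512, frames=128):
        mag = magnitude[:bins, :]                               # keep 512 frequency bins
        n = mag.shape[1] // frames
        return [mag[:, i * frames:(i + 1) * frames] for i in range(n)]

    audio = np.random.randn(8192 * 30).astype(np.float32)       # 30 s stand-in signal at 8192 Hz
    mag, phase = magnitude_phase(audio)
    patches = extract_patches(mag)
    print(mag.shape, len(patches), patches[0].shape)            # (513, 321) 2 (512, 128)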
For purposes of the present description ofFIG.4, it is assumed that the U-Net architecture is pre-trained according to, in one example embodiment, procedure700to be described below in conjunction withFIG.7. In one example embodiment herein, the network architecture500is similar to the network architecture disclosed in Reference [11] and/or Reference [24], which are incorporated by reference herein in their entireties, as if set forth fully herein, although these examples are non-exclusive and non-limiting. FIG.5illustrates in more detail one example of U-Net architecture500that can be employed according to an example aspect herein. The U-Net architecture500comprises a contracting (encoder) path502and an expansive (decoder) path504. In one example embodiment herein, the contracting path502can be similar to an architecture of a convolutional network, and includes repeated application of two 3×3 convolutions (unpadded convolutions), and a rectified linear unit (ReLU). More particularly, in the illustrated embodiment, contracting path502comprises an input layer502arepresenting an input image slice, wherein the input image slice is the magnitude patch)(MPO) obtained from step405. Contracting path502also comprises a plurality of downsampling layers502bto502n, where, in one example embodiment herein, n equals 5, and each downsampling layer502bto502nperforms a 2D convolution that halves the number of feature channels. For convenience, each layer502bto502nis represented by a corresponding image slice, Also in the illustrated embodiment, expansive path504comprises a plurality of upsampling layers504ato504n, wherein, in one example embodiment herein, n equals 5 and each upsampling layer504ato504nperforms a 2D deconvolution that doubles the number of feature channels, and where at least some of the layers504ato504n, such as, e.g., layers504ato504c, also perform spatial dropout. Additionally, a layer506is included in the U-Net architecture500, and can be said to be within each path502and504as shown. According to one example embodiment herein, contracting path502operates according to that described in Reference [36], which is incorporated by reference herein in its entirety, as if set forth fully herein, although that example is non-exclusive and non-limiting. Also in one example embodiment herein, each layer of path502includes a strided 2D convolution of stride 2 and kernel size 5×5, batch normalization, and leaky rectified linear units (ReLU) with leakiness 0.2. The layers of path504employ strided deconvolution (also referred to as “transposed convolution”) with stride 2 and kernel size 5×5, batch normalization, plain ReLU, and a 50% dropout (in the first three layers). In at least the final layer (e.g., layer504n), a sigmoid activation function can be employed, in one example embodiment herein. Each downsampling layer502bto502nreduces in half the number of bins and frames, while increasing the number of feature channels. For example, where the input image of layer502ais a 512×128×1 image slice (where 512 represents the number of bins, 128 represents the number of frames, and 1 represents the number of channels), application of that image slice to layer502bresults in a 256×64×16 image slice. Application of that 256×64×16 image slice to layer502cresults in a 128×32×32 image slice, and application of the 128×32×32 image slice to subsequent layer502dresults in a 64×16×64 image slice. 
Similarly, application of the 64×16×64 image slice to subsequent layer 502e results in a 32×8×128 image slice, and application of the 32×8×128 image slice to layer 502n results in a 16×4×256 image slice. Likewise, application of the 16×4×256 image slice to layer 506 results in an 8×2×512 image slice. Of course, the foregoing values are examples only, and the scope of the invention is not limited thereto. Each layer in the expansive path 504 upsamples the (feature map) input received thereby, followed by a 2×2 convolution ("up-convolution") that doubles the number of bins and frames while reducing the number of channels. Also, a concatenation with the correspondingly cropped feature map from the contracting path is provided, followed by two 3×3 convolutions, each followed by a ReLU. In an example aspect herein, concatenations are provided by connections between corresponding layers of the paths 502 and 504, to concatenate post-convoluted channels to the layers in path 504. This feature is provided because, in at least some cases, when an image slice is passed through the path 504, at least some details of the image may be lost. As such, predetermined features (also referred to herein as "concatenation features") 510 (such as, e.g., features which preferably are relatively unaffected by non-linear transforms) from each post-convolution image slice in the path 502 are provided to the corresponding layer of path 504, where the predetermined features are employed along with the image slice received from a previous layer in the path 504 to generate the corresponding expanded image slice for the applicable layer. More particularly, in the illustrated embodiment, the 8×2×512 image slice obtained from layer 506, and concatenation features 510 from layer 502n, are applied to the layer 504a, resulting in a 16×4×256 image slice being provided, which is then applied along with concatenation features 510 from layer 502e to layer 504b, resulting in a 32×8×128 image slice being provided. Application of that 32×8×128 image slice, along with concatenation features 510 from layer 502d, to layer 504c results in a 64×16×64 image slice, which is then applied along with concatenation features 510 from layer 502c to layer 504d, resulting in a 128×32×32 image slice being provided. That latter image slice is then applied, along with concatenation features 510 from layer 502b, to layer 504e, resulting in a 256×64×16 image slice being provided, which, after being applied to layer 504n, results in a 512×128×1 image slice being provided. In one example embodiment herein, cropping may be performed to compensate for any loss of border pixels in every convolution. Having described the U-Net architecture 500 of FIG. 5, the next step of the procedure 400 of FIG. 4 will now be described. In step 408, the output of layer 504n is employed as a mask that is applied by mask combiner 608 to the input image of layer 502a, to provide an estimated magnitude spectrogram 508, which, in an example case where the U-Net architecture 500 is trained to predict/isolate an instrumental component of a mixed original signal, is an estimated instrumental magnitude spectrum (of course, in another example case where the U-Net architecture 500 is trained to predict/isolate a vocal component of a mixed original signal, the spectrogram is an estimated vocal magnitude spectrum).
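Before continuing with step 408, the layer pattern just described can be summarized in a short PyTorch sketch: stride-2 5×5 convolutions with batch normalization and leaky ReLU (0.2) on the way down, 5×5 transposed convolutions with batch normalization, ReLU and 50% dropout in the first three decoder layers on the way up, channel concatenation for the skip connections, and a final sigmoid that outputs the soft mask. This is an illustrative reimplementation under the stated hyperparameters, not the patented code.

    import torch
    import torch.nn as nn

    def down(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
        )

    def up(in_ch, out_ch, dropout=False):
        layers = [
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        ]
        if dropout:
            layers.append(nn.Dropout2d(0.5))
        return nn.Sequential(*layers)

    class MaskUNet(nn.Module):
        def __init__(self):
            super().__init__()
            chans = [1, 16, 32, 64, 128, 256, 512]
            self.encoder = nn.ModuleList([down(chans[i], chans[i + 1]) for i in range(6)])
            self.decoder = nn.ModuleList([
                up(512, 256, dropout=True), up(512, 128, dropout=True),
                up(256, 64, dropout=True), up(128, 32), up(64, 16),
            ])
            self.final = nn.Sequential(
                nn.ConvTranspose2d(32, 1, kernel_size=5, stride=2, padding=2, output_padding=1),
                nn.Sigmoid(),
            )

        def forward(self, x):                         # x: (batch, 1, 512, 128) magnitude patch
            skips = []
            for enc in self.encoder:
                x = enc(x)
                skips.append(x)
            for dec, skip in zip(self.decoder, reversed(skips[:-1])):
                x = torch.cat([dec(x), skip], dim=1)  # skip connection at the same resolution
            return self.final(x)                      # soft mask in [0, 1], same size as the input

    mask = MaskUNet()(torch.randn(1, 1, 512, 128))
    print(mask.shape)  # torch.Size([1, 1, 512, 128])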
That step408is performed to combine the image (e.g., preferably a magnitude component) from layer504nwith the phase component from the mixed original spectrogram502ato provide a complex value spectrogram having both phase and magnitude components (i.e., to render independent of the amplitude of the original spectrogram). Step408may be performed in accordance with any suitable technique. The result of step408is then applied in step410to an inverse Short Time Fourier Transform (ISTFT) component610to transform (by way of a ISTFT) the result of step408from the frequency domain, into an audio signal in the time domain (step410). In a present example where it is assumed that the U-Net architecture500is trained to learn/predict instrumental components of input signals (i.e., the mixed original signal, represented by the component MPOapplied in step406), the audio signal resulting from step410is an estimated instrumental audio signal. For example, the estimated instrumental audio signal represents an estimate of the instrumental portion of the mixed original signal first applied to the system600in step402. In the foregoing manner, the instrumental component of a mixed original signal that includes both vocal and instrumental components can be obtained/predicted/isolated. To obtain the vocal component of the mixed original signal, a method according to the foregoing procedure400is performed using system600, but for a case where the U-Net architecture500is trained (e.g., in a manner as will be described later) for learn/predict vocal components of mixed signals. For example, the procedure for obtaining the vocal component includes performing steps402to410in the manner described above, except that, in one example embodiment, the U-Net architecture500employed in step406has been trained for estimating a vocal component of mixed original signals applied to the system600. As a result of the performance of procedure400for such a case, the spectrogram508obtained in step408is an estimated vocal magnitude spectrum, and the audio signal obtained in step410is an estimated vocal audio signal, which represents an estimate of the vocal component of the mixed original signal applied to system600in step402(and an estimate of the component MPOapplied to the U-Net architecture500in step406). Dataset In one example embodiment herein, the model architecture assumes that training data is available in the form of a triplet (mixed original signal, vocal component, instrumental component), as would be the case in which, for example, access is available to vast amounts of unmixed multi-track recordings. In other example embodiments herein, an alternative strategy is provided to provide data for training a model. For example, one example solution exploits a specific but large set of commercially available recordings in order to “construct” training data: instrumental versions of recordings. Indeed, in one example embodiment, the training data is obtained in the manner described above in connection withFIGS.1-3. Training In one example embodiment herein, the model herein can be trained using an ADAM optimizer. One example of an ADAM optimizer that can be employed is described in Reference [12], which is incorporated by reference herein in its entirety, as if set forth fully herein, although this example is non-limiting and non-exclusive. Given the heavy computational requirements of training such a model, in one example embodiment herein, input audio is downsampled to 8192 Hz in order to speed up processing. 
Then, a Short Time Fourier Transform is computed with a window size of 1024 and a hop length of 768 frames, and patches of, e.g., 128 frames (roughly 11 seconds) are extracted, which then are fed as input and targets to the U-Net architecture500. Also in this example embodiment, the magnitude spectrograms are normalized to the range [0, 1]. Of course, these examples are non-exclusive and non-limiting. The manner in which training is performed, according to an example embodiment herein, will now be described in greater detail, with reference toFIGS.6band7. In the present example embodiment, it is assumed that it is desired to train the U-Net architecture500to learn to predict/isolate an instrumental component of mixed original signals φ used as training data, wherein, in one example embodiment, the mixed original signals φ used for training are “original” tracks A such as those identified as being correct matches with corresponding “instrumental” tracks B in step308ofFIG.3described above. Referring toFIG.6b, the system600is shown, along with additional elements including a loss calculator612and a parameter adaptor614. The system600, loss calculator612, and parameter adaptor614form a training system650. The system600ofFIG.6bis the same as that ofFIG.6a, except that U-Net the architecture500is assumed not to be trained, in the present example, at least at the start of procedure700. In one example embodiment herein, in step702the system600ofFIG.6bis fed with short time fragments of at least one signal φ, and the system600operates as described above and according to steps402to410ofFIG.4described above, in response to the signal φ (except that the U-Net architecture500is assumed not to be fully trained yet). For each instance of the signal φ applied to the system600ofFIG.6b, the system600provides an output f(X, Θ) from the mask combiner608, to the loss calculator612. Also, input to the loss calculator612, according to an example embodiment herein, is a signal Y, which represents the magnitude of the spectrogram of the target audio. For example, in a case where it is desired to train the architecture to predict/isolate an instrumental component of an original mixed signal (such as a track “A”), then the target audio is the “instrumental” track B (from step308) corresponding thereto, and the magnitude of the spectrogram of that track “B” is obtained for use as signal Y via application of a Short Time Fourier Transform (STFT) thereto. In step704the loss calculator612employs a loss function to determine how much difference there is between the output f(X, Θ) and the target, which, in this case, is the target instrumental (i.e., the magnitude of the spectrogram of the track “B”). In one example embodiment herein, the loss function is the L1,1norm (e.g., wherein the norm of a matrix is the sum of the absolute values of its elements) of a difference between the target spectrogram and the masked input spectrogram, as represented by the following formula (F1): L(X,Y;Θ)=∥f(X,Θ)⊗X−Y∥(F1)where X denotes the magnitude of the spectrogram of the original, mixed signal (e.g., including both vocal and instrumental components), Y denotes the magnitude of the spectrogram of the target instrumental (or vocal, where a vocal signal is used instead) audio (wherein Y may be further represented by either Yv for a vocal component or Yi for an instrumental component of the input signal), f (X, Θ) represents an output of mask combiner608, and Θ represents the U-Net (or parameters thereof). 
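A minimal PyTorch sketch of the objective in formula (F1): the loss is the entrywise L1 norm of the masked input magnitude minus the target magnitude Y, here minimized with the Adam optimizer mentioned above. A one-layer stand-in network replaces the U-Net of FIG. 5 for brevity, and the random batch is illustrative.

    import torch
    import torch.nn as nn

    mask_net = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())  # stand-in for f(X, Θ)
    optimizer = torch.optim.Adam(mask_net.parameters(), lr=1e-4)

    def training_step(mix_mag, target_mag):
        mask = mask_net(mix_mag)                          # f(X, Θ)
        loss = (mask * mix_mag - target_mag).abs().sum()  # ||f(X, Θ) ⊗ X − Y|| (entrywise L1,1 norm)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    X = torch.rand(4, 1, 512, 128)   # batch of mixed magnitude patches, normalized to [0, 1]
    Y = torch.rand(4, 1, 512, 128)   # matching target (instrumental or vocal) magnitudes
    print(training_step(X, Y))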
For the case where the U-Net is trained to predict instrumental spectrograms, denotation Θ may be further represented by Θi(whereas for the case where the U-Net is trained to predict vocal spectrograms, denotation Θ may be further represented by Θv). In the above formula F1, the expression f(X, Θ)⊗X represents masking of the magnitude X (by mask combiner608) using the version of the magnitude X after being applied to the U-Net500. A result of formula F1 is provided from loss calculator612to parameter adaptor614, which, based on the result, varies one or more parameters of the U-Net architecture500, if needed, to reduce the loss value (represented by L(X, Y; Θ)) (step706). Procedure700can be performed again in as many iterations as needed to substantially reduce or minimize the loss value, in which case the U-Net architecture500is deemed trained. For example, in step708it is determined whether the loss value is sufficiently minimized. If “yes” in step708, then the method ends at step710and the architecture is deemed trained. If “no” in step708, then control passes back to step702where the procedure700is performed again as many times as needed until the loss value is deemed sufficiently minimized. The manner in which the parameter adaptor614varies the parameters of the U-Net architecture500in step706can be in accordance with any suitable technique, such as, by example and without limitation, that disclosed in Reference [36], which is incorporated by reference herein in its entirety, as if set forth fully herein. In one example embodiment, step706may involve altering one or more weights, kernels, and/or other applicable parameter values of the U-Net architecture500, and can include performing a stochastic gradient descent algorithm. A case where it is desired to train the U-Net architecture500to predict a vocal component of a mixed original signal will now be described. In this example embodiment, the procedure700is performed in the same manner as described above, except that the signal Y provided to the loss calculator612is a target vocal signal corresponding to the mixed original signal(s) φ (track(s) A) input to the system650(i.e., the target vocal signal and mixed original signal are deemed to be a match). The target vocal signal may be obtained from a database of such signals, if available (and a magnitude of the spectrogram thereof can be employed). In other example embodiments, and referring to the procedure800ofFIG.8, the target vocal signal is obtained by determining the half-wave difference between the spectrogram of the mixed original signal (i.e., the magnitude component of the spectrogram, which preferably is representation after the time-frequency conversion via STFT by TFR obtainer602, polar coordinate conversion via converter604, and extraction using extractor606) and the corresponding instrumental spectrogram (i.e., of the instrumental signal paired with the mixed original signal, from the training set, to yield the target vocal signal (step802). The instrumental spectrogram is preferably a representation of the mixed original signal after the time-frequency conversion via STFT by TFR obtainer602, polar coordinate conversion via converter604, and extraction using extractor606). 
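A minimal numpy sketch of the half-wave rectified difference used in step 802 above to construct the target vocal magnitude: negative values of the mixed-minus-instrumental difference are clipped to zero. The array shapes are illustrative.

    import numpy as np

    def vocal_target(mix_magnitude, instrumental_magnitude):
        return np.maximum(mix_magnitude - instrumental_magnitude, 0.0)

    X = np.abs(np.random.randn(512, 128))    # stand-in mixed magnitude patch
    Yi = np.abs(np.random.randn(512, 128))   # stand-in instrumental magnitude patch
    Yv = vocal_target(X, Yi)                 # target vocal magnitude for the loss of formula (F1)
    assert Yv.min() >= 0.0 and Yv.shape == X.shape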
For either of the above example scenarios for obtaining the target vocal signal, and referring again toFIGS.6band7, the target vocal signal is applied as signal Y to the loss calculator612, resulting in the loss calculator612employing the above formula F1 (i.e., the loss function) to determine how much difference there is between the output f(X, Θ) and the target (signal Y) (step704). A result of formula F1 in step704is provided from loss calculator612to parameter adaptor614, which, based on the result, varies one or more parameters of the U-Net architecture500, if needed, to reduce the loss value L(X, Y; Θ) (step706). Again, procedure can be performed again in as many iterations as needed (as determined in step708) to substantially reduce or minimize the loss value, in which case the U-Net architecture500is deemed trained to predict a vocal component of a mixed original input signal (step710). Quantitative Evaluation To provide a quantitative evaluation, an example embodiment herein is compared to the Chimera model (see, e.g., Reference[15]) that produced the highest evaluation scores in a 2016 MIREX Source Separation campaign. A web interface can be used to process audio clips. It should be noted that the Chimera web server runs an improved version of the algorithm that participated in MIREX, using a hybrid “multiple heads” architecture that combines deep clustering with a conventional neural network (see, e.g., Reference [16]). For evaluation purposes an additional baseline model was built, resembling the U-Net model but without skip connections, essentially creating a convolutional encoder-decoder, similar to the “Deconvnet” (see, e.g., Reference[19]). The three models were evaluated on the standard iKala (see, e.g., Reference [5]) and MedleyDB dataset (see, e.g., Reference [3]). The iKala dataset has been used as a standardized evaluation for the annual MIREX campaign for several years, so there are many existing results that can be used for comparison. MedleyDB on the other hand was recently proposed as a higher-quality, commercial-grade set of multi-track stems. Isolated instrumental and vocal tracks were generated by weighting sums of instrumental/vocal stems by their respective mixing coefficients as supplied by a MedleyDB Python API. The evaluation is limited to clips that are known to contain vocals, using the melody transcriptions provided in both iKala and MedleyDB. The following functions are used to measure performance: Signal-To-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR) (see, e.g., Reference [31]). Normalized SDR (NSDR) is defined as NSDR(Se,Sr,Sm)=SDR(Se,Sr)−SDR(Sm,Sr)  (F2) where Seis the estimated isolated signal, Sris the reference isolated signal, and Smis the mixed signal. Performance measures are computed using the mireval toolkit (see, e.g., Reference [22]). Table 2 and Table 3 show that the U-Net significantly outperforms both the baseline model and Chimera on all three performance measures for both datasets. 
TABLE 2
iKala mean scores

                     U-Net   Baseline   Chimera
NSDR Vocal          11.094      8.549     8.749
NSDR Instrumental   14.435     10.906    11.626
SIR Vocal           23.960     20.402    21.301
SIR Instrumental    21.832     14.304    20.481
SAR Vocal           17.715     15.481    15.642
SAR Instrumental    14.120     12.002    11.539

TABLE 3
MedleyDB mean scores

                     U-Net   Baseline   Chimera
NSDR Vocal           8.681      7.877     6.793
NSDR Instrumental    7.945      6.370     5.477
SIR Vocal           15.308     14.336    12.382
SIR Instrumental    21.975     16.928    20.880
SAR Vocal           11.301     10.632    10.033
SAR Instrumental    15.462     15.332    12.530

FIGS. 9a and 9b show an overview of the distributions for the different evaluation measures. Assuming that the distribution of tracks in the iKala hold-out set used for MIREX evaluations matches those in the public iKala set, results of an example embodiment herein are compared to the participants in the 2016 MIREX Singing Voice Separation task. Table 4 and Table 5 show NSDR scores for the example models herein compared to the best performing algorithms of the 2016 MIREX campaign.

TABLE 4
iKala NSDR Instrumental, MIREX 2016

Model        Mean      SD      Min      Max   Median
U-Net      14.435   3.583    4.165   21.716   14.525
Baseline   10.906   3.247    1.846   19.641   10.869
Chimera    11.626   4.151   −0.368   20.812   12.045
LCP2       11.188   3.626    2.508   19.875   11.000
LCP1       10.926   3.835    0.742   19.960   10.800
MC2         9.668   3.676   −7.875   22.734    9.900

TABLE 5
iKala NSDR Vocal, MIREX 2016

Model        Mean      SD      Min      Max   Median
U-Net      11.094   3.566    2.392   20.720   10.804
Baseline    8.549   3.428   −0.696   18.530    8.746
Chimera     8.749   4.001   −1.850   18.701    8.868
LCP2        6.341   3.370   −1.958   17.240    5.997
LCP1        6.073   3.462   −1.658   17.170    5.649
MC2         5.289   2.914   −1.302   12.571    4.945

In order to assess the effect of the U-Net's skip connections, masks generated by the U-Net and baseline models can be visualized. From FIGS. 10a and 10b it is clear that, while the baseline model (FIG. 10b) captures the overall structure, there is an observable lack of fine-grained detail.
Subjective Evaluation
Emiya et al. introduced a protocol for the subjective evaluation of source separation algorithms (see, e.g., Reference [7]). They suggest asking human subjects four questions that broadly correspond to the SDR/SIR/SAR measures, plus an additional question regarding the overall sound quality. These four questions were asked of subjects without music training, and the subjects found them ambiguous, e.g., they had problems discerning between the absence of artifacts and general sound quality. For better clarity, the survey was distilled into the following two questions in the vocal extraction case:
Quality: "Rate the vocal quality in the examples below."
Interference: "How well have the instruments in the clip above been removed in the examples below?"
For instrumental extraction, similar questions were asked:
Quality: "Rate the sound quality of the examples below relative to the reference above."
Extracting instruments: "Rate how well the instruments are isolated in the examples below relative to the full mix above."
Data was collected using CrowdFlower, an online platform where humans carry out micro-tasks, such as image classification, simple web searches, etc., in return for small per-task payments. In the survey, CrowdFlower users were asked to listen to three clips of isolated audio, generated by the U-Net, the baseline model, and Chimera. The order of the three clips was randomized. Each survey item asked either the Quality question or the Interference question. In an Interference question, a reference clip was included. The answers were given according to a 7-step Likert scale (see, e.g., Reference [13]), ranging from "Poor" to "Perfect". FIG. 12 is a screen capture of a CrowdFlower question.
In other examples, alternatives to 7-step Likert scale can be employed, such as, e.g., the ITU-R scale (see, e.g., Reference [28]). Tools like CrowdFlower enable quick roll out of surveys, and care should be taken in the design of question statements. To ensure the quality of the collected responses, the survey was interspersed with “control questions” that the user had to answer correctly according to a predefined set of acceptable answers on the Likert scale. Users of the platform were unaware of which questions are control questions. If questions were answered incorrectly, the user was disqualified from the task. A music expert external to the research group was asked to provide acceptable answers to a number of random clips that were designated as control questions. For the survey 25 clips from the iKala dataset and 42 clips from MedleyDB were used. There were 44 respondents and 724 total responses for the instrumental test, and 55 respondents supplied 779 responses for the voice test. FIGS.13ato13dshow mean and standard deviation for answers provided on CrowdFlower. The U-Net algorithm outperformed the other two models on all questions. The example embodiments herein take advantage of a U-Net architecture in the context of singing voice separation, and, as can be seen, provide clear improvements over existing systems. The benefits of low-level skip connections were demonstrated by comparison to plain convolutional encoder-decoders. The example embodiments herein also relate to an approach to mining strongly labeled data from web-scale music collections for detecting vocal activity in music audio. This is achieved by automatically pairing original recordings, containing vocals, with their instrumental counterparts, and using such information to train the U-Net architecture to estimate vocal or instrumental components of a mixed signal. FIG.11is a block diagram showing an example computation system1100constructed to realize the functionality of the example embodiments described herein. Acoustic attribute computation system1100may include without limitation a processor device1110, a main memory1125, and an interconnect bus1105. The processor device1110(410) may include without limitation a single microprocessor, or may include a plurality of microprocessors for configuring the system1100as a multi-processor acoustic attribute computation system. The main memory1125stores, among other things, instructions and/or data for execution by the processor device1110. The main memory1125may include banks of dynamic random access memory (DRAM), as well as cache memory. The system1100may further include a mass storage device1130, peripheral device(s)1140, portable non-transitory storage medium device(s)1150, input control device(s)1180, a graphics subsystem1160, and/or an output display1170. A digital signal processor (DSP)1182may also be included to perform audio signal processing. For explanatory purposes, all components in the system1100are shown inFIG.11as being coupled via the bus1105. However, the system1100is not so limited. Elements of the system1100may be coupled via one or more data transport means. For example, the processor device1110, the digital signal processor1182and/or the main memory1125may be coupled via a local microprocessor bus. The mass storage device1130, peripheral device(s)1140, portable storage medium device(s)1150, and/or graphics subsystem1160may be coupled via one or more input/output (I/O) buses. 
The mass storage device 1130 may be a nonvolatile storage device for storing data and/or instructions for use by the processor device 1110. The mass storage device 1130 may be implemented, for example, with a magnetic disk drive or an optical disk drive. In a software embodiment, the mass storage device 1130 is configured for loading contents of the mass storage device 1130 into the main memory 1125. Mass storage device 1130 additionally stores a neural network system engine (such as, e.g., a U-Net network engine) 1188 that is trainable to predict an estimate of a vocal or instrumental component of a mixed original signal, a comparing engine 1190 for comparing an output of the neural network system engine 1188 to a target instrumental or vocal signal to determine a loss, and a parameter adjustment engine 1194 for adapting one or more parameters of the neural network system engine 1188 to minimize the loss. A machine learning engine 1195 provides training data, and an attenuator/volume controller 1196 enables control of the volume of one or more tracks, including inverse proportional control of simultaneously played tracks. The portable storage medium device 1150 operates in conjunction with a nonvolatile portable storage medium, such as, for example, a solid state drive (SSD), to input and output data and code to and from the system 1100. In some embodiments, the software for storing information may be stored on a portable storage medium, and may be inputted into the system 1100 via the portable storage medium device 1150. The peripheral device(s) 1140 may include any type of computer support device, such as, for example, an input/output (I/O) interface configured to add additional functionality to the system 1100. For example, the peripheral device(s) 1140 may include a network interface card for interfacing the system 1100 with a network 1120. The input control device(s) 1180 provide a portion of the user interface for a user of the system 1100. The input control device(s) 1180 may include a keypad and/or a cursor control device. The keypad may be configured for inputting alphanumeric characters and/or other key information. The cursor control device may include, for example, a handheld controller or mouse, a trackball, a stylus, and/or cursor direction keys. In order to display textual and graphical information, the system 1100 may include the graphics subsystem 1160 and the output display 1170. The output display 1170 may include a display such as a CSTN (Color Super Twisted Nematic), TFT (Thin Film Transistor), TFD (Thin Film Diode), OLED (Organic Light-Emitting Diode), AMOLED (Active-Matrix Organic Light-Emitting Diode), and/or liquid crystal display (LCD)-type display. The displays can also be touchscreen displays, such as capacitive and resistive-type touchscreen displays. The graphics subsystem 1160 receives textual and graphical information, and processes the information for output to the output display 1170. FIG. 14 shows an example of a user interface 1400, which can be provided by way of the output display 1170 of FIG. 11, according to a further example aspect herein. The user interface 1400 includes a play button 1402 selectable for playing tracks, such as tracks stored in the mass storage device 1130, for example. Tracks stored in the mass storage device 1130 may include, by example, tracks having both vocal and non-vocal (instrumental) components (i.e., mixed signals), and one or more corresponding, paired tracks including only instrumental or vocal components (i.e., instrumental or vocal tracks, respectively).
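Before continuing with the user interface of FIG. 14, the cooperation of the comparing engine 1190 and the parameter adjustment engine 1194 described above can be illustrated with a brief sketch. The sketch is not the embodiments' implementation: it assumes a magnitude-spectrogram input, a tiny stand-in network in place of the U-Net engine 1188, an L1 loss as the comparison, and arbitrary optimizer settings.

```python
# Minimal sketch (not the exact embodiment): one training step in which a
# network predicts a soft mask over a mixture magnitude spectrogram, the
# "comparing engine" computes an L1 loss against the target (vocal or
# instrumental) magnitude, and the "parameter adjustment engine" updates
# the network parameters to reduce that loss.
import torch
import torch.nn as nn

class TinyMaskNet(nn.Module):
    """Stand-in for the U-Net engine 1188: maps a mixture magnitude
    spectrogram to a mask in [0, 1] of the same shape."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )
    def forward(self, mixture_mag):
        return self.net(mixture_mag)

model = TinyMaskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
criterion = nn.L1Loss()  # "comparing engine": loss between estimate and target

def training_step(mixture_mag, target_mag):
    """One update: mask the mixture, compare to the target, adjust parameters."""
    optimizer.zero_grad()
    mask = model(mixture_mag)             # predicted soft mask
    estimate = mask * mixture_mag         # estimated vocal/instrumental magnitude
    loss = criterion(estimate, target_mag)
    loss.backward()                       # gradients for the parameter adjustment
    optimizer.step()
    return loss.item()

# Illustrative shapes: (batch, channels, frequency bins, time frames)
mix = torch.rand(2, 1, 512, 128)
tgt = torch.rand(2, 1, 512, 128)
print(training_step(mix, tgt))
```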
In one example embodiment herein, the instrumental tracks and vocal tracks may be obtained as described above, including, for example and without limitation, according to the procedure of FIG. 4, or they may be otherwise available. The user interface 1400 also includes forward control 1406 and reverse control 1404 for scrolling through a track in either respective direction, temporally. According to an example aspect herein, the user interface 1400 further includes a volume control bar 1408 having a volume control 1409 (also referred to herein as a "karaoke slider") that is operable by a user for attenuating the volume of at least one track. By example, assume that the play button 1402 is selected to play back a song called "Night". According to one non-limiting example aspect herein, when the play button 1402 is selected, the "mixed" original track of the song, and the corresponding instrumental track of the same song (i.e., wherein the tracks may be identified as being a pair according to procedures described above), are retrieved from the mass storage device 1130, wherein, in one example, the instrumental version is obtained according to one or more procedures described above, such as that shown in FIG. 4, for example. As a result, both tracks are simultaneously played back to the user, in synchrony. In a case where the volume control 1409 is centered at position 1410 in the volume control bar 1408, then, according to one example embodiment herein, the "mixed" original track and instrumental track both play at 50% of a predetermined maximum volume. Adjustment of the volume control 1409 in either direction along the volume control bar 1408 enables the volumes of the simultaneously played back tracks to be adjusted in inverse proportion, wherein, according to one example embodiment herein, the more the volume control 1409 is moved in a leftward direction along the bar 1408, the lesser is the volume of the instrumental track and the greater is the volume of the "mixed" original track. For example, when the volume control 1409 is positioned precisely in the middle between a leftmost end 1412 and the center position 1410 of the volume control bar 1408, then the "mixed" original track is played back at 75% of the predetermined maximum volume, and the instrumental track is played back at 25% of the predetermined maximum volume. When the volume control 1409 is positioned all the way to the leftmost end 1412 of the bar 1408, then the "mixed" original track is played back at 100% of the predetermined maximum volume, and the instrumental track is played back at 0% of the predetermined maximum volume. Also according to one example embodiment herein, the more the volume control 1409 is moved in a rightward direction along the bar 1408, the greater is the volume of the instrumental track and the lesser is the volume of the "mixed" original track. By example, when the volume control 1409 is positioned precisely in the middle between the center position 1410 and the rightmost end 1414 of the bar 1408, then the "mixed" original track is played back at 25% of the predetermined maximum volume, and the instrumental track is played back at 75% of the predetermined maximum volume. When the volume control 1409 is positioned all the way to the right along the bar 1408, at the rightmost end 1414, then the "mixed" original track is played back at 0% of the predetermined maximum volume, and the instrumental track is played back at 100% of the predetermined maximum volume.
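The mapping just described is a simple inverse-proportional (cross-fading) one. A minimal sketch follows, assuming the slider position is normalized so that 0.0 corresponds to the leftmost end 1412, 0.5 to the center position 1410, and 1.0 to the rightmost end 1414; the function name and normalization are illustrative and not part of the embodiments.

```python
# Minimal sketch (illustrative names): map a normalized karaoke-slider
# position to inverse-proportional playback volumes for the "mixed"
# original track and the instrumental track, matching the percentages
# described above (0.0 = leftmost end 1412, 0.5 = center position 1410,
# 1.0 = rightmost end 1414).
def karaoke_volumes(slider_pos: float, max_volume: float = 100.0):
    """Return (mixed_volume, instrumental_volume) as percentages of max."""
    p = min(max(slider_pos, 0.0), 1.0)   # clamp to the bar's extent
    mixed = (1.0 - p) * max_volume       # louder as the slider moves left
    instrumental = p * max_volume        # louder as the slider moves right
    return mixed, instrumental

# Spot checks against the description above:
print(karaoke_volumes(0.0))    # (100.0, 0.0)  leftmost end
print(karaoke_volumes(0.25))   # (75.0, 25.0)  halfway between left end and center
print(karaoke_volumes(0.5))    # (50.0, 50.0)  centered
print(karaoke_volumes(0.75))   # (25.0, 75.0)  halfway between center and right end
print(karaoke_volumes(1.0))    # (0.0, 100.0)  rightmost end
```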
In the above manner, a user can control the proportion of the volume levels between the "mixed" original track and the corresponding instrumental track. Of course, the above example is non-limiting. By example, according to another example embodiment herein, when the play button 1402 is selected, the "mixed" original track of the song, as well as the vocal track of the same song (i.e., wherein the tracks may be identified as being a pair according to procedures described above), can be retrieved from the mass storage device 1130, wherein, in one example, the vocal track is obtained according to one or more procedures described above, such as that shown in FIG. 4, or is otherwise available. As a result, both tracks are simultaneously played back to the user, in synchrony. Adjustment of the volume control 1409 in either direction along the volume control bar 1408 enables the volume of the simultaneously played tracks to be adjusted in inverse proportion, wherein, according to one example embodiment herein, the more the volume control 1409 is moved in a leftward direction along the bar 1408, the lesser is the volume of the vocal track and the greater is the volume of the "mixed" original track, and, conversely, the more the volume control 1409 is moved in a rightward direction along the bar 1408, the greater is the volume of the vocal track and the lesser is the volume of the "mixed" original track. In still another example embodiment herein, when the play button 1402 is selected to play back a song, the instrumental track of the song, as well as the vocal track of the same song (wherein the tracks are recognized to be a pair), are retrieved from the mass storage device 1130, wherein, in one example, the tracks are each obtained according to one or more procedures described above, such as that shown in FIG. 4. As a result, both tracks are simultaneously played back to the user, in synchrony. Adjustment of the volume control 1409 in either direction along the volume control bar 1408 enables the volume of the simultaneously played tracks to be adjusted in inverse proportion, wherein, according to one example embodiment herein, the more the volume control 1409 is moved in a leftward direction along the bar 1408, the lesser is the volume of the vocal track and the greater is the volume of the instrumental track, and, conversely, the more the volume control 1409 is moved in a rightward direction along the bar 1408, the greater is the volume of the vocal track and the lesser is the volume of the instrumental track. Of course, the above-described directionalities of the volume control 1409 are merely representative in nature. In other example embodiments herein, movement of the volume control 1409 in a particular direction can control the volumes of the above-described tracks in a manner opposite to that described above, and/or the percentages may be different than those described above. Also, in one example embodiment herein, which particular type of combination of tracks (i.e., a mixed original signal paired with either a vocal or instrumental track, or paired vocal and instrumental tracks) is employed in the volume control technique described above can be predetermined according to pre-programming in the system 1100, or can be specified by the user by operating the user interface 1400. Referring again to FIG. 11, the input control devices 1180 will now be described. Input control devices 1180 can control the operation and various functions of system 1100.
Input control devices1180can include any components, circuitry, or logic operative to drive the functionality of system1100. For example, input control device(s)1180can include one or more processors acting under the control of an application. Each component of system1100may represent a broad category of a computer component of a general and/or special purpose computer. Components of the system1100(400) are not limited to the specific implementations provided herein. Software embodiments of the examples presented herein may be provided as a computer program product, or software, that may include an article of manufacture on a machine-accessible or machine-readable medium having instructions. The instructions on the non-transitory machine-accessible machine-readable or computer-readable medium may be used to program a computer system or other electronic device. The machine- or computer-readable medium may include, but is not limited to, floppy diskettes, optical disks, and magneto-optical disks or other types of media/machine-readable medium suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. The terms “computer-readable”, “machine-accessible medium” or “machine-readable medium” used herein shall include any medium that is capable of storing, encoding, or transmitting a sequence of instructions for execution by the machine and that causes the machine to perform any one of the methods described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result. Some embodiments may also be implemented by the preparation of application-specific integrated circuits, field-programmable gate arrays, or by interconnecting an appropriate network of conventional component circuits. Some embodiments include a computer program product. The computer program product may be a storage medium or media having instructions stored thereon or therein which can be used to control, or cause, a computer to perform any of the procedures of the example embodiments of the invention. The storage medium may include without limitation an optical disc, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nanosystems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data. Stored on any one of the computer-readable medium or media, some implementations include software for controlling both the hardware of the system and for enabling the system or microprocessor to interact with a human user or other mechanism utilizing the results of the example embodiments of the invention. Such software may include without limitation device drivers, operating systems, and user applications. Ultimately, such computer-readable media further include software for performing example aspects of the invention, as described above. Included in the programming and/or software of the system are software modules for implementing the procedures described herein. 
While various example embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the present invention should not be limited by any of the above-described example embodiments, but should be defined only in accordance with the following claims and their equivalents. In addition, it should be understood that FIG. 11 is presented for example purposes only. The architecture of the example embodiments presented herein is sufficiently flexible and configurable, such that it may be utilized (and navigated) in ways other than that shown in the accompanying figures. Further, the purpose of the foregoing Abstract is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that the procedures recited in the claims need not be performed in the order presented.

REFERENCES

[1] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for scene segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[2] Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, and Deva Ramanan. PixelNet: Towards a general pixel-level architecture. arXiv preprint arXiv:1609.06694, 2016.
[3] Rachel M. Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam, and Juan Pablo Bello. MedleyDB: A multitrack dataset for annotation-intensive MIR research. In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR 2014, Taipei, Taiwan, Oct. 27-31, 2014, pages 155-160, 2014.
[4] Kevin Brown. Karaoke Idols: Popular Music and the Performance of Identity. Intellect Books, 2015.
[5] Tak-Shing Chan, Tzu-Chun Yeh, Zhe-Cheng Fan, Hung-Wei Chen, Li Su, Yi-Hsuan Yang, and Roger Jang. Vocal activity informed singing voice separation with the iKala dataset. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 718-722. IEEE, 2015.
[6] Pritish Chandna, Marius Miron, Jordi Janer, and Emilia Gómez. Monoaural audio source separation using deep convolutional neural networks. In International Conference on Latent Variable Analysis and Signal Separation, pages 258-266. Springer, 2017.
[7] Valentin Emiya, Emmanuel Vincent, Niklas Harlander, and Volker Hohmann. Subjective and objective quality assessment of audio source separation. IEEE Transactions on Audio, Speech, and Language Processing, 19(7):2046-2057, 2011.
[8] Emad M. Grais and Mark D. Plumbley. Single channel audio source separation using convolutional denoising autoencoders. arXiv preprint arXiv:1703.08019, 2017.
[9] Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, and Paris Smaragdis. Singing-voice separation from monaural recordings using deep recurrent neural networks. In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR 2014, Taipei, Taiwan, Oct. 27-31, 2014, pages 477-482, 2014.
[10] Eric Humphrey, Nicola Montecchio, Rachel Bittner, Andreas Jansson, and Tristan Jehan. Mining labeled data from web-scale collections for vocal activity detection in music. In Proceedings of the 18th ISMIR Conference, 2017.
[11] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
[12] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[13] Rensis Likert. A technique for the measurement of attitudes. Archives of Psychology, 1932.
[14] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[15] Yi Luo, Zhuo Chen, and Daniel P. W. Ellis. Deep clustering for singing voice separation. 2016.
[16] Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, and Nima Mesgarani. Deep clustering and conventional networks for music separation: Stronger together. arXiv preprint arXiv:1611.06265, 2016.
[17] Annamaria Mesaros and Tuomas Virtanen. Automatic recognition of lyrics in singing. EURASIP Journal on Audio, Speech, and Music Processing, 2010(1):546047, 2010.
[18] Annamaria Mesaros, Tuomas Virtanen, and Anssi Klapuri. Singer identification in polyphonic music using vocal separation and pattern recognition methods. In Proceedings of the 8th International Conference on Music Information Retrieval, ISMIR 2007, Vienna, Austria, Sep. 23-27, 2007, pages 375-378, 2007.
[19] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520-1528, 2015.
[20] Nicola Orio et al. Music retrieval: A tutorial and review. Foundations and Trends in Information Retrieval, 1(1):1-90, 2006.
[21] Alexey Ozerov, Pierrick Philippe, Frédéric Bimbot, and Rémi Gribonval. Adaptation of Bayesian models for single-channel source separation and its application to voice/music separation in popular songs. IEEE Transactions on Audio, Speech, and Language Processing, 15(5):1564-1578, 2007.
[22] Colin Raffel, Brian McFee, Eric J. Humphrey, Justin Salamon, Oriol Nieto, Dawen Liang, and Daniel P. W. Ellis. mir_eval: A transparent implementation of common MIR metrics. In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR 2014, Taipei, Taiwan, Oct. 27-31, 2014, pages 367-372, 2014.
[23] Zafar Rafii and Bryan Pardo. Repeating pattern extraction technique (REPET): A simple method for music/voice separation. IEEE Transactions on Audio, Speech, and Language Processing, 21(1):73-84, 2013.
[24] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.
[25] Andrew J. R. Simpson, Gerard Roma, and Mark D. Plumbley. Deep karaoke: Extracting vocals from musical mixtures using a convolutional deep neural network. In International Conference on Latent Variable Analysis and Signal Separation, pages 429-436. Springer, 2015.
[26] Paris Smaragdis, Cédric Févotte, Gautham J. Mysore, Nasser Mohammadiha, and Matthew Hoffman. Static and dynamic source separation using nonnegative factorizations: A unified view. IEEE Signal Processing Magazine, 31(3):66-75, 2014.
[27] Philip Tagg. Analysing popular music: theory, method and practice. Popular Music, 2:37-67, 1982.
[28] Thilo Thiede, William C. Treurniet, Roland Bitto, Christian Schmidmer, Thomas Sporer, John G. Beerends, and Catherine Colomes. PEAQ: The ITU standard for objective measurement of perceived audio quality. Journal of the Audio Engineering Society, 48(1/2):3-29, 2000.
[29] George Tzanetakis and Perry Cook. Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5):293-302, 2002.
[30] Shankar Vembu and Stephan Baumann. Separation of vocals from polyphonic audio recordings. In ISMIR 2005, 6th International Conference on Music Information Retrieval, London, UK, 11-15 Sep. 2005, Proceedings, pages 337-344, 2005.
[31] Emmanuel Vincent, Rémi Gribonval, and Cédric Févotte. Performance measurement in blind audio source separation. IEEE Transactions on Audio, Speech, and Language Processing, 14(4):1462-1469, 2006.
[32] Tuomas Virtanen. Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria. IEEE Transactions on Audio, Speech, and Language Processing, 15(3):1066-1074, 2007.
[33] Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In European Conference on Computer Vision, pages 649-666. Springer, 2016.
[34] Daniel P. W. Ellis, Brian Whitman, and Alastair Porter. Echoprint: An open music identification service. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR). ISMIR, 2011 (2 sheets).
[35] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, Vol. 65, No. 6, pp. 386-408.
[36] Ian Goodfellow et al. Deep Learning. Vol. 1. Cambridge: MIT Press, 2016. Chapter 9: Convolutional Neural Networks.
[37] Andreas Jansson et al. Singing Voice Separation With Deep U-Net Convolutional Networks. 18th International Society for Music Information Retrieval Conference, Suzhou, China, 2017. Reference [37] is incorporated by reference herein in its entirety, as if set forth fully herein.
[38] Jan Schlüter and Sebastian Böck. Musical onset detection with convolutional neural networks. 6th International Workshop on Machine Learning and Music (MML), Prague, Czech Republic, 2013.
[39] Daniel Griffin and Jae Lim. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2):236-243, 1984.
68,133
11862192
These drawings may be better understood when observed in connection with the following detailed description.

DETAILED DESCRIPTION

Modern computing devices often provide features to detect and understand human speech. The features may be associated with a virtual assistant that may be accessible via a computing device that is resource constrained, such as a smart speaker, mobile phone, smart watch, or other user device. The computing device may be associated with a microphone that can record the human speech and may use a combination of local and remote computing resources to analyze the human speech. Analyzing speech is typically a resource-intensive operation, and the computing device may be configured to perform some of the processing locally and to have some of the processing performed remotely at a server or via a cloud service. Many virtual assistants use some form of a remote speech recognition service that takes audio data as input and converts the audio data into text that is returned to the computing device. Many technical problems arise when a computing device attempts to use traditional virtual assistant features to follow along as a user reads a text source aloud. Some of the problems arise because a traditional virtual assistant may be unable to detect when a user has finished providing audio input (e.g., when the user continues to talk about something else). This may potentially result in the unnecessary utilization of computing resources of the virtual assistant, such as processing capacity, memory, data storage, and/or network bandwidth, which are consumed when the virtual assistant continues to follow the user's reading of the text after the user has finished reading the text. Additionally or alternatively, this may result in the computing device continuing to record and/or process the audio of the user, which may be problematic if the user transitions to discussing something private. Detecting when a user has stopped reading from a text source may be more challenging when a user does not follow along with the text and skips, repeats, or adds new content while reading the text source aloud. Aspects and implementations of the present technology address the above and other deficiencies by enhancing the ability of a computing device to detect when a user has discontinued reading a text source. In one example, the technology may enable the virtual assistant to more accurately detect that the user has taken a break from reading the text source and may deactivate a microphone to avoid capturing private audio content. This may involve receiving audio data comprising spoken words associated with a text source and comparing the audio data with data of the text source. The technology may calculate a correspondence measure between content of the audio data and the content of the text source. The correspondence measure may be a probabilistic value that is based on a comparison of phoneme data, textual data, or other data and may involve using fuzzy matching logic. When the correspondence measure satisfies a threshold (e.g., is below a minimum correspondence threshold), the technology may cause a signal to be transmitted that will cease the analysis of subsequent audio data. Systems and methods described herein include technology that enhances the technical field of computer-based recognition of human speech.
In particular, the technology may address technical problems, such as avoiding an inadvertent recording of a user's private conversation by using comparisons that better compensate for non-linear reading of the text source (e.g., skipping, repeating, adding content). The technology may also enable the computing device to reduce power and/or other computing resource consumption by deactivating an audio sensor (e.g., microphone) and associated data processing when the computing device detects the user has stopped reading the text. The technology discussed below includes multiple enhancements to a computing device with or without virtual assistant features. The enhancements may be used individually or together to optimize the ability of the computing device to follow along while a text source is being read aloud and to provide special effects to supplement an environment of a listening user. In one example, the environment may include a parent reading a book aloud to one or more children. In another example, the environment may include one or more users providing a presentation, speech, or other performance to an audience. In either example, the technology may be used to enhance the environment with special effects based on an analysis of data associated with the text source. The special effects may be synchronized with particular portions of the text source, such as a particular spoken word or a page turn. FIG.1illustrates an example environment100that includes a text source that is being read aloud and one or more devices that supplement the environment to enhance the listening experience of a user, in accordance with one or more aspects of the disclosure. Environment100may be a physical environment such as an indoor setting (e.g., bedroom, conference room), outdoor setting (park, field), or other location. Environment100may be referred to as a ubiquitous computing environment or pervasive computing environment and may include embedded computing functionality. The embedded computing functionality may provide ambient intelligence that is sensitive and responsive to the presence of humans. In one example, environment100may include one or more users110A and110B, a text source120, one or more computing devices130A and130B, and one or more physical effect devices140A-C. Users110A and110B may include human users that are able to perceive content of a text source. User110A may be an individual user that is reading the content of the text source or may be multiple users that are each reading a portion of one or more text sources. User110A may be referred to as a reader, presenter, announcer, actor, other term, or a combination thereof. User110B may listen to content of the text source that is being read aloud. User110B may or may not read along with user110A. In one example, user110A may be a parent that is reading to a child user110B. In another example, user110A may include one or more presenters speaking to one or more users110B that are members of an audience. In either example, the content of text source120may be announced for one or more other users to hear. Text source120may be any source of content that can be interpreted and read aloud. Text source120may include content that contains numbers, characters, words, symbols, images, or a combination thereof. The content may be arranged into a sequence that can be uttered by a user when read or after being memorized. 
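Before turning to FIG. 1, the thresholded decision described above (calculate a correspondence measure for each segment of audio and cease analysis when the measure falls below a minimum) can be summarized in a short sketch. The names, the threshold value, and the crude word-overlap stand-in for the correspondence measure are assumptions for illustration only; the embodiments describe a probabilistic, phoneme-based comparison that is detailed further below.

```python
# Minimal control-flow sketch (illustrative names, not the embodiments'
# exact implementation). A crude word-overlap ratio stands in for the
# probabilistic, phoneme-based correspondence measure described herein;
# the point of the sketch is the thresholded "stop listening" decision.
MIN_CORRESPONDENCE = 0.4  # assumed threshold value

def correspondence_measure(segment_words, text_source_words) -> float:
    """Stand-in measure: fraction of the segment's words that appear in
    the text source (the actual measure compares phoneme data)."""
    if not segment_words:
        return 0.0
    vocabulary = set(w.lower() for w in text_source_words)
    hits = sum(1 for w in segment_words if w.lower() in vocabulary)
    return hits / len(segment_words)

def follow_reading(audio_segments, text_source_words, stop_listening):
    """Process segments until correspondence falls below the minimum
    threshold, then signal that analysis of subsequent audio should cease."""
    for segment_words in audio_segments:
        score = correspondence_measure(segment_words, text_source_words)
        if score < MIN_CORRESPONDENCE:
            stop_listening()   # e.g., deactivate the microphone pipeline
            break

# Illustrative use:
book = "the little bear walked into the dark forest".split()
segments = [
    "the little bear walked".split(),           # still reading
    "into the dark forest".split(),             # still reading
    "anyway how was your day at work".split(),  # private conversation
]
follow_reading(segments, book, lambda: print("stop analyzing audio"))
```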
Text source120may be a physical or electronic book, magazine, presentation, speech, script, screenplay, memorandum, bulletin, article, blog, post, message, other arrangement of text, or a combination thereof. In the example ofFIG.1, text source120may be a children's book that includes a sequence of words and images that can be read aloud to a child. Audible actions112A-C may be any action or combination of actions that produce a sound that can be detected by a user or computing device. Audible actions112A-C may be heard, perceived, or observed by an ear of a user or by audio sensors associated with a computing device (e.g., microphones). As shown inFIG.1, there may be multiple types of audible actions and they may depend on where the sound originates from. Audible actions112A may be a first type of audible action that includes a vocal sound (e.g., an utterance) that may originate from a human voice or a computer synthesized voice. The vocal sound may be a linguistic vocal sound (e.g., spoken word), a non-linguistic vocal sound (e.g., laughing, crying, coughing), other sound, or a combination thereof. Audible action112B may be a second type of audible action that includes non-vocal sounds that originate from the user or another source and may include clapping, finger snapping, other sound, or a combination thereof. Audible action112C may be a third type of audible action that includes non-vocal sounds that are caused by a user interacting with an object and may include a page turning, a book closing, a door opening/closing, an object falling, tapping on the floor, other sound, or a combination thereof. One or more of the audible actions112A-C may be detected by one or more of sensors131A-C. Sensors131A-C may be coupled to the computing device130A and may enable the computing device to sense aspects of environment100. Sensors131A-C may include one or more audio sensors (e.g., microphones), optical sensors (e.g., ambient light sensor, camera), atmospheric sensor (e.g., thermometer, barometer, hydrometer), motion sensors (e.g., accelerometer, gyroscope, etc.), location sensors (e.g., global positioning system sensors (GPS)), proximity sensors, other sensing devices, or a combination thereof. In the example shown inFIG.1, sensor131A may be an audio sensor, sensor131B may be an optical sensor, and sensor131C may be a temperature sensor. One or more of sensors131A-C may be internal to computing device130A, external to computing device130A, or a combination thereof and may be coupled to computing device130A via a wired or wireless connection (e.g., Bluetooth®, WiFi®). Computing device130A may be any computing device that is capable of receiving and processing data derived from sensors131A-C. Computing device130A may function as a voice command device and provide access to an integrated virtual assistant. In one example, computing device130A may include a smart speaker, mobile device (e.g., phone, tablet), a wearable device (e.g., smart watch), a digital media player (e.g., smart TV, micro console, set-top-box), a personal computer (e.g., laptop, desktop, workstation), home automation device, other computing device, or a combination thereof. In some implementations, computing device130A may also be referred to as a “user device,” “consumer device,” or “client device.” Data generated by sensors131A-C may be received by computing device130A and may be processed locally by computing device130A or may be transmitted remotely from computing device130A to another computing device (e.g.,130B). 
Computing device 130A may include one or more components for processing the sensor data. In the example shown in FIG. 1, computing device 130A may include an audio analysis component 132, a text source analysis component 133, a comparison component 134, a non-linear reading recognition component 135, a physical effect determination component 136, a predictive loading component 137, and an effect providing component 138. In other examples, one or more of these components, or one or more features of the components, may be implemented by another computing device (e.g., computing device 130B). These components will be discussed in more detail in regards to FIGS. 2-4 and may function to detect a current reading location and to instruct one or more physical effect devices 140A-C to enhance the listening experience. Physical effect devices 140A-C may be any computing device capable of causing or providing a physical effect. The physical effect may be perceived via a sense of users 110A and 110B (e.g., hearing, sight, touch, smell, and taste). Each of the physical effect devices 140A-C may produce one or more of the physical effects, and computing device 130A may function as one or more of the physical effect devices 140A-C. Physical effect devices 140A-C may provide a physical effect 145 or may instruct another device to provide physical effect 145. In one example, one or more of the physical effect devices 140A-C may be part of or integrated with a home automation system or may be separate from a home automation system. As shown in FIG. 1, physical effect device 140A may include a speaker or other device capable of causing or emitting an acoustic effect. Physical effect device 140B may include one or more light sources (e.g., lightbulbs, pixels) or other device capable of altering the amount of light present in environment 100 (e.g., motorized shades or blinds). Physical effect device 140C may include one or more devices that can cause a haptic effect and may include a vibration source (e.g., massaging chair), a fan producing wind (e.g., ceiling fan or air conditioner), a heating or cooling source (e.g., thermostat), other device, or a combination thereof. Physical effect 145 may be any modification of environment 100 that can be perceived by a user or a computing device and may include acoustic effects, haptic effects, optical effects, other effects, or a combination thereof. An acoustic effect may be a physical effect that relates to sound and can be propagated via sound waves. An acoustic effect may include human or animal sounds (e.g., voices or noises), atmospheric sounds (e.g., thunder, rain, wind, or other weather sounds), musical sounds (e.g., instruments, background music, theme music), object sounds (e.g., knocking, door opening, window shutting, glass breaking, object crashing, automobile running), other sound effects, or a combination thereof. A haptic effect may be a physical effect that relates to a user's sense of touch. The haptic effect may include a breeze, a vibration, a temperature change, other touch sensation, or a combination thereof. An optical effect may be a physical effect that relates to light and can be propagated via visible electromagnetic radiation. The optical effect may include the increase or decrease in ambient lighting, light flashes, animations, other change in an amount of light, or a combination thereof.
The optical effect may originate from lamps (e.g., ceiling lamps, desk lamps), flashlights (e.g., phone light), window coverings (e.g., blinds or shades), projectors, electronic displays, holographic displays, lasers, other light sources, or a combination thereof. Other effects may include smell- or taste-related effects (e.g., olfactory effects). Computing device 130B may be a server that is coupled to computing device 130A and may be local to or remote from environment 100. Computing device 130B may include one or more computing devices (such as rackmount servers, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, a router, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components. In one example, computing device 130B may function to provide remote processing and may function as a speech processing service as discussed in more detail in regards to FIG. 2. In another example, computing device 130B may provide computing device 130A with access to media items. The media items may correspond to physical effects, text sources, profile information, speech models, instructions, other data, or a combination thereof. Example media items may include, but are not limited to, digital sound effects, digital music, digital animations, social media information, electronic books (e-books), electronic magazines, digital newspapers, digital audio books, digital video, digital photos, website content, electronic journals, web blogs, really simple syndication (RSS) feeds, electronic comic books, software applications, etc. In some implementations, a media item may be referred to as a content item and may be provided over the Internet and/or via computing device 130A (e.g., smart speaker). As used herein, "media," "media item," "digital media," "digital media item," "content," and "content item" can include an electronic file or record that can be loaded or executed using software, firmware, or hardware configured to present the content to one or more users in environment 100. In one implementation, computing device 130B may store media items using one or more data stores and provide the media items to computing device 130A over network 150. Network 150 may include one or more of a private network (e.g., a local area network (LAN)), a public network (e.g., the Internet), a wide area network (WAN), a wired network (e.g., Ethernet network), a wireless network (e.g., Wi-Fi or Bluetooth connection), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof. In general, functions described in one implementation as being performed by computing device 130A, computing device 130B, or physical effect devices 140A-C may be performed by one or more of the other devices in other implementations. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The computing devices 130A and 130B may also be accessed as a service provided to other systems or devices through appropriate application programming interfaces. Although implementations of the disclosure are discussed in terms of a smart speaker, the implementations may also incorporate one or more features of a cloud service or content sharing platform.
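As an illustration of how detected reading locations might drive the physical effect devices 140A-C described above, the following hypothetical sketch maps trigger locations in the text source to effects and dispatches any effect whose location has just been passed. The trigger table, device names, and word-index locations are invented for illustration and are not part of the embodiments.

```python
# Hypothetical sketch: dispatch physical effects (acoustic, optical,
# haptic) when the detected reading location reaches predefined trigger
# locations in the text source. Locations here are simple word indexes;
# the effect table and device names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EffectTrigger:
    location: int      # word index in the text source
    device: str        # e.g., "speaker", "lights", "fan"
    effect: str        # e.g., "thunder.wav", "dim 30%", "breeze 10s"

TRIGGERS = [
    EffectTrigger(location=42, device="speaker", effect="thunder.wav"),
    EffectTrigger(location=42, device="lights", effect="flash twice"),
    EffectTrigger(location=97, device="fan", effect="breeze for 10 seconds"),
]

def dispatch_effects(previous_location: int, current_location: int):
    """Fire every trigger whose location was passed since the last update."""
    for trigger in TRIGGERS:
        if previous_location < trigger.location <= current_location:
            # In a real system this would send a command to the device,
            # e.g., over the local network or a home-automation hub.
            print(f"{trigger.device}: {trigger.effect}")

dispatch_effects(previous_location=40, current_location=45)  # fires the two effects at word 42
```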
In situations in which the systems discussed herein collect personal information about client devices or users, or may make use of personal information, the users may be provided with an opportunity to control whether the computing devices can collect user information (e.g., information about a user's audio input, a user's preferences, a user's current location, social network, social actions, activities, or profession), or to control whether and/or how to receive content from a computing device that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the computing device. FIG.2-4depict block diagrams illustrating an exemplary computing device130that can detect a reading location within a text source and supplement the environment with physical effects to enhance a listening experience, in accordance with one or more aspects of the disclosure. Computing device130may be the same or similar to computing devices130A, computing device130B, or a combination thereof.FIG.2discusses features that enable computing device130to receive and compare audio data of a user with data of a text source.FIG.3discusses features that enable computing device130to analyze data to detect a reading location based on the audio data and text source data.FIG.4discusses features that enable computing device130to provide physical effects to modify the environment of one or more listeners. The components and modules provided inFIGS.2-4are exemplary and more or less components or modules may be included without loss of generality. For example, two or more of the components may be combined into a single component, or features of a component may be divided into two or more components. In one implementation, one or more of the components may reside on different computing devices (e.g., a client device and a server device). Referring toFIG.2, computing device130may include an audio analysis component132, a text source analysis component133, a comparison component134, and a data store240. Audio analysis component132may receive and access the audio data extracted from an environment while a user reads a text source aloud. In one example, audio analysis component132may include an audio data receiving module212and an acoustic modeling module214. Audio data receiving module212may receive audio data241that includes one or more audible actions of a user. The audio data may include spoken words, page turns, or other audible actions that are captured from an environment of a user. Audio data241may be received directly from one or more of the sensors in the form of an audio signal or may be received indirectly from a data store240or other computing device after the sensors store the audio data241. Audio data241may be in any digital or analog format and may be accessed or received from within one or more storage objects (e.g., files, database records), data streams (e.g., audio stream, video stream), data signals, other data transmission or storage protocol, or a combination thereof. 
Audio data241may be an audio recording and may be segmented into one or more durations (e.g., portions, chunks, or other units) before, during, or after it is analyzed by acoustic modeling module214. Acoustic modeling module214may analyze audio data241using an acoustic model to identify phoneme data243A. The acoustic model may represent known relationships between audible actions and phonemes. A phoneme may be a unit of sound and may correspond to a sound pattern of an audible action (e.g., spoken word). A phoneme may be a linguistic unit, non-linguistic unit, other unit, or a combination thereof. Acoustic modeling module214may translate the audio data into phonemes that are stored in data store240as phoneme data243A. Phoneme data243A may include values that represent the one or more phonemes extracted from audio data241. Phoneme data243A may represent a sequence of phonemes using a standard or proprietary notation. The notation may include a particular arrangement of one or more bits, bytes, symbols, or characters that represent a phoneme. In one example, the particular arrangement may include a symbol placed beside or between one or more delimiters. The delimiters may include slashes, brackets, pipes, parenthesis, commas, tabs, spaces, new line character, other separator, or a combination thereof. The phonemes may be arranged into a sequence of phonemes that represent a portion of one or more audible actions. Text source analysis component133may receive and analyze data related to text source120. Text source120may be determined in view of user input that is text based, speech based, touch based, gesture based, or other manner of user input. For example, a user may identify text source120by saying the name of the text source120(e.g., title or author of a book), by typing and searching for the text source, by selecting a displayed text source, other selection mechanism, or a combination thereof. In the example shown inFIG.2, text source analysis component133may include a data access module222and a phoneme determination module224. Data access module222may access data associated with text source120and may store the accessed data as text source data242. Data access module222may access data from one or more sources, which may include a local source, a remote source, or a combination thereof. A local source may be storage of computing device130and a remote source may be storage of a computing device that is accessible over a network connection. In one example, the remote source may be the same or similar to computing device130B (e.g., server or cloud service). The local or remote source may store data of one or media items discussed above and computing source may access the data. The data may then be analyzed, filtered, combined, or modified and subsequently stored as text source data242. Text source data242may be any data associated with text source120and may be provided by or accessible from an author, publisher, distributor, partner, a remote server, a third party service, other source, or combination thereof. Text source data242may include descriptive data, textual data, phoneme data, other data, or a combination thereof. The descriptive data may indicate the title, summary, source (e.g., author, publisher, distributor), table of content (e.g., chapters, sections, pages), index (e.g., phrases, page indicators), other data, or a combination thereof. The textual data may include one or more words of text source120. 
In one example, the words may be organized as a sequence of words122with or without one or more images124. The textual data may be a data structure that arranges the words in the same or similar manner as they are read by a user (e.g., series of consecutive words). The sequence of words may be limited to only the words appearing in text source120or may be supplemented with words or data that indicate the existence or content of non-textual information (e.g., illustrations, images, tables, formatting, paragraph, pages). In another example, the words may also or alternatively be arranged in an index data structure that indicates unique words that are present in the text source120but are not arranged consecutively in a manner spoken by a user. Either data structure may be supplemented with additional information that may include a word location within the text source (e.g., page, line, slide), a number of occurrences, variations of the word (e.g., tense, plural), other data, or a combination thereof. In one example, text source120may be a physical book and text source data242may include words from a corresponding electronic book (e.g., e-book), a third party service, other source, or a combination thereof. The phoneme data of text source120may be the same or similar to phoneme data243B and may be a phonetic encoding of text source120that is in a format that is the same or similar to the phoneme data derived from the audio (e.g., phoneme data243A). In the example discussed above, phoneme data243B of text source120may be included as part of text source data242and be accessed by phoneme determination module224. In another example, phoneme data243B may be absent from text source data242and may be generated by phoneme determination module224. Phoneme determination module224may determine the phoneme data for a particular text source120. This may involve phoneme determination module224accessing existing phoneme data243B from a remote source, generating phoneme data243B based on textual data, or a combination thereof. When generating phoneme data243B, phoneme determination module224may access and analyze textual data of text source data242and convert (e.g., derive, translate, transform, encode) the textual data into phoneme data243B. The generated phoneme data may then be associated with text source120for future use by computing device130or by one or more other computing devices. In one example, the textual data may include a sequence of words and the generated phoneme data may include a phonetic encoding that includes sequences of phonetic values representing the sequence of words. The same sequence of phonetic values may correspond to two words that sound the same but are spelled different (e.g., homophones). Likewise, a different sequence of phonetic values may correspond to words that sound different even though they are spelled the same (e.g., homographs). As discussed above, phoneme data243A and243B may each include a sequence of phonemes that are represented using a standard or proprietary notation. The notation may be referred to as a phonetic transcription or a phoneme transcription and may include a particular arrangement of phoneme values that represent a linguistic segment. The linguistic segment may be any discrete unit that can be identified, either physically or auditorily, in a stream of speech. A phoneme value may include one or more symbols, characters, bytes, bits, other value, or combination thereof. 
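As a concrete illustration of the phonetic encoding just described, the sketch below encodes words into phoneme sequences using a tiny, hypothetical pronunciation lexicon; a real system would rely on a full lexicon or a grapheme-to-phoneme model. It also shows how homophones map to the same sequence of phonetic values. The lexicon entries and symbols are illustrative only.

```python
# Illustrative sketch only: encode a sequence of words into phoneme
# sequences using a tiny hypothetical pronunciation lexicon. The symbols
# below follow the document's /.../ delimiter notation.
LEXICON = {
    # hypothetical entries; homophones share one encoding
    "thumb": "/θ∧m/",
    "dumb":  "/d∧m/",
    "there": "/ðεr/",
    "their": "/ðεr/",   # homophone of "there": same phonetic values
}

def encode_text(words):
    """Return a phonetic encoding (phoneme-data-243B style) for a
    sequence of words, leaving unknown words marked as unencoded."""
    return [LEXICON.get(w.lower(), f"<unknown:{w}>") for w in words]

print(encode_text(["There", "dumb", "thumb"]))
# ['/ðεr/', '/d∧m/', '/θ∧m/']

# Homophones ("there"/"their") map to the same sequence of phonetic values:
print(LEXICON["there"] == LEXICON["their"])  # True
```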
In one example, a phoneme value may be represented by one or more Unicode characters, American Standard Code for Information Interchange (ASCII) characters, other characters, or a combination thereof. A sequence of phoneme values may represent a single word and each individual phoneme value may represent a portion of the word. For example, a first sequence of phonemes may be /θ∧m/ and represent the spoken word "thumb" and a second sequence of phonemes may be /d∧m/ and represent the spoken word "dumb." In the examples discussed below, phoneme data 243A and 243B may include a sequence of values and each value may represent a phoneme of a phoneme vocabulary. The phoneme vocabulary may include the set of possible phoneme values for one or more languages. The phoneme vocabulary may be an alphabetic system of phonetic notation and may represent qualities of speech that are part of oral language: phones, phonemes, intonation, and the separation of words and syllables. The phoneme vocabulary may or may not also represent additional qualities of speech and variations of speech enunciations (e.g., lisps, mispronunciations, accents, dialects). The phoneme vocabulary may be the same or similar to a phoneme alphabet, character set, lexis, lexicon, other variation, or a combination thereof. In one example, the phoneme vocabulary may be based on the International Phonetic Alphabet (IPA). IPA symbols may be composed of one or more elements related to letters and diacritics. For example, the sound of the English letter t may be transcribed in IPA with a single letter, [t], or with a letter plus a diacritic, [tʰ]. Delimiters (e.g., slashes) may be used to signal broad or phonemic transcription; thus, /t/ may be less specific than, and could refer to, either [tʰ] or [t], depending on the context and language. In other examples, the phoneme vocabulary may be the same or similar to the Extended Speech Assessment Methods Phonetic Alphabet (X-SAMPA), Kirshenbaum (e.g., ASCII-IPA, erkIPA), other phoneme vocabulary, or a combination thereof. Comparison component 134 may compare the audio of user 110A with the content of text source 120. The examples discussed below compare the audio and the text source using their corresponding phoneme data and in the absence of converting the audio into text using speech recognition. Other examples may also or alternatively use textual data, descriptive data, audio data, other data, or a combination thereof. The comparisons may be performed by computing device 130, by a remote computing device (e.g., cloud service), or a combination thereof. In one example, comparison component 134 may select a phoneme sequence derived from the audio and compare it with multiple phoneme sequences derived from the text source. In another example, comparison component 134 may compare a phoneme sequence of the text source with multiple phoneme sequences derived from the audio. In either example, the calculation of the similarity measurement data may be based on a phoneme edit distance. Phoneme edit distance module 232 may quantify how similar two phoneme sequences are to one another by determining a minimum number of operations required to convert one phoneme sequence into an exact match of the other phoneme sequence. The operations may include any modification of a phoneme value (e.g., symbol) within one of the sequences of phonemes. Example operations may include primitive operations such as phoneme removals, insertions, substitutions, transpositions, other operation, or a combination thereof.
In the example discussed above, the first sequence of phonemes may be /θ∧m/ and represent “thumb” and a second sequence of phonemes may be /d∧m/ and represent “dumb.” Although the two words differ by two letters, their phoneme edit distance is the numeric value 1 because converting the sequences to an exact match involves a substitution of a single phoneme (e.g., θ with d). In one example, the phoneme edit distance may be a linear edit distance that is the same or similar to a Levenshtein distance. The Levenshtein distance may be based on a minimum number of removal, insertion, or substitution operations needed to make the two phoneme sequences equal. In other examples, the phoneme edit distance may also or alternatively include transposition or other operations. In either example, the phoneme edit distance may be a numeric value that is used to determine the similarity measurement data244. Similarity measurement module234may access data of phoneme edit distance module to determine the similarity or dissimilarity between the audio and the text source. Similarity measurement module234may analyze data of phoneme edit distance module to calculate the similarity measurement data244. Similarity measurement data244may represent the similarity between two or more sequences of phonemes (e.g., phonetic representation of words or sets of words) and may include numeric data, non-numeric data, other data, or a combination thereof. Similarity measurement data244may be based on the edit distance of one or more sequences of phonemes. In one example, the similarity measurement data244may include the numeric value of the phoneme edit distance. In another example, similarity measurement data244may include a probabilistic value derived from the numeric value of the phone edit distance. For example, similarity measurement data may be a percentage, ratio, or other value that is based on one or more phoneme edit distances and one or more other values. The other value may be the number of phonemes in the one or more phoneme sequences or in a portion of the text source. Data store240may be a memory (e.g., random access memory), a cache, a drive (e.g., solid state drive, hard drive, flash drive), a database system, or another type of component or device capable of storing data. Data store240may also include multiple storage components (e.g., multiple drives or multiple databases) that may span one or more computing devices (e.g., multiple server computers). FIG.3depicts a block diagram illustrating exemplary components that enable computing device130to analyze data discussed above to determine a reading location or the absence of a reading location within the text source. As discussed above, portions of the audio may not match the text source identically because the user may add, skip, repeat, or reorder content of the text source when reading aloud. As a result, the phoneme data derived from the audio and the phoneme data derived from the text source may be challenging to compare and align. In the example shown inFIG.1, computing device130may include a non-linear reading recognition component135that enables the computing device to determine a location within the text source that best aligns with the audio data. In one example, non-linear reading recognition component135may include a fuzzy matching module352, a location identification module354, a reading speed module356, and a reading discontinuation module358. 
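The phoneme edit distance and the similarity measurement described above can be sketched as follows. The Levenshtein-style dynamic program mirrors the removal, insertion, and substitution operations discussed earlier, and the sketch reproduces the thumb/dumb result of 1; the particular normalization used to turn the distance into a similarity value is an illustrative assumption, since the embodiments describe the similarity measurement only as a value based on the edit distance and a phoneme count.

```python
# Minimal sketch of a phoneme edit distance (Levenshtein distance over
# phoneme symbols) and a derived similarity value.
def phoneme_edit_distance(a, b):
    """Minimum number of removals, insertions, or substitutions needed
    to turn phoneme sequence a into phoneme sequence b."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        curr = [i]
        for j, pb in enumerate(b, start=1):
            cost = 0 if pa == pb else 1
            curr.append(min(prev[j] + 1,         # removal
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution (or match)
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Map the edit distance to a value in [0, 1]; 1.0 means identical.
    (Normalization by sequence length is an assumed choice.)"""
    longest = max(len(a), len(b)) or 1
    return 1.0 - phoneme_edit_distance(a, b) / longest

thumb = ["θ", "∧", "m"]   # /θ∧m/
dumb  = ["d", "∧", "m"]   # /d∧m/
print(phoneme_edit_distance(thumb, dumb))  # 1 (single substitution, θ -> d)
print(similarity(thumb, dumb))             # ~0.667
```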
FIG.3depicts a block diagram illustrating exemplary components that enable computing device130to analyze the data discussed above to determine a reading location or the absence of a reading location within the text source. As discussed above, portions of the audio may not match the text source identically because the user may add, skip, repeat, or reorder content of the text source when reading aloud. As a result, the phoneme data derived from the audio and the phoneme data derived from the text source may be challenging to compare and align. In the example shown inFIG.1, computing device130may include a non-linear reading recognition component135that enables the computing device to determine a location within the text source that best aligns with the audio data. In one example, non-linear reading recognition component135may include a fuzzy matching module352, a location identification module354, a reading speed module356, and a reading discontinuation module358.

Fuzzy matching module352may enable computing device130to determine whether there is a match between the audio and the text source. The match may be the same or similar to a probabilistic match, best match, a closest match, or any match that may not be an exact match but satisfies a predetermined threshold. In one example, determining a match between the audio and text source may involve detecting that a fragment of audio comprises one or more words of the text source. The match may be detected even if the audio or text source contains other words, is missing words, or includes variations of the words (e.g., mispronunciation, missing plural). The match may be referred to as a fuzzy match or an approximate match and may be detected using fuzzy matching logic. The fuzzy matching logic may be used to compare sequences of phoneme values and may operate at a syllable level segment, a word level segment, a phrase level segment, a sentence level segment, other segment, or a combination thereof. In one example, fuzzy matching module352may perform the fuzzy matching using an audio segment that has a predetermined length. The predetermined length may be customizable and may be any duration (e.g., 3+ seconds) or any number of word tokens (e.g., 3-4 words). Having a predetermined length that is much smaller than the length of the text source may enhance accuracy and performance when accounting for non-linear reading.

Fuzzy matching module352may impose one or more constraints to determine the match. In one example, detecting a match may involve using one or more global unweighted costs. A global unweighted cost may be related to the total number of primitive operations necessary to convert a candidate sequence of phonemes (e.g., candidate pattern from the text source) to a selected sequence of phonemes (e.g., pattern from the audio). In another example, detecting a match may involve specifying a number of operations of each type separately, while still other examples may set a total cost but allow different weights to be assigned to different primitive operations. Fuzzy matching module352may also apply separate assignments of limits and weights to individual phoneme values in the sequence.

Location identification module354may access data of fuzzy matching module352to identify a location within the text source that corresponds to the audible actions of the audio (e.g., spoken words). In one example, the text source may be a book for children and the location may be a reading location within a sequence of words of the book. In other examples, the location may be within a speech, presentation, script, screenplay, other text source, or a combination thereof. In either example, the location may be a past, current, or future reading location within the text source and may be stored as location data345. The location data may be numeric or non-numeric data that identifies one or more particular phonemes, words, paragraphs, pages, sections, chapters, tables, images, slides, other locations, or a combination thereof. Location identification module354may determine that an audible action matches multiple different portions of the text source. This may occur when the same word or phrase (e.g., sequence of phonemes) is repeated multiple times within the text source. Location identification module354may detect the spoken word by analyzing the phoneme data and detect that the spoken words match multiple candidate locations within the text source.
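The following Python sketch illustrates one hypothetical form of such fuzzy matching: it slides a window across the phoneme sequence of the text source, scores each window against a short audio-derived phoneme segment using the phoneme_edit_distance sketch above, and returns every candidate location whose distance satisfies a threshold. The window handling and the default threshold are assumptions made for illustration only.

def candidate_locations(audio_phonemes, source_phonemes, max_distance=2):
    # Slide a window the size of the audio segment across the text source's
    # phoneme sequence and keep every offset whose edit distance satisfies
    # the threshold. More than one offset may survive when a phrase repeats.
    window = len(audio_phonemes)
    candidates = []
    for start in range(0, max(len(source_phonemes) - window + 1, 1)):
        segment = source_phonemes[start:start + window]
        distance = phoneme_edit_distance(audio_phonemes, segment)
        if distance <= max_distance:
            candidates.append((start, distance))
    # Best (lowest-distance) candidates first.
    return sorted(candidates, key=lambda item: item[1])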
Location identification module354may select one or more of the multiple candidate locations based on data of the fuzzy matching module352. The location identification module354may further narrow the candidate locations by selecting a particular location based on phoneme data of the audio that occurred before, during, or after the spoken word (e.g., expanding the predetermined segment length or using adjacent segments).

Reading speed module356may access and analyze location data345to determine a reading speed of the user. The reading speed may be determined in view of the location data, text source data, audio data, other data, or a combination thereof and may be stored as reading speed data346. The reading speed may be based on a portion of location data345that identifies at least two locations in the text source. The locations may correspond to particular times and determining the reading speed may be based on a quantity of words and a quantity of time between the two or more locations. In one example, the quantity of words may be based on the content of the text source and may not take into account content that was added, skipped, or repeated by the user. In another example, the quantity of words may be based on the content of the text source and also on content of the audio. This may be advantageous because the content of the audio may indicate words were added, skipped, repeated, other action, or a combination thereof. In either example, reading speed module356may update reading speed data to represent the reading speed of the user over one or more durations of time.

Reading discontinuation module358may access and analyze any data discussed above to detect whether the user has discontinued reading the text source or is still reading the text source. This may be challenging because the user may have stopped reading the text source but is discussing a concept related to the text source. As a result, there may be an overlap in the spoken words and the content of the text source. Detecting a discontinuation of reading may be important because it may enable the computing device to avoid recording a private discussion. Reading discontinuation module358may determine whether the user has discontinued reading the text source by calculating one or more correspondence measures. A correspondence measure may indicate the similarity or dissimilarity between a segment of audio and a corresponding portion of the text source. The correspondence measure may be a probabilistic value that indicates a probability that the segment of audio corresponds to the location of the text source. The probabilistic value may be a numeric or non-numeric value and may be the same or similar to a percentage, ratio, decimal, other value, or a combination thereof. In one example, the value may be between 0 and 1 (e.g., 0.97), 0 and 100 (e.g., 98), or another range of values. One end of the range may indicate that the segment of audio definitely corresponds to the location of the text source (e.g., 1.0 or 100) and the other end of the range may indicate that the segment of the audio definitely does not correspond to the location of the text source (e.g., a value of 0). The correspondence measure may be based on or related to multiple similarity measurements. For example, both types of measurements may be used to compare or contrast data derived from the audio (e.g., phoneme data243A) with data derived from the text source (e.g., phoneme data243B).
The similarity measurements (e.g., phoneme edit distances) may be used to compare or contrast a written word of the text source with a spoken word and the correspondence measure may be used to compare or contrast a set of written words with a set of words spoken over a duration of time. The duration of time of the audio (e.g., segment) may be any length of time and may include the set of words as well as one or more other audible actions (e.g., page turn, book close). In one example, the audio of the user may include a first duration and a second duration and the reading discontinuation module358may calculate one or more correspondence measures for the first duration and one or more correspondence measures for the second duration. The correspondence measures may be stored as correspondence measurement data347. In other examples, the correspondence measure may also or alternatively take into account one or more signals such as the absence of speech input for a duration of time, the absence of recognition of story text, or the recognition of specific words or phrases that may indicate a stop. The words or phrases may include “let's stop reading,” “let's finish tomorrow,” “OK, I'm done,” “let's pause,” other phrases, or a combination thereof.

Reading discontinuation module358may compare the correspondence measurement data347for each duration against one or more predetermined thresholds. In response to the correspondence measurement data347for the first duration not satisfying the threshold (e.g., above or below the threshold), the reading discontinuation module358may determine the duration of audio corresponds to the text source and that the user audio data corresponds to a user reading the text source. In response to the correspondence measurement data347for the second duration satisfying the threshold (e.g., below or above the threshold), the reading discontinuation module358may determine the duration of audio does not correspond to the text source and that the user has stopped reading the text source. In one example, determining the correspondence measure satisfies the threshold may indicate the audio data is absent a match with the data of the text source or that the audio data is different from content of the text source.

Reading discontinuation module358may perform one or more actions in response to determining that the user has discontinued reading the text source. In one example, reading discontinuation module358may transmit a signal to deactivate one or more microphones associated with computing device130to avoid capturing or recording additional audio data. In another example, reading discontinuation module358may transmit a signal to cease analyzing the audio data (e.g., comparing audio data with the data of the text source). The latter example may record the audio but may not access or analyze the audio data. In yet another example, reading discontinuation module358may cause computing device130to interact with the user before, during, or after transmitting the signal. For example, the computing device may interact with the user by providing a prompt (e.g., audio, visual, or a combination thereof). The prompt may ask the user whether to exit a storytime mode or may inform the user that the storytime mode has been exited and may or may not enable the user to re-enable the storytime mode.
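A simplified, illustrative Python sketch of such a discontinuation check follows; the threshold value, the single-duration interface, and the on_stop callback are assumptions made only for the example.

def check_reading_discontinued(correspondence, threshold=0.5, on_stop=None):
    # correspondence: value between 0.0 and 1.0 for a duration of audio,
    # where 1.0 means the audio clearly tracks the text source.
    # A value at or below the threshold is treated as "stopped reading."
    if correspondence <= threshold:
        if on_stop is not None:
            on_stop()  # e.g., deactivate the microphone or cease analysis
        return True
    return False

# Example: a duration whose audio no longer matches the book.
check_reading_discontinued(0.2, on_stop=lambda: print("exiting storytime mode"))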
FIG.4depicts a block diagram illustrating exemplary components that enable computing device130to provide physical effects to enhance the experience of a user. As discussed above, the physical effects may modify the environment and may include acoustic effects, haptic effects, optical effects, other effects, or a combination thereof. In the example shown, computing device130may include a physical effect determination component136, a predictive loading component137, and an effect providing component138. Physical effect determination component136enables computing device130to identify and provide physical effects that correspond to particular portions of the text source. In one example, physical effect determination component136may include an audible action correlation module462, a contextual data module464, and an effect selection module466.

Audible action correlation module462may enable computing device130to correlate particular physical effects with particular audible actions that are associated with the text source. Audible action correlation module462may determine the correlation based on effects data448for the text source. Effects data448may indicate which physical effects correspond to which portions of the text source. Effects data448may correlate a particular physical effect with a particular location in the text source, a particular audible action of a user, a particular triggering condition (discussed below), or a combination thereof. The location in the text source may relate to an audible action (e.g., spoken word or page flip) or may be unrelated to an audible action (e.g., user looking at a graphical image). In one example, effects data448may identify an audible action that includes a particular spoken word (e.g., dog) of the text source and the physical effect may involve initiating an acoustic effect (e.g., barking sound) corresponding to the spoken word. In another example, effects data448may identify an audible action (e.g., a page turn) and the physical effect may involve modifying an existing physical effect (e.g., readjusting the ambient sound, light, or temperature).

Effects data448may be accessible by computing device130or may be created by computing device130. In one example, computing device130may access or receive effects data448directly or indirectly from an author, publisher, distributor, partner, third party service, other source, or combination thereof. Effects data448may be included within text source data242or may be separate from text source data242. In another example, computing device130may create the effects data based on text source data242. For example, audible action correlation module462may analyze textual data or phoneme data and identify physical effects that correspond to particular portions of the text source. In either example, effects data448may be stored in data store240for enhanced access by computing device130.

Contextual data module464may enable computing device130to gather contextual data449associated with the user. Contextual data449may be based on an environment of the user and may be obtained using one or more sensors (e.g., sensors131A-C). Contextual data449may also or alternatively be based on profile data about the user, which may be accessible to computing device130via direct user input or via a remote source (e.g., network connection with a content platform or social network).
In one example, contextual data449may include sound data (e.g., ambient sound measurements), light data (e.g., ambient light measurements), time data (e.g., morning or night), calendar data (e.g., an early appointment tomorrow), geographic location data (e.g., zip code, address, latitude/longitude), weather data (e.g., raining, lightning, thunder, windy, cloudy), user profile data (e.g., child's name, age, or gender), user audio feedback (e.g., child crying or clapping), other data, or a combination thereof.

Effect selection module466may enable computing device130to select and modify physical effects based on the effects data448, contextual data449, text source data242, other data, or a combination thereof. Effect selection module466may be used to select a particular physical effect (e.g., acoustic effect) or to modify an attribute of a physical effect. The attribute may relate to a physical effect's intensity, timing, tone, transition (e.g., fade in/out), other feature, or a combination thereof. The intensity may be related to a magnitude of the modification to the environment and may relate to the volume (e.g., loudness) or luminance (e.g., brightness) of the physical effect. The timing may relate to the speed or duration of the physical effect. Computing device130may select the physical effect based on the word of the text source and may update an attribute of the physical effect based on the contextual data. In one example, the contextual data may include sound data of an environment of the user and the physical effect may be an acoustic effect at a volume based on the sound data. In another example, the contextual data may include light data of an environment of the user and the physical effect may be an optical effect that modifies a luminance of a light source based on the light data (e.g., dim or brighten a light). In yet another example, the contextual data may include user profile data of a parent or a child that indicates an age of a listener, and the physical effect may comprise an acoustic effect selected based on the age of the listener (e.g., a more playful dog bark for a young child and a more serious dog bark for an older child).

Effect selection module466may use the contextual data to identify timing aspects related to the reading of the text source. For example, the time data or calendar data may be used to distinguish between the text source being read in the evening or being read in the morning. In the evening, effect selection module466may select physical effects that are more calming (e.g., less stimulating) to encourage a listener to get ready for bed. This may involve decreasing the brightness and volume settings for the acoustic and optical effects and/or selecting effects that have a lower tone (e.g., softer crash effects or whispers as opposed to shouting). In the morning, effect selection module466may select physical effects that are more stimulating to encourage a user to get ready for the day. This may involve increasing the brightness and volume settings for the acoustic and optical effects. The calendar data may also indicate whether the reading time is associated with a weekend or weekday or if there is an appointment coming up soon (e.g., later in the day or early the next morning). Either of these may affect how fast the user may read the text source and how long or often the physical effects should be provided.
Predictive loading component137may enable computing device130to predictively load content of a physical effect before it is needed. Predictive loading may speed up the ability of computing device130to provide the physical effect by loading the content of the physical effect prior to the physical effect being initiated. Predictive loading may be the same or similar to prefetching, pre-caching, cache prefetching, other concepts, or a combination thereof. In one example, predictive loading component137may include a prediction module472, a trigger determination module474, and a content loading module476.

Prediction module472may enable computing device130to predict a time that the user will reach a particular portion of a text source. For example, prediction module472may determine a time that a word of the text source will be spoken before the word is spoken by the user. The predicted time may be a time in the future and may be determined based on a reading speed of the user, a reading location of the text source, other data, or a combination thereof. In one example, the time may be calculated based on a user's reading speed (e.g., words per minute, pages per minute) and the difference (e.g., number of words, paragraphs, or pages) between a current reading location and a target location in the text source. In other examples, prediction module472may use a predictive model, machine learning, neural networks, or other techniques to enhance the predictions based on current data, historical data, or a combination thereof.

Trigger determination module474may enable computing device130to determine a triggering condition associated with a particular physical effect. The triggering condition may be a loading triggering condition or an initiation triggering condition. A loading triggering condition indicates when to start loading the content of the physical effect. An initiation triggering condition indicates when to start providing (e.g., playing) the content of the physical effect. Either triggering condition may correspond to a particular time or a particular location within the text source and may be based on the effects data, text source data, other data, or a combination thereof. The particular time may be an absolute time (e.g., at 8:32:02 pm) or a relative time (e.g., 5 seconds before the predicted time of a word or page turn). The particular location may be a location within the text source that is prior to the word that the physical effect is to be aligned with. The particular location may be an absolute location (e.g., word 397) or a relative location (e.g., 5 words before the word “bark”).

The determination of the triggering condition may be based on one or more factors that are related to the content, computing device, user, other aspects of the environment, or a combination thereof. The factors related to the content may include the amount of content (e.g., 1 MB file size), the location of the content (e.g., remote storage), the format of the content (e.g., downloadable files, streaming chunks, or a format that needs to be transcoded), the duration of the content (e.g., 2 second sound effect), other aspects of the content, or a combination thereof. The factors related to the computing device may correspond to the amount and/or availability of computing resources of computing device130or another computing device. The computing resources may include connection speed (e.g., networking bandwidth), storage space (e.g., available solid state storage), processing power (e.g., CPU speed or load), other computing resources, or a combination thereof.
The factors related to the user may include the user's reading speed, current reading location, speech clarity, other aspects, or a combination thereof. Trigger determination module474may use one or more of the factors to calculate a duration of time to load or provide the content of the physical effect. The duration of time related to loading the content may be referred to as a predicted load time and may or may not include the duration of time to provide (e.g., play) the content. In one example, trigger determination module474may determine the duration of time to load the content of the physical effect based on the size of the content and the network bandwidth of the computing device130. Trigger determination module474may use the predicted load time to identify the particular time or location of the triggering condition.

In one example, the triggering condition may be set to a time that is greater than or equal to the predicted time of the audible action (e.g., spoken word) minus the predicted load time (e.g., 5 seconds). In another example, the triggering condition may be set to be a location within the text source that is equal to or prior to a location that the physical effect is intended to align with. This may involve selecting a location in the text source based on the predicted load time and the reading speed. For example, if the user reads at a speed of 120 words per minute (i.e., 2 words a second) and the predicted load time is 5 seconds, then the triggering location may be 10 or more words prior to the word that the physical effect should align with.

Content loading module476may enable computing device130to load the content of one or more physical effects in advance of the physical effect being initiated. Loading the content may involve computing device130transmitting or receiving one or more requests and responses and may involve downloading, streaming, copying, other operations, or a combination thereof. The content may include executable data (e.g., instructions), informational data (e.g., audio files or chunks), other data, or a combination thereof. The content may be stored by computing device130as content data451in data store240. Computing device130may load the content of the physical effect from a local device (e.g., data store240), a remote device (e.g., server or cloud service), or a combination thereof.

Effect providing component138may enable computing device130to provide the physical effect to modify the environment of a user. Effect providing component138may be initiated after the content for the physical effect is loaded and may be timed so that the physical effect is provided at a time that aligns with the intended audible action. In one example, effect providing component138may include an instruction access module482and a physical effect initiation module484. Instruction access module482may access instruction data associated with the physical effect. The instruction data may include a set of one or more commands, operations, procedures, tasks, other instructions, or a combination thereof. The instructions may indicate the physical effect and one or more attributes for the physical effect. Physical effect initiation module484may access the instruction data and execute the instruction data to initiate the physical effect. The physical effect initiation module484may initiate an instruction before, during, or after detecting the initiation triggering condition (e.g., audible action) that corresponds to the physical effect.
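To make the timing arithmetic described above concrete, the following Python sketch derives a loading trigger location from a predicted load time and a reading speed; the simple size-over-bandwidth load-time estimate and the function names are assumptions made for illustration.

import math

def predicted_load_time(content_bytes, bandwidth_bytes_per_s):
    # Rough estimate of how long downloading the effect content will take.
    return content_bytes / max(bandwidth_bytes_per_s, 1)

def loading_trigger_location(target_word_index, reading_speed_wpm, load_time_s):
    # Number of words the reader is expected to cover while the content loads.
    words_during_load = math.ceil((reading_speed_wpm / 60.0) * load_time_s)
    # Trigger at (or before) this many words ahead of the target word.
    return max(target_word_index - words_during_load, 0)

# Example from the description: 120 words per minute and a 5 second load time
# place the trigger 10 or more words before the word the effect aligns with.
print(loading_trigger_location(target_word_index=397,
                               reading_speed_wpm=120,
                               load_time_s=5.0))  # 387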
In one example, the text source may include a particular word and initiating the physical effect may be in response to detecting the audio data comprises the word (e.g., matching phonemes). In another example, physical effect initiation module484may determine the initiation triggering condition to initiate the physical effect. The process to determine the triggering condition to initiate the physical effect may be the same or similar to the process used to determine the triggering condition that initiates loading the content of the physical effect. The instruction may cause computing device130to provide the physical effect or may cause computing device130to communicate with one or more physical effect devices to provide the physical effect. In either example, computing device130may directly or indirectly cause the physical effect to modify the environment of the user to enhance the experience of a listening user.

In one example, physical effect initiation module484or effect selection module466may use one or more confidence thresholds to select and/or initiate a physical effect. The one or more confidence thresholds may be grouped into one or more confidence intervals that categorize the probability that the audio matches a particular location of the text source (e.g., a spoken word matches a word of the text source). There may be any number of confidence intervals, and a first confidence interval may indicate that there is a low probability that the audio matches the text source location (e.g., >50%) and each successive confidence interval may be higher (e.g., >75%, >95%, etc.). The effects data that correlate the physical effect with a location may also include a particular confidence threshold (e.g., minimum confidence interval). For example, providing a sound effect may be associated with a higher confidence interval than transitioning a background effect. Computing device130may determine if a confidence threshold is satisfied prior to selecting or initiating the physical effect. This may involve comparing the correspondence measure data, similarity measure data, other data associated with the fuzzy matching, or a combination thereof against the confidence threshold. In one example, a particular location in the text source may be associated with multiple different physical effects and each may correspond to a different confidence interval associated with the current reading location. When the confidence interval is higher (e.g., the current reading location is trusted to be accurate), a particular acoustic effect may be initiated (e.g., a sound effect of a single dog barking at a higher volume) and when the confidence interval is lower (e.g., the current reading location may be inaccurate), a different acoustic effect may be initiated (e.g., background noise of multiple dogs barking at a lower volume).

FIGS.5-8depict flow diagrams of respective methods500,600,700, and800for enhancing the ability of a computing device to follow along as a text source is being read aloud and to provide special effects in real-time, in accordance with one or more aspects of the present disclosure. Method500may involve using phoneme data and fuzzy matching to estimate reading progress. Method600may optimize the ability of the computing device to detect when a user has stopped reading from the text source and is having a private discussion. Method700may enable the computing device to provide physical effects that take into account the context of the user and the user's environment.
Method800may enable the computing device to pre-cache content of the physical effects to reduce delay and better synchronize the physical effect with the audible actions associated with the text source. The methods ofFIGS.5-8and each of their individual functions, routines, subroutines, or operations may be performed by one or more processors of the computing device executing the method. In certain implementations, one or more of the methods may be performed by a single computing device. Alternatively, one or more of the methods may be performed by two or more computing devices, each computing device executing one or more individual functions, routines, subroutines, or operations of the method.

For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term “article of manufacture,” as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, the methods may be performed by one or more of the components inFIGS.1-4.

Referring toFIG.5, method500may be performed by processing devices of a client device (e.g., smart speaker), a server device (e.g., cloud service), other devices, or a combination thereof and may begin at block502.

At block502, the processing device may determine phoneme data of a text source. The text source may include a sequence of words and the phoneme data may be a phonetic encoding of the sequence of words that includes one or more sequences of phonetic values. Each of the phonetic values may correspond to a phoneme and the sequence of phonemes may correspond to a spoken word. The same sequence of phonetic values may correspond to words that sound the same but are spelled differently (e.g., homophones) and different sequences of phonetic values may correspond to words that are spelled the same but sound different (e.g., homographs). The processing device may access the phoneme data from a source of the text source or may generate the phoneme data for the text source. The processing device may generate the phoneme data by phonetically encoding the sequence of words. This may involve accessing textual data of the text source and generating (e.g., converting, transforming, deriving) the phoneme data based on the textual data. The phoneme data may then be associated with the text source for future use.
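Purely as an illustration of phonetically encoding the sequence of words at block502, the following Python sketch looks each word up in a small, hypothetical pronunciation lexicon; the lexicon contents and the handling of unknown words are assumptions, and a practical implementation might instead derive pronunciations with a grapheme-to-phoneme model.

# Hypothetical pronunciation lexicon mapping words to phoneme sequences.
LEXICON = {
    "thumb": ["θ", "∧", "m"],
    "dumb":  ["d", "∧", "m"],
    "dog":   ["d", "ɔ", "g"],
}

def encode_text_source(words):
    # Produce one phoneme sequence per word; unknown words are left empty
    # here, though a grapheme-to-phoneme model could fill them in.
    return [LEXICON.get(word.lower(), []) for word in words]

print(encode_text_source(["Thumb", "dog"]))
# [['θ', '∧', 'm'], ['d', 'ɔ', 'g']]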
At block504, the processing device may receive audio data comprising a spoken word associated with the text source. The audio data may include one or more audible actions of a user and may include spoken words, page turns, or other audible actions that are captured from an environment of the user. In one example, the processing device may receive the audio data directly from one or more of the sensors in the form of an audio signal. In another example, the processing device may receive the audio data from a data store or another computing device. The audio data may be in any digital or analog format and may be accessed or received via one or more storage objects (e.g., files, database records), data streams (e.g., audio stream, video stream), data signals, other data transmission or storage protocols, or a combination thereof.

At block506, the processing device may compare the phoneme data of the text source and phoneme data of the audio data. The comparison of the audio data and the text source may occur in the absence of converting the audio data to text (e.g., recognized words) using speech recognition and may involve comparing phoneme data corresponding to the audio data and phoneme data corresponding to the text source. The comparing may include calculating a numeric value representing a similarity between two or more sequences of phonetic values. The numeric value may be a phoneme edit distance between phoneme data of the audio data and phoneme data of the text source. The comparison may also involve performing fuzzy matching between phoneme data corresponding to the audio data and the phoneme data of the text source.

At block508, the processing device may identify a location in the sequence of words based on the comparison of the phoneme data of the text source and the phoneme data of the audio data. The identification of the location may involve determining a spoken word matches a word in the sequence of words of the text source. In one example, the text source may be a book and the location may be a current reading location in the book. Responsive to completing the operations described herein above with references to block508, the method may terminate.

Referring toFIG.6, method600may be performed by the same processing device discussed above or a different processing device and may begin at block602. At block602, the processing device may receive audio data comprising a spoken word associated with a text source. The audio data may be segmented (e.g., tokenized, fragmented, partitioned, divided) into a first duration and a second duration. In one example, the text source may be a book and the first portion of the audio data may correspond to the content of the book (e.g., include a spoken word of the book) and the second portion of the audio data may not correspond to the content of the book (e.g., may be absent a spoken word from the book).

At block604, the processing device may compare the audio data with data of the text source. The data of the text source may include phoneme data and comparing the audio data and the data of the text source may involve phoneme comparisons. In one example, comparing phoneme data may involve calculating a phoneme edit distance between the phoneme data of the text source and phoneme data of the audio data.

At block606, the processing device may calculate a correspondence measure between the second duration of the audio data and the data of the text source. Calculating the correspondence measure may involve calculating the correspondence measure based on a plurality of phoneme edit distances. In one example, the processing device may select a set of spoken words (e.g., 3, 4, 5+ words) and compare the set of spoken words to content of the text source. A phoneme edit distance may be determined for each word in the set or for a combination of one or more of the words. The resulting numeric value may then be weighted, aggregated, or modified to determine the correspondence measure.
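As an illustrative sketch of block606, the following Python function (reusing the phoneme_similarity sketch above) averages per-word similarities for a set of spoken words into a single correspondence measure for a duration of audio; the unweighted average is an assumption, since the description above also contemplates weighting or other modification.

def correspondence_measure(spoken_word_phonemes, source_word_phonemes):
    # spoken_word_phonemes: phoneme sequences for a set of spoken words
    #   captured during one duration of audio (e.g., 3-5 words).
    # source_word_phonemes: phoneme sequences for the corresponding words
    #   at the candidate location of the text source.
    pairs = list(zip(spoken_word_phonemes, source_word_phonemes))
    if not pairs:
        return 0.0
    scores = [phoneme_similarity(spoken, written) for spoken, written in pairs]
    return sum(scores) / len(scores)  # unweighted average in [0.0, 1.0]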
At block608, the processing device may transmit a signal to cease comparing audio data with the data of the text source responsive to determining the correspondence measure satisfies a threshold. Determining the correspondence measure satisfies the threshold may involve determining the correspondence measure is below a threshold value or above a threshold value. The determination may also be based on the duration of time that the correspondence measure satisfies or does not satisfy the threshold. Determining the correspondence measure satisfies the threshold may indicate the second duration of the audio data includes content that is different from content of the text source and may or may not indicate the audio data is absent any match with the data of the text source. Transmitting the signal may involve transmitting a signal to deactivate one or more microphones capturing audio data. In one example, the processing device may cause a computing device to prompt the user to exit a storytime mode in response to determining the second duration of the audio data is absent content of the text source. The prompt may be an audio prompt, a visual prompt, other prompt, or a combination thereof. Responsive to completing the operations described herein above with references to block608, the method may terminate.

Referring toFIG.7, method700may be performed by the same processing device discussed above or a different processing device and may begin at block702. At block702, the processing device may receive audio data comprising a spoken word of a user. The spoken word may be associated with a text source the user is reading aloud, and the audio data may include one or more other audible actions, such as page turns, spoken words not within the text source, and other audible actions captured from an environment of the user. In one example, the processing device may receive the audio data directly from one or more of the sensors in the form of an audio signal (e.g., for use in real time or perceived real time). In another example, the processing device may receive the audio data from a data store or another computing device. The audio data may be in any digital or analog format and may be accessed or received from within one or more storage objects (e.g., files, database records), data streams (e.g., audio stream, video stream), data signals, other data transmission or storage protocols, or a combination thereof.

At block704, the processing device may analyze contextual data associated with the user. The contextual data may include sound data, light data, time data, weather data, calendar data, user profile data, other data, or a combination thereof. In some examples, the contextual data can be associated with a physical effect so that the processing device can provide physical effects that take into account the context of the user and the user's environment. In one example, the contextual data may include sound data of an environment of the user and the physical effect may include an acoustic effect at a volume based on the sound data. In another example, the contextual data may include light data of an environment of the user and the physical effect may include an optical effect that modifies a luminance of a light source based on the light data. In yet another example, the contextual data may include user profile data indicating an age of a child and the physical effect may include an acoustic effect selected based on the age of the child.
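By way of illustration of blocks704-708, the following Python sketch adjusts attributes of a physical effect from contextual data; the attribute names, scaling rules, and thresholds are assumptions made only for this example and are not part of the disclosure.

def select_effect_attributes(context):
    # context: a dict of contextual data, e.g.
    #   {"ambient_db": 40, "ambient_lux": 15, "hour": 20, "listener_age": 4}
    attributes = {}
    # Louder rooms get a louder acoustic effect (clamped to 0-100).
    attributes["volume"] = min(100, max(20, context.get("ambient_db", 40) + 20))
    # Dim rooms get a dimmer optical effect.
    attributes["brightness"] = 30 if context.get("ambient_lux", 100) < 50 else 70
    # Evening readings favor calmer, lower-intensity effects.
    if context.get("hour", 12) >= 19:
        attributes["volume"] = int(attributes["volume"] * 0.6)
        attributes["tone"] = "soft"
    else:
        attributes["tone"] = "bright"
    # Younger listeners get the more playful variant of an effect.
    attributes["variant"] = "playful" if context.get("listener_age", 10) < 6 else "standard"
    return attributes

print(select_effect_attributes({"ambient_db": 35, "ambient_lux": 10,
                                "hour": 20, "listener_age": 4}))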
At block706, the processing device may determine a match between the audio data and data of a text source. The processing device may identify the text source based on user input (e.g., audio data or touch data) and retrieve the data of the text source. The data of the text source may include phoneme data and determining the match may involve calculating a phoneme edit distance between the phoneme data of the text source and phoneme data of the audio data. In one example, determining the match between the audio data and data of a text source may involve detecting the audio data comprises a word of the text source using phoneme data of the text source. At block708, the processing device may initiate a physical effect in response to determining the match. The physical effect may correspond to the text source and be based on the contextual data. The physical effect may modify an environment of the user and may include at least one of an acoustic effect, an optical effect, and a haptic effect. The text source may include a word and initiating the physical effect may be responsive to detecting the audio data comprises the word. In one example, the processing device may select the physical effect based on the word of the text source and may update an attribute (e.g., volume or brightness) of the physical effect based on the contextual data. Responsive to completing the operations described herein above with references to block708, the method may terminate. Referring toFIG.8, method800may be performed by processing devices of a server device or a client device and may begin at block802. At block802, the processing device may identify effects data for a text source, wherein the effects data correlates a physical effect with an audible action of a user. The effects data may indicate the physical effect and indicate a location in the text source that relates to the audible action. The location may correspond to a word, paragraph, page, or other location of the text source. In one example, the audible action may be a spoken word of the text source and the physical effect may be an acoustic effect corresponding to the spoken word. In another example, the audible action may include a page turn and the physical effect may be a modification of an existing acoustic effect, optical effect, or haptic effect. At block804, the processing device may receive audio data comprising a plurality of audible actions. The plurality of audible actions may include one or more spoken words of the text source and one or more other audible actions, such as page turns, spoken words not within the text source, and other audible actions captured from an environment of the user. In one example, the processing device may receive the audio data directly from one or more of the sensors in the form of an audio signal (e.g. for use in real time or near/perceived real time). In another example, the processing device may receive the audio data from a data store or another computing device. The audio data may be in any digital or analog format and may be accessed or received from within one or more storage objects (e.g., files, database records), data streams (e.g., audio stream, video stream), data signals, other data transmission or storage protocol, or a combination thereof. At block806, the processing device may determine a triggering condition based on the effects data and the text source. 
In one example, determining the triggering condition may involve determining the physical effect is associated with a first location in the text source and selecting a second location in the text source that is before the first location. The selection may be based on a reading speed and a load time associated with the physical effect and the second location may be associated with at least one of a particular instance of a word, a paragraph, a page, or a chapter of the text source. The processing device may then set the triggering condition to correspond with the second location in the text source. In another example, determining the triggering condition may involve calculating a duration of time to load the content based on an amount of the content for the physical effect and an amount of available computing resources. The computing resources may relate to one or more of networking bandwidth, storage space, or processing power and the duration may be longer when the available computing resources are lower. In one example, determining the time in the future that the audible action will occur may involve identifying a time to initiate the loading based on the calculated duration and the determined time of the audible action and initiating the loading of the content at or before the identified time. In another example, determining the time comprises calculating the time in the future based on a reading speed and a current reading location in the text source. In yet another example, determining the time comprises predicting a time a word of the text source will be spoken before the word is spoken. At block808, the processing device may load content for the physical effect responsive to satisfying the triggering condition. The triggering condition may be satisfied prior to the occurrence of the audible action. At block810, the processing device may provide the physical effect to modify an environment of the user. Responsive to completing the operations described herein above with references to block810, the method may terminate. The technology discussed herein includes multiple enhancements to a computing device with or without virtual assistant features. The below discussion includes multiple different enhancements that may be used individually or together to optimize the ability of the computing device to follow along while a text source is being read aloud and to provide special effects to supplement an environment of a user. In one example, the environment may include a parent reading a book aloud to one or more children. In another example, the environment may include one or more users providing a presentation, speech, or other performance to an audience. In either example, the technology may be used to enhance the environment with special effects based on an analysis of data associated with the text source. The special effects may be synchronized with particular portions of the text source, such as a particular spoken word or a page turn. In a first example, an enhancement may be related to reading progress estimation based on phonetic fuzzy matching and confidence intervals and may relate to the field of computer-based recognition of human speech and, in particular, to enhancing the ability of a computer device to identify a reading location in a text source as a user reads the text source aloud. Many technical problems arise when a computing device attempts to use traditional virtual assistant features to follow along as a user reads a text source aloud. 
Some of the problems arise because traditional virtual assistant features perform speech recognition to translate audio into text/recognized words. The speech recognition typically involves an acoustic step that translates the audio into phonemes and a language step that translates the phonemes into text/recognized words. The language step often waits for subsequent spoken words to establish context before translating a spoken word into text. The language step introduces an unnecessary time delay and consumes additional computing resources. In addition, using the recognized text to perform a traditional text-based comparison with the text source may be more error prone than performing a phonetic comparison (e.g., phoneme comparison). This often arises because many words that sound the same or sound similar may be spelled very differently and would yield false negatives when textually compared. In addition, a traditional textual comparison may not properly account for situations where a user may jump around when reading a text source. For example, portions of the text source may be skipped, repeated, or new content may be added. This may make it challenging to identify a current reading location within the text source and to correctly detect a reading speed.

Aspects and implementations of the present technology address the above and other deficiencies by providing enhancements to enable a computing device to detect a current reading location in a text source as the text source is being read aloud. In one example, the technology may avoid the language step of traditional speech recognition by comparing the phoneme data derived from the audio with phoneme data derived from the text source. The text source may be a book, magazine, presentation, speech, script, or other source that includes a sequence of words. The technology may receive audio data that includes the words spoken by a user and may convert the audio data to phoneme data locally or with the assistance of a remote server (e.g., cloud service). The phoneme data of the audio and the text source may then be compared via a phonetic comparison as opposed to a more traditional textual comparison. The phonetic comparison may be accompanied with fuzzy matching to identify a location within the sequence of words (e.g., current reading location).

Systems and methods described herein include technology that enhances the technical field of computer-based recognition of human speech. In particular, the technology disclosed improves the latency and accuracy of identifying a current reading position and reduces the computing resources required to do so. This may be the result of modifying the speech analysis process (e.g., speech recognition) to avoid translating the audio data into text/words. The technology may use a speech analysis process that translates the audio into phoneme data using an acoustic model but may avoid the language step that translates the phoneme data to text/words using a language model. Avoiding the language step reduces latency and the consumption of computing resources. Performing phoneme comparisons and using fuzzy matching may enhance the accuracy in identifying a current reading position because it may better compensate for non-linear reading of the text source (e.g., skipping, repeating, or adding content).
In a second example, an enhancement may be related to algorithmic determination of a story reader's discontinuation of reading and may be related to the field of computer-based recognition of human speech and, in particular, to enhancing the ability of a computing device to determine a user is no longer reading content of a text source aloud. Many technical problems arise when a computing device attempts to use traditional virtual assistant features to follow along as a user reads a text source aloud. Some of the problems arise because a traditional virtual assistant may be unable to detect when a user has finished providing audio input if the user continues to talk about something else. This may result in the computing device continuing to record the audio of the user, which may be problematic if the user transitions to discussing something private. Detecting when a user has stopped reading from a text source may be more challenging when a user does not follow along with the text and skips, repeats, or adds new content while reading the text source aloud.

Aspects and implementations of the present technology address the above and other deficiencies by enhancing the ability of a computing device to detect when a user has discontinued reading a text source. In one example, the technology may enable the virtual assistant to more accurately detect that the user has taken a break from reading the text source and may deactivate a microphone to avoid capturing private audio content. This may involve receiving audio data comprising the spoken word associated with a text source and comparing the audio data with data of the text source. The technology may calculate a correspondence measure between content of the audio data and the content of the text source. The correspondence measure may be a probabilistic value that is based on a comparison of phoneme data, textual data, or other data and may involve using fuzzy matching logic. When the correspondence measure satisfies a threshold (e.g., below a minimum correspondence threshold), the technology may cause a signal to be transmitted that will cease the analysis of subsequent audio data.

Systems and methods described herein include technology that enhances the technical field of computer-based recognition of human speech. In particular, the technology may address technical problems by avoiding an inadvertent recording of a user's private conversation by using comparisons that better compensate for non-linear reading of the text source (e.g., skipping, repeating, adding content). For example, the above technology may facilitate more accurate and/or more prompt automated control of the virtual assistant to record and/or process only pertinent audio. The technology may also enable the computing device to reduce power consumption by deactivating an audio sensor (e.g., microphone) and associated data processing when the computing device detects the user has stopped reading the text. Furthermore, the above technology may enable the computing device to reduce utilization of computing resources, such as processing capacity, network bandwidth, data storage, and/or the like, that may otherwise be used to record and/or process the audio data once the user has stopped reading the text.
In a third example, an enhancement may be related to dynamic adjustment of story time special effects based on contextual data and may be related to the field of virtual assistants and, in particular, to enhancing the ability of a virtual assistant to provide special effects while a text source is being read aloud. Modern computing devices may be configured to adopt traditional virtual assistant features to provide sound effects that supplement an environment when a user is reading a book aloud. For example, when a user reads the word “bark” aloud the computing device may provide a barking sound effect. The sound effects are often provided by the same entity that provided the text source and may directly correspond with a portion of the text source. As a result, the special effects may be the same independent of the user or environment and may not be optimized to the particular reading environment of a user.

Aspects and implementations of the present technology address the above and other deficiencies by enabling a computing device to provide a wide variety of special effects that are based on the user's environment. In one example, the technology may enable the computing device to analyze contextual data of an environment of the user and to select or customize the special effects. The special effects may be physical effects that alter the environment of the user to include acoustic effects (e.g., music, sound effects), optical effects (e.g., flashing lights, ambient light), haptic effects (e.g., vibrations, wind, temperature changes), other effects, or a combination thereof. The technology may involve receiving and analyzing contextual data associated with the user. The contextual data may be related to the weather, lighting, time of day, user feedback, user profile, other information, or a combination thereof. The technology may select or modify a physical effect that corresponds to the text source based on the contextual data. For example, this may result in selecting or modifying a volume, brightness, speed, tone, or other attribute of the physical effect.

Systems and methods described herein include technology that enhances the technical field of virtual assistants and home automation. In particular, the technology may enable a computing device to optimize an environment by using contextual data about the user and the environment to add, remove, or modify physical effects to enhance a listening experience of the user.

In a fourth example, an enhancement may be related to the detection of story reader progress for pre-caching special effects and may be related to the field of virtual assistants and, in particular, to enhancing the ability of a virtual assistant to pre-cache special effects for a text source that is being read aloud. Many technical problems arise when attempting to use traditional virtual assistant features to provide sound effects that are synchronized with the spoken content of a text source. Some of the problems arise because a traditional virtual assistant performs speech recognition to translate audio into text and then performs a comparison based on the text. The speech recognition typically involves an acoustic step that translates the audio into phonemes and a language step that translates the phonemes into text. The language step often waits for subsequent spoken words to establish context before translating a spoken word into text.
The language step introduces a time delay and consumes additional computing resources on a computing device that may be resource constrained. The delay may be further compounded because the sound effects may be large audio files that are downloaded from a remote data source. A traditional approach may involve downloading the sound effects in response to detecting a spoken word, but the delay may result in the special effect being provided long after the word is spoken. Another approach may involve downloading all of the sound effects when the text source is initially identified, but that may be problematic when the computing device is a resource constrained device (e.g., smart speaker).

Aspects and implementations of the present technology address the above and other deficiencies by providing enhancements to a computing device to enhance its ability to conserve computing resources and still provide special effects that are synchronized with a text source being read aloud. This may be accomplished by using data of a text source (e.g., book) to predict future audible actions and to prefetch the associated physical effects before the respective audible action arises. In one example, the technology may enable the computing device to predict when a user will reach a word in the text source before the word is spoken. This may involve identifying effects data for a text source that correlate a physical effect with one or more audible actions of a user. The audible actions may include a word spoken by the user or may be a page turn, book close, or other action that produces an audible response. The technology may determine a triggering condition based on a current reading location, reading speed, other data, or a combination thereof. In response to detecting that the triggering condition is satisfied, the technology may cause the computing device to load content for the physical effect and subsequently provide the physical effect to modify an environment of the user.

Systems and methods described herein include technology that enhances the technical field of pre-caching based on recognition of human speech. In particular, the technology disclosed may address technical problems associated with resource consumption when analyzing speech and downloading special effects. The technology may also reduce a delay in providing special effects so that the special effects are better synchronized with the human speech.

FIG.9depicts a block diagram of a computer system operating in accordance with one or more aspects of the present disclosure. In various illustrative examples, computer system900may correspond to computing device130ofFIGS.2-4. The computer system may be included within a data center that supports virtualization. In certain implementations, computer system900may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems. Computer system900may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system900may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device.
Further, the term “computer” shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein. In a further aspect, the computer system900may include a processing device902, a volatile memory904(e.g., random access memory (RAM)), a non-volatile memory906(e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage device916, which may communicate with each other via a bus908. Processing device902may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor). Computer system900may further include a network interface device922. Computer system900also may include a video display unit910(e.g., an LCD), an alphanumeric input device912(e.g., a keyboard), a cursor control device914(e.g., a mouse), and a signal generation device920. Data storage device916may include a non-transitory computer-readable storage medium924on which may be stored instructions926encoding any one or more of the methods or functions described herein, including instructions for implementing method500,600,700, or800and any components or modules inFIGS.1-4. Instructions926may also reside, completely or partially, within volatile memory904and/or within processing device902during execution thereof by computer system900; hence, volatile memory904and processing device902may also constitute machine-readable storage media. While computer-readable storage medium924is shown in the illustrative examples as a single medium, the term “computer-readable storage medium” shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer and that causes the computer to perform any one or more of the methods described herein. The term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media. The methods, components, and features described herein may be implemented by discrete hardware components or may be integrated in the functionality of other hardware components such as ASICs, FPGAs, DSPs or similar devices. In addition, the methods, components, and features may be implemented by firmware modules or functional circuitry within hardware resources. Further, the methods, components, and features may be implemented in any combination of hardware resources and computer program components, or in computer programs.
Unless specifically stated otherwise, terms such as “initiating,” “transmitting,” “receiving,” “analyzing,” or the like, refer to actions and processes performed or implemented by computer systems that manipulate and transform data represented as physical (electronic) quantities within the computer system registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Also, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not have an ordinal meaning according to their numerical designation. Examples described herein also relate to an apparatus for performing the methods described herein. This apparatus may be specially constructed for performing the methods described herein, or it may comprise a general purpose computer system selectively programmed by a computer program stored in the computer system. Such a computer program may be stored in a computer-readable tangible storage medium. The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform methods500,600,700,800and/or each of their individual functions, routines, subroutines, or operations. Examples of the structure for a variety of these systems are set forth in the description above. The above description is intended to be illustrative, and not restrictive. Although the present disclosure has been described with reference to specific illustrative examples and implementations, it will be recognized that the present disclosure is not limited to the examples and implementations described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled.
98,602
11862193
DETAILED DESCRIPTION In general, according to one embodiment, a magnetic disk device comprises an actuator, a controller that controls the actuator, a loop shaping filter connected in parallel with the controller, the loop shaping filter having filter coefficients for suppressing a rotation asynchronous disturbance affecting a position of the actuator, the filter coefficients of the loop shaping filter being determined using a transfer function from an output of the loop shaping filter to before an input of the rotation asynchronous disturbance, and a notch filter that suppresses mechanical resonance of the actuator, wherein a parameter of the notch filter is changed according to the frequency response of the actuator, the frequency response of the actuator being changed under an influence of manufacturing variations and the like, and simultaneously the loop shaping filter is redesigned by reflecting a change in the transfer function. In a magnetic disk device, the frequency response of an actuator is affected by manufacturing variations of components or the like and is different from the one used at a previous design time. This requires a change in a parameter of a notch filter that suppresses vibration of the actuator from the design time. However, the change in the parameter of the notch filter affects disturbance suppression performance of a loop shaping filter that suppresses a rotation asynchronous disturbance (NRRO). An object of embodiments is to provide a magnetic disk device and a parameter setting method of the magnetic disk device capable of, even when the frequency response of an actuator is different from the one used at a design time, suppressing the NRRO as intended and improving the positioning accuracy of a magnetic head. Hereinafter, embodiments will be described with reference to the drawings. Note that the disclosure is merely an example, and the invention is not limited by the contents described in the following embodiments. Modifications easily conceivable by those skilled in the art are naturally included in the scope of the disclosure. In the drawings, in order to make the description clearer, the sizes, shapes, and the like of parts may be schematically represented with a change with respect to an actual embodiment. In the drawings, corresponding elements may be denoted by the same reference numerals, and may not be described in detail. EMBODIMENT FIG.1is a block diagram illustrating an example of a configuration of a magnetic disk device1. The magnetic disk device1includes a head disk assembly (HDA)10, a head amplifier integrated circuit (hereinafter, head amplifier IC)17, and a system-on-chip (SOC)20. The HDA10includes a magnetic disk11, a spindle motor (SPM)12, an arm13, and a voice coil motor (VCM)16. The SPM12rotates the magnetic disk11. A load beam14is attached to a distal end of the arm13, and a magnetic head15is attached to a distal end of the load beam14. The arm13is driven by the VCM16, and controls the magnetic head15to move it to a designated position on the magnetic disk11. The magnetic head15has a structure in which a read head element and a write head element are separately mounted on one slider. The read head element reads data recorded on the magnetic disk11. The write head element writes data to the magnetic disk11. The head amplifier IC17includes a read amplifier and a write driver. The read amplifier amplifies a read signal read by the read head element and transmits the amplified read signal to a read/write (R/W) channel22. 
Meanwhile, the write driver transmits a write current corresponding to write data output from the R/W channel22to the write head element. The SOC20includes a microprocessor (CPU)21, the R/W channel22, a disk controller23, and a positioning controller24. The CPU21is a main controller of a drive. The CPU21executes servo control for positioning the magnetic head15via the positioning controller24, and data read/write control via the head amplifier IC17. The R/W channel22includes a read channel for executing signal processing of read data, and a write channel for executing signal processing of write data. The disk controller23executes interface control for controlling data transfer between a host system (not illustrated) and the R/W channel22. Note that the positioning controller24may be implemented as hardware or software (firmware). A memory25includes a volatile memory and a nonvolatile memory. For example, the memory25includes a buffer memory including a DRAM, and a flash memory. The memory25includes, as the nonvolatile memory, a storage (not illustrated) that stores programs and the like necessary for the processing of the CPU21, and a coefficient storage26that stores parameters when a parameter setting process described later is performed. The parameters stored in the coefficient storage26will be described later. Note that the coefficient storage26may be stored in any storage area in the magnetic disk device1if not stored in the memory25. Here, a technique related to a loop shaping filter that suppresses the NRRO will be described with reference toFIGS.2A and2B. FIGS.2A and2Bare block diagrams illustrating configurations of a control system for suppressing the disturbance in comparison.FIG.2Ais a conventional configuration, andFIG.2Bis a configuration according to the embodiment. InFIGS.2A and2B, reference numeral30denotes a controller (C[z]), reference numeral40denotes a loop shaping filter (A[z], Ā[z]), reference numeral50denotes an actuator (P[z], P̄[z]), and reference numeral60denotes a notch filter (N[z], N̄[z]). At this time, a reference signal is represented by r[k], a position signal is represented by y[k], an output of the loop shaping filter40is represented by ud[k], and the disturbance is represented by d[k]. The loop shaping filter40is disposed in parallel with the controller30, and a combined output thereof is input to the actuator50via the notch filter60, so that the actuator50operates. In this manner, the output of the loop shaping filter40is reflected and filtering of the notch filter60is applied to operate the actuator50, resulting in control that cancels the influence of the disturbance. Specifically, in the magnetic disk device1of the present embodiment, the controller30, the loop shaping filter40, and the notch filter60are included in the positioning controller24, and the actuator50corresponds to the VCM16. Note that, when the magnetic disk device is a type in which a microactuator is mounted on the magnetic head to minutely operate the write element and the read element, the microactuator may also be included in the actuator together with the VCM16. In the above configuration, the following Formula (1) is used as the loop shaping filter (A[z])40disposed in parallel with the controller (C[z])30.

A[z] = \frac{\mu}{2}\cdot\frac{z^{2}\cos\phi - \eta z\cos(\omega_{0}T + \phi)}{z^{2} - 2\eta z\cos(\omega_{0}T) + \eta^{2}}    (1)

where T is a sampling period, and η, μ, and ω0are design parameters.
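As a rough illustration of how a second-order loop shaping filter of the form of Formula (1) can be realized as a difference equation in firmware, consider the Python sketch below; the coefficient mapping follows the reconstructed Formula (1), and the parameter values (target frequency, sampling period, gains) are placeholders rather than values from this embodiment.

```python
import math

class LoopShapingFilter:
    """Second-order resonant filter per the reconstructed Formula (1):
    A[z] = (mu/2) * (z^2*cos(phi) - eta*z*cos(w0*T + phi)) / (z^2 - 2*eta*z*cos(w0*T) + eta^2).
    Parameter values used below are placeholders for illustration only."""
    def __init__(self, mu: float, eta: float, w0: float, T: float, phi: float):
        self.b0 = 0.5 * mu * math.cos(phi)                # numerator coefficients
        self.b1 = -0.5 * mu * eta * math.cos(w0 * T + phi)
        self.a1 = 2.0 * eta * math.cos(w0 * T)            # feedback coefficients from the denominator
        self.a2 = -eta * eta
        self.e1 = 0.0   # e[k-1]
        self.u1 = 0.0   # ud[k-1]
        self.u2 = 0.0   # ud[k-2]

    def step(self, e: float) -> float:
        """One servo sample: input e[k] (position error), output ud[k]."""
        u = self.b0 * e + self.b1 * self.e1 + self.a1 * self.u1 + self.a2 * self.u2
        self.e1, self.u2, self.u1 = e, self.u1, u
        return u

# Placeholder parameters: suppression target at 1 kHz, 50 kHz servo sample rate.
flt = LoopShapingFilter(mu=0.01, eta=0.999, w0=2 * math.pi * 1000, T=1 / 50000, phi=0.3)
ud = [flt.step(err) for err in (0.0, 1.0, 0.5, -0.2)]
```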
Furthermore, parameters α and φ in the coefficients of the loop shaping filter (A[z])40are expressed by the following Formula (2).

M_{udd}[z] := \frac{P[z]N[z]}{1 + P[z]N[z]C[z]}, \qquad \alpha = \left|M_{udd}[e^{j\omega_{0}T}]\right|, \qquad \phi = \arg\left(M_{udd}[e^{j\omega_{0}T}]\right)    (2)

where the parameters α and φ are parameters for matching a gain (α) and a phase (φ) of a transfer function Mudd[z] from the output ud[k] of the loop shaping filter (A[z])40to before an input of the disturbance d[k] at a suppression target angular frequency ω0. That is, designing the parameters α and φ means designing the loop shaping filter such that an estimate value for suppressing the disturbance d[k] estimated from a position error signal by the loop shaping filter (A[z])40cancels the disturbance d[k] in consideration of a change in the gain (α) and the phase (φ) that occurs until the signal ud[k] output from the loop shaping filter (A[z])40reaches the point where the disturbance d[k] is input. For example, in a case of designing the control system including the notch filter (N[z])60, the controller (C[z])30, and the like for the nominal actuator (P[z])50, the notch filter60is changed later from N[z] to N̄[z] according to a change of the actuator (P̄[z])50of the actual head. At this time, the transfer function Mudd[z] from output ud[k] of the loop shaping filter (A[z])40to before the input of the disturbance d[k] changes, resulting in failure to obtain desired characteristics. Thus, in the embodiment, as illustrated inFIG.2B, when the actuator50is changed from design-time P[z] to P̄[z], the notch filter60is changed from N[z] to N̄[z] according to the change of the actuator50, and simultaneously, the loop shaping filter40is changed from A[z] to Ā[z]. FIG.3is a flowchart illustrating an example of a main flow of a filter coefficient (parameter) setting process according to the embodiment. First, at the design time in the beginning of manufacture, the parameters of the loop shaping filter (A[z])40are designed for P[z], N[z], and C[z] of the actuator50, the notch filter60, and the controller30(step S1). Then, the gain α and the phase φ of the transfer function Mudd[z] at the suppression target angular frequency ω0of the loop shaping filter (A[z])40, which are expressed by Formula (2), are stored (step S2). Subsequently, a frequency response P[e^{jω0T}] of the actuator50at the suppression target angular frequency ω0and P[e^{jω0T}]N[e^{jω0T}]C[e^{jω0T}]/(1+P[e^{jω0T}]N[e^{jω0T}]C[e^{jω0T}]) are stored (step S3). P̄[z] is measured and P̄[e^{jω0T}] is stored (step S4), and N̄[z] corresponding to P̄[z] is designed (step S5). Here, a ratio M̄udd[z]/Mudd[z] before and after parameter change of the notch filter60is approximately calculated, and α, φ, and A[z] are updated (step S6). Then, the series of processing ends. As described above, in the present embodiment, when the change in the notch filter60from N[z] to N̄[z] according to individual mechanical characteristics causes the change in the transfer function Mudd[z], reflecting the change in the notch filter60and the like in the original parameters α and φ allows for obtaining desired characteristics. Specifically, when Mudd[z], α, and φ after the parameter change of the notch filter60are defined as M̄udd[z], ᾱ, and φ̄, respectively, they are expressed by the following Formula (3).

\bar{\alpha} = \alpha\left|\frac{\bar{M}_{udd}[e^{j\omega_{0}T}]}{M_{udd}[e^{j\omega_{0}T}]}\right|, \qquad \bar{\phi} = \phi + \arg\left(\frac{\bar{M}_{udd}[e^{j\omega_{0}T}]}{M_{udd}[e^{j\omega_{0}T}]}\right), \qquad \frac{\bar{M}_{udd}[z]}{M_{udd}[z]} = \frac{\bar{P}[z]\bar{N}[z]\left(1 + P[z]N[z]C[z]\right)}{P[z]N[z]\left(1 + \bar{P}[z]\bar{N}[z]C[z]\right)}    (3)
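To make the design-time computation of α and φ (Formula (2)) and their correction to ᾱ and φ̄ (Formula (3)) concrete, the following sketch evaluates them with complex arithmetic; the frequency-response values are placeholders standing in for measured or design-time data, not values from this embodiment.

```python
import cmath

def gain_phase(M: complex) -> tuple[float, float]:
    """alpha = |M|, phi = arg(M) as in Formula (2)."""
    return abs(M), cmath.phase(M)

def M_udd(P: complex, N: complex, C: complex) -> complex:
    """Transfer function from the loop shaping filter output to before the disturbance input."""
    return (P * N) / (1.0 + P * N * C)

def updated_alpha_phi(alpha: float, phi: float,
                      P: complex, N: complex, C: complex,
                      P_bar: complex, N_bar: complex) -> tuple[float, float]:
    """Formula (3): reflect the change P -> P_bar, N -> N_bar in alpha and phi."""
    ratio = M_udd(P_bar, N_bar, C) / M_udd(P, N, C)
    return alpha * abs(ratio), phi + cmath.phase(ratio)

# Placeholder frequency responses at z = e^{j*w0*T} (illustrative numbers only):
P, N, C = 0.8 - 0.6j, 0.95 + 0.05j, 1.2 + 0.3j
P_bar, N_bar = 0.75 - 0.62j, 0.9 + 0.1j
alpha, phi = gain_phase(M_udd(P, N, C))
alpha_bar, phi_bar = updated_alpha_phi(alpha, phi, P, N, C, P_bar, N_bar)
```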
The design-time notch filter60is expressed by the following Formula (4).

N[z] = \prod_{i=1}^{n} \frac{\left(\frac{2(z-1)}{T(z+1)}\right)^{2} + 2 d_{pi}\zeta_{i}\omega_{npi}\frac{2(z-1)}{T(z+1)} + \omega_{npi}^{2}}{\left(\frac{2(z-1)}{T(z+1)}\right)^{2} + 2\zeta_{i}\omega_{npi}\frac{2(z-1)}{T(z+1)} + \omega_{npi}^{2}}, \qquad \omega_{npi} = \frac{2}{T}\arctan\left(\frac{\omega_{ni}T}{2}\right)    (4)

where dpi, ζi, and ωnpiare a depth, an attenuation, and a suppression target angular frequency, respectively, of the design-time notch filter60parameters. From the above formulas, the ratio M̄udd[z]/Mudd[z] before and after the change in the transfer function is expressed by the following Formula (5).

\frac{\bar{M}_{udd}[z]}{M_{udd}[z]} = \frac{\bar{P}[z]}{P[z]}\cdot\frac{\bar{N}[z]}{N[z]}\left[1 + \frac{P[z]N[z]C[z]\left(\frac{\bar{P}[z]}{P[z]}\cdot\frac{\bar{N}[z]}{N[z]} - 1\right)}{1 + P[z]N[z]C[z]}\right]^{-1}    (5)

where P̄[e^{jω0T}]/P[e^{jω0T}] can be acquired by actual measurement at the time of the parameter change of the notch filter60. In addition, P[e^{jω0T}]N[e^{jω0T}]C[e^{jω0T}]/(1+P[e^{jω0T}]N[e^{jω0T}]C[e^{jω0T}]) can be obtained at the design time. Let these be denoted by Q and R, respectively. Then, M̄udd[e^{jω0T}]/Mudd[e^{jω0T}] becomes a function of N̄[e^{jω0T}]/N[e^{jω0T}] as in the following Formula (6).

\frac{\bar{M}_{udd}[e^{j\omega_{0}T}]}{M_{udd}[e^{j\omega_{0}T}]} = Q\,\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]}\left[1 + R\left(Q\,\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]} - 1\right)\right]^{-1}    (6)

As described above, according to the present embodiment, when the parameters of the notch filter are changed according to the actuator, the loop shaping filter is redesigned (adjusted) by reflecting the change in the transfer function. Therefore, it is possible to improve the positioning accuracy of the magnetic head while suppressing the rotation asynchronous disturbance even when the actuator50is different from that at the design time. Hereinafter, examples of a calculation method for the parameter setting (change) in the above configuration will be described.
Example 1
In Formula (6), N̄[e^{jω0T}]/N[e^{jω0T}] may be calculated from N[e^{jω0T}] obtained in advance and N̄[e^{jω0T}] obtained after the parameter change.
Example 2
An approximate expression such as the following Formula (7) may be used as N̄[e^{jω0T}]/N[e^{jω0T}].

\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]}
= \prod_{i=1}^{n} \frac{\left(\frac{2(z-1)}{T(z+1)}\right)^{2} + 2\bar{d}_{pi}\bar{\zeta}_{i}\bar{\omega}_{npi}\frac{2(z-1)}{T(z+1)} + \bar{\omega}_{npi}^{2}}{\left(\frac{2(z-1)}{T(z+1)}\right)^{2} + 2\bar{\zeta}_{i}\bar{\omega}_{npi}\frac{2(z-1)}{T(z+1)} + \bar{\omega}_{npi}^{2}} \cdot \frac{\left(\frac{2(z-1)}{T(z+1)}\right)^{2} + 2\zeta_{i}\omega_{npi}\frac{2(z-1)}{T(z+1)} + \omega_{npi}^{2}}{\left(\frac{2(z-1)}{T(z+1)}\right)^{2} + 2 d_{pi}\zeta_{i}\omega_{npi}\frac{2(z-1)}{T(z+1)} + \omega_{npi}^{2}}
\simeq \prod_{i=1}^{n} \frac{\bar{\omega}_{ni}^{2} - \omega_{0}^{2} + 2 j\bar{d}_{pi}\bar{\zeta}_{i}\bar{\omega}_{ni}\omega_{0}}{\bar{\omega}_{ni}^{2} - \omega_{0}^{2} + 2 j\bar{\zeta}_{i}\bar{\omega}_{ni}\omega_{0}} \cdot \frac{\omega_{ni}^{2} - \omega_{0}^{2} + 2 j\zeta_{i}\omega_{ni}\omega_{0}}{\omega_{ni}^{2} - \omega_{0}^{2} + 2 j d_{pi}\zeta_{i}\omega_{ni}\omega_{0}}
\simeq \prod_{i=1}^{n} \left[1 + (\bar{d}_{pi} - d_{pi}) f_{1}(d_{pi},\zeta_{i},\Omega_{i}) + (\bar{\zeta}_{i} - \zeta_{i}) g_{1}(d_{pi},\zeta_{i},\Omega_{i}) + (\bar{\Omega}_{i} - \Omega_{i}) h_{1}(d_{pi},\zeta_{i},\Omega_{i})\right]    (7)

In Formula (7), f1(dpi, ζi, Ωi), g1(dpi, ζi, Ωi), and h1(dpi, ζi, Ωi) can be calculated using a Taylor series or the like in advance at the design time. Thus, Formula (7) can be easily calculated by the sum of products with parameter differences. Here, Ωi:=ωni/ω0, and n is the number of stages of the notch filter. FIG.4illustrates a result of comparing sensitivity function differences with/without loop shaping when only the notch filter60is changed using the present example. InFIG.4, “initial difference” indicates a case before the notch filter is changed, “ideal after notch change” indicates a case where the loop shaping filter is recalculated by the normal method after the notch filter is changed (ideal after the notch filter is changed), “after notch change” indicates a case where the loop shaping filter is not updated after the notch filter is changed, and “proposed technique” indicates a case where the loop shaping filter is updated using the proposed technique of Example 2 after the notch filter is changed. As can be seen fromFIG.4, the approximation by the proposed technique yields loop shaping close to the ideal after notch change.
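Example 1 and Example 2 above can be sketched numerically as follows: the notch response of Formula (4) is evaluated at z = e^{jω0T} through the bilinear variable s = 2(z−1)/(T(z+1)), the ratio of Formula (6) is formed from the stored quantities Q and R, and the first-order sum-of-products form of Formula (7) is shown alongside for comparison. The single-stage notch parameters, sensitivity values, and all other numbers are placeholders for illustration only.

```python
import cmath

def notch_response(w0: float, T: float, stages) -> complex:
    """Formula (4) evaluated at z = e^{j*w0*T}; stages is a list of (d_p, zeta, w_np) tuples."""
    z = cmath.exp(1j * w0 * T)
    s = 2.0 * (z - 1.0) / (T * (z + 1.0))        # bilinear-transform variable
    resp = 1.0 + 0.0j
    for d_p, zeta, w_np in stages:
        num = s * s + 2.0 * d_p * zeta * w_np * s + w_np ** 2
        den = s * s + 2.0 * zeta * w_np * s + w_np ** 2
        resp *= num / den
    return resp

def m_ratio(Q: complex, R: complex, N_ratio: complex) -> complex:
    """Formula (6): M_bar_udd/M_udd as a function of Q, R, and N_bar/N."""
    x = Q * N_ratio
    return x / (1.0 + R * (x - 1.0))

def n_ratio_first_order(stages) -> complex:
    """Last line of Formula (7): each stage carries precomputed sensitivities f1, g1, h1
    and the parameter differences (d_bar - d), (zeta_bar - zeta), (Omega_bar - Omega)."""
    ratio = 1.0 + 0.0j
    for f1, g1, h1, dd, dz, dOm in stages:
        ratio *= 1.0 + dd * f1 + dz * g1 + dOm * h1
    return ratio

# Placeholder single-stage notch before/after the parameter change (Example 1):
w0, T = 2 * cmath.pi * 1000, 1 / 50000
N_old = notch_response(w0, T, [(0.10, 0.05, 2 * cmath.pi * 4200)])
N_new = notch_response(w0, T, [(0.12, 0.06, 2 * cmath.pi * 4000)])
Q, R = 1.05 - 0.02j, 0.6 + 0.1j                  # stored per steps S3 and S4
exact = m_ratio(Q, R, N_new / N_old)
alpha_scale, phi_shift = abs(exact), cmath.phase(exact)

# Example 2: same ratio via design-time sensitivities and parameter differences (placeholders):
approx = m_ratio(Q, R, n_ratio_first_order([(0.8 + 0.3j, -1.2 + 0.1j, 2.5 - 0.4j, 0.02, 0.01, -0.005)]))
```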
Example 3
For a region where the angular frequency ω0is low,

\left|1 + P[e^{j\omega_{0}T}]N[e^{j\omega_{0}T}]C[e^{j\omega_{0}T}]\right|^{-1}\left|P[e^{j\omega_{0}T}]N[e^{j\omega_{0}T}]\right| \simeq \left|P[e^{j\omega_{0}T}]N[e^{j\omega_{0}T}]\right|    (8)

holds, and thus,

\bar{\alpha} \simeq \alpha\,|Q|\left|\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]}\right|, \qquad \bar{\phi} = \phi + \arg(Q) + \arg\left(\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]}\right).    (9)

At this time, the magnitude |N̄[e^{jω0T}]/N[e^{jω0T}]| (10) can be approximated by the following Formula (12), and the phase arg(N̄[e^{jω0T}]/N[e^{jω0T}]) (11) can be approximated by the following Formula (13).

\left|\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]}\right| \simeq \prod_{i=1}^{n}\left\{1 + f_{2}(d_{pi},\zeta_{i},\Omega_{i})(\bar{d}_{pi} - d_{pi}) + g_{2}(d_{pi},\zeta_{i},\Omega_{i})(\bar{\zeta}_{i} - \zeta_{i}) + h_{2}(d_{pi},\zeta_{i},\Omega_{i})(\bar{\Omega}_{i} - \Omega_{i})\right\}    (12)

\arg\left(\frac{\bar{N}[e^{j\omega_{0}T}]}{N[e^{j\omega_{0}T}]}\right) \simeq \sum_{i=1}^{n}\left\{-f_{3}(d_{pi},\zeta_{i},\Omega_{i})(\bar{d}_{pi} - d_{pi}) - g_{3}(d_{pi},\zeta_{i},\Omega_{i})(\bar{\zeta}_{i} - \zeta_{i}) - h_{3}(d_{pi},\zeta_{i},\Omega_{i})(\bar{\Omega}_{i} - \Omega_{i})\right\}    (13)

In Formulas (12) and (13), the functions

f_{l}(d_{pi},\zeta_{i},\Omega_{i}),\; g_{l}(d_{pi},\zeta_{i},\Omega_{i}),\; h_{l}(d_{pi},\zeta_{i},\Omega_{i}), \qquad l = 2, 3    (14)

can be calculated using a Taylor series or the like in advance at the design time. Thus, Formulas (12) and (13) can be easily calculated by the sum of products with the parameter differences.
Example 4
The relationship between setting values and resetting values for each of the actuator50and the notch filter60may be given as follows:

\bar{P}[e^{j\omega_{0}T}] \simeq P[e^{j\omega_{0}T}], \qquad \bar{N}[e^{j\omega_{0}T}] \simeq N[e^{j\omega_{0}T}].    (15)

Some embodiments of the present invention have been described. However, these embodiments are presented as examples, and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and modifications can be made without departing from the gist of the invention. These embodiments and variations thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and the equivalent scope thereof.
15,794
11862194
DETAILED DESCRIPTION Magnetic tape drives are a widespread example of data storage systems where nonlinear effects within the recording channel can have a significant influence on the signal. Tape channels are characterized by significant nonlinear distortion that are caused by, for example, nonlinear transition shifts (NLTS), nonlinearities of magneto-resistive sensors, and transition jitter. It is desirable to take into account such nonlinear effects to increase signal fidelity at the output of a finite-state noise-predictive maximum-likelihood detector. Embodiments of the invention are not limited to data recovery from magnetic tapes but can rather be implemented with any data channel having non-negligible nonlinear distortion. However, in favor of better comprehensibility, the present disclosure refrains from describing analogous technical features in other storage systems. Without limitation, other kinds of storage systems where embodiments of the present invention may be advantageously deployed may include other magnetic storage systems such as hard disk drives, floppy disks, and the like, as well as further storage techniques where reading the stored information includes measuring the physical state of a moving medium at high data rates, such as optical storage systems. Embodiments of the data storage system as well as individual functions thereof, of the method as well as individual steps thereof, and of the computer program product as well as individual program instructions embodied therewith may be implemented by or using analog or digital circuitry, including, but not limited to, logic gates, integrated logic circuits, electronic circuitry and devices including processors and memory, optical computing devices and components, machine-learning devices and structures including artificial neural networks, and the like, and combinations thereof. This disclosure presents adaptive detection schemes that may reduce nonlinearity in the equalized signal prior to noise prediction. Embodiments of the invention include or make use of an estimator that, e.g., is configured to determine a superposition of an estimated nonlinear symbol that may occur as an output of a linear partial-response equalizer and an estimated nonlinear portion of the signal. For this purpose, the estimator may subtract a current estimate of a signal representing a symbol from the incoming signal. The symbol to be subtracted may comprise a superposition of an estimated linear portion of the partial-response equalizer output (i.e., the symbol as it would be provided by the equalizer, e.g., a PR4 equalizer, if the read channel was purely linear) and an estimated nonlinear portion of the signal that may be continuously updated. By determining said difference, the estimator may obtain an estimation error that coincides with the actual current nonlinear portion of the signal if the subtracted estimated symbol is identical to the symbol currently encoded by the incoming signal. The difference between the signal and the estimated linear symbol may be used to update the estimated symbol. Different embodiments of the estimator will be discussed in more detail in the following and shown in the drawings. The estimated symbol may be initialized to ensure ordinary functioning of the estimator at all times. An exemplary choice for an initial value may be an undisturbed, theoretical value of the symbol that is free of linear or nonlinear distortion. 
By virtue of the function provided by the estimator, embodiments of the invention may have the advantage of enabling a reduction in the bit error rate at the output of detectors for channels that suffer from nonlinear distortion. This may allow an increase in the linear density and/or a reduction in the number of temporary or permanent errors at the output of an error correction code (ECC) decoder within the data storage system. By modeling the partial-response equalizer output ykto be a superposition of an estimated linear portion of the partial-response equalizer output and an estimated nonlinear portion of the signal, the deterministic signal nonlinearity may be cancelled from the signal before noise prediction is performed. According to an embodiment, the estimator is configured for storing the estimated signal as an element of an array of estimated signals, each estimated signal within the array of estimated signals being addressable by an array index comprising a possible tuple of bits in the data stream output by the adaptive data-dependent noise-predictive maximum likelihood sequence detector. According to an embodiment, the tuple comprises a possible sequence of bits in the data stream output by the adaptive data-dependent noise-predictive maximum likelihood sequence detector. To perform a subtraction in the course of determining the estimated signal, it may be beneficial to store the number to be subtracted, such as the estimated linear portion of a partial-response equalizer output or the estimated nonlinear portion of the signal, in a memory such as a random-access memory (RAM). In this way, the estimator may be implemented as an adaptive nonlinear table look-up filter. Estimating multiple estimated signals, wherein each of the multiple estimated signals is uniquely assigned to one out of a plurality of bit tuples or sequences that may occur in the output of the adaptive data-dependent noise-predictive maximum likelihood sequence detector, may have the advantage of making it possible to keep track of different nonlinear estimated signals that may occur for different sequences of symbols at the input of the detector. This may account for channel nonlinearity causing an influence of one or more surrounding symbols on the symbol that is currently used as a subtrahend from the signal to determine the noise residue of the signal. For instance, if the detector is a class-4 partial-response (PR4) detector that outputs a sequence {âk, âk−1, âk−2, âk−3, âk−4} with k being a current time index or current value of a bit counter increasing with time, a good choice for a symbol to be subtracted from the signal may be âk−âk−2so that the assumed linear noise portion of the signal ykwould be ñk=yk−(âk−âk−2). However, in a nonlinear channel, past symbols relative to âk−2such as âk−3and âk−4as well as future symbols relative to âk−2such as âk−1may also contribute to the distortion of âk−2. Thus, still in the example, the nonlinear portion of the signal may vary as a function of the current sequence {âk, âk−1, âk−2, âk−3, âk−4}. Of course, embodiments of the invention are not limited to tuples or sequences of five detector output symbols, but the estimator may likewise account for less or more bits that may even include future symbols such as âk+1in some cases. 
Moreover, it may be feasible to have the estimator account for multiple detector output symbols that are not a sequence, but a more general tuple of symbols (e.g., {âk, âk−2, âk−4}) that may be found to be more likely to influence the present signal than others, while these other, less significant symbols in the sequence (in the same example, âk−1and âk−3) may be skipped. The sequence or tuple to be taken into account by the estimator forms an addressa, i.e., the array index used for looking up elements of the array of numbers stored by the estimator. The content of the array (the values in the array cells) may represent the estimated non-linear symbols at the output of the equalizer, e.g.: ŝk=âk−âk−2+nonlinear offset for a PR4 equalizer. In another embodiment discussed further below, the array may only store the nonlinear offsets while ŝk(a) is determined after looking up the suitable nonlinear offset from the array. It may be advisable to dimension the memory for storing the array such that real numbers can be used to represent the nonlinear ŝksymbols. In general, the array index may be formed by all possible binary sequences of a predefined fixed length, or by all possible tuples of binary numbers that may occur at predefined fixed positions within a binary sequence of predefined fixed length. In a non-limiting example, if the predefined sequence length is 5, there will be 2^5 = 32 bit sequences (e.g., {âk, . . . , âk−4}) that form the array index, or address space, of the array of estimated signals or offsets. Hence, in the same example, there may be 32 estimated signals, each representing one binary sequence under which a different nonlinear signal portion may arise. Embodiments are not limited to the length of 5 bits in the output stream; there may likewise be 1, 2, 3, 4, 6, or any other fixed number of bits spanning the index of the array of estimated signals. Again, the array containing the full estimated signals may be initialized with ideal (noiseless, non-distorted) signals; and the array containing only the estimated nonlinear portions may be initialized with zeroes. For the case of the full estimated signals and PR4 equalization/symbols, the array may be initialized with ŝ(a)=âk−âk−2for all possible combinations ofa={âk, . . . , âk−4}. It may be beneficial to implement the memory for storing the array so as to minimize the number of changes to an existing detector design such as a 16-state adaptive data-dependent noise-predictive maximum likelihood (D3-NPML) detector. In this scenario, a recommendable RAM size for storing the nonlinear symbols or offsets may be 32 cells, where the 32 RAM addresses are defined by the 32 branches of the 16-state detector trellis. On the other hand, more comprehensive designs may be possible to get the full benefit of nonlinear signal estimation. In this case, larger RAM sizes for storing the nonlinear offsets may be required, where path history/memory decisions and look-ahead bits in addition to the bits on a trellis branch may be used as the address of the RAM storing the nonlinear offsets. Using a large array RAM (e.g., a 256-cell RAM) may also yield the advantage of lowering the error rate at the output of the sequence detector by further improving nonlinear signal estimation.
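A minimal sketch of the table look-up estimator described above, assuming a PR4 target, a 5-bit address, and the simple weighted update (with c = 1 and a small weighting factor η) discussed in the paragraphs that follow; the class layout, bit packing, and parameter values are illustrative only, not a particular detector design.

```python
class NonlinearSignalEstimator:
    """Adaptive table look-up estimator of the (possibly nonlinear) equalizer output."""

    def __init__(self, eta: float = 0.01, address_bits: int = 5):
        self.eta = eta
        self.bits = address_bits
        # Initialize each cell with the ideal, distortion-free PR4 symbol a_k - a_{k-2},
        # where the address packs {a_k, a_{k-1}, ..., a_{k-4}} as bits (MSB = a_k) and
        # bit values 0/1 stand for symbols -1/+1.
        self.table = [self._ideal_pr4(a) for a in range(2 ** address_bits)]

    def _bit(self, address: int, lag: int) -> int:
        return (address >> (self.bits - 1 - lag)) & 1

    def _ideal_pr4(self, address: int) -> float:
        ak = 2 * self._bit(address, 0) - 1        # a_k     in {-1, +1}
        ak2 = 2 * self._bit(address, 2) - 1       # a_{k-2} in {-1, +1}
        return float(ak - ak2)

    def estimate(self, address: int) -> float:
        """Current estimated signal s_hat for the given detector-output bit sequence."""
        return self.table[address]

    def update(self, address: int, y: float) -> float:
        """Nudge the stored estimate toward the observed equalizer output y:
        s_hat <- s_hat + eta * (y - s_hat); returns the estimation error."""
        err = y - self.table[address]
        self.table[address] += self.eta * err
        return err

# Example: address 0b10000 corresponds to {a_k=+1, a_{k-1..k-4}=-1}; its ideal PR4 value is +2.
est = NonlinearSignalEstimator()
e = est.update(0b10000, 2.13)   # observed sample carrying some nonlinear offset
```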
According to an embodiment, the estimator comprises a memory adapted for storing the estimated signal, the estimator being configured to repeatedly: determine an estimation error comprising a difference between a previously stored estimated signal and the signal, and update the previously stored estimated signal by a superposition of the previously stored estimated signal and the estimation error. When updating the estimated signal ŝ, the estimation error ē=y−ŝ may be used to form a correction term ê for the estimated nonlinear portion of the signal y. Updating the estimated signal, i.e., determining an updated estimated signal and storing the updated estimated signal in the estimator's memory, may have the advantage of enabling convergence of the estimated signal on a good approximation of the nonlinearly distorted signal. Within this disclosure, a good approximation shall be defined as a value that changes from one time instance k to a subsequent time instance k+1 in an amount that is within the same order of magnitude as the amount of change of the actual nonlinear portion of the signal encoding the same symbol as the estimated signal in both time instances k and k+1. In this consideration, two values shall be defined to lie within the same order of magnitude if they differ from each other by not more than 50%, preferably 10%, and more preferable 3%, i.e., if their ratio differs from unity by not more than 0.5, preferably 0.1, and more preferably 0.03. A possible way of determining a superposition of the previously stored estimated signal and the estimation error may be a linear combination. For instance, an updated value ŝkof the estimated signal may be calculated as ŝk=cŝk−1+ηē, with c and η being (e.g., positive, real) coefficients that may, but not necessarily, be chosen to fulfil the criterion c+η=1. It may however also be useful to decrease complexity by setting c to a constant value of 1 so that only η needs to be used as a non-trivial coefficient. Preferably, but not necessarily, some or all coefficients that may be used to combine a previously stored estimated signal with a current estimation error may be constant for all time instances k. Likewise, it may be preferable, but not necessary, to use identical coefficients for all estimated signals if multiple estimated signals are maintained as disclosed herein. The aforementioned techniques and advantages of storing the estimated signal(s) or portions thereof in a memory are likewise true for the present embodiment. According to an embodiment, the superposition of the stored estimated signal and the estimation error comprises the estimation error multiplied by a weighting factor larger than zero and smaller than one. According to an embodiment, the weighting factor has a value between 0.0001 and 0.1. This may limit the contribution of each update to the estimated signal such that there is a factual update but convergence toward a current nonlinear symbol may take multiple iterations. Non-limiting examples of possible values of the weighting factor that may balance a higher flexibility of reacting on temporal changes of the nonlinear portion of the signal against a convergence that is more robust against high-frequency irregularities or noise are η=0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001. Choosing a weighting factor in the range between 0.1 and 0.0001 may be especially beneficial if there is a large amount of noise in the readback signal. 
In a magnetic tape storage device, for instance, the readback signal may contain strong portions of magnetic and thermal noise. According to an embodiment, the estimator comprises a memory adapted for storing the estimated nonlinear portion of the signal, the estimator being configured to repeatedly: determine an estimation error comprising a difference between a previously stored estimated nonlinear portion of the signal and a difference between the signal and the estimated linear portion of a partial-response equalizer output, and update the previously stored estimated nonlinear portion of the signal by a superposition of the previously stored estimated nonlinear portion of the signal and the estimation error. This may provide an alternative way of maintaining numerical knowledge of channel nonlinearity, with the additional benefit that current values of the estimated nonlinear portion of the signal may be accessible directly, i.e., without including the linear portion of the estimated signal. Nonetheless, embodiments of the estimator adapted for storing the estimated nonlinear portion rather than the estimated signal may be configured to provide the estimated signal as an output for further downstream processing as described herein. The aforementioned techniques and advantages of storing and updating the estimated signal(s) or portions thereof in a memory as well as of determining a superposition of a stored value and the estimation error are likewise true for the present embodiment. According to an embodiment, the superposition of the stored estimated nonlinear portion of the signal and the estimation error comprises the estimation error multiplied by a weighting factor larger than zero and smaller than one. As discussed above, this may limit the contribution of each update to the estimated nonlinear portion of the signal such that there is a factual update but convergence toward a current nonlinear symbol may take multiple iterations. Non-limiting examples of possible values of the weighting factor that may balance a higher flexibility of reacting on temporal changes of the nonlinear portion of the signal against a convergence that is more robust against high-frequency irregularities are η=0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001. According to an embodiment, each of the one or more branch metric calculations comprises:

m_{k} = \left[(y_{k} - \hat{s}_{k})\left(1 - \sum_{i=1}^{J} p_{i} D^{i}\right)\right]^{2}

where mkis the branch metric, ykis the signal input to the estimator, ŝkis the estimated signal, J is a highest order of the applicable noise whitening filter, piare filter parameters of the applicable noise whitening filter, and D represents a delay corresponding to a bit duration. This may be a suitable way to determine a branch metric in cases where the difference between the signal and the estimated signal (also referred to as “noise residue” or “noise portion” herein) is processed by a noise whitening filter of order J. A noise whitening filter may provide equalization of the noise portion such that the equalized noise bears a closer resemblance to a white-noise spectrum. A noise whitening filter may therefore be useful in data storage systems where channel distortion regularly causes a non-uniform frequency spectrum of the noise portion of the signal.
This may contribute to a more accurate detection by (i.e., a lower error rate at the output of) the adaptive data-dependent noise-predictive maximum likelihood sequence detector due to a lower content of insignificant information in the input signals of the branch metric calculator and/or the detector. In cases where the estimator is adapted for determining multiple estimated signals, it may be advantageous to provide a bank of noise whitening filters with at least one noise whitening filter for each combination of symbolsarepresented by the multiple estimated signals. In a non-limiting example, the bank of noise whitening filters comprises a separate noise whitening filter for each estimated signal provided by the estimator to enable, e.g., parallel filtering of the noise residues for each difference between the signal and one of the estimated signals. For a particular estimated signal, the at least one noise whitening filter provided for processing the difference between the signal and the particular estimated signal is referred to as the applicable noise whitening filter. Up to a technical limit, filtering accuracy may improve with growing values of the highest filter order J. For instance, if J=3, the branch metric formula given above may represent, without limitation, a three-tap finite impulse response (FIR) filter that may be configured using three filter coefficients p1, p2, p3. A noise whitening filter may equalize parts of the noise spectrum in addition to any equalizing effects of the nonlinear signal estimation on the noise spectrum, i.e., the noise spectrum of the difference between the signal and the estimated nonlinear signal may be closer to a white-noise spectrum than the difference between the signal and an estimated linear (e.g., PR4 equalizer output) signal. Hence, it may be possible to choose a lower value of J (e.g., a smaller number of taps and coefficients) for a noise whitening filter in a storage system with a nonlinear signal estimator than in a system with linear signal subtraction. On the other hand, a larger value of J may provide an even closer approximation of the filter output to a white-noise spectrum. According to an embodiment, the data storage system further comprises a data-dependent noise mean calculator configured to estimate a data-dependent noise mean from the filtered signal, each of the one or more branch metric calculations comprising:

m_{k} = \left[(y_{k} - \hat{s}_{k})\left(1 - \sum_{i=1}^{J} p_{i} D^{i}\right) - \mu\right]^{2}

where mkis the branch metric, ykis the signal input to the estimator, ŝkis the estimated signal, J is a highest order of the applicable noise whitening filter, piare filter parameters of the applicable noise whitening filter, D represents a delay corresponding to a bit duration, and μ is the data-dependent noise mean. This may be a suitable way to determine a branch metric in cases where the noise portion of the signal is processed by a noise whitening filter of order J and a data-dependent noise mean calculator. In addition to the techniques and advantages outlined above for embodiments using a noise whitening filter, a noise mean calculator may be useful for setting a time average of the noise residue closer to zero by subtracting the noise mean provided by the noise mean calculator from the noise residue. A data-dependent noise mean calculator may therefore be useful in data storage systems where channel distortion regularly causes a non-zero time average in the noise portion of the signal. Noise mean calculation and subtraction may contribute to a more accurate detection by (i.e., a lower error rate at the output of) the adaptive data-dependent noise-predictive maximum likelihood sequence detector due to a lower content of insignificant information in the input signals of the branch metric calculator and/or the detector.
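A rough sketch of the two branch metric computations given above, under the assumption that the current and past noise residues y_k − ŝ_k are available per branch: the residue is passed through a short noise whitening (prediction error) filter and, optionally, a data-dependent noise mean is subtracted before squaring. The filter length, coefficients, and mean value below are placeholder numbers.

```python
from collections import deque

def whitened_residue(residues, p) -> float:
    """(y_k - s_hat_k)(1 - sum_i p_i D^i): residues[0] is the current noise residue
    n_k = y_k - s_hat_k, residues[i] is n_{k-i}; p holds the predictor coefficients p_i."""
    return residues[0] - sum(p_i * residues[i + 1] for i, p_i in enumerate(p))

def branch_metric(residues, p, mu: float = 0.0) -> float:
    """m_k = [(y_k - s_hat_k)(1 - sum p_i D^i) - mu]^2, with mu = 0 when no
    data-dependent noise mean is used."""
    e = whitened_residue(residues, p) - mu
    return e * e

# Example for one branch with a 2-tap whitening filter (placeholder numbers):
p = [0.4, -0.1]                        # p_1, p_2 for this branch's data condition
history = deque([0.12, 0.05, -0.03])   # n_k, n_{k-1}, n_{k-2}
m = branch_metric(history, p, mu=0.01)
```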
If the estimator is adapted for providing multiple estimated signals as a function of an addressaformed by detector output symbols as described herein, the noise mean calculator may be configured for calculating one or more data-dependent noise mean(s) for some or all the different estimated signals ŝk(a). For that reason, the noise means may also depend on the data sequence, i.e., they may be functions μ(a) of the addressa. For the purpose of illustration, a possible update routine that may be implemented by the noise mean calculator for a particular noise mean μ(a*) may be μnew(a*)=(1−ϵ) μold(a*)+ϵ·ẽk(a*), where ẽk(a*) may be identical to ek(a*) or a version of ek(a*) that has been processed further downstream of the bank of noise whitening filters (e.g., the metric input signal), and ϵ is a parameter that is preferably larger than zero and smaller than one and may be used for tuning the convergence behavior of the noise mean calculator similar to the parameters η and α described herein. A noise mean calculator may calculate systematic, low-frequency (compared to the inverse of the bit duration) deviations of the noise residue from zero in addition to any mean-reducing effects of the nonlinear signal estimation on the noise residue, i.e., the difference between the signal and the estimated nonlinear signal may have a baseline that is closer to zero than the difference between the signal and an estimated linear (e.g., PR4 equalizer output) signal. Hence, it may be possible that the noise mean calculator in a storage system with a nonlinear signal estimator may converge in a shorter time than it would in a system with linear signal subtraction. According to an embodiment, the data storage system further comprises a filter parameter calculator configured to calculate one or more filter parameters for one or more of the noise whitening filters. Filter parameters such as the parameters piintroduced above may be used to tune the effect of the noise whitening filter(s) on the noise residue where, for instance, each parameter may tune a term (e.g., tap) of one order in a polynomial filter function. In this way, the noise whitening filter may be brought to react to temporal changes in the noise spectrum. The filter parameter calculator may receive noise residue downstream of the bank of noise whitening filters as an input for determining how to update the filter parameter(s). In a non-limiting example, the filter parameter calculator comprises a bank of filter parameter calculation units to calculate multiple filter parameters, where each filter parameter calculation unit may, but not necessarily, be adapted for calculating one filter parameter. If the bank of noise whitening filters comprises multiple noise whitening filters, as may be the case when the estimator is adapted for providing multiple estimated signals as described herein, the filter parameter calculator may be configured for calculating one or more filter parameter(s) for some or all of the different noise whitening filters.
For that reason, filter parameters may depend on the output of the adaptive data-dependent noise-predictive maximum likelihood sequence detector, i.e., they may be functions pi(a) of the addressa. Furthermore, the filter parameter calculation units may take into account characteristics of the filtering order to be tuned by its respective filter parameter by receiving a corresponding internal noise residue component signal from the respective noise whitening filter as an input. In a non-limiting example, for updating the coefficients of a 2-tap noise whitening filter providing an output signal ek(a)=ñk−p1(a)ñk−1−p2(a)ñk−2, the filter parameter calculator may comprise two filter parameter update units, each of which is adapted for updating one of the filter parameters pi(i∈{1, 2}) and receiving the respective past noise residue component signal ñk−ias an input alongside ek(a). For the purpose of illustration, a possible update routine that may be implemented by the filter parameter calculator for a particular instance pi(a*) of one filter parameter pimay be pi,new(a*)=pi,old(a*)+α·ẽk(a*)·ñk−i, where ẽk(a*) may be identical to ek(a*) or a version of ek(a*) that has been processed further downstream of the bank of noise whitening filters (e.g., the metric input signal), and α is a parameter that is preferably larger than zero and smaller than one and may be used for tuning the convergence behavior of the filter parameter calculator similar to the parameters η and ϵ described herein. A filter parameter calculator may calculate parameter(s) for tuning a particular noise whitening filter based on the difference ñkbetween the signal and the estimated nonlinear signal (and/or one or more of its respective predecessor(s) ñk−1, ñk−2, . . . ) that is input to the particular noise whitening filter and/or based on the output of the particular noise whitening filter that in turn may be a function of one or more of said difference(s) ñk, ñk−1, ñk−2, . . . between the signal and the estimated nonlinear signal. As the estimated signal comprises an estimated nonlinear portion of the signal, said difference(s) may comprise a smaller nonlinear portion than a difference between the signal and an estimated linear (e.g., PR4 equalizer output) signal. Hence, it may be possible that the filter parameter calculator in a storage system with a nonlinear signal estimator may converge in a shorter time than it would in a system with linear signal subtraction. To lower the error rate at the output of the adaptive data-dependent noise-predictive maximum likelihood sequence detector, it may be possible to improve noise prediction (whitening) by using more predictor coefficients. In a non-limiting example, path history/tentative decisions may be used to compute the branch metric by using more than two predictor coefficients in a 16-state detector, or more than three predictor coefficients in a 32-state detector in order to achieve better detector performance by improved noise whitening (noise prediction).
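The two update routines quoted above are simple stochastic-gradient-style recursions; a compact sketch of both is given below. The step sizes and numerical values are placeholders, and the per-address bookkeeping (one mean and one coefficient set per data condition) is left to the caller.

```python
def update_noise_mean(mu_old: float, e_tilde: float, eps: float = 0.01) -> float:
    """mu_new(a*) = (1 - eps) * mu_old(a*) + eps * e_tilde_k(a*)."""
    return (1.0 - eps) * mu_old + eps * e_tilde

def update_predictors(p_old, e_tilde: float, past_residues, alpha: float = 0.005):
    """p_i,new(a*) = p_i,old(a*) + alpha * e_tilde_k(a*) * n_{k-i};
    past_residues[i-1] holds the past noise residue n_{k-i}."""
    return [p + alpha * e_tilde * n for p, n in zip(p_old, past_residues)]

# Example for one address a* with a 2-tap whitening filter (placeholder numbers):
mu = update_noise_mean(mu_old=0.02, e_tilde=0.11)
p = update_predictors(p_old=[0.4, -0.1], e_tilde=0.11, past_residues=[0.05, -0.03])
```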
According to an embodiment, the data storage system further comprises a data-dependent noise variance calculator configured to estimate a data-dependent noise variance from the metric input signal, each of the one or more branch metric calculations comprising:

m_{k} = \ln(\sigma^{2}) + \left[(y_{k} - \hat{s}_{k})\left(1 - \sum_{i=1}^{J} p_{i} D^{i}\right)\right]^{2} / \sigma^{2}

where mkis the branch metric, σ² is the data-dependent noise variance, ykis the signal input to the estimator, ŝkis the estimated signal, J is a highest order of the applicable noise whitening filter, piare filter parameters of the applicable noise whitening filter, and D represents a delay corresponding to a bit duration. This may be a suitable way to determine a branch metric in cases where the noise portion of the signal is processed by a noise whitening filter of order J and a data-dependent noise variance calculator. In addition to the techniques and advantages outlined above for embodiments using a noise whitening filter, a noise variance calculator may be useful for canceling effects of unfavorable noise statistics that cause a non-negligible deviation of the noise variance from unity. One reason for such variance deviation may be a data dependence of the noise itself. A data-dependent noise variance calculator may therefore be useful in data storage systems where channel distortion regularly causes such noise statistics. Noise variance calculation and correction may contribute to a more accurate detection by (i.e., a lower error rate at the output of) the adaptive data-dependent noise-predictive maximum likelihood sequence detector due to a lower content of insignificant information in the input signals of the branch metric calculator and/or the detector. A noise variance calculator may calculate noise variances deviating from unity as far as such variances have not been already accounted for by the nonlinear signal estimation, i.e., the difference between the signal and the estimated nonlinear signal may have a noise variance closer to unity than the difference between the signal and an estimated linear (e.g., PR4 equalizer output) signal. Hence, it may be possible that the noise variance calculator in a storage system with a nonlinear signal estimator may converge in a shorter time than it would in a system with linear signal subtraction.
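For completeness, a sketch of the variance-normalized branch metric quoted above, together with one plausible recursive estimate of the data-dependent noise variance; the variance update rule and its step size are assumptions made by analogy with the mean and predictor updates, not formulas from this description.

```python
import math

def branch_metric_with_variance(whitened: float, sigma2: float) -> float:
    """m_k = ln(sigma^2) + [(y_k - s_hat_k)(1 - sum p_i D^i)]^2 / sigma^2."""
    return math.log(sigma2) + (whitened * whitened) / sigma2

def update_variance(sigma2_old: float, e_tilde: float, rho: float = 0.01) -> float:
    """Assumed exponential-forgetting variance estimate per data condition."""
    return (1.0 - rho) * sigma2_old + rho * e_tilde * e_tilde

# Placeholder usage for one branch:
m = branch_metric_with_variance(whitened=0.08, sigma2=0.9)
sigma2 = update_variance(sigma2_old=0.9, e_tilde=0.08)
```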
According to an embodiment, the data storage system further comprises a delay line configured to delay the signal input to the estimator. According to an embodiment, the delay line is configured to delay the signal input to the estimator by at least 0.5 times a nominal delay time of the adaptive data-dependent noise-predictive maximum likelihood sequence detector for detecting one bit. A delay line upstream of the estimator may increase the time a present signal yktakes before it becomes available for updating the estimated signal, the estimated nonlinear portion of the signal, and/or any other updatable parameters or values depending on yksuch as noise whitening filter coefficients, noise mean values and/or noise variances, without limitation to the parameters mentioned here. During the delay imposed on the signal by the delay line, the bank of whitening filters, the branch metric calculator, and the adaptive data-dependent noise-predictive maximum likelihood sequence detector may process the signal further to determine which data sequence/addressa* is actually encoded by the signal. Therefore, there may be a higher probability thata* is known at the time when the estimated portion(s) of the signal and the parameters depending on yk, as applicable, are updated. A sufficiently large delay may increase the accuracy of the updates. Preferably, the delay is set to at least half the nominal delay time of the adaptive data-dependent noise-predictive maximum likelihood sequence detector for detecting one bit from the signal to ensure a sufficient update accuracy. It may be advantageous to select the delay imposed by the delay line such that the signal ykencoding a present symbol akarrives at the estimator just when the determination of the symbol akis complete. In this way, the probability of erroneously updating an estimated signal not representing the correct symbol akmay be minimized. According to an embodiment, the storage medium is a magnetic storage medium or an optical storage medium. Without limitation, examples of magnetic storage media may include magnetic tapes, hard disk drives, floppy disks, and the like; examples of optical storage media may include compact discs (CDs), digital versatile discs (DVDs), Blu-ray discs, and various other types of optical discs such as optical archiving media. Nonlinear channel distortion effects may be especially likely to become observable on magnetic media due to the highly sequential arrangement of flux regions on the medium; intrinsic nonlinear properties of the magnetic layer of the medium, the write head arranging flux regions representing the data in the magnetic layer, and/or the head producing the signal; and the comparably high readout speeds achieved in modern magnetic storage systems. Also, optical storage systems may be prone to nonlinear effects that may occur as a consequence of interaction between light and storage medium at high velocities. Hence, the embodiments disclosed herein may be especially effective when deployed with the read channel of a magnetic or optical drive. According to an embodiment, the adaptive data-dependent noise-predictive maximum likelihood sequence detector has N states, N being selected from the set of 4, 8, 16, 32, 64, 128. Binary sequences of fixed length L=2, 3, 4, . . . have a total number of N = 2^L = 4, 8, 16, . . . possible states. To lower the error rate at the output of the adaptive data-dependent noise-predictive maximum likelihood sequence detector, one may increase the number of detector states and/or use path memory decisions with fewer detector states. More than 16 detector states (up to 128 detector states) can be used to improve detector performance. Alternatively, a large (e.g., 256-cell) memory may be used in conjunction with, e.g., a 16-state or a 32-state detector where path memory decisions are used to estimate the nonlinear signal component on each branch. Referring now toFIG.1, a simplified tape drive100of a tape-based data storage system is shown, which may be employed as an example of the data storage system100in the context of the present invention. While one specific implementation of a tape drive is shown inFIG.1, it should be noted that the embodiments described herein may be implemented in the context of any type of tape drive system or any other kind of storage device where approaches of data recovery as disclosed herein can be applied. As shown, a tape supply cartridge120and a take-up reel121are provided to support a tape122. One or more of the reels may form part of a removable cartridge and are not necessarily part of the tape drive100.
The tape drive, such as that illustrated inFIG.1, may further include drive motor(s) to drive the tape supply cartridge120and the take-up reel121to move the tape122over a tape head126of any type. Such head may include an array of readers, writers, or both. Guides125guide the tape122across the tape head126. Such tape head126is in turn coupled to a controller128via a cable130. The controller128, may be or include a processor and/or any logic for controlling any subsystem of the tape drive100. For example, the controller128typically controls head functions such as servo following, data writing, data reading, etc. The controller128may include at least one servo channel and at least one data channel, each of which include data flow processing logic configured to process and/or store information to be written to and/or read from the tape122. The controller128may operate under logic known in the art, as well as any logic disclosed herein, and thus may be considered as a processor for any of the descriptions of tape drives included herein. The controller128may be coupled to a memory136of any known type, which may store instructions executable by the controller128. Moreover, the controller128may be configured and/or programmable to perform or control some or all of the methodology presented herein. Thus, the controller128may be considered to be configured to perform various operations by way of logic programmed into one or more chips, modules, and/or blocks; software, firmware, and/or other instructions being available to one or more processors; etc., and combinations thereof. The cable130may include read/write circuits to transmit data to the head126to be recorded on the tape122and to receive data read by the head126from the tape122. An actuator132controls position of the head126relative to the tape122. An interface134may also be provided for communication between the tape drive100and a host (internal or external) to send and receive the data and for controlling the operation of the tape drive100and communicating the status of the tape drive100to the host, all as will be understood by those of skill in the art. Various aspects of the present disclosure are described by narrative text, flowcharts, block diagrams of computer systems and/or block diagrams of the machine logic included in computer program product (CPP) embodiments. With respect to any flowcharts, depending upon the technology involved, the operations can be performed in a different order than what is shown in a given flowchart. For example, again depending upon the technology involved, two operations shown in successive flowchart blocks may be performed in reverse order, as a single integrated step, concurrently, or in a manner at least partially overlapping in time. A computer program product embodiment (“CPP embodiment” or “CPP”) is a term used in the present disclosure to describe any set of one, or more, storage media (also called “mediums”) collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim. A “storage device” is any tangible device that can retain and store instructions for use by a computer processor. 
Without limitation, the computer readable storage medium may be an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or any suitable combination of the foregoing. Some known types of storage devices that include these mediums include: diskette, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanically encoded device (such as punch cards or pits/lands formed in a major surface of a disc) or any suitable combination of the foregoing. A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored. Referring now toFIG.2, computing environment150contains an example of an environment for the execution of at least some of the computer code involved in performing the inventive methods, such as improved adaptive data detection code200. In addition to block200, computing environment150includes, for example, computer151, wide area network (WAN)152, end user device (EUD)153, remote server154, public cloud155, and private cloud156. In this embodiment, computer151includes processor set160(including processing circuitry170and cache171), communication fabric161, volatile memory162, persistent storage163(including operating system172and block200, as identified above), peripheral device set164(including user interface (UI) device set173, storage174, and Internet of Things (IoT) sensor set175), and network module165. Remote server154includes remote database180. Public cloud155includes gateway190, cloud orchestration module191, host physical machine set192, virtual machine set193, and container set194. COMPUTER151may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database180. As is well understood in the art of computer technology, and depending upon the technology, performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations. On the other hand, in this presentation of computing environment150, detailed discussion is focused on a single computer, specifically computer151, to keep the presentation as simple as possible. Computer151may be located in a cloud, even though it is not shown in a cloud inFIG.2. On the other hand, computer151is not required to be in a cloud except to any extent as may be affirmatively indicated. 
PROCESSOR SET160includes one, or more, computer processors of any type now known or to be developed in the future. Processing circuitry170may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry170may implement multiple processor threads and/or multiple processor cores. Cache171is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set160. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set160may be designed for working with qubits and performing quantum computing. Computer readable program instructions are typically loaded onto computer151to cause a series of operational steps to be performed by processor set160of computer151and thereby effect a computer-implemented method, such that the instructions thus executed will instantiate the methods specified in flowcharts and/or narrative descriptions of computer-implemented methods included in this document (collectively referred to as “the inventive methods”). These computer readable program instructions are stored in various types of computer readable storage media, such as cache171and the other storage media discussed below. The program instructions, and associated data, are accessed by processor set160to control and direct performance of the inventive methods. In computing environment150, at least some of the instructions for performing the inventive methods may be stored in block200in persistent storage163. COMMUNICATION FABRIC161is the signal conduction path that allows the various components of computer151to communicate with each other. Typically, this fabric is made of switches and electrically conductive paths, such as the switches and electrically conductive paths that make up busses, bridges, physical input/output ports and the like. Other types of signal communication paths may be used, such as fiber optic communication paths and/or wireless communication paths. VOLATILE MEMORY162is any type of volatile memory now known or to be developed in the future. Examples include dynamic type random access memory (RAM) or static type RAM. Typically, volatile memory162is characterized by random access, but this is not required unless affirmatively indicated. In computer151, the volatile memory162is located in a single package and is internal to computer151, but, alternatively or additionally, the volatile memory may be distributed over multiple packages and/or located externally with respect to computer151. PERSISTENT STORAGE163is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer151and/or directly to persistent storage163. Persistent storage163may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices. 
Operating system172may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface-type operating systems that employ a kernel. The code included in block200typically includes at least some of the computer code involved in performing the inventive methods. PERIPHERAL DEVICE SET164includes the set of peripheral devices of computer151. Data communication connections between the peripheral devices and the other components of computer151may be implemented in various ways, such as Bluetooth connections, Near-Field Communication (NFC) connections, connections made by cables (such as universal serial bus (USB) type cables), insertion-type connections (for example, secure digital (SD) card), connections made through local area communication networks and even connections made through wide area networks such as the internet. In various embodiments, UI device set173may include components such as a display screen, speaker, microphone, wearable devices (such as goggles and smart watches), keyboard, mouse, printer, touchpad, game controllers, and haptic devices. Storage174is external storage, such as an external hard drive, or insertable storage, such as an SD card. Storage174may be persistent and/or volatile. In some embodiments, storage174may take the form of a quantum computing storage device for storing data in the form of qubits. In embodiments where computer151is required to have a large amount of storage (for example, where computer151locally stores and manages a large database) then this storage may be provided by peripheral storage devices designed for storing very large amounts of data, such as a storage area network (SAN) that is shared by multiple, geographically distributed computers. IoT sensor set175is made up of sensors that can be used in Internet of Things applications. For example, one sensor may be a thermometer and another sensor may be a motion detector. NETWORK MODULE165is the collection of computer software, hardware, and firmware that allows computer151to communicate with other computers through WAN152. Network module165may include hardware, such as modems or Wi-Fi signal transceivers, software for packetizing and/or de-packetizing data for communication network transmission, and/or web browser software for communicating data over the internet. In some embodiments, network control functions and network forwarding functions of network module165are performed on the same physical hardware device. In other embodiments (for example, embodiments that utilize software-defined networking (SDN)), the control functions and the forwarding functions of network module165are performed on physically separate devices, such that the control functions manage several different network hardware devices. Computer readable program instructions for performing the inventive methods can typically be downloaded to computer151from an external computer or external storage device through a network adapter card or network interface included in network module165. WAN152is any wide area network (for example, the internet) capable of communicating computer data over non-local distances by any technology for communicating computer data, now known or to be developed in the future. In some embodiments, the WAN152may be replaced and/or supplemented by local area networks (LANs) designed to communicate data between devices located in a local area, such as a Wi-Fi network. 
The WAN and/or LANs typically include computer hardware such as copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and edge servers. END USER DEVICE (EUD)153is any computer system that is used and controlled by an end user (for example, a customer of an enterprise that operates computer151) and may take any of the forms discussed above in connection with computer151. EUD153typically receives helpful and useful data from the operations of computer151. For example, in a hypothetical case where computer151is designed to provide a recommendation to an end user, this recommendation would typically be communicated from network module165of computer151through WAN152to EUD153. In this way, EUD153can display, or otherwise present, the recommendation to an end user. In some embodiments, EUD153may be a client device, such as thin client, heavy client, mainframe computer, desktop computer and so on. REMOTE SERVER154is any computer system that serves at least some data and/or functionality to computer151. Remote server154may be controlled and used by the same entity that operates computer151. Remote server154represents the machine(s) that collect and store helpful and useful data for use by other computers, such as computer151. For example, in a hypothetical case where computer151is designed and programmed to provide a recommendation based on historical data, then this historical data may be provided to computer151from remote database180of remote server154. PUBLIC CLOUD155is any computer system available for use by multiple entities that provides on-demand availability of computer system resources and/or other computer capabilities, especially data storage (cloud storage) and computing power, without direct active management by the user. Cloud computing typically leverages sharing of resources to achieve coherence and economies of scale. The direct and active management of the computing resources of public cloud155is performed by the computer hardware and/or software of cloud orchestration module191. The computing resources provided by public cloud155are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set192, which is the universe of physical computers in and/or available to public cloud155. The virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set193and/or containers from container set194. It is understood that these VCEs may be stored as images and may be transferred among and between the various physical machine hosts, either as images or after instantiation of the VCE. Cloud orchestration module191manages the transfer and storage of images, deploys new instantiations of VCEs and manages active instantiations of VCE deployments. Gateway190is the collection of computer software, hardware, and firmware that allows public cloud155to communicate through WAN152. Some further explanation of virtualized computing environments (VCEs) will now be provided. VCEs can be stored as “images.” A new active instance of the VCE can be instantiated from the image. Two familiar types of VCEs are virtual machines and containers. A container is a VCE that uses operating-system-level virtualization. This refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances, called containers. 
These isolated user-space instances typically behave as real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can utilize all resources of that computer, such as connected devices, files and folders, network shares, CPU power, and quantifiable hardware capabilities. However, programs running inside a container can only use the contents of the container and devices assigned to the container, a feature which is known as containerization. PRIVATE CLOUD156is similar to public cloud155, except that the computing resources are only available for use by a single enterprise. While private cloud156is depicted as being in communication with WAN152, in other embodiments a private cloud may be disconnected from the internet entirely and only accessible through a local/private network. A hybrid cloud is a composition of multiple clouds of different types (for example, private, community or public cloud types), often respectively implemented by different vendors. Each of the multiple clouds remains a separate and discrete entity, but the larger hybrid cloud architecture is bound together by standardized or proprietary technology that enables orchestration, management, and/or data/application portability between the multiple constituent clouds. In this embodiment, public cloud155and private cloud156are both part of a larger hybrid cloud. InFIG.3, a block diagram representing components of a multi-state adaptive data-dependent noise-predictive maximum likelihood sequence detector300is shown. One primary input to the system, GAINADJ302, is a sequence of digitized samples from the data channel of the data storage system, such as a magnetic tape channel, which is synchronized and gain-adjusted, according to one approach. The output of the system, Data Out304, is a binary data associated with the input sequence. Several of the blocks are described in detail in additional figures, starting with the Whitening Filter306and proceeding in a clockwise manner. As shown inFIG.3, a multi-state adaptive data-dependent noise-predictive maximum likelihood sequence detector300may preferably use five primary blocks, according to one approach. Of course, more or less blocks are also possible, depending on the amount of processing desired for the input signal. Also, some blocks may be used multiple times, as indicated inFIG.3as a stack of blocks (e.g., the Whitening Filter306, Branch Metric Calculation308, etc.). One possibility of implementing a multi-state adaptive data-dependent noise-predictive maximum likelihood sequence detector300is a 16-state adaptive data-dependent noise-predictive maximum likelihood sequence detector. In this example, the 16-state adaptive data-dependent noise-predictive maximum likelihood sequence detector300may use a single 16-State NPML Detector 310, 32 copies of a two-tap Whitening Filter306, 32 copies of a Branch Metric Calculation308, a single LMS Engine312to calculate an array of 64 Predictor Coefficients316, (e.g., 32 sets of a first predictor coefficient (W1) and 32 sets of a second predictor coefficient (W2)), and a single block314comprising a Variance Engine and/or a Mean-Error Engine to calculate an array of Variance Calculations318(e.g., an array of 64 variance coefficients comprising 32 sets of 1/σ2coefficients and 32 sets of ln [σ2] coefficients, or an array of 32 sets of σ2coefficients) and/or an array of 32 Mean-Error Calculations318. 
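To make the bookkeeping in the 16-state example above concrete, the following sketch shows one possible in-memory layout for the 32 per-branch parameter sets (two predictor coefficients plus variance-related terms). It is an illustrative sketch only; the class and field names are assumptions and do not correspond to elements of the figures.

```python
# Minimal data-structure sketch, assuming a 16-state detector with 32 branch
# addresses as in the example above. Class and field names are illustrative
# and are not taken from the figures.
from dataclasses import dataclass
import math

NUM_BRANCHES = 32  # 32 sets of whitening-filter/branch-metric parameters

@dataclass
class BranchParameters:
    w1: float = 0.0                 # first predictor coefficient for this branch
    w2: float = 0.0                 # second predictor coefficient for this branch
    variance: float = 1.0           # sigma^2, initialized to unity
    mean_error: float = 0.0         # data-dependent mean of the filtered error

    @property
    def inv_variance(self) -> float:    # 1/sigma^2, as stored in one variant
        return 1.0 / self.variance

    @property
    def log_variance(self) -> float:    # ln(sigma^2), as stored in one variant
        return math.log(self.variance)

# One parameter set per branch address (5-bit symbol pattern 0..31).
branch_table = [BranchParameters() for _ in range(NUM_BRANCHES)]
```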
Of course, other configurations are also possible, such as an 8-state adaptive data-dependent noise-predictive maximum likelihood sequence detector, a 32-state adaptive data-dependent noise-predictive maximum likelihood sequence detector, a 4-state adaptive data-dependent noise-predictive maximum likelihood sequence detector, etc. Each of these configurations may also use multiple blocks, possibly in numbers different from those described in the example above. In one example, the number of multiple blocks (e.g., the Whitening Filter306, Branch Metric Calculation308, etc.) used may be twice the number of states of the multi-state NPML detector310. For the remainder of the descriptions provided herein, it is assumed that a 16-state adaptive data-dependent noise-predictive maximum likelihood sequence detector is being described, and the number of components thereof is selected based on using a 16-state NPML detector. This, however, is a convention for illustrative purposes only and is not meant to be limiting on the scope of the present invention in any manner. In addition to the detector path illustrated byFIG.3, the data storage system100may comprise a separate signal path, which is referred to herein as the update path, for updating the parameters, coefficients, and other values that the detector path uses to correctly determine the data encoded by the signal provided by the head126. Without limitation, values to be updated for a given detector output a* on the update path may include the estimated signal ŝ_k(a*), the estimated nonlinear offset ê_k(a*), the filter parameter(s) p_i(a*) used to tune the one or more noise whitening filters provided by the bank of noise whitening filters306, the noise mean μ(a*), and the variance σ²(a*) or any alternative feasible quantities representing the variance, such as 1/σ²(a*) and ln σ²(a*). While the detector path may be configured to perform functions, such as filtering the signal using one or more whitening filters306and calculating multiple branch metrics m_k(a) using the branch metric calculation308, on multiple branches representing the different symbol sequences that may possibly be encoded by the signal, it may be advantageous to design the update path so as to update only the values pertaining to a single most probable branch a* at a time. FIGS.4-9show diagrams of multiple possible configurations of components within the update path of a data storage system. Without limitation, the drawings assume for illustration purposes that the data-dependent noise-predictive maximum likelihood sequence detectors shown inFIGS.4-9are 16-state detectors.FIGS.4-7show exemplary configurations for systems with data-independent noise, whileFIGS.8and9show further examples of systems with data-dependent noise.
Thus, two classes of finite-state machines (FSM) perturbed by additive noise may be used to model the channel of the data storage system (e.g., the magnetic tape channel). In both models, the nonlinear signal at the output of the channel is represented as the output of a nonlinear table look-up filter. The additive noise in the first channel model may be treated as non-data-dependent colored Gaussian noise, whereas the additive noise in the second channel model may be treated as data-dependent colored Gaussian noise. For both classes of FSM channel models perturbed by additive noise, adaptive detection schemes for mitigating the nonlinearities associated with the write and read processes are disclosed. The two disclosed families of detector structures for non-data-dependent and data-dependent noise may employ the Euclidean and the data-dependent branch metric, respectively. A least mean squares (LMS) algorithm may be used to update the coefficients of the adaptive prediction error filter. Exponential smoothing may be used to update data-dependent means, variances, and nonlinear offsets from the linear PR4 estimate. A block diagram illustrating a configuration400of components within the update path of an adaptive multi-state data-dependent noise-predictive maximum likelihood sequence detector is shown inFIG.4. The configuration400may use any number of discrete blocks or modules, according to various approaches, indicated by the individual blocks, as well as the components within the dashed lines inFIG.4. Of course, more or fewer blocks and/or modules are also possible, depending on the amount of adaptability and processing desired for the input signal. Also, some blocks may be used multiple times where appropriate. Further examples of systems with different kinds or configurations500,600,700,800,900of blocks are shown inFIGS.5-9. The different configurations400,500,600,700,800,900may comprise further components that are not shown in the drawings for the sake of simplicity of presentation. The update path configuration400receives a detected output stream (e.g., bit stream) from the multi-state data-dependent noise-predictive maximum likelihood sequence detector core402. The most recent five symbols (e.g., bits) of the received stream are used as a 5-bit address a*, assuming M = 2^5 = 32 possible bit sequences. Configuration400comprises a nonlinear signal estimator420configured for calculating an array of M estimated nonlinear signals422, a multi-tap FIR filter410(as a non-limiting example of a noise whitening filter with J=2 taps) configured for calculating a filtered error signal e_k(a*), at least one LMS engine412(as a non-limiting example of a filter parameter calculator) configured for calculating an array of M predictor coefficient sets414(e.g., 32 sets each comprising a first predictor coefficient (p1) and a second predictor coefficient (p2)) for the bank of noise whitening filters306in the detector path, and a data-dependent noise mean calculator404configured to calculate an array of M noise means μ406, each respectively calculated for each branch metric of the data-dependent noise-predictive maximum likelihood sequence detector core402.
Of course, more or fewer than M noise mean estimates, predictor coefficients, and/or estimated signals may be respectively included in the array of noise means406, the array of predictor coefficient sets414, and/or the array of estimated nonlinear signals422in order to provide more or less intensive computation for inclusion of the term comprising the respective values in the branch metric calculation. The number of states of the data-dependent noise-predictive maximum likelihood sequence detector core402may be related to the number of entries in each array M, according to some predetermined relationship, such as 2M, 0.5M, etc. Moreover, M may be related to and/or equal to the number of branches that are possible in the branch metric calculation. In one example, all noise mean estimates in the array of noise means406may be set to zero during initialization or startup of an adaptive data-dependent noise-predictive maximum likelihood sequence detector comprising the update path configuration400. Moreover, the estimated signals may be initialized with their theoretical values, which may be â_k − â_{k−2} for each of the 32 possible addresses a = (a_k, a_{k−1}, a_{k−2}, a_{k−3}, a_{k−4}) when modeling the output of a PR4 equalizer. Similarly, predictor coefficients may be initialized with suitable values such as p1 = p2 = 0 or p1 = ⅓, p2 = 1/9. It should also be noted that address lengths and configurations other than those shown inFIGS.4-9(which are kept simple for the sake of presentation) may be useful or beneficial. For instance, it has been shown that an optimal FSM modeling accuracy for thin-film magnetic-recording channels may be achieved when using 5 past + 1 current + 2 future symbols as an address of a 256-cell RAM. The configuration shown inFIG.4may be advantageous for data storage systems having time-dependent nonlinearity in the read/write channel, noise characteristics with a spectrum showing a time-dependent and data-dependent deviation from the uniform distribution of white noise, and a time-dependent and data-dependent low-frequency bias of the noise residue. The estimator420shown inFIG.4may comprise a memory (e.g., a RAM) for storing the estimated signals422as well as update circuitry configured for calculating an estimation error comprising the difference ē_k(a*) = y_k − ŝ_old(a*) between the estimated signal ŝ_old(a*) previously stored for the given most probable sequence a* and the signal y_k, and configured for updating the estimated signal using the estimation error as a small correction term, ŝ_new(a*) = ŝ_old(a*) + η·ē_k(a*) with a small weighting factor η. The output ŝ_k(a*) may be subtracted from the signal to obtain a noise residue signal ñ_k(a*) = y_k − ŝ_k(a*) at the input of the update path noise whitening filter410. The whitening filter410may calculate a filtered error signal e_k(a*) = ñ_k(a*) − p1(a*)·ñ_{k−1} − p2(a*)·ñ_{k−2}. The filtered error signal may be used by the noise mean calculator404to determine an updated noise mean μ_new(a*) = (1−ϵ)·μ_old(a*) + ϵ·e_k(a*) with a small weighting factor ϵ. The difference ẽ_k(a*) = e_k(a*) − μ(a*) may then be used by the filter parameter calculator412to provide updated filter parameters p_i(a*) (i∈{1, 2}) by calculating p_{i,new}(a*) = p_{i,old}(a*) + α·ẽ_k(a*)·ñ_{k−i} with a small weighting coefficient α. Over a space of 32 addresses, the total number of updated values in configuration400is 128 = 32 estimated signals + 32 noise means + 32·2 filter coefficients.
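The update relations above can be summarized in a short procedural sketch. The following Python fragment is a minimal illustration of one configuration-400-style update step along the most probable branch a*; the weighting factors eta, eps, and alpha, the dictionary-of-lists layout, and the residue history handling are assumptions made for illustration and are not taken from the figures.

```python
# Sketch of one update-path step for a configuration-400-style detector, applied
# only to the most probable branch address a_star. The small weighting factors
# eta, eps, alpha and the dictionary-of-lists layout are illustrative assumptions.

def update_branch(params, a_star, y_k, n_hist, eta=0.01, eps=0.01, alpha=0.001):
    """params: dict of per-branch lists 's_hat', 'mu', 'p1', 'p2' (length 32).
    n_hist: the two previous noise residues [n_{k-1}, n_{k-2}] for branch a_star."""
    # 1) update the estimated (nonlinear) signal with a small correction term
    e_bar = y_k - params["s_hat"][a_star]                    # estimation error
    params["s_hat"][a_star] += eta * e_bar

    # 2) noise residue and whitening-filter output for this branch
    n_k = y_k - params["s_hat"][a_star]
    e_k = n_k - params["p1"][a_star] * n_hist[0] - params["p2"][a_star] * n_hist[1]

    # 3) exponential smoothing of the data-dependent noise mean
    params["mu"][a_star] = (1 - eps) * params["mu"][a_star] + eps * e_k

    # 4) LMS update of the predictor coefficients using the mean-corrected error
    e_tilde = e_k - params["mu"][a_star]
    params["p1"][a_star] += alpha * e_tilde * n_hist[0]
    params["p2"][a_star] += alpha * e_tilde * n_hist[1]

    # shift the residue history for the next step
    return [n_k, n_hist[0]]

params = {"s_hat": [0.0] * 32, "mu": [0.0] * 32, "p1": [0.0] * 32, "p2": [0.0] * 32}
history = update_branch(params, a_star=5, y_k=0.8, n_hist=[0.0, 0.0])
```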
In the detector path, the bank of one or more noise whitening filters306may receive the stored filter coefficients414to tune the applicable noise whitening filter(s) for each branch. Likewise, the branch metric calculator308may use the stored estimated signals422and noise means406to calculate a Euclidean branch metric m_k = ẽ_k²(a) for each of the 32 candidate sequences a. FIG.5shows a block diagram of a configuration500of components within the update path of an adaptive multi-state data-dependent noise-predictive maximum likelihood sequence detector that differs from configuration400in that the noise mean calculator404is missing. This configuration500may be beneficial for data storage systems having time-dependent nonlinearity in the read/write channel and noise characteristics with a spectrum showing a time-dependent and data-dependent deviation from the uniform distribution of white noise. In the configuration500shown inFIG.5, the filtered error signal e_k(a*) may be used as the metric input signal to be input to the filter parameter calculator412for the given branch a*. Over a space of 32 addresses, the total number of updated values in configuration500is 96 = 32 estimated signals + 32·2 filter coefficients. In the detector path, the bank of one or more noise whitening filters306may receive the stored filter coefficients414to tune the applicable noise whitening filter(s) for each branch. Likewise, the branch metric calculator308may use the stored estimated signals422to calculate a Euclidean branch metric m_k = e_k²(a) for each of the 32 candidate sequences a. FIG.6shows a block diagram of a configuration600of components within the update path of an adaptive multi-state data-dependent noise-predictive maximum likelihood sequence detector that differs from configuration400in that the filter parameters p1 and p2 are data-independent, and correspondingly, that the filter parameter calculator is configured for storing and updating these two predictor coefficients614. Another difference is that the noise mean μ is also data-independent, and correspondingly, that the noise mean calculator604is configured for storing and updating a single noise mean value μ606. This configuration600may be beneficial for data storage systems having time-dependent nonlinearity in the read/write channel, noise characteristics with a spectrum showing a time-dependent but data-independent deviation from the uniform distribution of white noise, and a time-dependent but data-independent low-frequency bias of the noise residue. The update path noise whitening filter610may calculate a filtered error signal e_k = ñ_k(a*) − p1·ñ_{k−1}(a*) − p2·ñ_{k−2}(a*) using the data-independent coefficients p1 and p2. Similar to the discussion ofFIG.4above, the data storage system100may be configured to use a most probable sequence a* to update the estimated signal ŝ_k(a*). The filtered error signal e_k may be used by the noise mean calculator604to determine an updated noise mean μ_new = (1−ϵ)·μ_old + ϵ·e_k with a small weighting factor ϵ. The filter parameter calculator612may provide updated filter parameters p_i (i∈{1, 2}) by calculating p_{i,new} = p_{i,old} + α·ẽ_k·ñ_{k−i}(a*) with a small weighting coefficient α and ẽ_k = e_k − μ. The noise mean606and the filter parameters614may be considered data-independent if α and ϵ are chosen small enough so that the data dependence incurred by the noise residues ñ cancels out over multiple iterations.
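For the detector path, the Euclidean branch metric described above can be illustrated with a similarly minimal sketch. The per-branch tables and residue history are again assumptions made for illustration; an actual detector would evaluate these metrics inside the trellis recursion rather than in a standalone loop.

```python
# Illustrative branch-metric sketch for the Euclidean metrics discussed above.
# The per-branch tables and the residue history are assumptions for illustration;
# a real detector path would evaluate this inside the trellis for every branch.

def euclidean_branch_metrics(y_k, s_hat, p1, p2, mu, n_hist):
    """Return m_k(a) = e_tilde_k(a)^2 for each candidate branch address a.
    s_hat, p1, p2, mu are per-branch lists; n_hist holds per-branch residue pairs."""
    metrics = []
    for a in range(len(s_hat)):
        n_k = y_k - s_hat[a]                                     # branch noise residue
        e_k = n_k - p1[a] * n_hist[a][0] - p2[a] * n_hist[a][1]  # whitened error
        e_tilde = e_k - mu[a]                                    # mean-corrected error
        metrics.append(e_tilde ** 2)                             # Euclidean branch metric
    return metrics

M = 32
metrics = euclidean_branch_metrics(
    y_k=0.8,
    s_hat=[0.0] * M, p1=[0.0] * M, p2=[0.0] * M, mu=[0.0] * M,
    n_hist=[[0.0, 0.0] for _ in range(M)],
)
best_branch = min(range(M), key=lambda a: metrics[a])
```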
Over a space of 32 addresses, the total number of updated values in configuration600is 35 = 32 estimated signals + 1 noise mean + 2 filter coefficients. In the detector path, the bank of one or more noise whitening filters306may receive the stored filter coefficients614to tune the applicable noise whitening filter(s) for each branch. Likewise, the branch metric calculator308may use the stored estimated signals422and noise mean606to calculate a Euclidean branch metric m_k = ẽ_k²(a) for each of the 32 candidate sequences a. While the noise mean606and the filter parameters614are data-independent, the branch-specific quantities in the detector path are still data-dependent, as they depend on the difference between the signal and the estimated signal pertaining to the respective branch. FIG.7shows a block diagram of a configuration700of components within the update path of an adaptive multi-state data-dependent noise-predictive maximum likelihood sequence detector that differs from configuration600in that the noise mean calculator604is missing. This configuration700may be beneficial for data storage systems having time-dependent nonlinearity in the read/write channel and noise characteristics with a spectrum showing a time-dependent but data-independent deviation from the uniform distribution of white noise. In the configuration700shown inFIG.7, the filtered error signal e_k may be used as the input to the filter parameter calculator612. Similar to the discussion ofFIG.6above, the data storage system100may be configured to use the most probable sequence a* to update the estimated signal ŝ_k(a*) and the filter parameters p_i. Over a space of 32 addresses, the total number of updated values in configuration700is 34 = 32 estimated signals + 2 filter coefficients. In the detector path, the branch metric calculator308may calculate a Euclidean branch metric m_k = e_k². FIG.8shows a block diagram of a configuration800of components within the update path of an adaptive multi-state data-dependent noise-predictive maximum likelihood sequence detector that differs from configuration400in that a noise variance calculator816is added. This configuration800may be beneficial for data storage systems having time-dependent nonlinearity in the read/write channel, noise characteristics with a spectrum showing a time-dependent and data-dependent deviation from the uniform distribution of white noise, a data dependence or other effect causing a time-dependent and data-dependent deviation in the noise variance from unity, and a time-dependent and data-dependent low-frequency bias of the noise residue. In update path configuration800, the difference ẽ_k(a*) = e_k(a*) − μ(a*) may be used as the input signal to the filter parameter calculator412and the noise variance calculator816. In addition to the functions explained for the modules shown inFIG.4, the noise variance calculator816may be configured for storing and updating M noise variances σ²(a)808. The M variances808may be set to unity during initialization or startup of a data-dependent noise-predictive maximum likelihood sequence detector comprising update path configuration800. In an alternative configuration, the noise variance calculator816may store and update an array of 32 coefficients representing an inverse variance 1/σ²(a) and 32 coefficients representing a logarithmic variance ln σ²(a), which may reduce computational complexity for the branch metric calculator308.
In addition to the update schemes described before, the noise variance calculator816may provide updated noise variances808by calculating σ²_new(a*) = τ·ẽ_k²(a*) + (1−τ)·σ²_old(a*) with a small weighting coefficient τ for the most probable sequence a*. Over a space of 32 addresses, the total number of updated values in configuration800is 160 = 32 estimated signals + 32 noise means + 32·2 filter coefficients + 32 noise variances. As the noise variance deviates from its regular behavior, the branch metric calculator308in the detector path may calculate a modified Euclidean branch metric m_k = ln(σ²(a)) + ẽ_k²(a)/σ²(a). Substituting ẽ_k as applicable to the example ofFIG.8yields m_k = ln(σ²) + [(y_k − ŝ_k)·(1 − Σ_{i=1..J} p_i·D^i) − μ]² / σ². FIG.9shows a block diagram of a configuration900of components within the update path of an adaptive multi-state data-dependent noise-predictive maximum likelihood sequence detector that differs from configuration800in that the noise mean calculator404is missing. This configuration900may be beneficial for data storage systems having time-dependent nonlinearity in the read/write channel, a data dependence or other effect causing a time-dependent and data-dependent deviation in the noise variance from unity, and noise characteristics with a spectrum showing a time-dependent and data-dependent deviation from the uniform distribution of white noise. In the configuration900shown inFIG.9, the filtered error signal e_k may be used as the input to the noise variance calculator816and the filter parameter calculator412, which may perform their respective updates using the most probable sequence a* provided by the adaptive data-dependent noise-predictive maximum likelihood sequence detector core310. Over a space of 32 addresses, the total number of updated values in configuration900is 128 = 32 estimated signals + 32 noise variances + 32·2 filter coefficients. In the detector path, the branch metric calculator308may calculate a modified Euclidean branch metric m_k = ln(σ²(a)) + ẽ_k²(a)/σ²(a) using the values stored by the update modules in the update path. Now turning toFIG.10, a diagram is shown illustrating an alternative configuration1020of the estimator that is adapted for storing and updating the estimated nonlinear offset or portion of the signal. In the example shown, again an array of 32 (chosen without limitation for purely illustrative purposes) estimated nonlinear portions1022is maintained by the estimator1020. As the estimated nonlinear signal ŝ_k(a) is the sum of an estimated linear signal s_k(a) and the estimated nonlinear portion ê_k(a), the noise residue to be processed further by components such as the noise whitening filter306can be expressed in terms of the stored nonlinear portion as ñ_k(a) = y_k − s_k(a) − ê_k(a), where the linear estimate can be obtained in a known way, such as through s_k(a) = â_k − â_{k−2} in the non-limiting example of a PR4 equalizer output. The estimator1020shown inFIG.10may comprise a memory (e.g., a RAM) for storing the estimated nonlinear offsets1022as well as update circuitry configured for calculating an estimation error ē_k(a*) = y_k − s_k(a*) − ê_old(a*), i.e., the difference between the previously stored estimated nonlinear portion ê_old(a*) for the given most probable sequence a* and the difference between the signal y_k and the estimated linear signal portion s_k(a*), and configured for updating the estimated nonlinear portion using the estimation error as a small correction term, ê_new(a*) = ê_old(a*) + η·ē_k(a*) with a small weighting factor η.
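The variance update and the modified Euclidean branch metric of configurations800and900can likewise be sketched as follows. The weighting coefficient tau and the list-based storage are illustrative assumptions; precomputing 1/σ² and ln σ² in the update path, as mentioned above, saves a division and a logarithm per branch in the detector path.

```python
# Sketch of the data-dependent variance update and the modified Euclidean branch
# metric described for configurations 800/900. tau and the storage layout are
# illustrative assumptions.
import math

def update_variance(sigma2, a_star, e_tilde, tau=0.005):
    """Exponentially smoothed variance update on the most probable branch a_star."""
    sigma2[a_star] = tau * (e_tilde ** 2) + (1 - tau) * sigma2[a_star]
    # optional precomputation for the branch metric calculator
    return 1.0 / sigma2[a_star], math.log(sigma2[a_star])

def modified_branch_metric(e_tilde, inv_sigma2, log_sigma2):
    """m_k(a) = ln(sigma^2(a)) + e_tilde_k(a)^2 / sigma^2(a)."""
    return log_sigma2 + (e_tilde ** 2) * inv_sigma2

sigma2 = [1.0] * 32                       # variances initialized to unity
inv_s2, log_s2 = update_variance(sigma2, a_star=5, e_tilde=0.1)
m_k = modified_branch_metric(e_tilde=0.1, inv_sigma2=inv_s2, log_sigma2=log_s2)
```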
FIG.11shows a flow diagram illustrating a method1100that may be implemented using the data storage system as disclosed herein. In a step1102, the method comprises receiving a signal representing data stored on a storage medium. Furthermore, there is a step1104of receiving from the estimator an estimated signal comprising a superposition of an estimated linear portion of a partial-response equalizer output and an estimated nonlinear portion of the received signal. A noise whitening filter is applied1106to a difference between the received signal and the estimated signal output by the estimator to produce a filtered signal. A metric input signal that is based on the filtered signal is passed on to the branch metric calculator to obtain one or more branch metrics by performing1108one or more branch metric calculations. Based on the one or more branch metrics, the adaptive data-dependent noise-predictive maximum likelihood sequence detector establishes a sequence of most probable symbols identified as the data encoded by the signal and generates1110an output stream representing the data. Then, the estimator updates1112the estimated signal based on the detector output stream and the signal. Preferably, step1112may use a delayed version of the signal received in step1102to account for the time taken by the noise-whitening filter, the branch metric calculator, and the detector for generating the output stream from the signal.
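A high-level sketch of the flow of method1100, including the delayed update of step1112, might look as follows. The detector and estimator objects and their step/update interfaces are placeholders invented for illustration; they are not an implementation of the detector described above.

```python
# High-level sketch of the read-channel flow of method 1100: detect along the
# signal and update the estimator with a delayed copy of the samples once the
# detector decision for those samples is available. All component interfaces are
# illustrative placeholders, not an actual detector implementation.
from collections import deque

def run_read_channel(samples, detector, estimator, delay):
    """samples: equalized channel samples y_k; detector/estimator: callables as
    assumed below; delay: detector decision latency in samples."""
    delay_line = deque(maxlen=delay)      # delayed copy of the signal for the update path
    output_bits = []
    for y_k in samples:
        delay_line.append(y_k)
        # detector path: whitening, branch metrics and sequence detection
        decision = detector.step(y_k)     # assumed to return (bit, branch_address) or None
        if decision is not None and len(delay_line) == delay:
            bit, a_star = decision
            output_bits.append(bit)
            # update path: use the delayed sample that produced this decision
            estimator.update(a_star, delay_line[0])
    return output_bits
```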
76,301
11862195
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). The present disclosure is generally related to a tape head and a tape drive including a tape head. The tape head comprises one or more head assemblies, each head assembly comprising a plurality of write heads aligned in a row, at least one writer servo head aligned with the row of write heads, a plurality of read heads aligned in a row, and at least one reader servo head aligned with the row of read heads. The writer servo head and the reader servo head are independently controllable and are configured to operate concurrently. The tape head is able to accurately and independently position the write heads using the writer servo head(s) when writing data to a tape and position the read heads using the reader servo head(s) when reading data from the tape, even if the write heads and read heads are or become mis-aligned. FIGS.1A-1Cillustrate a perspective exploded view, a simplified top down, and side profile view of a tape drive100, in accordance with some embodiments. The tape drive100may be a tape embedded drive (TED). Focusing onFIG.1B, for example, the tape drive comprises a casing105, one or more tape reels110, one or more motors (e.g., a stepping motor120(also known as a stepper motor), a voice coil motor (VCM)125, etc.) a head assembly130with one or more read heads and one or more write heads, and tape guides/rollers135a,135b. In the descriptions herein, the term “head assembly” may be referred to as “magnetic recording head”, interchangeably, for exemplary purposes. Focusing onFIG.1C, for example, the tape drive further comprises a printed circuit board assembly (PCBA)155. In an embodiment, most of the components are within an interior cavity of the casing, except the PCBA155, which is mounted on an external surface of the casing105. The same components are illustrated in a perspective view inFIG.1A. In the descriptions herein, the term “tape” may be referred to as “magnetic media”, interchangeably, for exemplary purposes. 
In the illustrated embodiments, two tape reels110are placed in the interior cavity of the casing105, with the center of the two tape reels110on the same level in the cavity and with the head assembly130located in the middle and below the two tape reels110. Tape reel motors located in the spindles of the tape reels110can operate to wind and unwind the tape media115in the tape reels110. Each tape reel110may also incorporate a tape folder to help the tape media115be neatly wound onto the reel110. One or more of the tape reels110may form a part of a removable cartridge and are not necessarily part of the tape drive100. In such embodiments, the tape drive100may not be a tape embedded drive as it does not have embedded media, the drive100may instead be a tape drive configured to accept and access magnetic media or tape media115from an insertable cassette or cartridge (e.g., an LTO drive), where the insertable cassette or cartridge further comprises one or more of the tape reels110as well. In such embodiments, the tape or media115is contained in a cartridge that is removable from the drive100. The tape media115may be made via a sputtering process to provide improved areal density. The tape media115comprises two surfaces, an oxide side and a substrate side. The oxide side is the surface that can be magnetically manipulated (written to or read from) by one or more read/write heads. The substrate side of the tape media115aids in the strength and flexibility of the tape media115. Tape media115from the tape reels110are biased against the guides/rollers135a,135b(collectively referred to as guides/rollers135) and are movably passed along the head assembly130by movement of the reels110. The illustrated embodiment shows four guides/rollers135a,135b, with the two guides/rollers135afurthest away from the head assembly130serving to change direction of the tape media115and the two guides/rollers135bclosest to the head assembly130by pressing the tape media115against the head assembly130. As shown inFIG.1A, in some embodiments, the guides/rollers135utilize the same structure. In other embodiments, as shown inFIG.1B, the guides/rollers135may have more specialized shapes and differ from each other based on function. Furthermore, a lesser or a greater number of rollers may be used. For example, the two function rollers may be cylindrical in shape, while the two functional guides may be flat-sided (e.g., rectangular prism) or clip shaped with two prongs and the film moving between the prongs of the clip. The voice coil motor125and stepping motor120may variably position the tape head(s) transversely with respect to the width of the recording tape. The stepping motor120may provide coarse movement, while the voice coil motor125may provide finer actuation of the head(s). In an embodiment, servo data may be written to the tape media to aid in more accurate position of the head(s) along the tape media115. In addition, the casing105comprises one or more particle filters141and/or desiccants142, as illustrated inFIG.1A, to help maintain the environment in the casing. For example, if the casing is not airtight, the particle filters may be placed where airflow is expected. The particle filters and/or desiccants may be placed in one or more of the corners or any other convenient place away from the moving internal components. For example, the moving reels may generate internal airflow as the tape media winds/unwinds, and the particle filters may be placed within that airflow. 
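As a rough illustration of the coarse/fine actuation split described above, the following sketch divides a desired lateral correction between a stepping motor and a voice coil motor. The step pitch and VCM range are invented values used only to make the example runnable.

```python
# Illustrative sketch of splitting a lateral head-position correction between the
# stepping motor (coarse) and the voice coil motor (fine), as described above.
# The step pitch and VCM range below are invented values for illustration only.

STEP_PITCH_UM = 5.0        # assumed lateral movement per stepper step, in micrometers
VCM_RANGE_UM = 2.5         # assumed +/- fine-actuation range of the VCM, in micrometers

def split_correction(offset_um: float):
    """Split a desired lateral correction into whole stepper steps plus a VCM trim."""
    steps = round(offset_um / STEP_PITCH_UM)                   # coarse movement
    residual = offset_um - steps * STEP_PITCH_UM               # what the stepper cannot reach
    vcm_um = max(-VCM_RANGE_UM, min(VCM_RANGE_UM, residual))   # fine movement, clamped
    return steps, vcm_um

steps, vcm = split_correction(12.3)   # e.g., 2 stepper steps plus a 2.3 um VCM trim
```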
There is a wide variety of possible placements of the internal components of the tape drive100within the casing105. In particular, as the head mechanism is internal to the casing in certain embodiments, the tape media115may not be exposed to the outside of the casing105, such as in conventional tape drives. Thus, the tape media115does not need to be routed along the edge of the casing105and can be freely routed in more compact and/or otherwise more efficient ways within the casing105. Similarly, the head(s)130and tape reels110may be placed in a variety of locations to achieve a more efficient layout, as there are no design requirements to provide external access to these components. As illustrated inFIG.1C, the casing105comprises a cover150and a base145. The PCBA155is attached to the bottom, on an external surface of the casing105, opposite the cover150. As the PCBA155is made of solid state electronics, environmental issues are less of a concern, so it does not need to be placed inside the casing105. That leaves room inside casing105for other components, particularly, the moving components and the tape media115that would benefit from a more protected environment. In some embodiments, the tape drive100is sealed. Sealing can mean the drive is hermetically sealed or simply enclosed without necessarily being airtight. Sealing the drive may be beneficial for tape film winding stability, tape film reliability, and tape head reliability. Desiccant may be used to limit humidity inside the casing105. In one embodiment, the cover150is used to hermetically seal the tape drive. For example, the drive100may be hermetically sealed for environmental control by attaching (e.g., laser welding, adhesive, etc.) the cover150to the base145. The drive100may be filled by helium, nitrogen, hydrogen, or any other typically inert gas. In some embodiments, other components may be added to the tape drive100. For example, a pre-amp for the heads may be added to the tape drive. The pre-amp may be located on the PCBA155, in the head assembly130, or in another location. In general, placing the pre-amp closer to the heads may have a greater effect on the read and write signals in terms of signal-to-noise ratio (SNR). In other embodiments, some of the components may be removed. For example, the filters141and/or the desiccant142may be left out. In various embodiments, the drive100includes controller integrated circuits (IC) (or more simply “a controller”) (e.g., in the form of one or more System on Chip (SoC)), along with other digital and/or analog control circuitry to control the operations of the drive. For example, the controller and other associated control circuitry may control the writing and reading of data to and from the magnetic media, including processing of read/write data signals and any servo-mechanical control of the media and head module. In the description below, various examples related to writing and reading and verifying of written data, as well as control of the tape head and media to achieve the same, may be controlled by the controller. As an example, the controller may be configured to execute firmware instructions for the various same gap verify embodiments described below. FIG.2is a schematic illustration of a tape head module assembly200and a tape204that are aligned. The tape head module assembly200comprises a tape head body202that is aligned with the tape204. The tape204moves past the tape head module assembly200during read and/or write operations. 
The tape head module assembly200has a media facing surface (MFS)214that faces the tape204. The tape head body202comprises a first servo head206A and a second servo head206B spaced therefrom. It is to be understood that while two servo heads have been shown, the disclosure is not limited to two servo heads. Rather, it is contemplated that more or less servo heads may be present. A plurality of data heads208A-208G is disposed between the first servo head206A and the second servo head206B. It is to be understood that while seven data heads have been shown, the disclosure is not limited to seven data heads. Rather, the number of data heads can be more or less than seven, depending on the requirements of the embodiment. For example there can be sixteen, thirty two, sixty four or more data heads utilized in the tape head body202. A plurality of pads220A-220N is electrically coupled to the data head body202. The plurality of pads220A-220N coupled to the data head body202is not limited to the number shown inFIG.2. Rather, more or less pads are contemplated. The pads220A-220N are used to connect the drive electronics to the servo heads206A,206B and to data read and writer elements. The pads220A-220N are used to establish the potential across the servo reader by means of a power supply (not shown) embedded in the tape head200. The tape204comprises a first servo track210A and a second servo track210B. The first servo track210A and the second servo track210B are spaced apart allowing the tape head200to monitor and control the average position of the data heads208A-208G relative to the data tracks212A-212G on the tape204. It is to be understood that while two servo tracks have been shown, the disclosure is not limited to two servo tracks. Rather, the number of servo tracks can be more or less than two, depending on the requirements of the embodiment. The tape204further comprises a plurality of data tracks212A-212G disposed between the first servo track210A and the second servo track210B. It is to be understood that while seven data tracks have been shown, the disclosure is not limited to the seven data tracks. Rather, the number of data tracks can be more or less than seven, depending on the requirements of the embodiment. In the embodiment ofFIG.2, the first servo head206A reads its lateral position information (e.g., alignment) over the first servo track210A. The second servo head206B is aligned with the second servo track210B. The combined information allows the servo actuator of the tape drive200to align the data heads208A-208G such that the center data track (e.g.,208D) is centered on tape204. The plurality of data heads208A-208G is thus individually aligned with the plurality of data tracks212A-212N for best case positioning. In this embodiment the first servo head206A, the second servo head206B, the first servo track210A, the second servo track210B, the plurality of data heads208A-208G, and the plurality of data tracks212A-212G are able to read and/or write the data accurately because all are aligned perpendicular to the direction of travel of the tape204. FIGS.3A-3Billustrate a media facing surface (MFS) view of same gap verify (SGV) head assemblies300,350, respectively, according to various embodiments. The SGV head assemblies300,350may be utilized within a tape drive comprising a controller, such as the TED or tape drive100ofFIG.1A. The SGV head assemblies300,350may be, or may be parts of, the tape head module assembly200ofFIG.2. 
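The way the two spaced servo readings can be combined into an average lateral position for the data head array, as described above forFIG.2, can be illustrated with a short sketch. The position-error-signal values and the proportional gain are assumptions for illustration only.

```python
# Sketch of combining readings from two spaced servo heads into an average lateral
# position error for the data heads, as described for FIG. 2. The position-error-
# signal (PES) values and the gain are illustrative assumptions.

def average_position_error(pes_servo_a: float, pes_servo_b: float) -> float:
    """Average the lateral position errors read over the two servo tracks."""
    return 0.5 * (pes_servo_a + pes_servo_b)

def actuator_correction(pes_avg: float, gain: float = 0.8) -> float:
    """Simple proportional correction fed to the servo actuator (illustrative)."""
    return -gain * pes_avg

pes = average_position_error(0.4, -0.1)   # data-head array offset of +0.15 (arbitrary units)
print(actuator_correction(pes))
```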
The SGV head assembly300comprises a closure302, one or more write transducers306disposed adjacent to the closure302, one or more read transducers308disposed adjacent to the one or more write transducers306, and a substrate304disposed adjacent to the one or more read transducers308. The SGV head assembly350comprises a closure302, one or more read transducers308disposed adjacent to the closure302, one or more write transducers306disposed adjacent to the one or more read transducers308, and a substrate304disposed adjacent to the one or more write transducers306. Each of the one or more write transducers306and the one or more read transducers308are disposed on the substrate304. The write transducer(s)306may be referred to as a writer(s)306or write head(s)306, and the read transducer(s)308may be referred to as a reader(s)308or read head(s)308. While only one writer306and one reader308pair is shown inFIGS.3A-3B, the SGV head assembly300may comprise a plurality of writer306and reader308pairs, which may be referred to as a head array. For example, in some embodiments, the SGV head assemblies300,350each comprises a head array of 32 writers306and 32 readers308, forming 32 writer306and reader308pairs, along with one or more servo readers (not shown). In other embodiments, there may be more pairs such as 64, 128 or other numbers. In each of the SGV head assemblies300,350, a writer306is spaced a distance310from a reader308of about 6 μm to about 20 μm, such as about 6 μm to about 15 μm. In embodiments comprising a plurality of writer306and a plurality of reader308pairs, each writer306is spaced the distance310from an adjacent paired reader308. The closure302is spaced a distance324from the substrate304of about 20 μm to about 60 μm. In some embodiments, a shield332is disposed between the writer306and the reader308of each pair to reduce cross-talk signals to the reader308from the writer306. The shield332may comprise permalloy and may be combined with Ir for wear resistance, for example. Each of the writers306comprises a first write pole P1316and a second write pole P2318. A notch320may be disposed on the P1316. The notch320is disposed adjacent to a write gap326, where the P1316is spaced from the P2318by a distance in the x-direction at least twice the length of the write gap326. Each of the readers308comprises a first shield S1312, a second shield S2314, and a magnetic sensor328disposed between the S1312and the S2314. The magnetic sensor328may be a tunnel magnetoresistance (TMR) sensor, for example. The write gap326and the magnetic sensor328are aligned or centered upon a center axis322in the y-direction such that the center axis322is aligned with a centerline of the write gap326and a centerline of the magnetic sensor328. In embodiments in which the SGV assembly300is actively tilted, such as for compensating TDS, the writer306and the reader308may be offset relative to the center axis. In some embodiments, the distance310is measured from the write gap326to an MgO layer (not shown) of the magnetic sensor328. In the SGV assembly300ofFIG.3A, when writing data to a tape or other media, the tape moves over the writer306in the writing direction330(e.g., in the x-direction). In the SGV assembly350ofFIG.3B, when writing data to a tape or other media, the tape moves over the writer306in the writing direction331(e.g., in the −x-direction). 
Due at least in part to the distance310between the write gap326and the magnetic sensor328of a writer306and reader308pair, the writer306is able to write to the media, and the reader308is able to read the data to verify the data was written correctly. As discussed above, the shield332may be used to further reduce magnetic cross-talk between the writer306and the reader308. Thus, the writer306is able to write data to a portion of the tape, and the paired reader308is able to read verify the newly written portion of the tape immediately. As such, the SGV head assembly300is able to write data to and read verify data from a tape concurrently. The SGV head assembly350, similar constructed, also has this immediate verify capability. The SGV head assemblies300,350are each able to concurrently write and read data due in part to the separation distance310between the write gap326and the magnetic sensor328of a writer306and reader308pair. The write gap326and magnetic sensor328are spaced far enough apart that the amplitude of signals in the reader308that arise from coupling of magnetic flux from the paired writer306is reduced or substantially less than the readback signal of the reader308itself. As used herein, the SGV head assemblies300,350being able to “concurrently” write and read data refers to the fact that both the writer306and the reader308are concurrently turned “on” or able to operate simultaneously with respect to various data written to a tape. However, it is to be noted that the writer306and the reader308are not “concurrently” operating on the same data at the same time. Rather, the writer306first writes data, and as the tape moves over the reader308, the reader308is then able to read verify the newly written data as the writer306concurrently writes different data to a different portion of the tape. Furthermore, it is to be noted that a controller (not shown) is configured to operate the SGV head assemblies300,350, and as such, the controller is configured to independently operate both the writer306and the reader308. Thus, while the writer306is described as writing data and the reader308is described as reading the data, the controller enables the writer306to write and enables the reader308to read. FIG.4illustrates a side view of a tape head400comprising two SGV head assemblies300a,300b, according to one embodiment. The tape head400can be referred to as a tape head module or tape head module assembly, and for simplicity it is referred to as tape head below. The tape head400comprises a first SGV head assembly300aand a second SGV head assembly300b. Each SGV head assembly300a,300bmay be the SGV head assembly300shown inFIG.3A. The tape head400may be the tape head module assembly200ofFIG.2. The first SGV head assembly300aand the second SGV head assembly300bmay be coupled together. In some embodiments, the read and write transducers308,306in the first and second SGV head assemblies300a,300bmay be aligned, to example, to operate in a legacy mode where one SGV head assembly (e.g., the first SGV head assembly300a) writes data and the other SGV head assembly (e.g., the second SGV head assembly300b) reads the data written by the first SGV head assembly300a. The tape head400illustrates a SGV tape head400where the tape444contacts both the MFS401aof the first SGV head assembly300aand the MFS401bof the second SGV head assembly300bsimultaneously in both directions the tape444moves. 
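As a back-of-the-envelope illustration of the same gap verify timing implied by the writer-to-reader spacing discussed above, the following sketch estimates how soon after writing a bit the paired reader can verify it. The tape speed is an assumed value; actual drive speeds vary.

```python
# Back-of-the-envelope sketch of the write-to-verify latency implied by the
# writer/reader spacing discussed above. The tape speed is an assumed value.

def verify_latency_us(gap_um: float, tape_speed_m_per_s: float) -> float:
    """Time for a written bit to travel from the write gap to the paired reader."""
    gap_m = gap_um * 1e-6
    return (gap_m / tape_speed_m_per_s) * 1e6   # microseconds

for gap in (6.0, 15.0, 20.0):                    # spacing range given above
    latency = verify_latency_us(gap, tape_speed_m_per_s=5.0)
    print(f"{gap:4.1f} um gap -> {latency:.2f} us at an assumed 5 m/s")
```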
In one embodiment, the first SGV head assembly300acomprises a first closure302a, one or more first writers306(shown inFIG.3A) disposed adjacent to the first closure302a, one or more first readers308(shown inFIG.3A) disposed adjacent to the one or more first writers306, and a first substrate304adisposed adjacent to the one or more first readers308. Similarly, in such an embodiment, the second SGV head assembly300bcomprises a second closure302b, one or more second writers306(shown inFIG.3A) disposed adjacent to the second closure302b, one or more second readers308(shown inFIG.3A) disposed adjacent to the one or more second writers306, and a second substrate304bdisposed adjacent to the one or more second readers308. The first SGV head assembly300ahas a first writing and reading direction442athat is opposite to a second writing and reading direction442bof the second SGV head assembly300b. In one embodiment, the first SGV head assembly300aand the second SGV head assembly300bare arranged in a face-to-face configuration or arrangement such that the first closure302aof the first SGV head assembly300ais disposed adjacent or proximate to the second closure302bof the second SGV head assembly300b. In other words, the first SGV head assembly300ais a mirror image of the second SGV head assembly300b, the second SGV head assembly300bis a right hand head assembly like that shown inFIG.3Aand the first SGV head assembly300ais a left hand head assembly. The first SGV head assembly300ais spaced a distance448from the second SGV head assembly300bof about 100 μm to about 1000 μm. In other embodiments, the first SGV head assembly300aand the second SGV head assembly300bare arranged in a substrate-to-substrate configuration or reversed configuration, where the first substrate304ais disposed adjacent to the second substrate304b, and tape444encounters or passes over either the first closure302aor the second closure302bprior to passing over either the first or second substrate304a,304b, respectively. In such a configuration where the first and second head assemblies300a,300bare arranged like shown inFIG.3A, the first head assembly300ahas the second writing and reading direction442bthat is opposite to the first writing and reading direction442aof the second SGV head assembly300b. Referring toFIG.4, which shows a SGV tape head400, a MFS401a,401bof each of the first and second SGV head assemblies300a,300bis configured to support a tape444or other magnetic media. The MFS401a,401bof each of the first and second SGV head assemblies300a,300bincludes surfaces of the writers306and the readers308of each SGV head assembly300a,300b. In some embodiments, the tape444may contact and wrap around a first substrate corner420aand a first closure corner422aof the first SGV head assembly300a, and contact and wrap around a second closure corner422band a second substrate corner420bof the second SGV head assembly300b, resulting in the tape444being bent or angled downwards from a 0° reference line426(e.g., parallel to the x-axis). In such a configuration, the tape444contacts both the MFS401aand the MFS401bsimultaneously in both directions the tape444moves. In other embodiments, the tape444may contact only one MFS (e.g., the first MFS401a) while flying over or being spaced from the other MFS (e.g., the second MFS401b). In such an embodiment, only one SGV head assembly300awrites and reads data while the other SGV head assembly300bdoes not write or read data. 
The first SGV head assembly300aand the second SGV head assembly300bare both able to independently write and read verify data. For example, a first writer306of the first SGV head assembly300ais able to write data to a portion of the tape444, and an aligned or paired first reader308of the first SGV head assembly300ais able to read verify the newly written portion of the tape444immediately. Similarly, a second writer306of the second SGV head assembly300bis able to write data to a portion of the tape444, and an aligned or paired second reader308of the second SGV head assembly300bis able to read verify the newly written portion of the tape444immediately. As such, the first SGV head assembly300ais able to write data to and read verify data from a tape independently from the second SGV head assembly300b, and the second SGV head assembly300bis able to write data to and read verify data from a tape independently from the first SGV head assembly300a. FIGS.5A-5Billustrate MFS views of the SGV head assemblies300,350ofFIGS.3A-3B, respectively, comprising a reader servo head540and a writer servo head542, according to various embodiments. Each servo head is a reader configured to read servo data on the tape media. A “reader servo head” denotes a servo head associated with a reader element, while a “writer servo head” denotes a servo head associated with a writer element.FIGS.5C-5Dillustrate MFS views of tape heads500,550, respectively, comprising the SGV head assemblies300,350ofFIGS.5A-5B, respectively, according to various embodiments.FIGS.5A-5Billustrate different configurations of the SGV head assemblies300,350. As such, while the configurations of the SGV head assemblies300,350vary inFIGS.5A-5B, the components of the SGV head assemblies300,350ofFIGS.5A-5Bare the same. The SGV head assemblies300,350may be utilized within a tape drive comprising a controller, such as the TED or tape drive100ofFIG.1A. The SGV head assemblies300,350may be, or may be parts of, the tape head module assembly200ofFIG.2. FIG.5Aillustrates the SGV head assembly300ofFIG.3A, where the reader308is disposed adjacent to the substrate304and the writer306is disposed adjacent to the closure302. The SGV head assembly300comprises a first servo head540, or a reader servo head540, which comprises a sensor541, such as a TMR sensor, disposed between a first shield552aand a second shield552b. The reader servo head540is coplanar with or aligned in the x-direction with the read head308, as shown by line548, and disposed adjacent to the substrate304. In some embodiments, the sensor541of the reader servo head540is aligned in the x-direction with the sensor328of the read head308. The SGV head assembly300further comprises a second servo head542, or a writer servo head542, which comprises a sensor543, such as a TMR sensor, disposed between a first shield554aand a second shield554b. The writer servo head542is disposed between the reader servo head540and the closure302in the y-direction, and the reader servo head540is disposed between the substrate304and the writer servo head542in the y-direction. The writer servo head542and the reader servo head540are each configured to read servo data from a tape. The writer servo head542is aligned with the reader servo head540in the y-direction, as shown by line546. In some embodiments, the sensor543of the writer servo head542is aligned with the sensor541of the reader servo head540in the y-direction. 
The writer servo head542and the reader servo head540are spaced apart a known distance or offset556in the y-direction of about 4 μm to about 20 μm. The writer servo head542is further coplanar with or aligned in the x-direction with the write head306, as shown by line544. In some embodiments, the sensor543of the writer servo head542is aligned in the x-direction with the writer306. The writer servo head542, or the sensor543, may be substantially aligned in the x-direction with the write gap326of the write head306. The sensor543of the writer servo head542may be offset from the write gap326of the write head306a distance of about 0 μm to about 5 μm in the x-direction, as discussed further below inFIGS.6A-6E. FIG.5Billustrates the SGV head assembly350ofFIG.3B, where the writer306is disposed adjacent to the substrate304and the reader308is disposed adjacent to the closure302. The SGV head assembly350comprises the first servo head540, or the reader servo head540, which comprises the sensor541disposed between the first shield552aand the second shield552b. The reader servo head540is coplanar with or aligned in the x-direction with the reader308, as shown by line548, and disposed adjacent to the closure302. In some embodiments, the sensor541of the reader servo head540is aligned in the x-direction with the sensor328of the read head308. The SGV head assembly350further comprises the second servo head542, or the writer servo head542, which comprises the sensor543disposed between the first shield554aand the second shield554b. The reader servo head540is disposed between the writer servo head542and the closure302in the y-direction, and the writer servo head542is disposed between the substrate304and the reader servo head540in the y-direction. The writer servo head542and the reader servo head540are each configured to read servo data from a tape. The writer servo head542is aligned with the reader servo head540in the y-direction, as shown by line546. In some embodiments, the sensor543of the writer servo head542is aligned with the sensor541of the reader servo head540in the y-direction. The writer servo head542and the reader servo head540are spaced apart a known distance or offset558in the y-direction of about 4 μm to about 20 μm. The writer servo head542is further coplanar with or aligned in the x-direction with the write head306, as shown by line544. In some embodiments, the sensor543of the writer servo head542is aligned in the x-direction with the writer306. The writer servo head542, or the sensor543, may be substantially aligned in the x-direction with the write gap326of the write head306. The sensor543of the writer servo head542may be offset from the write gap326of the write head306a distance of about 0 μm to about 5 μm in the x-direction, as discussed further below inFIGS.6A-6E. WhileFIGS.5A-5Beach shows one writer servo head542and one reader servo head540, the SGV head assemblies300,350ofFIGS.5A-5Bmay comprise additional writer servo heads542and reader servo heads540, as shown inFIGS.5C-5Dbelow. In bothFIGS.5A-5B, the reader servo head540may be used to accurately position a plurality of read heads308by reading servo data from a servo track of a tape and the writer servo head542may be used to accurately position a plurality of write heads306by reading servo data from a servo track of a tape. The reader servo head540and the writer servo head542may further be configured to operate concurrently or independently.
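Because the two servo heads are separated by a known offset of about 4 μm to about 20 μm, the same servo pattern on a moving tape reaches one of them slightly before the other. The short sketch below converts a known spatial offset into an expected timing difference and checks a measured delay against it; the assumption that the offset lies along the tape-motion direction, the 5 m/s tape speed, the tolerance, and all function names are illustrative and are not values or procedures stated in the disclosure.

```python
def expected_servo_delay(offset_um, tape_speed_mps):
    """Time by which the downstream servo head should detect the same servo
    pattern after the upstream one, given their known spatial offset."""
    return (offset_um * 1e-6) / tape_speed_mps


def offset_consistency_check(measured_delay_s, offset_um, tape_speed_mps,
                             tolerance_s=0.2e-6):
    """Flag a calibration problem if the measured delay between the two
    servo heads disagrees with the delay implied by the known offset."""
    expected = expected_servo_delay(offset_um, tape_speed_mps)
    return abs(measured_delay_s - expected) <= tolerance_s


# A 10 um offset at an assumed 5 m/s tape speed corresponds to a 2 us delay.
print(expected_servo_delay(10, 5.0))               # 2e-06 s
print(offset_consistency_check(2.1e-6, 10, 5.0))   # True, within tolerance
```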
In bothFIGS.5A-5B, the reader servo head540and the writer servo head542may be the same or the reader servo head540and the writer servo head542may be different. For example, the reader servo head540and the writer servo head542may vary in design or have different parameters, such as a width of the sensors541,543being different, a spacing between the first and second shields552a,552b,554a,554bbeing different, a resistance area (RA) of each sensor541,543being different, an electro-potential of the sensors541,543being different, etc. Furthermore, as noted above, the writer servo head542and the reader servo head540are spaced apart a known distance or offset556,558in the y-direction of about 4 μm to about 20 μm. In some embodiments, the offset556between the writer servo head542and the reader servo head540inFIG.5Amay be about equal to the offset558between the writer servo head542and the reader servo head540inFIG.5B. In other embodiments, the offset556between the writer servo head542and the reader servo head540inFIG.5Adiffers from the offset558between the writer servo head542and the reader servo head540inFIG.5B. For example, the offset556may be greater than the offset558. The offsets556,558are utilized to accurately calibrate the writer servo head542and/or the reader servo head540. While the write head306and the read head308are shown as being substantially aligned in both the x-direction and the y-direction inFIGS.5A-5B, the writer306and the reader308may become or be fabricated mis-aligned or tilted from one another in the x-direction and/or the y-direction. In such scenarios, conventional tape heads and/or head assemblies comprising only one servo head disposed adjacent to the reader308(i.e., the reader servo head540) may struggle to accurately position the writer306when writing data to a tape. Thus, by including the writer servo head542, the writer servo head542can be used to accurately position the writer306even if the write head306and the read head308are or become mis-aligned. Moreover, the writer servo head542and the reader servo head540may be used concurrently to ensure the write heads306and/or read heads308are positioned as accurately as possible. FIG.5Cillustrates a tape head500comprising the SGV head assemblies300ofFIGS.3A and5A, according to one embodiment. The tape head500ofFIG.5Ccomprises two SGV head assemblies300a,300b, where each SGV head assembly300a,300bcomprises the plurality of writers306and the plurality of readers308. The plurality of write heads306of the first SGV head assembly300aare disposed adjacent to the plurality of write heads306of the second SGV head assembly300b. The first SGV head assembly300aand the second SGV head assembly300bmay be arranged in a face-to-face configuration or in a substrate-to-substrate configuration (i.e., a reversed configuration), like described above inFIG.4. While six writers306and six readers308are shown, the tape head500may comprise any number of writers306and readers308, and as such, the number of writers306and readers308is not intended to be limiting. In the embodiment ofFIG.5C, each SGV head assembly300a,300bfurther comprises a first reader servo head540a, a second reader servo head540b, a first writer servo head542a, and a second writer servo head542b. The first or the second writer servo head542a,542bmay be the writer servo head542shown inFIG.5A, and the first or the second reader servo head540a,540bmay be the reader servo head540shown inFIG.5A.
However, in some embodiments, such as the embodiment shown inFIG.5A, each SGV head assembly300a,300bmay comprise only one writer servo head542, which may be either the first or the second writer servo head542a,542b, and/or only one reader servo head540, which may be either the first or the second reader servo head540a,540b. As such, each SGV head assembly300a,300bis not limited to having two reader servo heads540a,540band two writer servo heads542a,542b. In other embodiments, each SGV head assembly300a,300bmay comprise a different number of writer servo heads542aand reader servo heads540a. For example, each SGV head assembly300a,300bmay comprise one writer servo head542and two reader servo heads540a,540b, or vice versa. The first and second reader servo heads540a,540bare disposed at either end of the row560of the plurality of read heads308such that the plurality of read heads308are disposed between the first and second reader servo heads540a,540bin the x-direction. The first and second writer servo heads542a,542bare disposed at either end of the row562of the plurality of write heads306such that the plurality of write heads306are disposed between the first and second writer servo heads542a,542bin the x-direction. Each first reader servo head540ais aligned in the y-direction with each first writer servo head542a, and each second reader servo head540bis aligned in the y-direction with each second writer servo head542b. Moreover, each first reader servo head540ais offset the distance556from each first writer servo head542a, and each second reader servo head540bis offset the distance556from each second writer servo head542b. FIG.5Dillustrates a tape head550comprising the SGV head assemblies350ofFIGS.3B and5B. The tape head550ofFIG.5Dcomprises two SGV head assemblies350a,350b, where each SGV head assembly350a,350bcomprises the plurality of write heads306and the plurality of read heads308. The plurality of read heads308of the first SGV head assembly350aare disposed adjacent to the plurality of read heads308of the second SGV head assembly350b. The first SGV head assembly350aand the second SGV head assembly350bmay be arranged in a face-to-face configuration or in a substrate-to-substrate configuration (i.e., a reversed configuration), like described above inFIG.4. While six writers306and six readers308are shown, the tape head550may comprise any number of writers306and readers308, and as such, the number of writers306and readers308is not intended to be limiting. In the embodiment ofFIG.5D, each SGV head assembly350a,350bfurther comprises a first reader servo head540a, a second reader servo head540b, a first writer servo head542a, and a second writer servo head542b. The first or the second writer servo head542a,542bmay be the writer servo head542shown inFIG.5B, and the first or the second reader servo head540a,540bmay be the reader servo head540shown inFIG.5B. However, in some embodiments, such as the embodiment shown inFIG.5B, each SGV head assembly350a,350bmay comprise only one writer servo head542, which may be either the first or the second writer servo head542a,542b, and/or only one reader servo head540, which may be either the first or the second reader servo head540a,540b. As such, each SGV head assembly350a,350bis not limited to having two reader servo heads540a,540band two writer servo heads542a,542b. In other embodiments, each SGV head assembly350a,350bmay comprise a different number of writer servo heads542aand reader servo heads540a.
For example, each SGV head assembly350a,350bmay comprise one reader servo head540and two writer servo heads542a,542b, or vice versa. The first and second reader servo heads540a,540bare disposed at either end of the row560of the plurality of read heads308such that the plurality of read heads308are disposed between the first and second reader servo heads540a,540bin the x-direction. The first and second writer servo heads542a,542bare disposed at either end of the row562of the plurality of write heads306such that the plurality of write heads306are disposed between the first and second writer servo heads542a,542bin the x-direction. Each first reader servo head540ais aligned in the y-direction with each first writer servo head542a, and each second reader servo head540bis aligned in the y-direction with each second writer servo head542b. Moreover, each first reader servo head540ais offset the distance556from each first writer servo head542a, and each second reader servo head540bis offset the distance556from each second writer servo head542b. FIGS.6A-6Eillustrate SGV head assemblies600,625,650,675,690, showing various placement options for a writer servo head542, according to various embodiments. It is noted that the embodiments ofFIGS.6A-6Eare intended to be examples of placement options for a writer servo head542only, and are not intended to be limiting. Rather, the embodiments ofFIGS.6A-6Eare intended to illustrate the broad placement options for the writer servo head542that still enable the writer servo head542to function as desired. WhileFIGS.6A-6Eillustrate a plurality of read heads308a-308ndisposed adjacent to a substrate304, like described above inFIGS.3A and5A, the same placement options for the writer servo head542apply to embodiments where the plurality of write heads306a-306nare disposed adjacent to the substrate304, like shown and described above inFIGS.3B and5B. The plurality of write heads306a-306nmay be referred to as writers306or write heads306. Thus, each of the SGV head assemblies600,625,650,675,690ofFIGS.6A-6Emay each individually be the SGV head assembly300ofFIGS.3A,5A, and5C, or the SGV head assembly350ofFIGS.3B,5B, and5D. FIG.6Aillustrates a SGV head assembly600where the sensor543of the writer servo head542is substantially aligned with a first portion316aof the first write pole316of each writer306in the row562of writers306a-306n, as shown by line670. In other words, the sensor543of the writer servo head542is offset a distance680from the write gap326of each writer306in the −y-direction. The distance680is between about 0.1 μm to about 5 μm. In embodiments where the SGV head assembly600comprises a second writer servo head542, like shown inFIGS.5C-5D, the second writer servo head542is positioned the same as the writer servo head542shown inFIG.6A. FIG.6Billustrates a SGV head assembly625where the sensor543of the writer servo head542is substantially aligned with a second portion316bof the first write pole316of each writer306in the row562of writers306a-306n, as shown by line672. In other words, the sensor543of the writer servo head542is offset a distance682from the write gap326of each writer306in the −y-direction. The distance682is between about 0.1 μm to about 3 μm. In embodiments where the SGV head assembly625comprises a second writer servo head542, like shown inFIGS.5C-5D, the second writer servo head542is positioned the same as the writer servo head542shown inFIG.6B. 
FIG.6Cillustrates a SGV head assembly650where the sensor543of the writer servo head542is substantially aligned with the notch320of each writer306in the row562of writers306a-306n, as shown by line674. In other words, the sensor543of the writer servo head542is offset a distance684from the write gap326of each writer306in the −y-direction. The distance684is between about 0.01 μm to about 1 μm. In embodiments where the SGV head assembly650comprises a second writer servo head542, like shown inFIGS.5C-5D, the second writer servo head542is positioned the same as the writer servo head542shown inFIG.6C. FIG.6Dillustrates a SGV head assembly675where the sensor543of the writer servo head542is substantially aligned with the write gap326of each writer306in the row562of writers306a-306n, as shown by line676. As such, the sensor543of the writer servo head542may be offset from the write gap326of each writer306in the y-direction about 0 μm to about 0.5 μm. In embodiments where the SGV head assembly675comprises a second writer servo head542, like shown inFIGS.5C-5D, the second writer servo head542is positioned the same as the writer servo head542shown inFIG.6D. FIG.6Eillustrates a SGV head assembly690where the sensor543of the writer servo head542is disposed above the second write pole318of each writer306in the row562of writers306a-306n. In other words, the sensor543of the writer servo head542is offset a distance678from the write gap326of each writer306in the y-direction. The distance678is between about 0.1 μm to about 5 μm. In embodiments where the SGV head assembly690comprises a second writer servo head542, like shown inFIGS.5C-5D, the second writer servo head542is positioned the same as the writer servo head542shown inFIG.6E. Thus, the SGV head assemblies600,625,650,675,690ofFIGS.6A-6Eillustrate that the sensor543of the writer servo head542may be offset from the write gap326of each write head306in the y-direction or the −y-direction by about 0 μm to about 5 μm, and the writer servo head542is still configured to accurately position the write heads306a-306nwhen writing data to a tape. Therefore, by utilizing a tape head comprising one or more head assemblies, each head assembly comprising at least one writer servo head aligned with a row of write heads and at least one reader servo head aligned with a row of read heads, the tape head is able to accurately and independently position the write heads using the writer servo head(s) when writing data to a tape and position the read heads using the reader servo head(s) when reading data from the tape, even if the write heads and read heads are or become mis-aligned. Moreover, the writer servo head and the reader servo head may be used concurrently to ensure the write heads and/or read heads are positioned as accurately as possible. As such, data can be written to and read from a tape with more accuracy and precision. 
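The benefit described above, positioning the write heads from a servo reading taken at the writer row itself rather than inferring their position from the reader row, can be summarized with a small numerical sketch. The function below and its inputs (a nominal cross-track row offset, an unknown row misalignment, and an optional writer-servo reading, all in microns) are illustrative assumptions, not quantities defined in the disclosure.

```python
def writer_position_error(reader_servo_pos, nominal_offset,
                          row_misalignment=0.0, writer_servo_pos=None):
    """Cross-track positioning error of the writer row in a toy model.

    reader_servo_pos : position reported by the reader servo head
    nominal_offset   : designed writer-row offset from the reader row
    row_misalignment : unknown drift/tilt between the two rows
    writer_servo_pos : position reported by a writer servo head, if present
    """
    if writer_servo_pos is not None:
        # A writer servo head measures the writer row directly, so any
        # misalignment between the rows never enters the estimate.
        estimated_writer_pos = writer_servo_pos
    else:
        # With only a reader servo head, the writer position is inferred
        # from the reader servo plus the *nominal* offset, so unmodeled
        # misalignment shows up directly as write positioning error.
        estimated_writer_pos = reader_servo_pos + nominal_offset
    true_writer_pos = reader_servo_pos + nominal_offset + row_misalignment
    return true_writer_pos - estimated_writer_pos


# A 0.3 um row misalignment becomes 0.3 um of write-position error when only
# a reader servo head is available, but is measured out by a writer servo head.
print(writer_position_error(10.0, 5.0, row_misalignment=0.3))
print(writer_position_error(10.0, 5.0, row_misalignment=0.3,
                            writer_servo_pos=15.3))
```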
In one embodiment, a tape head comprises one or more head assemblies, each of the one or more head assemblies comprising: a plurality of write heads aligned in a first row, the first row extending in a first direction, a plurality of read heads aligned in a second row parallel to the first row, the second row extending in the first direction, at least one writer servo head disposed adjacent to the plurality of write heads, the at least one writer servo head being aligned with the first row in the first direction, and at least one reader servo head disposed adjacent to the plurality of read heads, the at least one reader servo head being aligned with the second row in the first direction. The at least one writer servo head is aligned with the at least one reader servo head in a second direction perpendicular to the first direction. The at least one writer servo head is spaced from the at least one reader servo head a distance of about 4 μm to about 20 μm in the second direction. The at least one writer servo head is different than the at least one reader servo head. The at least one writer servo head and the at least one reader servo head are configured to operate concurrently. A sensor of the at least one writer servo head is offset a distance of about 0 μm to about 5 μm from a write gap of a first write head of the plurality of write heads in a second direction perpendicular to the first direction. A tape drive comprises the tape head. The tape drive comprises a controller configured to: control a first head assembly of the one or more head assemblies to write data to a tape using the plurality of write heads and read verify the data using the plurality of read heads, use signals from the at least one writer servo head to accurately position the plurality of write heads to write to the tape, and use signals from the at least one reader servo head to accurately position the plurality of read heads to read from the tape. In another embodiment, a tape head comprises one or more head assemblies, each of the one or more head assemblies comprising: a plurality of write heads aligned in a first row, the first row extending in a first direction, wherein each of the plurality of write heads comprises a first write pole, a second write pole, and a write gap disposed between the first and second write poles, a plurality of read heads aligned in a second row parallel to the first row, the second row extending in the first direction, wherein each of the plurality of read heads comprises a first sensor, at least one writer servo head disposed adjacent to the plurality of write heads, the at least one writer servo head being aligned with the first row in the first direction, wherein the at least one writer servo head comprises a second sensor, and at least one reader servo head disposed adjacent to the plurality of read heads, the at least one reader servo head being aligned with the second row in the first direction, wherein the at least one reader servo head comprises a third sensor. The second sensor of the at least one writer servo head is substantially aligned with the write gap of a first write head of the plurality of write heads in the first direction. The third sensor of the at least one reader servo head is substantially aligned with the first sensor of a first read head of the plurality of read heads in the first direction. 
The second sensor of the at least one writer servo head is substantially aligned with the third sensor of the at least one reader servo head in a second direction perpendicular to the first direction. The second sensor of the at least one writer servo head is offset a distance of about 0 μm to about 5 μm from the write gap of a first write head of the plurality of write heads in a second direction perpendicular to the first direction. The at least one writer servo head is aligned with the at least one reader servo head in a second direction perpendicular to the first direction. The at least one writer servo head is spaced from the at least one reader servo head a distance of about 4 μm to about 20 μm in the second direction. The at least one writer servo head is two writer servo heads, the plurality of write heads being disposed between the two writer servo heads. A tape drive comprises the tape head. The tape drive comprises a controller configured to: operate the at least one writer servo head and the at least one reader servo head concurrently, use signals from the at least one writer servo head to position the plurality of write heads to write to a tape, and use signals from the at least one reader servo head to position the plurality of read heads to read from the tape. In yet another embodiment, a tape drive comprises a first head assembly comprising: a plurality of first write heads aligned in a first row, the first row extending in a first direction, a plurality of first read heads aligned in a second row parallel to the first row, the second row extending in the first direction, at least one first writer servo head disposed adjacent to the plurality of first write heads, the at least one first writer servo head being aligned with the first row in the first direction, and at least one first reader servo head disposed adjacent to the plurality of first read heads, the at least one first reader servo head being aligned with the second row in the first direction and aligned with the at least one first writer servo head in a second direction perpendicular to the first direction, wherein the at least one first writer servo head and the at least one first reader servo head are configured to operate concurrently. The tape drive further comprises a second head assembly comprising: a plurality of second write heads aligned in a third row, the third row extending in the first direction, a plurality of second read heads aligned in a fourth row parallel to the third row, the fourth row extending in the first direction, at least one second writer servo head disposed adjacent to the plurality of second write heads, the at least one second writer servo head being aligned with the third row in the first direction, and at least one second reader servo head disposed adjacent to the plurality of second read heads, the at least one second reader servo head being aligned with the fourth row in the first direction and aligned with the at least one second writer servo head in the second direction. The tape drive further comprises a controller configured to operate the at least one second writer servo head and the at least one second reader servo head concurrently.
The controller is further configured to control the first head assembly to write first data to a tape using the plurality of first write heads and read verify the first data using the plurality of first read heads, and control the second head assembly to write second data to the tape using the plurality of second write heads and read verify the second data using the plurality of second read heads. The controller is further configured to use signals from the at least one first writer servo head to accurately position the plurality of first write heads to write to a tape, and use signals from the at least one second writer servo head to accurately position the plurality of second write heads to write to the tape. The controller is further configured to use signals from the at least one first reader servo head to accurately position the plurality of first read heads to read from a tape, and use signals from the at least one second reader servo head to accurately position the plurality of second read heads to read from the tape. A first sensor of the at least one first writer servo head is offset a first distance of about 0 μm to about 5 μm from a write gap of a first write head of the plurality of first write heads in the second direction. A second sensor of the at least one second writer servo head is offset a second distance of about 0 μm to about 5 μm from a write gap of a first write head of the plurality of second write heads in the second direction. While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
52,644
11862196
DETAILED DESCRIPTION System Overview FIG.1is a schematic view of an exemplary hard disk drive (HDD)100, according to one embodiment. For clarity, HDD100is illustrated without a top cover. HDD100is a multiple actuator drive, and includes one or more storage disks110, each including one or two recording surfaces on which a plurality of concentric data storage tracks are disposed. InFIG.1, only the top recording surface112A of storage disk110is visible. The one or more storage disks110are coupled to and rotated by a spindle motor114that is mounted on a base plate116. Two or more actuator arm assemblies120A and120B are also mounted on base plate116, and each of the assemblies includes one or more arm-mounted sliders121with one or more magnetic read/write heads127that read data from and write data to the data storage tracks of an associated recording surface, such as recording surface112A. One or more actuator arms124are included in actuator arm assembly120A, and one or more actuator arms124are included in actuator arm assembly120B. Actuator arm assembly120A and the one or more actuator arms124included therein are rotated together about a bearing assembly126by a voice coil motor (VCM)128A independently from actuator arm assembly120B. Likewise, actuator arm assembly120B and the one or more actuator arms124included therein are rotated together about bearing assembly126by a VCM128B independently from actuator arm assembly120A. Thus, each of VCMs128A and128B moves a group of the sliders121and read/write heads127radially relative to a respective recording surface of a storage disk110included in HDD100, thereby providing radial positioning of a corresponding read/write head127over a desired concentric data storage track on a recording surface, for example on recording surface112A. Spindle motor114, the read/write heads127, and VCMs128A and128B are coupled to electronic circuits130, which are mounted on a printed circuit board132. Electronic circuits130include read channels137A and137B, a microprocessor-based controller133, random-access memory (RAM)134(which may be a dynamic RAM and is used as one or more data buffers) and/or a flash memory device135, and, in some embodiments, a flash manager device136. In some embodiments, read channels137A and137B and microprocessor-based controller133are included in a single chip, such as a system-on-chip (SoC)131. HDD100further includes a motor-driver chip125that accepts commands from microprocessor-based controller133and drives spindle motor114, VCMs128A and128B, and microactuators228and/or229(not shown inFIG.1). In the embodiment illustrated inFIG.1, HDD100is shown with a single motor-driver chip125that drives spindle motor114and VCMs128A and128B. In other embodiments, HDD100includes multiple motor-driver chips. In one such embodiment, for example, one motor-driver chip drives spindle motor114, one actuator (e.g., VCM128A), and one microactuator (e.g., microactuator228), and the other motor-driver chip drives the other actuator (e.g., VCM128B) and another microactuator (e.g., microactuator229). Further, in other embodiments, any other partition of the jobs of spindle motor control, actuator control, and microactuator control can be implemented. Via a preamplifier (not shown), read/write channel137A communicates with read/write heads127of actuator arm assembly120A and read/write channel137B communicates with read/write heads127of actuator arm assembly120B.
The preamplifiers are mounted on a flex-cable, which is mounted on either base plate116, one of actuators120A or120B, or both. When data are transferred to or from a particular recording surface of HDD100, one of the actuator arm assemblies120A or120B moves in an arc between the inner diameter (ID) and the outer diameter (OD) of the storage disk110. The actuator arm assembly accelerates in one angular direction when current is passed in one direction through the voice coil of the corresponding VCM and accelerates in an opposite direction when the current is reversed, thereby allowing coarse control of the radial position of the actuator arm assembly and the attached read/write head with respect to the particular storage disk110. Fine radial positioning of each read/write head127is accomplished with a respective microactuator129. The microactuator129for each read/write head127is mechanically coupled to the actuator arm124that corresponds to the read/write head127. Each microactuator129typically includes one or more piezoelectric elements and is configured to move a corresponding read/write head127radially a small distance, for example on the order of a few tens or hundreds of nanometers. When employed together, microactuators129and voice coil motors128A and128B are sometimes referred to as dual-stage actuators, where voice coil motor128A or128B is the prime mover and each microactuator129is a second-stage actuator. Dual-stage actuators enable the servo system of HDD100to attain more accurate tracking control. In some embodiments, each microactuator129is mounted on a respective flexure arm122, at a gimbal between the respective flexure arm122and the corresponding slider121. In such embodiments, each microactuator129rotates the corresponding slider121, causing radial motion (relative to corresponding recording surface) of the corresponding read/write head127. Alternatively or additionally, in some embodiments, each microactuator129is mounted on an end of an actuator arm124or on the flexure arm, itself, and moves the flexure arm122through a relatively large arc, for example on the order of a hundred track widths. In yet other embodiments, each microactuator129includes a first piezoelectric or other movable element at the gimbal between the respective flexure arm122and the corresponding slider121and a second piezoelectric or other movable element at the end of the actuator arm124or on the flexure arm. In such embodiments, each read/write head127is provided with three-stage actuation in the radial direction. In the embodiment illustrated inFIG.1, only one slider121, one flexure arm122, one actuator arm124, and one read/write head127are shown for actuator arm assembly120A and only one slider121, one flexure arm122, one actuator arm124, and one read/write head127are shown for actuator arm assembly120B. In other embodiments, each of actuator arm assemblies120A and120B can include a plurality of actuator arms, sliders, flexure arms, and read/write heads. Further, in some embodiments, HDD100can include more than two actuator arm assemblies, each rotated about bearing assembly126by a respective VCM independently from each other. In other embodiments, additional actuators may rotate about other bearing assemblies. FIG.2schematically illustrates a partial side-view of multiple storage disks110A-110D and two independent actuator arm assemblies120A and120B of HDD100, according to an embodiment. 
The recording surfaces of multiple storage disks110A and110B are each accessed by one of the read/write heads included in the independent actuator arm assembly120A (e.g., read/write heads227A,227B,227C, and227D), and the recording surfaces of multiple storage disks110C and110D are each accessed by the read/write heads included in the independent actuator arm assembly120B (e.g., read/write heads227E,227F,227G, and227H). Thus, in the embodiment illustrated inFIG.2, HDD100is configured with multiple storage disks110A-110D having a total of eight recording surfaces112A-112H and multiple read/write heads227A-227H, each corresponding to one of these recording surfaces. Specifically, in the embodiment illustrated inFIG.2, HDD100includes: a storage disk110A with recording surfaces112A and112B; a storage disk110B with recording surfaces112C and112D; a storage disk110C with recording surfaces112E and112F; and a storage disk110D with recording surfaces112G and112H. Thus, read/write head227A reads data from and writes data to recording surface112A, read/write head227B reads data from and writes data to corresponding recording surface112B, and so on. Read/write heads227A-227H are disposed on sliders221A-221H, respectively, and sliders221A-221H (referred to collectively herein as sliders221) are respectively coupled to actuator arms124A-124F via flexure arms222A-222H (referred to collectively herein as flexure arms222) as shown. In some embodiments, each of sliders221A-221H is mounted on a corresponding one of flexure arms222via a microactuator229A-229H (referred to collectively herein as microactuators229), such as a micro-actuator (MA) second stage that includes two lead zirconate titanate piezoelectric actuators attached to a baseplate of the corresponding flexure arm222. Alternatively, in some embodiments, each of sliders221A-221H is mounted directly on a corresponding one of flexure arms222. In the embodiment illustrated inFIG.2, flexure arm222A is coupled to an actuator arm124A, flexure arms222B and222C are coupled to an actuator arm124B, flexure arm222D is coupled to an actuator arm124C, flexure arm222E is coupled to an actuator arm124D, flexure arms222F and222G are coupled to an actuator arm124E, and flexure arm222H is coupled to an actuator arm124F. Actuator arms124A-124F are referred to collectively herein as actuator arms124. In the embodiment illustrated inFIG.2, each of microactuators228A-228H (referred to collectively herein as microactuators228) is disposed at a base of flexure arms222A-222H, respectively, i.e., at an end of one of actuator arms124. Alternatively or additionally, in some embodiments, microactuators229A-229H can be disposed proximate sliders221A-221H, respectively, i.e., at a tip of flexure arms222A-222H, respectively. In embodiments in which microactuators229are disposed proximate sliders221, each of microactuators229can include a gimbal microactuator. In either case, each of microactuators229and/or228compensates for perturbations in the radial position of sliders221, so that read/write heads227A-227H follow the proper data track on recording surfaces112. Thus, microactuators229can compensate for vibrations of the disk, inertial events such as impacts to HDD100, and irregularities in recording surfaces112or in the written servo-pattern. Actuator arms124A-124C are included in actuator arm assembly120A, and actuator arms124D-124F are included in actuator arm assembly120B. 
In an embodiment of the invention, actuator arm assemblies120A and120B are independently controlled and both rotate about bearing assembly126(which includes a same shaft axis226). In positioning one of read/write heads227A-227H over a corresponding recording surface (i.e., one of recording surfaces112A-112H), the servo system determines an appropriate current to drive through the voice coil of the appropriate voice coil motor (i.e., either VCM128A or128B), and drives said current using a current driver and associated circuitry, e.g., included in motor-driver chip125. Typically, the appropriate current is determined based in part on a position feedback signal of the read/write head127, i.e., a position error signal (PES). The PES is typically generated by using servo patterns included in the servo wedges on the recording surface as a reference. One embodiment of such a recording surface112is illustrated inFIG.3. FIG.3illustrates a recording surface312of a storage disk310with servo wedges300and concentric data storage tracks320formed thereon, according to an embodiment. Recording surface312can be any of recording surfaces112A-112H inFIG.2. Servo wedges300may be written on recording surface312by either a media writer, or by HDD100itself via a self-servo-write (SSW) process. Servo wedges300are typically radially aligned. In practice, servo wedges300may be somewhat curved. For example, servo wedges300may be configured in a spiral pattern that mirrors the path that would be followed by a corresponding read/write head127(shown inFIG.1) if the read/write head127were to be moved across the stroke of one of actuator arm assemblies120A or120B while storage disk310is not spinning. Such a curved pattern advantageously results in the wedge-to-wedge timing being independent of the radial position of the read/write head127. For simplicity, servo wedges300are depicted as substantially straight lines inFIG.3. Each servo wedge300includes a plurality of servo sectors350containing servo information that defines the radial position and track pitch, i.e., spacing, of data storage tracks320. Data storage tracks320for storing data are located in data sectors325, and are positionally defined by the servo information written in servo sectors350. The region between two servo sectors may contain more than, equal to, or less than one data sector, including the possibility of fractional data-sectors. Each servo sector350encodes a reference signal that is read by the read/write head127as the read/write head127passes over the servo sector. Thus, during read and write operations, the read/write head127can be positioned above a desired data storage track320. Typically, the actual number of data storage tracks320and servo wedges300included on recording surface312is considerably larger than that illustrated inFIG.3. For example, recording surface312may include hundreds of thousands of concentric data storage tracks320and hundreds of servo wedges300. Timing Offsets in Multi-Actuator HDDs As noted previously, when one actuator of a multiple-actuator HDD (the so-called “aggressor actuator”) is seeking to a targeted data storage track, cross-actuator coupling can generate vibrations which can significantly affect the positioning accuracy of the other actuator (the so-called “victim actuator”). 
In particular, the high accelerations and changes in acceleration of the aggressor actuator are likely to affect the positioning accuracy of the victim actuator when the victim actuator is attempting to closely follow a specific data track, for example during a read or write operation. To reduce the effects of these inter-mechanical interactions between an aggressor and a victim actuator, victim disturbance feedforward control schemes have been developed, in which a feedforward control signal is asserted by a victim actuator. The feedforward control signal is determined as a function of the VCM commands asserted by the aggressor actuator, and is intended to reduce or compensate for the effect on the victim actuator of the VCM commands asserted by the aggressor actuator. However, the effectiveness of such victim disturbance feedforward control signals can be adversely affected by differences in timing (referred to herein as “timing offsets”) between a victim head and an aggressor head. Such timing offsets are described below in conjunction withFIG.4A-4C,FIG.5, andFIG.6. FIGS.4A-4Cschematically illustrate various timing offsets that can occur between two different read/write heads of HDD100.FIG.4Aillustrates a timing offset413caused by a circumferential displacement between the two different read/write heads;FIG.4Billustrates a timing offset433caused by a circumferential displacement between servo wedges associated with the two different read/write heads; andFIG.4Cillustrates a timing offset453caused by a combination of a circumferential displacement between the two different read/write heads and a circumferential displacement between servo wedges associated with the two different read/write heads. For ease of description, inFIGS.4A-4C, timing offset413, timing offset433, and timing offset453are each depicted as a physical displacement in the circumferential (down-track) direction. One of skill in the art will readily understand that, in practice, timing offset413corresponds to the duration of time required for a read/write head to move across the physical displacement depicted as timing offset413, timing offset433corresponds to the duration of time required for a read/write head to move across the physical displacement depicted as timing offset433, and timing offset453corresponds to the duration of time required for a read/write head to move across the physical displacement depicted as timing offset453. Further, timing offset413, timing offset433, and timing offset453are depicted between read/write heads that are each associated with a different actuator arm assembly of an HDD. However, such timing offsets can also occur between read/write heads that are both associated with the same actuator arm assembly. InFIG.4A, read/write head227A moves circumferentially relative to a portion405of recording surface112A along a data track401while read/write head227E moves circumferentially relative to a portion406of recording surface112E along a data track402. As shown, read/write head227A is offset from read/write head227E in the circumferential (down-track) direction and, as a result, crosses a rising edge411A of a servo wedge411on recording surface112A substantially sooner than read/write head227E crosses a rising edge412A of a servo wedge412on recording surface112E. That is, there is a timing offset413between the time that read/write head227A crosses rising edge411A and the time that read/write head227E crosses rising edge412A. 
It is noted that inFIG.4A, timing offset413occurs even though rising edge411A and rising edge412A are closely aligned circumferentially. In some instances, timing offset413can be caused at the time of manufacture by misalignment of read/write head227A and read/write head227E in the circumferential direction, differential thermal expansion during operation between the actuator arm coupled to read/write head227A and the actuator arm coupled to read/write head227E, shifts in the relative position of read/write head227A and read/write head227E due to shocks experienced during the lifetime of HDD100, and the like. In such instances, the timing offset413associated with a particular servo wedge of recording surface112A and recording surface112E can often be substantially the same for each servo wedge of recording surface112A and recording surface112E. In some instances, the magnitude of timing offset413can be a significant fraction of the time interval required for a read/write head to move across the circumferential separation between adjacent servo wedges, for example 5%, 10%, 20%, or more. InFIG.4B, read/write head227B moves circumferentially relative to a portion425of recording surface112B along a data track421while read/write head227F moves circumferentially relative to a portion426of recording surface112F along a data track422. As shown, read/write head227B and read/write head227F are substantially aligned in the circumferential direction. However, a servo wedge431on recording surface112B is circumferentially offset from a corresponding servo wedge432on recording surface112F. As a result, read/write head227B crosses a rising edge431A of a servo wedge431before read/write head227F crosses a rising edge432A of a servo wedge432. That is, there is a timing offset433between the time that read/write head227B crosses rising edge431A and the time that read/write head227F crosses rising edge432A. In some instances, timing offset433can be caused by servo wedges that are imprecisely written on recording surface112B and/or recording surface112F at the time of manufacture. In such instances, the timing offset433associated with a particular servo wedge of recording surface112B and recording surface112F can often be substantially the same for each servo wedge of recording surface112B and recording surface112F. Alternatively or additionally, in some instances, timing offset433can be caused by a shift in relative position between recording surface112B and recording surface112F, for example due to an impact or other physical shock experienced during the lifetime of HDD100. In such instances, the timing offset433associated with a particular servo wedge of recording surface112B and recording surface112F generally varies for each servo wedge of recording surface112A and recording surface112E, for example sinusoidally. In some instances, the magnitude of timing offset433can be a significant fraction of the time interval required for a read/write head to move across the circumferential separation between adjacent servo wedges, for example 5%, 10%, 20%, or more. InFIG.4C, read/write head227C moves circumferentially relative to a portion445of recording surface112C along a data track441while read/write head227G moves circumferentially relative to a portion446of recording surface112G along a data track442. As shown, a servo wedge451on recording surface112C is circumferentially offset from a corresponding servo wedge452on recording surface112G by a circumferential displacement455. 
Further, read/write head227C and read/write head227G are also circumferentially offset from each other. Therefore, read/write head227G trails read/write head227C circumferentially by a circumferential displacement454. As a result, read/write head227C crosses a rising edge451A of servo wedge451before read/write head227G crosses a rising edge452A of servo wedge452. That is, there is a timing offset453that corresponds to the time interval required for read/write head227G to move across a circumferential distance equivalent to the sum of circumferential displacement454and circumferential displacement455. Timing offset453can have a magnitude similar to or greater than timing offset413, timing offset433, or a combination of both. Ideally, read/write heads and servo wedges of an HDD are aligned circumferentially so that the above-described timing offset413, timing offset433, and timing offset453are of insignificant magnitude. In practice, such timing offsets can be a significant fraction of the time interval required for a read/write head to move across the circumferential separation between adjacent servo wedges, as described below in conjunction withFIG.5. FIG.5is a plot of timing offset magnitude between read/write heads227A-227D of actuator arm assembly120A, according to some embodiments. More specifically,FIG.5shows timing offset magnitudes (referred to collectively herein as "timing offsets500") for read/write heads227A-227D relative to read/write head227A. As shown, read/write head227A has a timing offset501relative to read/write head227A, read/write head227B has a timing offset502relative to read/write head227A, read/write head227C has a timing offset503relative to read/write head227A, and read/write head227D has a timing offset504relative to read/write head227A. InFIG.5, the magnitudes of timing offsets500are depicted in terms of "wedges," where one "wedge" refers to a time interval Twedgethat is required for a read/write head to move across the circumferential separation between the starts of two adjacent servo wedges. For example, timing offset502of read/write head227B has a magnitude of 0.15 servo wedges. As such, timing offset502indicates that read/write head227B lags behind read/write head227A by 0.15 servo wedges. That is, if read/write head227A crosses a reference point (such as a leading edge) associated with a first servo wedge on the recording surface that corresponds to read/write head227A at a time T0, then read/write head227B crosses a similar reference point on the recording surface that corresponds to read/write head227B at a time T1, where time T1=T0+0.15(Twedge). Similarly, timing offset503indicates that read/write head227C lags behind read/write head227A by 0.05 servo wedges, and therefore crosses a similar reference point on the recording surface that corresponds to read/write head227C at a time T2, where time T2=T0+0.05(Twedge). Further, timing offset504indicates that read/write head227D leads read/write head227A by 0.10 servo wedges, and therefore crosses a similar reference point on the recording surface that corresponds to read/write head227D at a time T3, where time T3=T0−0.10(Twedge). As shown, read/write head227A has no timing offset with itself, and therefore has a timing offset of 0. In some instances, timing offsets500can vary for each servo wedge that is crossed by read/write heads227A-227D.
In such instances, when recording surfaces of HDD100include K servo wedges, there are K different timing offsets501for read/write head227A, K different timing offsets502for read/write head227B, K different timing offsets503for read/write head227C, and K different timing offsets504of read/write head227D. Thus, in such instances,FIG.5shows timing differences500for a single servo wedge. Timing offsets500ofFIG.5are associated with the various read/write heads of a single actuator of HDD100, such as VCM128A. In a multi-actuator HDD, there are generally timing offsets between each of the multiple read/write heads of one actuator and the multiple read/write heads of the other actuator. Such timing offsets are described below in conjunction withFIG.6. FIG.6is a plot of timing offset magnitude between the read/write heads of a first actuator arm assembly of a multi-actuator drive and one read/write head of a second actuator arm assembly of the multi-actuator drive, according to some embodiments.FIG.6includes initial timing offsets600, which are present between the read/write heads of the first actuator arm assembly and one read/write head of the second actuator arm assembly at the time of manufacture of HDD100. In the embodiment illustrated inFIG.6, initial timing offsets600for read/write heads227A-227D of actuator arm assembly120A (shown inFIG.2) relative to read/write head227E of actuator arm assembly120B (shown inFIG.2) are shown. Initial timing offsets600include an initial timing offset601of read/write head227A relative to read/write head227E, an initial timing offset602of read/write head227B relative to read/write head227E, an initial timing offset603of read/write head227C relative to read/write head227E, and an initial timing offset604of read/write head227D relative to read/write head227E. Similar to timing offsets500ofFIG.5, initial timing offsets600indicate that each of read/write heads227A-227D can either lag behind or lead read/write head227E. Further, in some instances, initial timing offsets600can vary for each servo wedge that is crossed by read/write heads227A-227E. In such instances, when recording surfaces of HDD100include K servo wedges, there are K different initial timing offsets601for read/write head227A, K different initial timing offsets602for read/write head227B, K different initial timing offsets603for read/write head227C, and K different initial timing offsets604of read/write head227D. Thus, in such instances,FIG.6shows initial timing differences600for a single servo wedge. FIG.6also shows subsequent timing offsets650, which are timing offsets for read/write heads227A-227D relative to read/write head227E at some time during the lifetime of HDD100. Subsequent timing offsets650include a subsequent timing offset651of read/write head227A relative to read/write head227E, a subsequent timing offset652of read/write head227B relative to read/write head227E, a subsequent timing offset653of read/write head227C relative to read/write head227E, and a subsequent timing offset654of read/write head227D relative to read/write head227E. Subsequent timing offsets650may develop or be present in HDD100after a specific period of time, a specific quantity of operating time by HDD100, and/or after a physical shock is experienced by HDD100. 
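The wedge-fraction bookkeeping described above can be expressed compactly. In the sketch below, the 0.15, 0.05, and −0.10 wedge offsets are the example values quoted above; the wedge period, the T0 reference, and the later "subsequent" offsets used to show per-head drift are assumed values chosen only for illustration.

```python
T_WEDGE = 100e-6   # assumed servo-wedge period (time between wedge starts), seconds
T0 = 0.0           # time at which read/write head 227A crosses the reference wedge edge

# Offsets relative to head 227A, in fractions of a wedge (positive = lags 227A).
initial_offsets = {"227A": 0.00, "227B": 0.15, "227C": 0.05, "227D": -0.10}

# Crossing times implied by the offsets, e.g. T1 = T0 + 0.15 * Twedge for 227B.
crossing_times = {head: T0 + frac * T_WEDGE
                  for head, frac in initial_offsets.items()}

# A later (assumed) measurement gives drifted offsets; the per-head difference
# from the initial values is the drift discussed in the following paragraphs.
subsequent_offsets = {"227A": 0.02, "227B": 0.11, "227C": 0.07, "227D": -0.04}
drift = {head: subsequent_offsets[head] - initial_offsets[head]
         for head in initial_offsets}

for head in initial_offsets:
    print(f"{head}: crosses at {crossing_times[head] * 1e6:+.1f} us, "
          f"drift = {drift[head]:+.2f} wedges")
```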
Subsequent timing offsets650illustrate that the magnitude of timing offsets between the read/write heads of one actuator arm assembly of a multi-actuator drive and one read/write head of another actuator arm assembly of the multi-actuator drive can vary or drift over time from initial timing offsets600. That is, a shift can occur in the timing relationship between one or more read/write heads of one actuator arm assembly and one or more read/write heads of the other actuator arm assembly of the multi-actuator drive. Such shifts are referred to herein as “timing shifts,” and, in some embodiments, can significantly affect the accuracy of certain victim disturbance feedforward control schemes. For example, in the embodiment illustrated inFIG.6, there is a timing shift661between initial timing offset601and timing offset651, a timing shift662between initial timing offset602and timing offset652, a timing shift663between initial timing offset603and timing offset653, and a timing shift664between initial timing offset604and timing offset654(referred to collectively herein as “timing shifts660”). Generally, a feedforward transfer function that generates a feedforward control signal for a victim actuator assumes that an initial timing offset600exists between an aggressor head and a victim head, even though that timing offset may have since been changed by a timing shift660. As a result, the presence of timing shifts660can reduce the accuracy of the feedforward control signal that is generated based on such a transfer function. In some instances, a particular timing shift660can be caused by a change in the timing of a single read/write head. For example, in some instances, timing shift661occurs due to a circumferential displacement (or other timing change) of either read/write head227A or read/write head227E. It is noted that when a circumferential displacement (or other timing change) of read/write head227E occurs, such a timing change affects all of subsequent timing offsets650. Additionally or alternatively, in some instances, a particular timing shift660can be caused by a change in the timing of both read/write heads associated with the timing shift. For example, in such an instance, timing shift661occurs due to a circumferential displacement (or other timing change) of read/write head227A and of read/write head227E. Further, similar to timing offsets500and initial timing offsets600, timing shifts660can vary for each servo wedge that is crossed by read/write heads227A-227D. Feedforward Control Signal Based on Aggressor Operation In some embodiments, to reduce or compensate for the effect of the VCM commands asserted by the aggressor actuator on the position of a victim head, a feedforward signal for a victim actuator is determined as a function of the VCM commands asserted by the aggressor actuator. In such embodiments, the feedforward control signal may be generated based on a transfer function that models a feedforward correction signal for the victim head as a function of a control signal supplied to the aggressor actuator. For example, determination of the transfer function for a particular victim head can include a procedure that involves adding known disturbances to a control signal for an actuator that is currently designated as the aggressor actuator and measuring the radial position of the victim head in response to each of the added disturbances. 
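Because initial timing offsets600and subsequent timing offsets650can differ for each head and for each of the K servo wedges, a timing shift660described earlier in this passage is simply the per-head, per-wedge difference between the two measurements. The sketch below is only illustrative; the offset tables are hypothetical placeholder values, not calibration data of HDD100.

    # Illustrative sketch: compute timing shifts (cf. timing shifts 660) as the
    # per-head, per-wedge difference between subsequent and initial timing offsets.
    # The tables below are hypothetical placeholder values with K = 4 servo wedges.

    initial_offsets = {            # cf. initial timing offsets 601-604, in wedges
        "227A": [0.20, 0.21, 0.19, 0.20],
        "227B": [0.35, 0.34, 0.36, 0.35],
    }
    subsequent_offsets = {         # cf. subsequent timing offsets 651-654, in wedges
        "227A": [0.23, 0.24, 0.22, 0.23],
        "227B": [0.33, 0.32, 0.34, 0.33],
    }

    def timing_shifts(initial, subsequent):
        """Per-head, per-wedge shift between two offset measurements."""
        return {
            head: [later - early for early, later in zip(initial[head], subsequent[head])]
            for head in initial
        }

    print(timing_shifts(initial_offsets, subsequent_offsets))
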
It is noted that the mechanical disturbances that affect victim head position are caused by the accelerations and decelerations of the aggressor actuator itself, and therefore are independent of which particular aggressor head is the active head when determining the above-described transfer function. Thus, in some embodiments, the same aggressor head can be used when determining the transfer function for each different victim head, and is referred to herein as a “base aggressor head” for purposes of description. In such embodiments, the fractional-wedge timing offsets between each particular victim head and the base aggressor head are generally different, and can have a magnitude that is a significant fraction of a wedge, as illustrated by timing offsets600. However, in such embodiments, the effect on the response in position of a particular victim head by the timing offset between that particular victim head and the base aggressor head is incorporated into the transfer function of that particular victim head. Consequently, in operation, when the base aggressor head is the active head, the timing offset between that particular victim head and the aggressor head is inherently accounted for, as long as that timing offset remains substantially the same and does not drift to another value. By contrast, when the base aggressor head is not the active head, the timing offset between that particular victim head and the aggressor head can affect the accuracy of the feedforward correction signal for that particular victim head. This is because, as illustrated by timing offsets500inFIG.5, each read/write head associated with a particular actuator can have a timing offset from each of the other read/write heads that is a significant fraction of a servo wedge in magnitude. As a result, the transfer function on which the feedforward correction signal is based is determined via measurements of victim head response that assume a different timing offset between the victim head and the aggressor head than is actually present. Thus, in such an instance, the effectiveness of the victim disturbance feedforward control signal can be degraded. According to various embodiments, measurements of fractional-wedge timing-offsets between an aggressor head and a victim head are used to adjust the aggressor actuator commands that are inputted to a victim disturbance feedforward signal generator. In some embodiments, when a timing offset exists between the aggressor head and the victim head that is equivalent to a specific fraction of the timing difference between the starts of adjacent servo wedges, values of the aggressor actuator commands that are inputted to the victim disturbance feedforward signal generator are modified based on the specific fraction. Additionally or alternatively, in some embodiments, feedforward signals generated by the victim disturbance feedforward signal generator are modified based on the specific fraction. In some embodiments, the victim disturbance feedforward signal is added to a microactuator control signal of the victim actuator in response to a VCM control signal that is applied to the aggressor actuator. One such embodiment is described below in conjunction withFIGS.7-11. FIG.7illustrates an operational diagram of HDD100, with some elements of electronic circuits130and motor-driver chip125shown configured according to one embodiment. As shown inFIG.7, HDD100is configured to coordinate fractional-wedge timing of aggressor and victim actuators, according to various embodiments. 
For example, in some embodiments, values of the aggressor actuator commands that are inputted to a victim disturbance feedforward signal generator are modified based on a specific fractional-wedge timing offset, and in some embodiments, values of feedforward signals generated by the victim disturbance feedforward signal generator are modified based on a specific fractional-wedge timing offset. Generally, an actuator of HDD100is considered a victim actuator while performing a position-sensitive operation, such as servoing over a data track to read data from and/or write data to the data track. An actuator of HDD100is considered an aggressor actuator when performing a seek operation while another actuator of the multi-actuator HDD is currently a victim actuator. HDD100is connected to a host10, such as a host computer, via a host interface20, such as a serial advanced technology attachment (SATA) bus or a Serial Attached Small Computer System Interface (SAS) bus. As shown, microprocessor-based controller133includes one or more central processing units (CPU)701or other processors, a first servo controller715, a second servo controller716, a hard disk controller (HDC)702, a DRAM134, and read/write channels137A and137B. Motor-driver chip125includes VCM driver circuits713, MA driver circuits717, and a spindle motor (SPM) control circuit714. DRAM134may be integrated on the same die as the controller133, included in a separate die in the same package as the controller133, or included in a separate package mounted on circuit board130. HDD100further includes preamplifiers720A and720B, each of which can be mounted on actuator arm assemblies120A and120B or elsewhere within the head and disk assembly (HDA) of HDD100. Preamplifier720A supplies a write signal (e.g., current) to read/write head127A in response to write data input from read/write channel137A. Similarly, preamplifier720B supplies a write signal (e.g., current) to read/write head127B in response to write data input from read/write channel137B. In addition, preamplifier720A amplifies a read signal output from read/write head127A and transmits the amplified read signal to read/write channel137A, and preamplifier720B amplifies a read signal output from read/write head127B and transmits the amplified read signal to read/write channel137B. CPU701controls HDD100, for example according to firmware stored in flash memory device135or another nonvolatile memory, such as portions of recording surfaces112A-112H. For example, CPU701manages various processes performed by HDC702, read/write channels137A and137B, read/write heads127A-127H, recording surfaces112A-112H, and/or motor-driver chip125. Such processes include a writing process for writing data onto recording surfaces112A-112H and a reading process for reading data from recording surfaces112A-112H. In some embodiments, the first servo system of HDD100(e.g., CPU701, read/write channel137A, preamplifier720A, first servo controller715, voice-coil motor128A, and a suitable microactuator228or229) performs positioning of a read/write head127included in actuator arm assembly120A (e.g., read/write head127A) over a corresponding recording surface (e.g., recording surface112A), during which CPU701determines an appropriate current to drive through the voice coil of VCM128A. Typically, the appropriate current is determined based in part on a position feedback signal of the read/write head, i.e., a position error signal (PES). 
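The paragraph above states only that the current driven through the voice coil is determined in part from the PES; the actual control law of first servo controller715 is not specified in this passage. Purely as a hedged illustration, the sketch below shows one conventional way such a computation could look, using an assumed PID-style update with placeholder gains; it is not presented as the control law of HDD100.

    # Illustrative sketch only: one conventional way a VCM current command could be
    # derived from a position error signal (PES). The PID structure and the gains
    # are assumptions for illustration, not the control law of first servo
    # controller 715.

    class PesToVcmCurrent:
        def __init__(self, kp=0.8, ki=0.05, kd=0.2):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_pes = 0.0

        def update(self, pes):
            """Return a VCM current command (arbitrary units) for the latest PES sample."""
            self.integral += pes
            derivative = pes - self.prev_pes
            self.prev_pes = pes
            return self.kp * pes + self.ki * self.integral + self.kd * derivative

    controller = PesToVcmCurrent()
    for pes_sample in (0.4, 0.3, 0.1, -0.05):
        print(controller.update(pes_sample))
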
Similarly, the second servo system of HDD100(e.g., CPU701, read/write channel137B, preamplifier720B, second servo controller716, voice-coil motor128B, and a suitable microactuator228or229) performs positioning of a read/write head127included in actuator arm assembly120B (e.g., read/write head127D) over a corresponding recording surface (e.g., recording surface112D), during which CPU701determines an appropriate current to drive through the voice coil of VCM128B. Although a single CPU701is shown here, it is possible that multiple CPUs might be used (for example, one or more CPUs for each actuator). In the embodiment illustrated inFIG.7, various links are shown between certain elements of HDD100for enablement of certain embodiments. In some embodiments, additional and/or alternative links between certain elements of HDD100may exist for operation of HDD100, but are not shown for clarity and ease of description. Such additional and/or alternative links would be known to one of ordinary skill in the art. In the embodiment illustrated inFIG.7, microprocessor-based controller133includes a single CPU701incorporated into a single SoC131. In alternative embodiments, microprocessor-based controller133includes more than one CPU. In such embodiments, HDD100can include two CPUs; one devoted to servo/spindle control and the other devoted to a combination of host-based and disk-control activities. In other alternate embodiments, HDD100can include a CPU and one or more separate servo controllers, such as first servo controller715and second servo controller716shown inFIG.7. Alternatively or additionally, in some embodiments, HDD100includes a separate SoC for each actuator, where each SoC has two such CPUs. Further, in some embodiments, microprocessor-based controller133includes multiple motor driver chips. For instance, in one such embodiment, a first motor driver chip is dedicated for controlling the spindle motor, a first actuator, and a first microactuator, while a second motor driver chip is dedicated for controlling a second actuator and a second microactuator. Read/write channels137A and137B are signal processing circuits that encode write data input from HDC702and output the encoded write data to respective preamplifiers720A and720B. Read/write channels137A and137B also decode read signals transmitted from respective preamplifiers720A and720B into read data that are outputted to HDC702. In some embodiments, read/write channels137A and137B each include a single read channel and a single write channel, whereas in other embodiments, read/write channels137A and137B each include multiple write channels and/or multiple read channels for read/write heads127A-127H. HDC702controls access to DRAM134by CPU701, read/write channels137A and137B, and host10, and receives/transmits data from/to host10via interface20. In some embodiments, the components of microprocessor-based controller133(e.g., CPU701, HDC702, DRAM134, and read/write channels137A,137B) are implemented as a one-chip integrated circuit (i.e., as an SoC). Alternatively, one or more of CPU701, HDC702, DRAM134, and read/write channels137A and137B can each be implemented as a separate chip. Motor-driver chip125drives the spindle motor114, a first actuator (that includes VCM128A, actuator arms124A-124C, and bearing assembly126), and a second actuator (that includes VCM128B, actuator arms124D-124F, and bearing assembly126). 
A first VCM driver circuit713of motor-driver chip125generates an amplified control signal743A in response to control signals743from first servo controller715, and a second VCM driver circuit713of motor-driver chip125generates an amplified control signal744A in response to control signals744from second servo controller716. Control signals743enable execution of disk access commands received from host10that are to be executed by a first servo system of HDD100that includes VCM128A, and control signals744enable execution of disk access commands received from host10that are to be executed by a second servo system of HDD100that includes VCM128B. MA driver circuits717(in some embodiments included in motor-driver chip125) generate amplified second-stage control signals742A and745A in response to control signals742and745(which are control values for microactuators228and/or microactuators229), respectively. Control signals742and745are generated by first servo controller715and second servo controller716, respectively. Thus, a first MA driver circuit717generates amplified second-stage control signal742A for microactuators228and/or229associated with actuator arm assembly120A, and a second MA driver circuit717generates amplified second-stage control signal745A for microactuators228and/or229associated with actuator arm assembly120B. SPM control circuit714generates a drive signal741(a drive voltage or a drive current) in response to a control signal751received from the CPU701and feedback from the spindle motor114, and supplies drive signal741to spindle motor114. In this way, spindle motor114rotates storage disks110A-110D. First servo controller715generates a VCM control signal743(drive voltage or drive current) and a microactuator control signal742(drive voltage or drive current). First servo controller715supplies VCM control signal743to the first actuator (VCM128A) via a VCM driver circuit713and microactuator control signal742to a suitable microactuator228or229via one of MA driver circuits717. In this way, first servo controller715positions read/write heads127A-127D radially relative to a corresponding one of recording surfaces112A-112D. In some embodiments, first servo controller715includes a fine servo controller771that generates microactuator control signal742, a coarse servo controller772that generates VCM control signal743, and a victim feedforward signal generator773that generates a feedforward signal (not shown inFIG.7) for modifying microactuator control signal742when VCM128A is the victim actuator. The functionality of victim feedforward signal generator773can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. Further, in some embodiments, the functionality of victim feedforward signal generator773is distributed across multiple software-, firmware-, and/or hardware-entities. For example, in one such embodiment, a first portion of the functionality of victim feedforward signal generator773is implemented as one or more logic circuits, such as a multiply-accumulate engine, and a second portion of the functionality of victim feedforward signal generator773is implemented as software executed by, for example, CPU701. 
In such an embodiment, the multiply-accumulate engine executes most of the operations associated with performing a convolution of previously asserted VCM commands to pre-compute a partial victim feedforward value, while the software executed by the CPU uses the pre-computed partial victim feedforward value and more recently asserted VCM commands (such as VCM commands stored in recent aggressor VCM command buffer777) to compute a victim feedforward value for modifying microactuator control signal742when VCM128A is the victim actuator. Operations performed by victim feedforward signal generator773to generate a victim feedforward value are described in greater detail below in conjunction withFIGS.11-13. According to various embodiments, first servo controller715further includes an aggressor VCM command buffer774, a victim feedforward value buffer775, and a recent aggressor VCM command buffer777. As set forth below, aggressor VCM command buffer774, victim feedforward value buffer775, and recent aggressor VCM command buffer777facilitate the calculation and assertion of a victim feedforward signal by victim feedforward signal generator773. In some embodiments, aggressor VCM command buffer774, victim feedforward value buffer775, and/or recent aggressor VCM command buffer777are implemented as a first-in-first-out (FIFO) memory device, a circular buffer, or any other suitable memory device that can be accessed with sufficient speed to enable embodiments described herein. Further, in some embodiments, recent aggressor VCM command buffer777can be a specified portion of aggressor VCM command buffer774. Aggressor VCM command buffer774is configured for storing recently asserted VCM commands issued to the aggressor actuator. Aggressor VCM command buffer774is configured to store a sufficient number of previously issued VCM commands to include aggressor VCM commands that can still have a significant effect on the current position of the victim head. For example, when the victim head has passed over a servo wedge N, an aggressor VCM command asserted for the aggressor actuator at servo wedge N−500 is unlikely to still have a significant effect on the position of the victim head, and can generally be ignored when calculating a feedforward signal to compensate for recent motion of the aggressor actuator. By contrast, an aggressor VCM command asserted for the aggressor actuator at servo wedge N−3 is likely to have a large effect on the position of the victim head, since the mechanical disturbance of such recent motion by the aggressor actuator is still propagating throughout HDD100. Thus, aggressor VCM command buffer774can be configured to store a relatively large number of aggressor VCM commands. In some embodiments, aggressor VCM command buffer774is configured to store a number of aggressor VCM commands that is fewer than the number of servo wedges associated with a complete rotation of a storage disk110. For example, in some embodiments, aggressor VCM command buffer774is configured to store on the order of about 100 to 200 VCM commands issued most recently to the aggressor actuator. Alternatively, in some embodiments, aggressor VCM command buffer774is configured to store a number of aggressor VCM commands that is approximately equal to the number of servo wedges associated with a complete rotation of a storage disk110. 
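A minimal sketch of how aggressor VCM command buffer774 might be organized, assuming a fixed-depth circular buffer of roughly the 100 to 200 most recently issued aggressor VCM commands, each stored with its servo wedge number, as described above. The class name, the chosen depth, and the wedge count per revolution are illustrative assumptions.

    # Illustrative sketch: a fixed-depth circular buffer of recently issued aggressor
    # VCM commands, each stored with the servo wedge number most recently passed over
    # by the aggressor head when the command was determined. The depth (200) and the
    # assumed 420 wedges per revolution are placeholder values.

    from collections import deque

    class AggressorVcmCommandBuffer:
        def __init__(self, depth=200):
            # Once full, the oldest entries (which no longer significantly affect
            # the victim head position) fall off automatically.
            self.entries = deque(maxlen=depth)

        def store(self, wedge_number, vcm_command):
            """Record a VCM command entry together with its servo wedge number."""
            self.entries.append((wedge_number, vcm_command))

        def recent(self, count):
            """Return the most recently stored (wedge number, command) pairs."""
            return list(self.entries)[-count:]

    WEDGES_PER_REV = 420   # assumed example value
    buf = AggressorVcmCommandBuffer(depth=200)
    for step in range(300):
        buf.store(step % WEDGES_PER_REV, 3 * step - 100)   # placeholder command values
    print(buf.recent(3))
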
One embodiment of aggressor VCM command buffer774is described below in conjunction withFIG.8Aand one embodiment of recent aggressor VCM command buffer777is described below in conjunction withFIG.8B. FIG.8Aschematically illustrates an aggressor VCM command buffer774, according to an embodiment. As shown, aggressor VCM command buffer774includes a plurality of VCM command entries801that are each associated with a corresponding servo wedge number802. The servo wedge number802for a particular VCM command entry801indicates the servo wedge number most recently passed over by the aggressor head when the particular VCM command entry801is determined for the aggressor VCM. For example, when the aggressor head passes over a servo wedge N and a next VCM command is calculated for the aggressor actuator in response, the aggressor VCM command is calculated, asserted by the aggressor VCM (for example by the aggressor actuator), and stored in aggressor VCM command buffer774as one of VCM command entries801. In addition, a unique value indicating servo wedge N, such as a wedge index value, is stored as the corresponding servo wedge number802for that VCM command entry801. In the embodiment illustrated inFIG.8A, servo wedge numbers802are stored in aggressor VCM command buffer774. In other embodiments, the association between VCM command entries801and servo wedge numbers802can be tracked using any other technically feasible approach. For instance, in some embodiments, aggressor VCM command buffer774does not store servo wedge numbers802. In such embodiments, when command entries801are employed to determine a victim feedforward value, selection of each command entry801can be controlled, for example via a pointer, so that each command entry801is associated with the correct servo wedge during the victim feedforward calculation without explicitly retrieving a servo wedge number802from aggressor VCM command buffer774. In the embodiment illustrated inFIG.8A, VCM command entries801are depicted as integer values, such as values sent to a digital-to-analog converter that is an input to a VCM driver713. In some embodiments, such integer values may be appropriately re-scaled prior to being sent to the VCM driver713. Alternatively, in some embodiments, each VCM command entry801includes a number representing a target current (e.g., in milliamperes), a target aggressor head acceleration (e.g., in m/sec2), or the like. The location of aggressor VCM command buffer774is depicted inFIG.7to be included in or otherwise associated with the servo controller of the aggressor actuator. That is, when VCM128A is the aggressor actuator, aggressor VCM command buffer774of first servo controller715stores VCM commands for VCM128A because first servo controller715controls the aggressor actuator (VCM128A). In alternative embodiments, a VCM command buffer is instead included in or otherwise associated with the servo controller of the victim actuator. For example, in such embodiments, when VCM128A is the aggressor actuator and VCM128B is the victim actuator, an aggressor VCM command buffer784of second servo controller716stores the VCM commands for VCM128A, because second servo controller716controls the victim actuator (VCM128B). Recent aggressor VCM command buffer777facilitates generation of a victim feedforward value for modifying microactuator control signal742when VCM128A is the victim actuator. 
Specifically, in some embodiments, recent aggressor VCM command buffer777stores the aggressor VCM commands that have been most recently asserted by the aggressor VCM (VCM128B). Some or all of the aggressor VCM commands stored by recent aggressor VCM command buffer777are employed in performing a convolution of previously asserted VCM commands to pre-compute a partial victim feedforward value. In such embodiments, a multiply-accumulate engine (or another logical circuit) associated with or included in victim feedforward signal generator773pre-computes a partial victim feedforward value by executing most of the operations associated with performing the convolution of previously asserted aggressor VCM commands that are likely to affect the position of the victim head, for example 100 or 200 recent aggressor VCM commands stored in aggressor VCM command buffer774. Then, software executed by a CPU or other processor, or by another short run of the multiply-accumulate engine (or other logical circuit), uses the pre-computed partial victim feedforward value and one, some, or all of the aggressor VCM commands stored in recent aggressor VCM command buffer777to complete computation of a victim feedforward value for modifying microactuator control signal742when VCM128A is the victim actuator. FIG.8Bschematically illustrates an aggressor VCM command buffer774, according to another embodiment. Similar to the embodiment illustrated inFIG.8A, aggressor VCM command buffer774includes a plurality of VCM command entries801. Unlike the aforementioned embodiment, in the embodiment illustrated inFIG.8B, aggressor VCM command buffer774includes two VCM commands per servo wedge. Thus, in this embodiment, aggressor VCM command buffer774is configured for a multi-rate control system for the aggressor actuator, in which the control signals for the aggressor actuator and the victim actuator are updated at a higher rate than the rate at which the positions of the aggressor head and the victim head are determined. For example, in an embodiment, the control signal for an aggressor actuator (and the victim actuator) is updated at twice the rate at which the read/write head positions of the aggressor head and the victim head are measured. Thus, in such an embodiment, two aggressor VCM commands may be associated with each servo wedge, where these two VCM commands are the VCM commands employed by the servo system for the aggressor actuator to control the aggressor actuator for the corresponding servo wedge. A scheme for implementing embodiments in a multi-rate control system is described below in conjunction withFIGS.12and13. FIG.8Cschematically illustrates recent aggressor VCM command buffer777, according to an embodiment. As shown, recent aggressor VCM command buffer777includes one or more VCM command entries851and one or more corresponding servo wedge numbers852. Thus, for each VCM command entry851, there is a corresponding servo wedge number852. Similar to servo wedge numbers802of aggressor VCM command buffer774, the servo wedge number852for a particular VCM command entry851indicates the servo wedge number most recently passed over by the aggressor head when the particular VCM command entry851is determined for the aggressor VCM. 
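The two-stage computation described above, in which a multiply-accumulate engine pre-computes most of the convolution over older commands in aggressor VCM command buffer774 and software then completes the sum with the one or more commands in recent aggressor VCM command buffer777, can be sketched as follows. The kernel taps and command values are placeholders; in HDD100 the kernel values would be derived from the feedforward transfer function of the current victim head.

    # Illustrative sketch of the two-stage victim feedforward computation described
    # above: a partial convolution over older aggressor VCM commands (the portion a
    # multiply-accumulate engine could pre-compute), completed with the most recently
    # asserted commands. Kernel taps and command values are placeholders.

    def partial_feedforward(older_commands, older_taps):
        """Pre-compute the contribution of older aggressor VCM commands
        (older_commands[0] pairs with older_taps[0])."""
        return sum(cmd * tap for cmd, tap in zip(older_commands, older_taps))

    def complete_feedforward(partial, recent_commands, recent_taps):
        """Finish the convolution with the most recently asserted aggressor VCM commands."""
        return partial + sum(cmd * tap for cmd, tap in zip(recent_commands, recent_taps))

    kernel = [0.02, 0.05, 0.11, 0.23, 0.31]    # assumed feedforward kernel taps
    older_commands = [120, 95, 40]             # older buffered aggressor VCM commands
    recent_commands = [-15, -60]               # most recently asserted aggressor commands

    partial = partial_feedforward(older_commands, kernel[:3])
    print(complete_feedforward(partial, recent_commands, kernel[3:]))
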
Thus, when the aggressor head passes over a servo wedge N and a next VCM command is calculated for the aggressor actuator in response, the aggressor VCM command is calculated, asserted (for example by the aggressor actuator), and stored in recent aggressor VCM command buffer777as one of VCM command entries851. In addition, a unique value indicating servo wedge N, such as a wedge index value, is stored as the corresponding servo wedge number852for that VCM command entry851. In the embodiment illustrated inFIG.8C, VCM command entries851are depicted as integer values, such as values sent to a digital-to-analog converter that is an input to a VCM driver713. Alternatively, in some embodiments, each VCM command entry851includes a number representing a target current (e.g., in milliamperes), a target aggressor head acceleration (e.g., in m/sec2), or the like. According to some embodiments, recent aggressor VCM command buffer777is configured to store a small number of VCM command entries851, such as one, two, or three of the most recently asserted aggressor VCM commands. According to some embodiments, recent aggressor VCM command buffer777is configured to store only VCM command entries for the most-recent aggressor wedge. In the embodiment illustrated inFIG.8C, a single VCM command entry851is associated with each servo wedge number852in recent aggressor VCM command buffer777. Similar to aggressor VCM command buffer774, in embodiments in which HDD100includes a multi-rate control system for controlling the aggressor actuator and the victim actuator, recent aggressor VCM command buffer777may include multiple (e.g., two) VCM command entries851for each servo wedge number852in recent aggressor VCM command buffer777. Returning toFIG.7, victim feedforward value buffer775is configured for storing one or more values of the victim feedforward signal to be added to microactuator control signal742when VCM128A is the victim actuator. One embodiment of victim feedforward value buffer775is described below in conjunction withFIG.9. FIG.9schematically illustrates a victim feedforward value buffer775, according to an embodiment. As shown, victim feedforward value buffer775includes one or more victim feedforward entries901and, for each victim feedforward entry901, a corresponding servo wedge number902. Thus, for each victim feedforward value stored in victim feedforward value buffer775, there is a corresponding servo wedge number902. That is, each value of the victim feedforward signal stored in victim feedforward value buffer775is associated with a particular servo wedge of a recording surface. According to various embodiments, a time at which a particular victim feedforward entry901is employed to compensate for an aggressor actuator command can be determined based on 1) the servo wedge number902corresponding to that particular victim feedforward entry901and 2) a preset wedge offset value (described below). In some embodiments, each servo wedge number902in victim feedforward value buffer775indicates a respective servo wedge of a recording surface associated with the aggressor actuator, and in other embodiments, with a respective servo wedge of a recording surface associated with the victim actuator. In the embodiment illustrated inFIG.9, victim feedforward entries901are depicted as integer values that are re-scaled prior to being sent to the MA Driver Circuit717to indicate a target displacement of the microactuator (for example, in units of 256 counts per servo-track). 
Alternatively, in some embodiments, each victim feedforward entry901includes a value indicating a target voltage applied to a microactuator (e.g., microactuator228). In embodiments in which each servo wedge number902indicates a respective servo wedge of a recording surface associated with the aggressor actuator, the servo wedge number902indicates the servo wedge most recently passed over by the aggressor head when that particular victim feedforward value was determined. For example, when the aggressor head passes over a servo wedge N and a victim feedforward entry901is calculated, that victim feedforward entry901is stored in victim feedforward value buffer775and the corresponding servo wedge number902for that victim feedforward value901is a unique wedge index value indicating servo wedge N. According to such embodiments, a particular victim feedforward entry901is asserted at a servo wedge that does not circumferentially correspond to the servo wedge indicated by the servo wedge number902for that particular victim feedforward entry901. Instead, in such embodiments, a particular victim feedforward entry901is asserted at a servo wedge that is circumferentially offset by a preset wedge offset value from the servo wedge indicated by the servo wedge number902. For example, when the aggressor head passes over a servo wedge N, an aggressor VCM command is calculated in next aggressor VCM command calculation413for servo wedge N. Then, in a feedforward value calculation and storing operation, a victim feedforward value is calculated and is stored in victim feedforward value buffer775as a victim feedforward entry901, along with an associated servo wedge number902(such as a unique wedge index value indicating servo wedge N). In the example, that victim feedforward entry901is asserted in response to the victim head passing over a servo wedge on the recording surface associated with the victim actuator that is circumferentially offset from servo wedge N by the preset wedge offset value. Thus, that victim feedforward entry901is not asserted in response to the victim head passing over servo wedge N on the recording surface associated with the victim actuator, and instead is asserted at a later servo wedge. For example, when the wedge offset value is set to 1, that victim feedforward entry901is asserted in response to the victim head passing over servo wedge N+1 on the recording surface associated with the victim actuator. Similarly, when the wedge offset value is set to 2, that victim feedforward entry901is asserted in response to the victim head passing over servo wedge N+2 on the recording surface associated with the victim actuator. Generally, the preset wedge offset value for an HDD is an integral value from one to the number of wedges in a revolution of the disk. In this way, a victim feedforward value that is calculated and stored in feedforward value calculation and storing operation527for servo wedge N is asserted in victim actuator command assertion526for a subsequent servo wedge, such as servo wedge N+1 or N+2. Thus, a victim feedforward value is generated based on position information collected when the aggressor head passes over a first servo wedge of the recording surface associated with the aggressor head, and the victim feedforward value is asserted in response to the victim head passing over a second servo wedge of the recording surface associated with the victim head, where the second servo wedge does not circumferentially correspond to the first servo wedge. 
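The bookkeeping described above, in which a victim feedforward entry computed when the aggressor head crosses servo wedge N is asserted when the victim head crosses the wedge that is circumferentially offset from N by the preset wedge offset value, can be sketched as follows. The wedge count, the offset value, and the dictionary-based buffer are illustrative assumptions.

    # Illustrative sketch: store a victim feedforward value keyed by the aggressor
    # servo wedge N and assert it when the victim head reaches wedge
    # (N + wedge offset) modulo the number of wedges per revolution.
    # K and WEDGE_OFFSET are assumed example values.

    K = 420                # assumed servo wedges per disk revolution
    WEDGE_OFFSET = 2       # assumed preset wedge offset value

    feedforward_buffer = {}    # servo wedge number -> victim feedforward value

    def store_entry(aggressor_wedge, value):
        """Store a feedforward value computed when the aggressor head crossed wedge N."""
        feedforward_buffer[aggressor_wedge] = value

    def value_for_victim_wedge(victim_wedge):
        """Look up the entry to assert when the victim head crosses victim_wedge;
        it was computed for aggressor wedge (victim_wedge - WEDGE_OFFSET) mod K."""
        return feedforward_buffer.get((victim_wedge - WEDGE_OFFSET) % K)

    store_entry(100, 7)                  # computed as the aggressor head crossed wedge 100
    print(value_for_victim_wedge(102))   # asserted as the victim head crosses wedge 102
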
In alternative embodiments, each servo wedge number902stored in victim feedforward value buffer775indicates a respective servo wedge of a recording surface associated with the victim actuator. In one such embodiment, the servo wedge number902for a particular victim feedforward entry901indicates the servo wedge to be passed over by the victim head prior to that particular victim feedforward value being asserted. In such an embodiment, the servo wedge to be passed over by the victim head is circumferentially offset by a preset wedge offset value from an aggressor servo wedge, which is a servo wedge of the recording surface associated with the aggressor head. Specifically, in this embodiment, the aggressor servo wedge is the servo wedge from which the most recent position information was collected for the aggressor head on which an aggressor VCM control signal is based that is included in the VCM control signals used to generate the particular victim feedforward entry901. Thus, in such an embodiment, the aggressor servo wedge is the last servo wedge of the recording surface from which aggressor head position information has been collected that is included in the calculation of the particular victim feedforward entry901. For example, in one such embodiment, when the victim head passes over a servo wedge N, a particular victim feedforward entry901is employed during the next calculation of the victim microactuator command for servo wedge N. That particular victim feedforward value is calculated based on a plurality of aggressor VCM commands asserted prior to the victim head passing over servo wedge N (for example when the aggressor head passes over servo wedge N−1, N−2, etc.). That is, the particular victim feedforward entry901is calculated based on a plurality of aggressor VCM commands, the most recent of which is generated in response to the aggressor head passing over a servo wedge that is 1) circumferentially offset by a wedge offset value from servo wedge N and 2) is passed over by the aggressor head prior to the victim head passing over servo wedge N. Thus, when the wedge offset value is set to one, the particular victim feedforward entry901stored in victim feedforward value buffer775that is associated with servo wedge N is generated based on position information that has been collected when the aggressor head passes over servo wedge N−1 and over previous servo wedges, but not on position information that has been collected when the aggressor passes over servo wedge N. Similarly, when the wedge offset value is set to two, that particular victim feedforward entry901stored in victim feedforward value buffer775is generated based on position information that has been collected when the aggressor head passes over servo wedge N−2 and a plurality of servo wedges preceding servo wedge N−2, but not on position information that has been collected when the aggressor passes over servo wedges N or N−1. In the embodiment, the servo wedge indicated by the servo wedge number902for a particular victim feedforward entry901is a unique index value. The unique index value indicates a servo wedge passed over by the victim head (e.g., servo wedge N) that is offset by the wedge offset value (e.g., 2) from the servo wedge passed over by the aggressor head and associated with the determination of that particular victim feedforward entry901(e.g., servo wedge N−2). 
That is, in such an embodiment, the unique index value indicates the servo wedge to be passed over by the victim head immediately prior to that particular victim feedforward value being asserted. Therefore, as with the previously described embodiment for servo wedge number902, a victim feedforward value is generated based on position information collected when the aggressor head passes over a first servo wedge of the recording surface associated with the aggressor head, and the victim feedforward value is asserted in response to the victim head passing over a second servo wedge of the recording surface associated with the victim head, where the second servo wedge does not circumferentially correspond to the first servo wedge and is circumferentially offset from the first servo wedge by the wedge offset value. In the embodiment illustrated inFIG.9, victim feedforward value buffer775includes 11 or more victim feedforward entries901and corresponding servo wedge numbers902. In other embodiments, victim feedforward value buffer775includes a number of victim feedforward entries901and servo wedge numbers902equal to or greater than the number of wedges associated with one revolution of a storage disk. Returning toFIG.7, second servo controller716is similar in configuration and operation to first servo controller715. Second servo controller716generates a VCM control signal744(drive voltage or drive current) and a microactuator control signal745(drive voltage or drive current), and supplies VCM control signal744to the second actuator (VCM128B) via a VCM driver circuit713and microactuator control signal745to a suitable microactuator228or229via MA driver circuit717. In this way, second servo controller716positions read/write heads127E-127H radially with respect to a corresponding one of recording surfaces112E-112H. In some embodiments, second servo controller716includes a fine servo controller781that generates microactuator control signal745, a coarse servo controller782that generates VCM control signal744, and a victim feedforward signal generator783that generates a feedforward signal (not shown inFIG.7) for modifying microactuator control signal745. Similar to victim feedforward signal generator773, the functionality of victim feedforward signal generator783can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. Further, in some embodiments, the functionality of victim feedforward signal generator783is distributed across multiple software-, firmware-, and/or hardware-entities. Operations performed by victim feedforward signal generator783to generate a victim feedforward value are described in greater detail below in conjunction withFIGS.12A,12B, and13. According to various embodiments, second servo controller716further includes an aggressor VCM command buffer784, a victim feedforward value buffer785, and a recent aggressor VCM command buffer787. Aggressor VCM command buffer784, victim feedforward value buffer785, and recent aggressor VCM command buffer787facilitate the calculation and assertion of a victim feedforward signal by victim feedforward signal generator783in the same way (described above) that aggressor VCM command buffer774, victim feedforward value buffer775, and recent aggressor VCM command buffer777facilitate the calculation and assertion of a victim feedforward signal by victim feedforward signal generator773. 
In some embodiments, aggressor VCM command buffer784, victim feedforward value buffer785, and/or recent aggressor VCM command buffer787are implemented as a FIFO memory device, a circular buffer, or any other suitable memory device that can be accessed with sufficient speed to enable embodiments described herein. In the embodiment described above, first servo controller715and second servo controller716each generate a feedforward signal for modifying a microactuator signal. In alternative embodiments, CPU701generates a feedforward signal for modifying microactuator signal742and another feedforward signal for modifying microactuator signal745. Thus, in some embodiments, first servo controller715and second servo controller716are implemented in whole or in part in firmware running on CPU701. In embodiments in which microprocessor-based controller133includes multiple CPUs, such firmware can run on one or more of the multiple CPUs. In the embodiment described above, aggressor VCM command buffer774, victim feedforward value buffer775, and/or recent aggressor VCM command buffer777are included in or otherwise associated with first servo controller715and aggressor VCM command buffer784, victim feedforward value buffer785, and/or recent aggressor VCM command buffer787are included in or otherwise associated with second servo controller716. In other embodiments, the functionality of aggressor VCM command buffer774, victim feedforward value buffer775, recent aggressor VCM command buffer777, aggressor VCM command buffer784, victim feedforward value buffer785, and/or recent aggressor VCM command buffer787is included in DRAM134. Fractional-Wedge Timing Compensation of Aggressor and Victim Actuators FIG.10Ais a control diagram1000illustrating the generation and application of a victim feedforward control signal in HDD100, according to various embodiments. As shown, HDD100includes a first control loop1020associated with an aggressor actuator (in this example VCM128A) and a second control loop1030associated with a victim actuator (in this example VCM128B). In conjunction, first control loop1020and second control loop1030enable fractional-wedge timing compensation in the application of the victim feedforward control signal. First control loop1020includes VCM128A, a microactuator1028A for the currently active read/write head associated with VCM128A, aggressor VCM command buffer774, and first servo controller715. In some embodiments, first control loop1020further includes a notch filter1011for modifying microactuator control signals1042for microactuator1028A and/or a notch filter1012for modifying VCM control signals1047for VCM128A into a filtered VCM control signal743. In some embodiments, first control loop1020also includes an injection point1029. Injection point1029is a point in first control loop1020at which a disturbance can be injected into control signals that are applied to VCM128A (e.g., VCM control signal743) as part of measuring a transfer function, as described below in conjunction withFIG.16. Second control loop1030includes VCM128B, a microactuator1028B for the currently active head associated with VCM128B, second servo controller716, and victim feedforward signal generator783. In some embodiments, second control loop1030further includes recent aggressor VCM command buffer787, and in some embodiments second control loop1030further includes victim feedforward value buffer785. 
Additionally or alternatively, in some embodiments, second control loop1030further includes a notch filter1021for modifying microactuator control signals1045for microactuator1028B and/or a notch filter1022for modifying VCM control signals1044for VCM128B. For clarity, VCM driver circuit713and MA driver circuit717are not shown inFIGS.10A and10B. Instead, the functionality of VCM driver circuit713is included in VCM128A and128B, while the functionality of MA driver circuit717is included in microactuator1028A and1028B. Further, in the embodiment illustrated inFIG.10A, notch filters1011and1012are depicted as external to first servo controller715and recent aggressor VCM command buffer787, victim feedforward signal generator783, and notch filters1021and1022are depicted as external to second servo controller716. In other embodiments, notch filters1011and/or1012may be implemented conceptually or physically as part of first servo controller715, and recent aggressor VCM command buffer787, victim feedforward signal generator783, and/or notch filters1021and1022may be implemented conceptually or physically as part of second servo controller716. During operation, first servo controller715generates VCM control signal743(or alternatively VCM control signal1047, on which VCM control signal743is based) for VCM128A and microactuator control signal742(or alternatively microactuator control signal1042, on which microactuator control signal742is based) for microactuator1028A. Alternatively, CPU701generates microactuator control signal742for microactuator1028A. Because of mechanical coupling1001between VCM128A and VCM128B, operations performed by VCM128A in response to VCM control signal743cause a radial displacement of the currently active read/write head associated with VCM128B. This radial displacement contributes to position error signal (PES)1002based on the radial position1005of the currently active read/write head associated with VCM128B. According to various embodiments, victim feedforward signal generator783generates a victim feedforward signal1003, based on a transfer function. In some embodiments, the transfer function models commands added to microactuator control signal745for microactuator1028B (the victim microactuator) as a function of VCM control signal743for VCM128A (the aggressor actuator). Example embodiments for the determination of such a transfer function are described below in conjunction withFIGS.15-17. Victim feedforward signal generator783in second servo controller716receives information about VCM control signal1047or VCM control signal743from first servo controller715via a communication link between first servo controller715and second servo controller716(shown inFIG.7). The information about VCM control signal1047or VCM control signal743is associated with a first servo wedge. In some embodiments, feedforward signal1003values (for example, victim feedforward entries901and servo wedge numbers902) are then stored in victim feedforward value buffer785and added to a subsequent microactuator control signal1045(or alternatively microactuator control signal745, which is based on microactuator control signal1045) associated with a second servo wedge that is passed over by the victim head subsequent to the aggressor head passing over the first servo wedge. In some embodiments, victim feedforward value buffer785may be included in, for example, victim feedforward signal generator783or in second servo controller716. 
Feedforward signal1003, when added to the appropriate microactuator control signal1045(or alternatively to microactuator control signal745, which is based on microactuator control signal1045), reduces or compensates for contributions to radial position1005caused when VCM control signal743(which is based on VCM control signal1047) is applied to VCM128A. Victim feedforward entries901are employed based on the corresponding servo wedge number902in victim feedforward value buffer785. As a result, a particular victim feedforward entry901is timed to be used for an appropriate servo wedge that is crossed by the victim head (the second servo wedge) after the aggressor head has passed over a servo wedge (the first servo wedge) that is used to generate the particular victim feedforward entry901. Alternatively, in some embodiments, a suitable victim feedforward value is generated by victim feedforward signal generator783immediately before being added to the appropriate microactuator control signal1045and storage of victim feedforward values for one or more servo wedges is not needed. For example, in such embodiments, victim feedforward value buffer785is not employed. Instead, a victim feedforward value used to modify microactuator control signal1045(or alternatively, microactuator control signal745, which is based on microactuator control signal1045) that is asserted by the victim actuator for servo wedge N is calculated shortly before the victim actuator command is asserted for servo wedge N. In such embodiments, the victim feedforward value is based on aggressor actuator commands that are associated with servo wedges prior to servo wedge N, the most recent of which precedes servo wedge N by the wedge offset value for HDD100. In such embodiments, one or more of the most recent aggressor actuator commands that are employed to generate the victim feedforward value may be stored in recent aggressor VCM command buffer787. In some embodiments, the transfer function for determining feedforward signal1003, referred to herein as the feedforward transfer function, is determined as the ratio of two transfer functions that can be directly measured in the multi-actuator drive: a first transfer function modeling radial position1005as a function of filtered VCM control signal743for VCM128A and a second transfer function modeling radial position1005as a function of commands1003that are added to microactuator control signal1045or microactuator control signal745. In such embodiments, the first and second transfer functions can be determined in HDD100as part of a calibration/start-up process or during factory tuning of the drive. For example, in one such embodiment, values associated with the first transfer function are determined by adding various values to filtered VCM control signal743and measuring the resultant radial position1005of the victim read/write head. Similarly, values associated with the second transfer function are determined by adding various values of feedforward signal1003to microactuator control signal745and measuring the resultant radial position1005of the victim read/write head. In some embodiments, the second transfer function is determined by adding various values of feedforward signals directly to microactuator control signal745. In alternative embodiments, the second transfer function is determined by adding various values of feedforward signals to the input of notch filter1021, the output of which is microactuator control signal745. The latter embodiment is shown inFIG.10A. 
The second transfer function is determined using the same injection point that will be used to inject feedforward during operation of the HDD100. Example embodiments for the determination of the feedforward transfer function are described below in conjunction withFIG.16, example embodiments for the determination of the first transfer function are described below in conjunction withFIG.17, and example embodiments for the determination of the second transfer function are described below in conjunction withFIG.18. In some embodiments, victim feedforward signal1003is generated using a kernel that is derived based on the feedforward transfer function. In such embodiments, values associated with the feedforward transfer function are determined by taking a ratio of the above-described first and second transfer functions (e.g., the first transfer function divided by the second transfer function), where the kernel is the inverse discrete Fourier transform (DFT) of the values associated with the feedforward transfer function. Subsequently, victim feedforward signal generator783can convolve values for filtered VCM control signal743with the kernel to generate feedforward signal1003. In the embodiment illustrated inFIG.10A, VCM128A is described as the aggressor actuator and VCM128B is described as the victim actuator. In other instances, VCM128B can operate as the aggressor actuator and VCM128A can operate as the victim actuator. In such instances, victim feedforward signal generator773(shown inFIG.7) generates a feedforward signal similar to feedforward signal1003and provides that feedforward signal to microactuator1028A as a correction signal. In normal practice, VCM128A and VCM128B act as both the aggressor actuator and the victim actuator simultaneously. In the embodiment illustrated inFIG.10A, victim feedforward signal generator783is implemented as an element of HDD100that is separate from first servo controller715and second servo controller716. Alternatively, victim feedforward signal generator783is implemented as a component of first servo controller715, a component of second servo controller716, or a component of both first servo controller715and second servo controller716. Similarly, in instances in which VCM128B is the aggressor actuator and VCM128A is the victim actuator, victim feedforward signal generator773can be implemented as an element of HDD100that is separate from first servo controller715and second servo controller716, as a component of first servo controller715, a component of second servo controller716, or a component of both first servo controller715and second servo controller716. In embodiments in which multiple read/write heads are coupled to a victim actuator, such as VCM128A or VCM128B, the above-described feedforward transfer function for determining feedforward signal1003typically varies for each such read/write head. That is, mechanical coupling1001between an aggressor actuator and a victim actuator can result in a different contribution to radial position1005of a victim head, depending on the victim head. For example, referencing the embodiment illustrated inFIG.2, actuation of actuator arm assembly120A can affect radial position1005for each of read/write heads227E,227F,227G, and227H differently. In such embodiments, a different feedforward transfer function for determining feedforward signal1003is determined for each read/write head127included in HDD100. 
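A minimal numerical sketch of the kernel derivation described above, assuming the first and second transfer functions have already been measured on a common frequency grid: the feedforward transfer function is taken as their ratio, the kernel as the inverse DFT of that ratio, and the feedforward values as a convolution of the filtered aggressor VCM control values with the kernel. All measured values below are placeholders, and keeping only the real part of the inverse DFT is a simplification made for this sketch.

    # Illustrative sketch: derive a victim feedforward kernel as the inverse DFT of
    # the ratio of two measured transfer functions, then convolve aggressor VCM
    # control values with that kernel. All measurements below are placeholders.

    import numpy as np

    # Placeholder frequency-domain measurements on a common frequency grid:
    # h1: victim head radial position response to the aggressor VCM control signal.
    # h2: victim head radial position response to the injected feedforward signal.
    h1 = np.array([0.9, 0.7 + 0.2j, 0.4 + 0.3j, 0.2 + 0.1j])
    h2 = np.array([1.0, 0.9 + 0.1j, 0.8 + 0.2j, 0.7 + 0.1j])

    feedforward_tf = h1 / h2                       # feedforward transfer function (ratio)
    kernel = np.real(np.fft.ifft(feedforward_tf))  # real part kept for this simplified sketch

    # Convolve filtered aggressor VCM control values with the kernel to obtain one
    # victim feedforward value per servo sample.
    vcm_control_values = np.array([0.0, 50.0, 120.0, 80.0, -30.0, -90.0])
    feedforward_values = np.convolve(vcm_control_values, kernel)[: len(vcm_control_values)]
    print(feedforward_values)
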
A process for determining different feedforward transfer functions for each read/write head127of HDD100is described below in conjunction withFIGS.16-18. In some embodiments, first control loop1020includes notch filter1011and/or notch filter1012and second control loop1030includes notch filter1021and/or notch filter1022. In such embodiments, notch filters1012and1022may be band-stop filters configured to block or attenuate portions of input signals that are likely to excite one or more resonances in or associated with VCM128A and128B, respectively. For first control loop1020, properly-designed notch filters1011and1012, in combination with other elements in first servo controller715and the mechanical system including microactuator1028A and VCM128A, can result in a stable servo control-loop. Without the notch filters, first control loop1020might be unstable, or only marginally stable. Similarly, for second control loop1030, properly-designed notch filters1021and1022facilitate a stable servo control-loop. For example, one or more bands of an input signal, such as VCM control signal1047or1044, are reduced in amplitude when processed by notch filter1012or1022. Notch filters1011and1021are configured to remove or reduce portions of input signals that are likely to excite one or more resonances in or associated with microactuators1028A and1028B, respectively. In the embodiment described above, notch filters1011,1012,1021, and1022are employed to eliminate or greatly attenuate certain frequency components. Alternatively or additionally, in some embodiments, one or more of notch filters1011,1012,1021, and1022are configured to modify the phase of a signal and/or to increase the gain of a signal at certain frequencies. Such filters are sometimes called “phase steering” or “loop-shaping” filters, and can be used to stabilize a system using calculations that are similar or identical to calculations included in notch filters that are designed to eliminate or greatly attenuate certain frequency components. Alternatively, in some embodiments, one or more of notch filters1011,1012,1021, and1022are omitted from first control loop1020and/or second control loop1030, and/or are included in victim feedforward signal generator783. FIG.10Bis a control diagram1050illustrating the generation and application of a victim feedforward control signal in HDD100, according to various embodiments. Control diagram1050is substantially similar to control diagram1000inFIG.10A, with two exceptions. First, control diagram1050includes a notch filter1051that is configured to process the output of victim feedforward signal generator783and victim feedforward value buffer785, i.e., victim feedforward signal1003. For clarity, victim feedforward value buffer785is omitted fromFIG.10B, but can be located between victim feedforward signal generator783and notch filter1051. Second, victim feedforward signal1003, after being modified by notch filter1051, is added to microactuator control signal745in a different location, i.e., a summer1052disposed between notch filter1021and microactuator1028B. Victim Feedforward Signal Generation As described above, victim feedforward signal generator773and victim feedforward signal generator783are each configured to generate a victim feedforward signal (for example victim feedforward signal1003) for the current victim head in HDD100. 
In some embodiments, such a victim feedforward signal is based on not only a transfer function specific to the current victim head, but also on a timing offset between the victim head and the aggressor head. One such embodiment is described below in conjunction withFIG.11. FIG.11is a more detailed block diagram of a victim feedforward signal generator1100, according to various embodiments. For example, victim feedforward signal generator1100can be implemented as victim feedforward signal generator773and as victim feedforward signal generator783in HDD100. In the embodiment illustrated inFIG.11, victim feedforward signal generator1100includes a VCM command interpolator1101, a victim feedforward kernel1102, and a victim feedforward signal interpolator1103. In some alternative embodiments, victim feedforward signal generator1100does not include VCM command interpolator1101, and in some alternative embodiments victim feedforward signal generator1100does not include victim feedforward signal interpolator1103. Victim feedforward kernel1102is a kernel for a specific victim head and is derived based on the feedforward transfer function for that specific victim head. Generally, victim feedforward kernel1102is configured to generate a victim feedforward signal or signals (e.g., victim feedforward signal1003inFIG.10A) based on aggressor VCM commands/control signals and on a set of values derived from the appropriate feedforward transfer function for the victim head. For example, in some embodiments, victim feedforward kernel1102generates a victim feedforward signal or signals based on VCM command entries801ofFIG.8A or8B. Generation of a set of values for victim feedforward kernel1102for a specific victim head is described below in conjunction withFIG.16. VCM command interpolator1101is configured to modify VCM command values prior to use by victim feedforward kernel1102based on the timing offset between the victim head and the aggressor head. As described above, the timing offset between the victim head and the aggressor head can change over the lifetime of HDD100, and/or the aggressor head can have a different timing offset with the victim head than the base aggressor head that was used when determining victim feedforward kernel1102. In either case, the modified VCM command values generated by VCM command interpolator1101are selected to compensate for such timing offsets between the victim head and the aggressor head. For example, in some embodiments, VCM command interpolator1101generates a modified VCM command value for use by victim feedforward kernel1102by interpolating between two calculated VCM command values. Such embodiments are described below in conjunction withFIGS.12A and12B. FIG.12Ais a plot of VCM command values1200determined for a portion of an aggressor seek performed by an aggressor head when there is no timing offset between the aggressor head and the victim head, according to various embodiments. Each of VCM command values1200is associated with a different corresponding servo wedge number (e.g., N−4, N−3, N−2, N−1, or N). For example, VCM command values1200can correspond to VCM command entries801ofFIG.8A, and servo wedge numbers N−4 through N correspond to servo wedge numbers802ofFIG.8A. Thus, when the aggressor head crosses servo wedge N−4, a VCM command value1214is calculated, asserted by the aggressor actuator (for example by the aggressor VCM), and stored (for example in aggressor VCM command buffer774inFIG.8A) as one of VCM command entries801.
Similarly, when the aggressor head crosses servo wedge N−3, a VCM command value1213is calculated, asserted, and stored; when the aggressor head crosses servo wedge N−2, a VCM command value1212is calculated, asserted, and stored; when the aggressor head crosses servo wedge N−1, a VCM command value1211is calculated, asserted, and stored; and when the aggressor head crosses servo wedge N, a VCM command value1210is calculated, asserted, and stored. In some instances, the transfer function for determining the values included in victim feedforward kernel1102for the victim head accurately takes into account the timing relationship (e.g., the timing offset) between the victim head and the aggressor head. As a result, in such instances, VCM command interpolator1101does not modify VCM command values1200. For example, in some instances, neither the victim head nor the aggressor head have undergone a significant timing shift since measurements for determining the transfer function were performed. In such instances, when the aggressor head is the base aggressor head used when determining victim feedforward kernel1102for the current victim head, VCM command interpolator1101does not modify VCM command values1200, even when a significant timing offset is present between the victim head and the aggressor head (such as a timing offset600ofFIG.6). This is because the victim head can be assumed to cross servo wedge N on the victim head recording surface with an initial timing offset from when the aggressor head crosses servo wedge N on the aggressor head recording surface. Because the effects of the initial timing offset between a particular victim head and the base aggressor head are accounted for in victim feedforward kernel1102for the particular victim head, in such an instance, VCM command interpolator1101does not modify VCM command values1200. By contrast, in many instances there is a timing relationship difference between the victim head and the aggressor head performing the aggressor seek associated with VCM command values1200. In some embodiments, in such instances VCM command values1200are modified by VCM command interpolator1101to compensate for such a timing relationship difference. One such embodiment is described below in conjunction withFIG.12B. FIG.12Bis a plot of VCM command values1200that are initially calculated for an aggressor seek and interpolated VCM command values1250-1254(referred to collectively as interpolated VCM command values1240) that are actually used to determine a victim feedforward signal, according to various embodiments. Specifically, interpolated VCM command values1240for the aggressor seek are employed to compensate for certain timing relationship differences between the victim head and the aggressor head. One example of such a timing relationship difference is a timing shift that is experienced by the aggressor head, for example due to a circumferential displacement or other circumferential misalignment of the aggressor head. Such a timing shift alters the timing offset between the victim head and the aggressor head from an initial timing offset (such as an initial timing offset600inFIG.6). Another example of such a timing relationship difference is a timing offset between the aggressor head and the base aggressor head that was used when determining victim feedforward kernel1102, because victim feedforward kernel1102is configured based on the assumption that the aggressor head has identical timing to that of the base aggressor head. 
In the embodiment illustrated inFIG.12B, there is a timing relationship difference between the victim head and the aggressor head that is not accounted for by the victim feedforward kernel1102for the current victim head. In some instances, the timing relationship difference is a timing shift that is experienced by the aggressor head, and in other instances, the timing relationship difference is a timing offset between the aggressor head and the base aggressor head. As noted above, the timing relationship difference causes the transfer functions on which victim feedforward kernel1102is based to be less accurate, and the victim feedforward signal1003generated by victim feedforward kernel1102is less effective at canceling the effect of aggressor disturbances on victim head position. According to various embodiments, interpolated VCM command values1240are used by victim feedforward kernel1102instead of VCM command values1200to compensate for such a timing relationship difference between the victim head and the aggressor head. InFIG.12B, VCM command values1200are calculated for an aggressor seek. Victim feedforward kernel1102generates a victim feedforward signal1003based on a transfer function that assumes there is no timing shift, timing offset, or other timing relationship difference between the current victim head and the aggressor head that affects the accuracy of victim feedforward kernel1102. However, in the instance illustrated inFIG.12B, there is a timing offset1260that is indicated by a horizontal displacement between an assumed time that the aggressor head crosses wedge N and an actual time the aggressor head crosses wedge N. For example, timing offset1260can be caused by the aggressor head being a different head than the base aggressor head used to determine victim feedforward kernel1102. Alternatively or additionally, timing offset1260can be caused by a timing shift that is experienced by the aggressor head. In the instance illustrated inFIG.12B, the aggressor head crosses each servo wedge earlier than a time assumed by victim feedforward kernel1102. Similarly, inFIG.12Bthere is a timing offset1261between an assumed time that the aggressor head crosses wedge N−1 and an actual time the aggressor head crosses wedge N−1, a timing offset1262between an assumed time that the aggressor head crosses wedge N−2 and an actual time the aggressor head crosses wedge N−2, a timing offset1263between an assumed time that the aggressor head crosses wedge N−3 and an actual time the aggressor head crosses wedge N−3, and a timing offset1264between an assumed time that the aggressor head crosses wedge N−4 and an actual time the aggressor head crosses wedge N−4. In the instance illustrated inFIG.12B, timing offset1260, timing offset1261, timing offset1262, timing offset1263, and timing offset1264are substantially equal in magnitude, but in other embodiments, each timing offset can have a different magnitude for each servo wedge of a recording surface. According to various embodiments, VCM command interpolator1101determines interpolated VCM command values1240(denoted by triangles) based on VCM command values1200(denoted by circles) and corresponding timing offsets. In some embodiments, an interpolated VCM command value1240for a particular servo wedge is linearly interpolated from the VCM command value determined for that servo wedge and the VCM command value determined for an adjacent servo wedge. 
In such embodiments, the magnitude of the timing offset at that particular servo wedge indicates where along the interpolation function the particular interpolated VCM command value1240lies. For example, in some embodiments, interpolated VCM command value1250is based on a VCM command value1209calculated for servo wedge N+1, VCM command1210calculated for servo wedge N, and timing offset1260; interpolated VCM command value1251is based on a VCM command value1210calculated for servo wedge N, VCM command1211calculated for servo wedge N−1, and timing offset1261; interpolated VCM command value1252is based on a VCM command value1211calculated for servo wedge N−1, VCM command1212calculated for servo wedge N−2, and timing offset1262; and so on. In an embodiment illustrated inFIG.12B, interpolated VCM command value1250is linearly interpolated from VCM command value1209and VCM command1210using timing offset1260. In other embodiments, interpolated VCM command value1250for servo wedge N is determined using any other technically feasible interpolation between VCM command value1209(calculated for servo wedge N+1) and VCM command1210(calculated for servo wedge N), or including VCM command values from earlier wedges. Alternatively, in an instance in which the value of timing offset1260is negative, the aggressor head crosses each servo wedge later than assumed by victim feedforward kernel1102. In some embodiments, in such an instance, interpolated VCM command value1250for servo wedge N is determined using any technically feasible interpolation between VCM command value1210(calculated for servo wedge N), VCM command1211(calculated for servo wedge N−1) and timing offset1260. In the embodiment described above in conjunction withFIG.12B, a single VCM command value1200is associated with each servo wedge, and VCM command interpolator1101generates a single interpolated VCM command value1240for each servo wedge. Thus,FIG.12Bdepicts the generation of interpolated VCM command values1240for a single-rate control system. In other embodiments, VCM command interpolator1101is configured for operating in conjunction with a multi-rate control system, and therefore generates multiple interpolated VCM command values1240for each servo wedge. In such embodiments, for each servo wedge, there are, for example, L associated multi-rate VCM command values. Consequently, VCM command interpolator1101generates L interpolated multi-rate VCM command values1240for each servo wedge, where the first interpolated multi-rate VCM command value1240for a servo wedge is based on the first multi-rate VCM command value determined for that servo wedge and the first multi-rate VCM command value determined for an adjacent servo wedge; the second interpolated multi-rate VCM command value1240for the servo wedge is based on the second multi-rate VCM command value determined for that servo wedge and the second multi-rate VCM command value determined for the adjacent servo wedge, and so on to the Lthmulti-rate VCM command value determined for that servo wedge. Returning toFIG.11, victim feedforward signal interpolator1103is configured to modify values of a victim feedforward signal prior to being stored and added to a subsequent microactuator control signal. 
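As a concrete illustration of the linear interpolation performed by VCM command interpolator1101described above, the following Python sketch blends the VCM command calculated for a servo wedge with the command calculated for an adjacent wedge according to a signed timing offset expressed as a fraction of the wedge-to-wedge interval; the sign convention, data layout, and names are assumptions made only for this sketch.

def interpolate_vcm_command(cmd_by_wedge, wedge_n, offset_fraction):
    """Linearly interpolate an aggressor VCM command for servo wedge N.

    cmd_by_wedge:    mapping (or array) of calculated VCM command values indexed
                     by servo wedge number.
    offset_fraction: timing offset at wedge N expressed as a signed fraction of
                     the wedge-to-wedge interval; positive is taken to mean the
                     aggressor head crosses each wedge earlier than the kernel
                     assumes (a convention adopted only for this sketch).
    """
    if offset_fraction >= 0.0:
        # Aggressor crosses earlier than assumed: blend toward the command
        # calculated for the following wedge (N+1).
        neighbor = cmd_by_wedge[wedge_n + 1]
    else:
        # Aggressor crosses later than assumed: blend toward the previous wedge (N-1).
        neighbor = cmd_by_wedge[wedge_n - 1]
    frac = abs(offset_fraction)
    return (1.0 - frac) * cmd_by_wedge[wedge_n] + frac * neighbor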
For example, in some embodiments, victim feedforward signal interpolator1103modifies values of victim feedforward signal1003(shown inFIGS.10A and10B) before the values of victim feedforward signal1003are stored in victim feedforward value buffer785(shown inFIG.8A) and added to microactuator control signal1045(shown inFIG.10A). Similar to how VCM command interpolator1101modifies VCM command values prior to use by victim feedforward kernel1102based on the timing offset between the victim head and the aggressor head, victim feedforward signal interpolator1103modifies values of a victim feedforward signal based on the timing offset between the victim head and the aggressor head. Such embodiments are described below in conjunction withFIG.13. FIG.13is a plot of victim feedforward signal values1300and interpolated victim feedforward signal values1350-1354(referred to collectively as interpolated victim feedforward signal values1340), according to various embodiments. Victim feedforward signal values1300(denoted by circles) are initially calculated for controlling the position of a victim head that is disturbed by an aggressor seek, such as values of victim feedforward signal generated by victim feedforward kernel1102inFIG.11. Interpolated victim feedforward signal values1350-1354(denoted by triangles) are signals that are actually used for controlling the position of a victim head that is disturbed by an aggressor seek instead of victim feedforward signal values1300. Specifically, interpolated victim feedforward signal values1340are employed instead of victim feedforward signal values1300in order to compensate for certain timing relationship differences between the victim head and the aggressor head. One example of such a timing relationship difference is a timing shift that is experienced by the victim head, for example due to a circumferential displacement or other circumferential misalignment of the victim head. Such a timing shift alters the timing offset between the victim head and the aggressor head from an initial timing offset (such as an initial timing offset600inFIG.6). In some embodiments, when there is no timing relationship difference between the victim head and the aggressor head that is not accounted for by the victim feedforward kernel1102for the current victim head, victim feedforward signal values1300can be added to a microactuator control signal without being modified by victim feedforward signal interpolator1103. By contrast, in the embodiment illustrated inFIG.13, there is a timing relationship difference between the victim head and the aggressor head that is not accounted for by the victim feedforward kernel1102for the current victim head. As noted above, such a timing relationship difference can cause the transfer function on which victim feedforward kernel1102is based to be less accurate. InFIG.13, victim feedforward signal values1300are calculated for a victim head based on the assumption that there is no timing shift, timing offset, or other timing relationship difference between the current victim head and the aggressor head that affects the accuracy of victim feedforward kernel1102. 
However, in the instance illustrated inFIG.13, there is a timing offset1360that is indicated by a horizontal displacement between an assumed time that the aggressor head crosses wedge N and an actual time the aggressor head crosses wedge N (or alternatively, a timing offset between an assumed time that the victim head crosses a particular wedge and an actual time that the victim head crosses that wedge). Thus, the aggressor head crosses each servo wedge earlier than assumed by victim feedforward kernel1102. Similarly, inFIG.13there is a timing offset1361between an assumed time that the aggressor head crosses wedge N−1 and an actual time the aggressor head crosses wedge N−1, a timing offset1362between an assumed time that the aggressor head crosses wedge N−2 and an actual time the aggressor head crosses wedge N−2, a timing offset1363between an assumed time that the aggressor head crosses wedge N−3 and an actual time the aggressor head crosses wedge N−3, and a timing offset1364between an assumed time that the aggressor head crosses wedge N−4 and an actual time the aggressor head crosses wedge N−4. In the instance illustrated inFIG.13, timing offset1360, timing offset1361, timing offset1362, timing offset1363, and timing offset1364are substantially equal in magnitude, but in other embodiments, each timing offset can have a different magnitude for each servo wedge. According to various embodiments, victim feedforward signal interpolator1103determines each interpolated victim feedforward signal value1340based on victim feedforward signal values1300and a corresponding timing offset. In some embodiments, an interpolated victim feedforward signal value1340for a particular servo wedge is linearly interpolated from the victim feedforward signal value1300determined for that servo wedge and the victim feedforward signal value1300determined for an adjacent servo wedge. In such embodiments, the magnitude of the timing offset at that particular servo wedge indicates where along the interpolation function the particular interpolated victim feedforward signal value1340lies. In an embodiment illustrated inFIG.13, interpolated victim feedforward signal values1340are linearly interpolated from victim feedforward signal values1300based on timing offsets1360-1364. In other embodiments, an interpolated victim feedforward signal value1340for a particular servo wedge is determined using any other technically feasible interpolation between the victim feedforward signal value1300for the servo wedge and the victim feedforward signal value1300for an adjacent servo wedge. Further, as described above for VCM command interpolator1101, in some embodiments, victim feedforward signal interpolator1103is configured for operating in conjunction with a multi-rate control system, and therefore generates multiple interpolated victim feedforward signal values1340for each servo wedge.
Using Interpolated Victim Feedforward Signal for Controlling Victim Head Position
FIG.14sets forth a flowchart of method steps for controlling magnetic head position in a multi-actuator HDD, according to an embodiment. In some embodiments, the method steps are performed in HDD100during normal operation of HDD100.
Although the method steps are described in conjunction with HDD100ofFIGS.1-13, persons skilled in the art will understand that the method steps may be performed with other types of systems. The control algorithms for the method steps may reside in microprocessor-based controller133, motor-driver chip125, or a combination of both. The control algorithms can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. Prior to the method steps, values for a victim feedforward generator783of HDD100are determined, such as via method1600ofFIG.16. For example, in an embodiment in which HDD100includes N read/write heads127, where N is a positive integer, at least N sets of values are determined and stored for use by victim feedforward generator783. In the embodiment, each set of values corresponds to a kernel that is derived from a feedforward transfer function for a different read/write head127. Thus, victim feedforward generator783can generate a different victim feedforward signal to a victim microactuator depending on which of the N read/write heads127of HDD100is currently the victim head. In such embodiments, the values for victim feedforward generator783for each of the N victim read/write heads127can be determined using the same base aggressor head. Therefore, in such embodiments, N sets of values are determined and stored for use by victim feedforward generator783. In some embodiments, prior to the method steps, M sets of values, where M is a positive integer, are determined and stored for use by victim feedforward generator783for each of the N read/write heads127. For example, in one such embodiment, each of the M different sets of values for a particular read/write head127corresponds to a different temperature range in which HDD100may operate and for which a different victim feedforward transfer function is applicable. Thus, in the embodiment, victim feedforward generator783can generate M different victim feedforward signals1003for a single read/write head127, depending on the temperature range in which HDD100is operating at the time. Alternatively or additionally, in some embodiments, prior to the method steps, K sets of values, where K is a positive integer, are determined and stored for use by victim feedforward generator783for each of the N read/write heads127. For example, in one such embodiment, each of the K different sets of values for a particular read/write head127corresponds to a different radial location of the victim head. Thus, in the embodiment, victim feedforward generator783can generate K different victim feedforward signals1003for a single read/write head127, depending on the radial location of the victim head at the time. Further, in some embodiments, victim feedforward generator783can generate K×M different victim feedforward signals1003for a single read/write head127, depending on the radial location of the victim head and the temperature range in which HDD100is operating at the time. A method1400begins at step1401, when a suitable controller (e.g., microprocessor-based controller133and/or motor-driver chip125) determines a specific set of values to be retrieved from a memory by victim feedforward generator783to generate an appropriate victim feedforward signal1003. For purposes of discussion below, the specific set of values to be retrieved from a memory by victim feedforward generator783to generate an appropriate victim feedforward signal will be referred to as a victim feedforward kernel. 
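For illustration, the following Python sketch shows one possible organization of the stored kernel value sets as a table indexed by victim head, temperature zone, and radial zone, along with a lookup of the kind performed in step1401; the table sizes, zone boundaries, and all names are hypothetical and are not taken from HDD100.

import numpy as np

# Illustrative sizes: N heads, M temperature zones, K radial zones, J kernel taps.
N_HEADS, M_TEMP_ZONES, K_RADIAL_ZONES, J_TAPS = 8, 4, 3, 32

# Hypothetical table of calibrated kernel value sets, one set per
# (victim head, temperature zone, radial zone) combination.
kernel_table = np.zeros((N_HEADS, M_TEMP_ZONES, K_RADIAL_ZONES, J_TAPS))

def select_victim_kernel(victim_head, temperature_c, radius_fraction,
                         temp_edges=(-5.0, 5.0, 15.0, 25.0, 35.0),
                         radial_edges=(0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0)):
    """Return the stored kernel values for the current victim head, operating
    temperature, and radial location (zone boundaries are illustrative only)."""
    temp_zone = int(np.clip(np.searchsorted(temp_edges, temperature_c) - 1,
                            0, M_TEMP_ZONES - 1))
    radial_zone = int(np.clip(np.searchsorted(radial_edges, radius_fraction) - 1,
                              0, K_RADIAL_ZONES - 1))
    return kernel_table[victim_head, temp_zone, radial_zone]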
The controller then retrieves the specific set of values from the appropriate memory of HDD100. The controller determines the specific set of values based on which of the N read/write heads127of HDD100is currently designated to be the victim head. Additionally, in some embodiments, the controller determines the specific set of values further based on which of M different predetermined temperature ranges HDD100is currently operating in. Additionally or alternatively, in some embodiments, the controller determines the specific set of values further based on which of K different radial locations the victim head currently occupies. Thus, in some embodiments, the controller determines the specific set of values from N different sets of values; in other embodiments, the controller determines the specific set of values from N×M different sets of values; in other embodiments, the controller determines the specific set of values from N×K different sets of values; and in yet other embodiments, the controller determines the specific set of values from N×M×K different sets of values. In yet other embodiments, the controller may determine the set of values based upon a combination of one or more of the up to N×M×K different sets of values, using interpolation between two or more sets of values, based upon temperature range, victim location, or other operating parameters. In some embodiments, all of the disk surfaces associated with actuator arm assembly120A (for example, recording surfaces112A-112D inFIG.2) are servo-written in such a manner that the servo samples are aligned in time. In other words, a servo wedge on surface112A passes under read/write head227A at about the same time that a servo wedge on surface112B passes under read/write head227B, and so on. For this reason, the timing of commands sent to VCM128A (as part of the response of first servo controller715to the measured position of any read/write head associated with that VCM) should be relatively independent of which read/write head is currently under servo control. In such a case, coordination of fractional-wedge timing of an aggressor actuator and a victim actuator may not be employed for generating a victim feedforward signal. In step1402, the controller receives or determines a radial position generated by the victim head, such as radial position1005, and a radial position generated by the aggressor head. Generally, the controller receives or determines the position signal generated by the victim head as the victim head passes over a servo wedge N on a recording surface112of HDD100associated with the victim head. Similarly, the controller receives or determines the position signal generated by the aggressor head as the aggressor head passes over the servo wedge N on a recording surface112of HDD100associated with the aggressor head. The position signal for the victim head is employed to enable the victim servo loop to function, and the position signal for the aggressor head is employed to enable the aggressor servo loop to function. In this description, it is assumed that the aggressor and victim pass over wedge-number N at about the same time. While this may be the case in some embodiments, in other embodiments the aggressor head may pass over wedge N at about the time that victim head passes over wedge P. In step1411, the controller generates an aggressor VCM control signal for moving the aggressor actuator of HDD100. 
The controller generates the aggressor VCM control signal based on the aggressor head position signal received in step1402. In instances in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the controller generating the aggressor VCM control signal corresponds to first servo controller715inFIG.7. In step1412, the controller generates a microactuator control signal for the aggressor head based on the radial position received or determined in step1402. In optional step1415, the aggressor VCM control signal generated in step1411and/or the microactuator control signal for the aggressor head generated in step1412is modified with one or more notch filters, such as notch filters1011or1012. For example, in an embodiment, a notch filter modifies the microactuator signal for the aggressor head to produce a filtered microactuator signal for the aggressor microactuator. In some embodiments, the microactuator control signal for the aggressor head passes through all portions of the notch filter, whereas in other embodiments, the microactuator control signal for the aggressor head does not pass through all filtering portions of the notch filter. Alternatively or additionally, in some embodiments, the aggressor VCM control signal for the aggressor head is processed by a notch filter. In the embodiment of method1400described herein, the controller operates using control diagram1000ofFIG.10A. In other embodiments, the controller can operate using control diagram1050ofFIG.10B, in which the modification of signals with notch filters can occur in a different step than step1415. In step1416, the controller asserts the aggressor VCM control signal and the microactuator control signal. Thus, the aggressor VCM control signal is asserted by the aggressor actuator (e.g., VCM128A) and the microactuator control signal for the aggressor head is asserted by the aggressor microactuator (e.g., one of microactuators228A-D). Generally, the aggressor VCM control signal is applied to the aggressor actuator and the microactuator control signal for the aggressor head is applied to the aggressor microactuator prior to the aggressor head passing over the next servo wedge. For example, when the controller receives or determines the position signal in step1402immediately after the aggressor head passes over a servo wedge N, in step1416, the controller asserts the aggressor VCM control signal and the microactuator control signal prior to the aggressor head passing over servo wedge N+1. In this way, the radial position and/or velocity profile of the aggressor head is modified prior to the aggressor head passing over servo wedge N+1. In some embodiments, the controller asserts multiple aggressor VCM control signals and the microactuator control signals in step1416when the servo system for the aggressor actuator is configured as a multi-rate control system. In step1417, the aggressor head generates another position signal as the aggressor head passes over the next servo wedge. Upon completion of step1417, method1400returns back to step1402. It is noted that in some embodiments, step1428occurs substantially concurrently with step1417. That is, in such embodiments, the victim head passes over the next servo wedge on the recording surface associated with the victim head at approximately the same time that the aggressor head passes over the next servo wedge on the recording surface associated with the aggressor head.
Therefore, in such embodiments, the victim head generates another position signal for the victim head in step1428at approximately the same time that the aggressor head generates another position signal for the aggressor head in step1417. In such embodiments, each servo wedge on the recording surface associated with the aggressor head is circumferentially aligned with a respective servo wedge on the recording surface associated with the victim head. Step1430is performed upon completion of step1415, in which an aggressor VCM control signal or signals for moving the aggressor actuator of HDD100is modified by (for example) notch filter1012. In step1430, the controller determines an interpolated aggressor VCM command(s), such as interpolated VCM command values1240inFIG.12B. For example, in an embodiment, VCM command interpolator1101determines a single interpolated aggressor VCM command1240for servo wedge N when the servo system for the victim actuator is configured as a single-rate control system. In another example, VCM command interpolator1101determines multiple interpolated aggressor VCM commands1240for servo wedge N when the servo system for the victim actuator is configured as a multi-rate control system. In some embodiments, the interpolated aggressor VCM commands1240for servo wedge N are determined based on the modified aggressor VCM command generated in step1415. In alternative embodiments, the input value for step1430is provided by step1411instead of step1415. In such embodiments, the interpolated aggressor VCM commands1240for servo wedge N are determined in step1430based on the aggressor VCM command(s) that are generated by the controller for moving the aggressor actuator of HDD100. In some embodiments, in instances in which there is no significant timing relationship difference between the current victim head and the aggressor head, interpolated aggressor VCM commands1240are not determined, and the aggressor VCM command(s) that are generated by the controller for moving the aggressor actuator of HDD100are employed in steps1431and1432. In step1431, the controller stores the aggressor VCM command(s) for servo wedge N in a memory of HDD100. For example, in an instance in which VCM128A is the aggressor actuator, the controller stores the aggressor VCM command(s) for servo wedge N in aggressor VCM command buffer774for first servo controller715. In an instance in which there is a significant timing offset between the current victim head and the aggressor head, the controller stores the interpolated aggressor VCM commands1240that are generated in step1430. In instances in which there is no significant timing offset between the current victim head and the aggressor head, the controller stores the aggressor VCM command(s) that are generated by the controller in step1411for moving the aggressor actuator of HDD100. In step1432, the controller generates a victim feedforward signal or signals for servo wedge N+W, where W is the wedge offset number for HDD100. The controller generates the victim feedforward signal or signals based on the aggressor VCM control signal(s) stored in step1431and on the set of values selected in step1401. The set of values is derived from the appropriate feedforward transfer function for the victim head.
Generally, the feedforward transfer function models commands to be added to the microactuator control signal as a function of the aggressor VCM control signal generated in step1411and of recent previous values of that aggressor VCM control signal, e.g., the J most recent values of the aggressor VCM control signal. In some embodiments, the J most recent values of the aggressor VCM control signal are stored in a memory of HDD100, such as aggressor VCM command buffer774or784. As noted above, in instances in which there is a significant timing relationship difference between the current victim head and the aggressor head, the aggressor VCM control signals employed in step1432are the interpolated aggressor VCM commands1240that are generated in step1430. In some embodiments, for a particular read/write head127, victim feedforward signal1003is generated using the feedforward kernel for that particular read/write head127. In one such embodiment, a value for the victim feedforward signal is calculated using Equation 1:
victimFF(j) = Σ_{k=W}^{J−1} Kernel(k)*VCMCMD(j−k)   (1)
where victimFF(j) is the jth (current) value of victim feedforward signal1003for the particular read/write head127, Kernel(k) is the kth kernel element, VCMCMD(j) is the jth VCM-CMD, and W is the wedge offset number for HDD100. According to various embodiments, wedge offset number W is an integer greater than 0. As a result, the first W victim feedforward kernel values are effectively forced to be zero. The integer J is the number of kernel elements that are included in the kernel for the particular read/write head127. In some embodiments, J is selected so that kernel elements past Kernel(J−1) are very small. That is, generally, Kernel(j) begins with a particular magnitude (that can vary significantly from one sample to the next), and, over time, the magnitude of Kernel(j) gradually gets smaller (though possibly with increasing and decreasing oscillations). Thus, once sample #J is reached, Kernel(j) typically approaches zero. For the case of a multi-rate control system (for example one in which the VCM control signal743is updated at twice the rate at which the victim's read/write head position1005is determined), the formula would be extended to Equation 2:
victimFF(j) = Σ_{k=W}^{J−1} Kernel0(k)*VCMCMD0(j−k) + Σ_{k=W}^{J−1} Kernel1(k)*VCMCMD1(j−k)   (2)
where VCMCMD0(j) is the jth value of the first VCM-CMD of each servo sample, VCMCMD1(j) is the jth value of the second VCM-CMD of each servo sample, Kernel0(k) is the kth kernel element (applied to the first VCM-CMDs), and Kernel1(k) is the kth kernel element (applied to the second VCM-CMDs). (A minimal code sketch of these sums appears after this passage.) In step1433, the controller stores the victim feedforward signal generated in step1432for servo wedge N+W in a memory of HDD100. For example, in an instance in which VCM128A is the aggressor actuator, the controller stores the victim feedforward signal in victim feedforward value buffer785for second servo controller716. The controller further stores a corresponding servo wedge number1002that indicates a servo wedge that is offset from servo wedge N by the wedge offset value W. In step1421, the controller generates a victim VCM control signal for moving the victim actuator of HDD100. The controller generates the victim VCM control signal based on the victim head position signal received in step1402. In instances in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the controller generating the victim VCM control signal corresponds to second servo controller716inFIG.7.
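The following Python sketch implements the sums of Equations (1) and (2) directly, assuming the most recent aggressor VCM commands are available in a history buffer ordered from newest to oldest; the buffer layout and all names are assumptions made for illustration.

def victim_ff_single_rate(kernel, vcm_cmd_history, wedge_offset):
    """Equation (1): victimFF(j) = sum over k = W..J-1 of Kernel(k)*VCMCMD(j-k).

    kernel:          the J kernel elements for the current victim head.
    vcm_cmd_history: recent aggressor VCM commands, with vcm_cmd_history[k]
                     holding VCMCMD(j-k), i.e., index 0 is the newest sample.
    wedge_offset:    the wedge offset number W (an integer greater than 0).
    """
    J = len(kernel)
    return sum(kernel[k] * vcm_cmd_history[k] for k in range(wedge_offset, J))

def victim_ff_multi_rate(kernel0, kernel1, cmd0_history, cmd1_history, wedge_offset):
    """Equation (2): the same sum extended to a 2x multi-rate VCM control signal,
    with separate kernels applied to the first and second VCM-CMD of each sample."""
    return (victim_ff_single_rate(kernel0, cmd0_history, wedge_offset)
            + victim_ff_single_rate(kernel1, cmd1_history, wedge_offset))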
In step1422, the controller generates a microactuator control signal for the victim head based on the radial position received or determined in step1402. In step1423, in an instance in which there is no significant timing relationship difference between the current victim head and the aggressor head, the controller retrieves a suitable victim feedforward entry901for servo wedge N from a memory of HDD100, such as victim feedforward value buffer785. The controller determines the suitable victim feedforward entry901to retrieve based on servo wedge numbers902. In some embodiments, each servo wedge number902indicates a respective servo wedge of a recording surface associated with the aggressor actuator. In such embodiments, the controller selects the victim feedforward entry901associated with the servo wedge number902in victim feedforward value buffer785that indicates a servo wedge of the recording surface associated with the aggressor actuator that is offset from the servo wedge N by the preset wedge offset value W, where servo wedge N is the servo wedge from which the position signal in step1402is generated. Thus, in such embodiments, the controller selects the victim feedforward entry901that is generated based on servo wedge N−W of the recording surface associated with the aggressor actuator. For example, when the preset wedge offset value is two, the controller selects the victim feedforward entry901associated with the servo wedge number902that indicates servo wedge N−2 of the recording surface associated with the aggressor actuator. Alternatively, in some embodiments, each servo wedge number902indicates a respective servo wedge of a recording surface associated with the victim actuator. In such embodiments, the controller selects the victim feedforward entry901associated with the servo wedge number902in victim feedforward value buffer785that indicates the servo wedge from which the position signal in step1402is generated, for example, servo wedge N of the recording surface associated with the victim actuator. Alternatively, in an instance in which there is a significant timing offset between the current victim head and the aggressor head, in step1423the controller retrieves two suitable victim feedforward entries901from the memory of HDD100. Specifically, the controller retrieves the victim feedforward entry901for servo wedge N and the victim feedforward entry901for a servo wedge that is adjacent to servo wedge N. It is noted that the victim feedforward entry901for servo wedge N is generated based on servo wedge N−W of the recording surface associated with the aggressor actuator. When the value of the timing offset between the victim head and the aggressor head is positive, and therefore the aggressor head crosses servo wedge N before the current victim head crosses servo wedge N, the servo wedge that is adjacent to servo wedge N is servo wedge N+1. Conversely, when the value of the timing offset between the victim head and the aggressor head is negative, and therefore the aggressor head crosses servo wedge N after the current victim head crosses servo wedge N, the servo wedge that is adjacent to servo wedge N is servo wedge N−1. As described above, the controller determines the suitable victim feedforward entries901to retrieve for servo wedge N based on servo wedge numbers902. 
In step1424, in an instance in which there is a significant timing relationship difference between the current victim head and the aggressor head, the controller determines an interpolated victim feedforward signal value1340for servo wedge N, for example via victim feedforward signal interpolator1103. In some embodiments, the interpolated victim feedforward signal value1340is based on the two victim feedforward entries901retrieved for servo wedge N and an adjacent wedge in step1423. Alternatively, in an instance in which there is no significant timing relationship difference between the current victim head and the aggressor head, no interpolated victim feedforward signal value1340is determined for servo wedge N in step1424. Instead, step1424is not performed. In step1425, the controller combines the victim microactuator control signal and a victim feedforward signal to produce a corrected microactuator signal. In an instance in which there is a significant timing relationship difference between the current victim head and the aggressor head, in step1425the controller combines the victim microactuator control signal and an interpolated victim feedforward signal to produce the corrected microactuator signal. Alternatively, in an instance in which there is no significant timing offset between the current victim head and the aggressor head, in step1425the controller combines the victim microactuator control signal and a victim feedforward signal generated in step1432to produce the corrected microactuator signal. In optional step1426, the victim VCM control signal generated in step1421and/or the corrected victim microactuator control signal generated in step1425is modified via one or more notch filters, such as notch filters1021or1022. For example, in an embodiment, a notch filter modifies the corrected microactuator signal for the victim head to produce a filtered corrected microactuator signal for the victim microactuator. In some embodiments, the corrected microactuator control signal for the victim head passes through all portions of the notch filter, whereas in other embodiments, the filtered corrected microactuator control signal for the victim head does not pass through all filtering portions of the notch filter. Alternatively or additionally, in some embodiments, the victim VCM control signal for the victim head is processed by a notch filter. In the embodiment of method1400described herein, the controller operates using control diagram1000ofFIG.10A. In other embodiments, the controller can operate using control diagram1050ofFIG.10B, in which the modification of signals with notch filters can occur in a different step than step1426. In step1427, the controller asserts the victim VCM control signal and the filtered corrected microactuator control signal for servo wedge N. Thus, the victim VCM control signal is applied to the victim actuator (e.g., VCM128B) and the filtered corrected microactuator control signal for the victim head is applied to the victim microactuator (e.g., one of microactuators228E-H). Generally, the victim VCM control signal is applied to the victim actuator and the filtered corrected microactuator control signal for the victim head is applied to the victim microactuator prior to the victim head passing over the next servo wedge.
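For illustration, the following Python sketch combines steps1423through1425for a single servo wedge, assuming the buffered victim feedforward values are held in a mapping keyed by servo wedge number and that the timing offset is expressed as a signed fraction of the wedge interval; all names and conventions here are hypothetical, not taken from HDD100.

def corrected_victim_microactuator_signal(ff_buffer, wedge_n, offset_fraction,
                                          victim_microactuator_cmd):
    """Combine the buffered victim feedforward value(s) for servo wedge N with the
    victim microactuator control signal (a sketch of steps 1423 through 1425).

    ff_buffer:       mapping of servo wedge number to a buffered feedforward value.
    offset_fraction: signed timing offset between the victim head and the aggressor
                     head, as a fraction of the wedge interval; 0.0 means no
                     significant timing relationship difference.
    """
    ff_value = ff_buffer[wedge_n]
    if offset_fraction != 0.0:
        # Positive offset: the aggressor head crosses servo wedge N before the
        # victim head does, so the adjacent entry is the one for wedge N+1;
        # negative offset: the adjacent entry is the one for wedge N-1.
        adjacent = wedge_n + 1 if offset_fraction > 0.0 else wedge_n - 1
        frac = abs(offset_fraction)
        ff_value = (1.0 - frac) * ff_value + frac * ff_buffer[adjacent]
    return victim_microactuator_cmd + ff_value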
For example, when the controller receives or determines the position signal in step1402immediately after the victim head passes over servo wedge N, in step1427, the controller asserts the victim VCM control signal and the filtered corrected microactuator control signal prior to the victim head passing over servo wedge N+1. In this way, the radial position and/or velocity profile of the victim head is modified prior to the victim head passing over servo wedge N+1. In step1428, the victim head generates another position signal as the victim head passes over the next servo wedge. Upon completion of step1428, method1400returns back to step1402. Implementation of method1400enables a suitable victim feedforward signal to be determined and added to a microactuator for a victim head, thereby reducing or eliminating the effect of aggressor actuator motion on the positioning accuracy of the victim head.
Determining Timing Relationship Differences for Feedforward Signal Generation
As noted previously, timing relationship differences between different victim and aggressor heads can occur over the life of a disk drive. Further, such timing relationship differences can be a significant fraction of the time interval required for a read/write head to move across the circumferential separation between adjacent servo wedges, and can significantly affect the accuracy of certain victim disturbance feedforward control schemes. Consequently, to compensate for such timing relationship differences, timing shifts for each read/write head of a drive are measured at certain times over the lifetime of a disk drive. The measured timing shifts can then be applied by VCM command interpolator1101for interpolation of VCM commands for an aggressor head and/or victim feedforward signal interpolator1103for interpolation of a victim feedforward signal for a victim head. One such embodiment is described below in conjunction withFIG.15. FIG.15sets forth a flowchart of method steps for measuring timing shifts for each read/write head of a drive, according to an embodiment. Although the method steps are described in conjunction with HDD100ofFIGS.1-14, persons skilled in the art will understand that the method steps may be performed with other types of systems. The control algorithms for the method steps may reside in microprocessor-based controller133, motor-driver chip125, or a combination of both. The control algorithms can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. A method1500begins at step1501, when a suitable controller (e.g., microprocessor-based controller133and/or motor-driver chip125) determines a victim feedforward kernel1102for each read/write head of HDD100. Example embodiments for the determination of the feedforward transfer function for victim feedforward kernel1102are described below in conjunction withFIGS.16-18. In step1502, the controller determines timing differences between the read/write heads of HDD100. Thus, for a first actuator (e.g., VCM128A), the controller measures timing-differences between one specific read/write head on a second actuator (e.g., VCM128B) and each read/write head on the first actuator.
Thus, when the chosen head on the second actuator is (for example) head number 1 (from the multiple read/write heads of the second actuator), then the differences are represented by a set of values TD_b_1k, where k is in the range {1:K} and K is the number of servo wedges per revolution, and b is in the range {1:B} and B is the total number of read/write heads of the first actuator. Similarly, for the second actuator, the controller measures timing-differences between one specific read/write head on the first actuator and each read/write head on the second actuator. Thus, when the chosen head on the first actuator is (for example) head number 1 (from the multiple read/write heads of the first actuator), then the differences are represented by a set of values TD_1_ak, where a is in the range {1:A} and A is the total number of read/write heads of the second actuator. Set of values TD_1_akand set of values TD_b_1k, include sufficient information to define the timing differences between any pair of read/write head of HDD100(either cross-actuator head pairs or same-actuator head pairs). In some embodiments, the timing difference values included in a particular set of values TD_1_akor set of values TD_b_1kcan be determined while servoing simultaneously on the pair of heads associated with the particular set of values being measured. In such embodiments, the timing difference values are measured via any technically feasible timing approach. For example, in some embodiments, a disk-synchronized or free-running clock included in HDD100can be employed to detect when a specific event occurs at a particular servo wedge for each head, such as when each head passes a rising edge, sync mark, or other feature of that particular servo wedge. Because such clocks generally include 1000 or more counts per servo wedge, in such embodiments the timing difference values so measured can indicate timing differences between two heads with high granularity. In step1503, the controller determines representative timing offset components for each pair of read/write heads of HDD100. In some embodiments, the controller determines, for each pair of read/write heads of HDD100, an average offset value (referred to herein as TD_AVG_b_1 for the read/write heads of the first actuator and TD_AVG_1_a for the read/write heads of the second actuator), an “in-phase” component of sinusoidal variation at the spin-speed (referred to herein as TD_COS_b_1 for the read/write heads of the first actuator and TD_COS_1_a for the read/write heads of the second actuator), and a “quadrature” component of sinusoidal variation at the spin-speed (referred to herein as TD_SIN_b_1 for the read/write heads of the first actuator and TD_SIN_1_a for the read/write heads of the second actuator). In such embodiments, the average offset value, the in-phase component, and the quadrature component for a particular pair of read/write heads of HDD100are determined based on a timing difference data set associated with that particular pair of read/write heads. For example, such a data set includes a measured timing difference value (measured in step1502) for K servo wedges. These representative timing offset components are the base timing difference values for each pair of read/write heads of HDD100. In the embodiment described above, timing differences for a particular pair of read/write heads is assumed to vary sinusoidally once per revolution of K servo wedges of a recording surface. 
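For illustration, the following Python sketch reduces the K per-wedge timing differences measured for one head pair to the average, in-phase, and quadrature components described in step1503using a first-harmonic Fourier fit; the zero-based wedge indexing and the 2/K scaling convention are assumptions of this sketch, and the names are not taken from HDD100.

import numpy as np

def timing_offset_components(td_per_wedge):
    """Reduce K per-wedge timing differences for one head pair to an average
    offset plus in-phase and quadrature components of a once-per-revolution
    sinusoidal variation (a first-harmonic Fourier fit)."""
    td = np.asarray(td_per_wedge, dtype=float)
    K = len(td)
    phase = 2.0 * np.pi * np.arange(K) / K
    td_avg = td.mean()
    td_cos = 2.0 / K * np.sum(td * np.cos(phase))   # "in-phase" component
    td_sin = 2.0 / K * np.sum(td * np.sin(phase))   # "quadrature" component
    return td_avg, td_cos, td_sin

def reconstruct_timing_offsets(td_avg, td_cos, td_sin, K):
    """Evaluate the fitted offset model at each of the K servo wedges."""
    phase = 2.0 * np.pi * np.arange(K) / K
    return td_avg + td_cos * np.cos(phase) + td_sin * np.sin(phase)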
In other embodiments, the timing differences for a particular pair of read/write heads can be assumed to vary sinusoidally multiple times per revolution of K servo wedges of a recording surface. In such embodiments, additional representative timing offset components may be computed in addition to the above-described average offset value, in-phase component value, and quadrature component value. In step1504, the controller determines whether a calibration condition has been met. For example, in some embodiments, a calibration condition can be the lapsing of a specific time period or the occurrence of a specific event (such as system start up, system power down, detection of a shock, determination that a specific performance threshold has not been met, and the like). In some embodiments, a calibration condition can be the determination that one or more timing shifts of a sufficient magnitude have been experienced by one or more read/write heads. In such embodiments, timing differences are measured passively by the controller during normal operation of HDD100. If a calibration condition has been met, method1500proceeds to step1505; if no calibration condition has been met, method1500returns to step1504. In step1505, the controller determines timing differences between the read/write heads of HDD100. In some embodiments, procedures described above in step1502can be performed. In such embodiments, new values for set of values TD_1_akand set of values TD_b_1kare determined. Alternatively, in some embodiments, timing differences are measured passively by the controller during normal operation of HDD100. In such embodiments, step1505is not performed. In such embodiments, timing differences between whichever read/write heads of HDD100are currently being used can be measured using procedures described above in step1502to determine the requisite values for set of values TD_1_akand set of values TD_b_1k. In step1506, the controller determines timing shifts of each read/write head of HDD100. In some embodiments, such timing shifts are determined based on the information included in set of values TD_1_akand set of values TD_b_1k, which is collected in step1505. For example, in an embodiment, the base values in set of values TD_1_akand set of values TD_b_1kare subtracted from the values collected in step1505, thereby determining ΔTD_1_akand ΔTD_b_1k. In some embodiments, a set of simultaneous equations can be constructed and solved to provide values for the absolute time shift of each read/write head of each actuator using ΔTD_1_akand ΔTD_b_1k. In such embodiments, values indicating the absolute time shift of each read/write head of each actuator are represented by: ΔT_AVG_ACT1_b and ΔT_AVG_ACT2_a, ΔT_COS_ACT1_b and ΔT_COS_ACT2_a, and ΔT_SIN_ACT1_b and ΔT_SIN_ACT2_a. With the introduction of one additional constraint, one of skill in the art can apply any suitable method to solve the above-described simultaneous equations to determine the values of ΔT_AVG_ACT1_b and ΔT_AVG_ACT2_a, ΔT_COS_ACT1_b and ΔT_COS_ACT2_a, and ΔT_SIN_ACT1_b and ΔT_SIN_ACT2_a. For example, in some embodiments, the additional constraint can be that the average of all of the values in the set of values ΔT_AVG_ACT1_b and all of the values in the set of values ΔT_AVG_ACT2_a is zero. Alternatively, in some embodiments, the additional constraint can be that the sum of all of the values in the set of values ΔT_AVG_ACT1_b and all of the values in the set of values ΔT_AVG_ACT2_a is zero.
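As one illustrative possibility only, the following Python sketch solves such a set of simultaneous equations for the average-offset component using a least-squares formulation with the zero-sum constraint just described; the sign conventions, zero-based indexing, and function name are assumptions and are not taken from HDD100.

import numpy as np

def absolute_average_shifts(delta_td_b_1, delta_td_1_a):
    """Recover an absolute average time shift for every head of both actuators
    from the pairwise changes, for the average-offset component only, using the
    extra constraint that the sum of all head shifts is zero.

    delta_td_b_1[b]: change in timing difference between head b of the first
                     actuator and head 1 of the second actuator (0-based index).
    delta_td_1_a[a]: change in timing difference between head 1 of the first
                     actuator and head a of the second actuator (0-based index).
    """
    B, A = len(delta_td_b_1), len(delta_td_1_a)
    n = B + A                      # unknowns: actuator-1 head shifts, then actuator-2 head shifts
    rows, rhs = [], []
    for b in range(B):             # shift1[b] - shift2[0] = delta_td_b_1[b]
        row = np.zeros(n)
        row[b], row[B] = 1.0, -1.0
        rows.append(row)
        rhs.append(delta_td_b_1[b])
    for a in range(A):             # shift1[0] - shift2[a] = delta_td_1_a[a]
        row = np.zeros(n)
        row[0], row[B + a] = 1.0, -1.0
        rows.append(row)
        rhs.append(delta_td_1_a[a])
    rows.append(np.ones(n))        # additional constraint: sum of all shifts is zero
    rhs.append(0.0)
    solution, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(rhs), rcond=None)
    return solution[:B], solution[B:]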
Alternatively, in some embodiments, the additional constraint can be provided by minimizing the absolute timing offset of the worst timing offset of the drive. Thus, in such embodiments, the largest-magnitude value in the two sets of values ΔT_AVG_ACT1_b and ΔT_AVG_ACT2_a is minimized. Alternatively, in some embodiments, in step1506, the controller determines timing shifts of each read/write head of HDD100based on an absolute timing reference. For example, in such embodiments, a timing difference for each read/write head of HDD100can be determined based on a timing difference between rising edges of a servo gate signal and an edge of a back-EMF (electro-motive force) zero crossing signal from spindle motor114. In step1507, the controller updates VCM command interpolator1101and victim feedforward signal interpolator1103. For example, in some embodiments, the values of ΔT_AVG_ACT1_b and ΔT_AVG_ACT2_a, ΔT_COS_ACT1_b and ΔT_COS_ACT2_a, and ΔT_SIN_ACT1_b and ΔT_SIN_ACT2_a are used by VCM command interpolator1101for generating the timing offsets that VCM command interpolator1101employs to interpolate VCM command values, such as VCM command values1200. Specifically, in such embodiments, these values are added to the base values of TD_AVG_ACT1_b, TD_AVG_ACT2_a, TD_COS_ACT1_b, TD_COS_ACT2_a, TD_SIN_ACT1_b, and TD_SIN_ACT2_a, and the timing offsets for interpolating VCM command values are generated. Additionally or alternatively, in some embodiments, the values of ΔT_AVG_ACT1_b and ΔT_AVG_ACT2_a, ΔT_COS_ACT1_b and ΔT_COS_ACT2_a, and ΔT_SIN_ACT1_b and ΔT_SIN_ACT2_a are used by victim feedforward signal interpolator1103for generating the timing offsets that victim feedforward signal interpolator1103employs to interpolate values of a victim feedforward signal, such as victim feedforward signal1003. Method1500then returns to step1504.
Determination of Victim Feedforward Signal Transfer Function
FIG.16sets forth a flowchart of method steps for determining values for a victim feedforward generator in a multi-actuator HDD, according to an embodiment. In the embodiment, a different set of values for the victim feedforward transfer function is determined for each magnetic head of the multi-actuator HDD. In some embodiments, the method steps are performed in HDD100as part of a calibration/start-up process. Although the method steps are described in conjunction with HDD100ofFIGS.1-10B, persons skilled in the art will understand that the method steps may be performed with other types of systems. The control algorithms for the method steps may reside in microprocessor-based controller133, motor-driver chip125, or a combination of both. The control algorithms can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. A method1600begins at step1601, when a suitable controller (i.e., microprocessor-based controller133and/or motor-driver chip125) selects a victim read/write head127from the read/write heads associated with VCM128A or VCM128B. The selected read/write head127and associated actuator (either VCM128A or VCM128B) are then designated as the victim head and the victim actuator, respectively, while another actuator is designated as the aggressor actuator. In step1602, the controller determines a first transfer function that models the radial position of the victim head as a function of a control signal applied to the aggressor actuator.
In some embodiments, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the first transfer function models radial position1005of the currently active read/write head associated with VCM128B as a function of VCM control signal743applied to VCM128A. In one such embodiment, the first transfer function can model radial position1005as a function of filtered VCM control signal743, which is VCM control signal743after passing through notch filter1012. Alternatively, in another embodiment, the first transfer function models radial position1005as a function of VCM control signal1047prior to being processed by notch filter1012. One process by which the controller determines the first transfer function is described below in conjunction withFIG.17. In step1603, the controller determines a second transfer function that models the radial position of the victim head as a function of a feedforward signal added to the control signal that is applied to a microactuator228and/or229for positioning the victim head. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, in some embodiments the second transfer function models radial position1005of the currently active read/write head associated with VCM128B as a function of victim feedforward signal1003for microactuator1028B. One process by which the controller determines the second transfer function is described below in conjunction withFIG.18. In step1604, the controller determines a feedforward transfer function for the current victim head. The feedforward transfer function models a feedforward correction signal for the victim head as a function of a control signal supplied to the aggressor actuator. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, in some embodiments the feedforward transfer function models victim feedforward signal1003(the output of the feedforward transfer function) as a function of filtered VCM control signal743for VCM128A (the input of the feedforward transfer function). In some embodiments, the controller determines a feedforward transfer function for the current victim head based on a ratio of the first transfer function determined for the victim head in step1602and the second transfer function determined for the victim head in step1603. So that the feedforward transfer function substantially cancels the effect of filtered VCM control signal743, the feedforward transfer function is multiplied by −1. In step1605, the controller generates a victim feedforward kernel for the current victim head. In some embodiments, the victim feedforward kernel is based on the feedforward transfer function determined in step1604. For example, in one such embodiment, the controller generates a plurality of values for the victim feedforward kernel for the current victim head by determining an inverse discrete Fourier transform of values associated with the first transfer function. The controller then stores the plurality of values for the victim feedforward kernel in a memory of HDD100, such as RAM134and/or flash memory device135. Alternatively or additionally, the plurality of values can be programmed into one or more control algorithms of HDD100. In step1606, the controller determines whether there are any remaining read/write heads127in HDD100for which a feedforward transfer function is to be determined. If yes, method1600returns to step1601; if no, method1600proceeds to step1607and terminates. 
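By way of a hedged illustration of steps1604and1605(and not the embodiment's firmware), the feedforward transfer function can be formed as the negated ratio of the two measured transfer functions and the time-domain kernel obtained by an inverse DFT, as sketched below; the frequency grid, the synthetic spectra in the usage example, and the small regularization term eps are assumptions introduced for the sketch.

```python
import numpy as np

# Illustrative sketch: form the feedforward transfer function as the negated
# ratio of the two measured transfer functions, then obtain a time-domain
# kernel by inverse DFT.
# P1[f]: victim position response to the aggressor VCM control signal (step 1602).
# P2[f]: victim position response to the victim microactuator feedforward input (step 1603).
# Both spectra are assumed to be sampled on the same frequency grid.

def feedforward_kernel(P1, P2, eps=1e-12):
    F = -P1 / (P2 + eps)       # sign flip so the correction cancels the coupling
    kernel = np.fft.irfft(F)   # FIR kernel to be applied to the aggressor VCM command
    return F, kernel

# Usage example with made-up coupling and microactuator responses:
freqs = np.fft.rfftfreq(256)
P1 = 1.0 / (1.0 + 1j * 10 * freqs)   # assumed coupling response
P2 = 1.0 / (1.0 + 1j * 2 * freqs)    # assumed microactuator response
F, kernel = feedforward_kernel(P1, P2)
```

In an actual drive the resulting kernel values would then be stored in a memory of HDD100, such as RAM134and/or flash memory device135, as described in step1605.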
In some embodiments, a feedforward transfer function is determined not only for each different read/write head127of HDD100, but also for each read/write head127at each of multiple temperature ranges. Thus, temperature variations in the mechanical coupling between a victim actuator and an aggressor actuator can be accurately accounted for. In such embodiments, a different iteration of method1600is performed for each of the multiple temperature ranges. Thus, a different transfer function for the same read/write head127is determined for each of the different temperature ranges. For example, in one such embodiment, a different iteration of method1600is performed for each of the following temperature ranges of HDD100: −5° C. to +5° C.; +5° C. to +15° C.; +15° C. to +25° C.; +25° C. to +35° C. In other embodiments, a different iteration of method1600is performed for any other temperature ranges, including larger temperature ranges than those described above, smaller temperature ranges than those described above, temperature ranges spanning different thermal ranges, etc. In other embodiments, a different iteration of method1600is performed for one or more temperature ranges, and kernel values are determined for those temperature ranges, plus other temperature ranges, using methods of interpolation or extrapolation known to one of skill in the art. In some embodiments, a feedforward transfer function is determined not only for each different read/write head127of HDD100, but also for various radial locations of each read/write head127. Thus, variations in the mechanical coupling between a victim actuator and an aggressor actuator that depend upon the radial location of the victim head can be accurately accounted for. In such embodiments, a different iteration of method1600is performed for each of the multiple radial locations (e.g., proximate the ID, proximate the OD, and/or proximate a mid-diameter region). Thus, a different transfer function for the same read/write head127is determined for each of the different radial locations. FIG.17sets forth a flowchart of method steps for determining a transfer function that models the radial position of a victim head as a function of a control signal applied to an aggressor actuator, according to an embodiment. For consistency with the description of method1600inFIG.16, such a transfer function is described herein as the “first transfer function.” In some embodiments, the method steps are performed in HDD100as part of a calibration/start-up process. For example, the method steps ofFIG.17may be implemented in step1602of method1600. Although the method steps are described in conjunction with HDD100ofFIGS.1-10B, persons skilled in the art will understand that the method steps may be performed with other types of systems. The control algorithms for the method steps may reside in microprocessor-based controller133, motor-driver chip125, or a combination of both. The control algorithms can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. A method1700begins at step1701, when a suitable controller (i.e., microprocessor-based controller133and/or motor-driver chip125) selects a disturbance to be injected into or otherwise added to a control signal for an actuator that is currently designated as the aggressor actuator.
For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the controller selects a disturbance to be added to filtered VCM control signal743before being applied to VCM128A. It is noted that VCM control signal1047is generated as part of the closed loop servo control of the aggressor head and then modified to filtered VCM control signal743. Generally, the controller selects the disturbance to be added to the VCM command from a plurality of disturbances that together facilitate the determination of the first transfer function. For example, the plurality of disturbances may include a range of different acceleration values that are each to be individually applied to the aggressor actuator during implementation of method1700. In some embodiments, the plurality of disturbances is selected to excite the mechanical systems of first control loop1020and second control loop1030over all frequencies of interest. In this way, the first transfer function measured in method1700more accurately captures the response of the mechanical and control systems of first control loop1020and second control loop1030. In some embodiments, the different disturbances to be applied to VCM control signal743can be part of a sinusoidal waveform, a pulse of acceleration values, and/or selected from random or pseudo-random noise. For example, in one embodiment, each disturbance to be applied to VCM control signal743is a sinewave of a different frequency. When each such disturbance is applied to VCM control signal743, a complete spectrum of the first transfer function can be measured. In some embodiments, a control signal for an aggressor actuator (e.g., VCM control signal743) may be updated at the same rate at which the read/write head position1005is determined for a victim actuator. Such systems are generally referred to as single-rate control systems. In other embodiments, a control signal for an aggressor actuator may be updated at a higher rate than the rate at which the read/write head position1005is determined for the victim actuator. For example, in an embodiment, the control signal for an aggressor actuator might be updated at twice the rate at which the read/write head position1005for the victim actuator is measured. Such systems are generally referred to as multi-rate control systems, and are known to one of skill in the art. For such systems, the relationship between the control signal for the aggressor actuator and the read/write head position1005of the victim actuator can be represented by multiple transfer functions. For the example described above (in which VCM commands are updated at twice the rate at which the read/write head position is determined), the relationship between the aggressor control signal (e.g., VCM control signal743) and the read/write head position (e.g., radial position1005) can be represented by two transfer functions; one transfer function between a signal that is made up of a first VCM control signal743that is sent to the aggressor VCM (VCM128A) each servo sample and the victim's read/write head position1005, and a second transfer-function between a signal that is made up of a second VCM control signal743that is sent to the aggressor VCM (VCM128A) each servo sample and the victim read/write head position (radial position1005). 
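For the single-rate case described earlier in this step, one possible disturbance schedule is simply a set of sinewaves covering the frequencies of interest, as sketched below; the frequencies, amplitude, duration, and servo sample rate are placeholder assumptions. The multi-rate case, whose discussion continues below, would use a separate schedule for each control-update phase.

```python
import numpy as np

# Hypothetical sketch of a disturbance schedule of the kind described above:
# one sinewave per frequency of interest, expressed in servo samples. All
# numeric values are placeholders, not values of the embodiment.

def sinewave_disturbances(freqs_hz, servo_rate_hz, n_samples, amplitude=1.0):
    t = np.arange(n_samples) / servo_rate_hz
    return {f: amplitude * np.sin(2 * np.pi * f * t) for f in freqs_hz}

# Example: excite 100 Hz to 5 kHz in logarithmic steps at an assumed 50 kHz servo sample rate.
schedule = sinewave_disturbances(np.logspace(2, np.log10(5e3), 20), 50e3, 4096)
```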
For such systems, the disturbances to be applied to VCM control signal743could include disturbances to only the first of the two control signals for each servo sample, disturbances to only the second of the two control signals, or to both simultaneously. For such systems, determining the two transfer functions could involve measuring the aggressor VCM control signal743and the response of the victim read/write head position to two or more different disturbance signals, and simultaneously solving for the two transfer functions, based upon the results of the multiple experiments. Such signal processing is known to one of skill in the art. Continuing with the case of a multi-rate control system, the processing that was previously described forFIG.16is extended to apply to two first transfer functions. In step1602, the first transfer function would consist of two transfer functions. In step1604, the feedforward transfer function determined for the victim head would likewise consist of two transfer functions. In step1605, the victim feedforward kernel would consist of two kernels; one which is applied to the first VCM control signal743that is applied to VCM128A each servo sample, and another which is applied to the second VCM control signal743that is applied to VCM128A each servo sample. In step1702, the controller applies the selected disturbance to the VCM command for the aggressor VCM. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the disturbance is added to filtered VCM control signal743before VCM control signal743is applied to VCM128A. In one such embodiment, the disturbance is added to filtered VCM control signal743at injection point1029. In step1703, the controller measures the radial position of a read/write head127that is currently designated as the victim head. That is, the controller measures the response of the victim head (i.e., radial positions of the victim head over a certain time interval) to the disturbance applied in step1702. The controller also measures the VCM commands applied to the aggressor actuator. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the controller measures radial position1005of the currently active read/write head associated with VCM128B (i.e., the victim head), and the commands that were applied to the VCM128A (i.e., VCM control signal743). The commands applied to the aggressor actuator are collected in step1703since such commands are not based solely on the selected disturbance applied in step1702; such commands include controller-determined feedback values as well. In some embodiments, the controller performs steps1702and1703multiple times to reduce the influence of random noise and other non-repeatable runout on the measured radial position of the victim head. For example, the controller may perform steps1702and1703over a plurality of rotations of a storage disk110. Alternatively or additionally, in some embodiments, the controller performs steps1702and1703at multiple circumferential locations of a recording surface112to reduce the influence of repeatable runout on the measured radial position of the victim head. In such embodiments, the controller may also perform steps1702and1703over a plurality of rotations of a storage disk110. In some embodiments, the effects of synchronous runout (also known as “written-in runout”) on the accuracy of measurements of the first transfer function are reduced.
In such embodiments, the measurements associated with steps1702and1703are made in pairs. In such embodiments, each pair of measurements is performed with added disturbances of equal amplitude and shape, but opposite sign, and with starting times that are separated by an integer number of revolutions of the storage disk110. The difference between the resulting victim position (e.g., victim PES1005) for such a pair of experiments should, to first order, be devoid of effects of synchronous runout, which might otherwise degrade the accuracy of the transfer-function measurement. The difference between the resulting commands that were applied to the aggressor actuator should similarly be, to first order, devoid of effects of synchronous runout. In step1704, the controller stores the values of the measured position of the victim head over the time extent of the experiment, and stores the values of the commands that were applied to the aggressor actuator over that same time extent. In some embodiments, the values stored are based on multiple measurements made when the controller performs steps1702and1703multiple times. In step1705, the controller determines whether there are any remaining disturbances for which a resultant radial position of the victim head is to be measured. If yes, method1700returns to step1701; if no, method1700proceeds to step1706. In step1706, the controller derives the first transfer function for the victim head based on the values stored over the multiple iterations of step1704. In some embodiments, the transfer function is determined as the ratio of the spectrum of the victim measured position to the spectrum of the commands applied to the aggressor actuator. Method1700then proceeds to step1707and terminates. FIG.18sets forth a flowchart of method steps for determining a transfer function that models the radial position of a victim head as a function of a disturbance added to a control signal that is applied to a microactuator for positioning the victim head, according to an embodiment. For consistency with the description of method1600inFIG.16, the transfer function is described herein as the “second transfer function.” In some embodiments, the method steps are performed in HDD100as part of a calibration/start-up process. For example, the method steps ofFIG.18may be implemented in step1603of method1600. Although the method steps are described in conjunction with HDD100ofFIGS.1-17, persons skilled in the art will understand that the method steps may be performed with other types of systems. The control algorithms for the method steps may reside in microprocessor-based controller133, motor-driver chip125, or a combination of both. The control algorithms can be implemented in whole or in part as software- or firmware-implemented logic, and/or as hardware-implemented logic circuits. A method1800begins at step1801, when a suitable controller (i.e., microprocessor-based controller133and/or motor-driver chip125) selects a disturbance (or microactuator control signal) for a microactuator that is configured to position a read/write head127currently designated as the victim head. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the controller selects a disturbance to microactuator control signal745, which is to be applied to microactuator1028B. Generally, the controller selects the disturbance from a plurality of disturbances that together facilitate the determination of the second transfer function. 
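The measurement pattern of steps1702through1706, including the paired opposite-sign injections used to cancel synchronous runout, and the analogous microactuator measurement of method1800whose description continues below, can be sketched as follows; the function names, the averaging scheme, and the callback interface are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical sketch: inject a disturbance twice with opposite sign, difference
# the two captures so that synchronous (written-in) runout cancels to first order,
# and take the ratio of the response spectrum to the command spectrum. With the
# microactuator as the injection point, the same pattern yields the second
# transfer function of method 1800.

def measure_transfer_function(inject_and_capture, disturbance, n_avg=4):
    """inject_and_capture(d) is assumed to apply disturbance d for one capture and
    return (aggressor_commands, victim_position) as equal-length arrays."""
    H_accum = None
    for _ in range(n_avg):                      # average over repeated captures
        cmd_p, pos_p = inject_and_capture(+disturbance)
        cmd_m, pos_m = inject_and_capture(-disturbance)
        cmd = np.fft.rfft(cmd_p - cmd_m)        # written-in runout cancels in the difference
        pos = np.fft.rfft(pos_p - pos_m)
        H = pos / cmd                           # in practice, use only frequencies excited
        H_accum = H if H_accum is None else H_accum + H
    return H_accum / n_avg
```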
For example, the plurality of microactuator commands may include a range of different acceleration values that are each to be individually applied to the microactuator during implementation of method1800. In some embodiments, the different disturbances to be applied to microactuator control signal745can be part of a sinusoidal waveform, a pulse of acceleration values, and/or selected from random noise. For example, in one embodiment, each disturbance to be applied to microactuator control signal745is a sinewave of a different frequency. Further, any of the other techniques described above in conjunction with method1700for measuring the first transfer function can be employed for measuring the second transfer function in method1800. In step1802, the controller adds the selected disturbance to the microactuator control signal for positioning the victim head. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the disturbance is added to microactuator control signal745, which is then applied to microactuator1028B. In the embodiment illustrated inFIG.10A, the disturbance can be injected between second servo controller716and notch filter1021. In the embodiment illustrated inFIG.10B, the disturbance can be injected between notch filter1021and microactuator1028B. In some embodiments, the selected disturbance is modified by a notch filter before being applied to the microactuator, and in other embodiments, the selected disturbance is modified by a second filtering portion of a notch filter before being applied to the microactuator. In step1803, the controller measures the radial position of a read/write head127that is currently designated as the victim head. For example, in an instance in which VCM128A is the aggressor actuator and VCM128B is the victim actuator, the controller measures radial position1005of the currently active read/write head associated with VCM128B (i.e., the victim head). In some embodiments, the controller performs steps1802-1804multiple times to reduce the influence of random noise and other non-repeatable runout on the measured radial position of the victim head. For example, the controller may perform steps1802-1804over a plurality of rotations of a storage disk110. In step1804, the controller stores the value of the measured position of the victim head over the time extent of the experiment. In some embodiments, the value stored is based on multiple measurements made when the controller performs steps1802and1803multiple times. In step1805, the controller determines whether there are any remaining disturbances for which a resultant radial position of the victim head is to be measured. If yes, method1800returns to step1801; if no, method1800proceeds to step1806. In step1806, the controller derives the second transfer function for the victim head based on the values stored over the multiple iterations of step1804. In some embodiments, the transfer function is determined as the ratio of the spectrum of the victim measured position to the spectrum of the added disturbance. Method1800then proceeds to step1807and terminates. While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
164,293
11862197
DETAILED DESCRIPTION In general, according to one embodiment, a magnetic disk device includes two or more independently drivable actuator blocks and performs seek control with a low jerk, in which the jerk (the derivative of acceleration) is limited, wherein in a state where a first actuator block is not accessing a data sector of a disk, a second actuator block that is not the first actuator block accesses the data sector of the disk by seek control with a high jerk. Hereinafter, an embodiment of the present invention will be described with reference to the drawings. Embodiment In the present embodiment, an example of seek control for improving a command access performance of a magnetic disk device including two actuators will be described. The command access performance may be the number of commands that can be processed in a unit time. For example, when one actuator (actuator A) performs a seek operation, vibration of the other actuator (actuator B) is excited, and the tracking head of the actuator B is shaken. As a result, a position error PES (position error signal) with respect to the target position of the tracking head of the actuator B increases, and there is a possibility that an error occurs in reading and writing for a data sector. Therefore, by limiting the jerk at the time of seeking of the actuator A, it is possible to reduce the shake of the tracking head of the actuator B and read and write the data sector. However, when the jerk is limited, a seek time is delayed, and the command access performance of the actuator A is deteriorated. In the present embodiment, an example of seek control for improving the command access performance in the above-described situation will be described. FIGS.1A and1Bare configuration diagrams of the magnetic disk device according to the embodiment. A magnetic disk device1is, for example, a hard disk drive (HDD), and is a multi-actuator magnetic disk device including two actuator blocks, actuator block100A and actuator block100B. The magnetic disk device1of the present embodiment includes two control systems A and B. The control system A controls the actuator block100A and controls access to the disk DK1. The control system B controls the actuator block100B and controls access to the disk DK2. The control system A and the control system B can perform data communication. In the functions of the control systems A and B, the functional blocks having the same names are distinguished by adding A and B to reference signs. A HDA10is a head disk assembly, and a plurality of disks, a plurality of actuator blocks, a spindle, and the like are stored in a housing. The HDA10includes at least two or more actuator blocks100and a disk DK for each actuator block100. A disk DK1and a disk DK2(when not particularly distinguished, the disks are collectively referred to as the disk DK) are magnetic disks that are controlled by the control systems A and B, respectively, and store data. A spindle12is a support of the disk DK1and the disk DK2, and is installed in the housing HS or the like. A spindle motor (SPM)13is attached to the spindle12and rotates the spindle12. The actuator block100A and the actuator block100B (when not particularly distinguished, the actuator blocks are collectively referred to as the actuator block100) are controlled by the control systems A and B, respectively, and read and write data from and to the different disks DK1and DK2. FIG.2is a schematic diagram of configurations of the actuator block, the disk, and the like according to the embodiment.
FIG.2part (a) is the schematic diagram of the actuator block100(the actuator block100A or the actuator block100B). In a case where the actuator block100A and the actuator block100B have the same name of components, the components of the actuator block100A and the actuator block B are similar to each other, and thus, the reference sign number of the actuator block100B is shown in parentheses following the reference sign number of the actuator block A. An actuator AC1is controlled by the control system A, is a voice coil motor (VCM) type actuator, and is attached to a coaxial BR. An actuator AC2functions in the control system B in the same manner as the actuator AC1. When not particularly distinguished, the actuator AC1and the actuator AC2are collectively referred to as an actuator AC. An arm AM11and an arm AM12(when not particularly distinguished, collectively referred to as an arm AM1) are connected to the actuator AC1and a head HD1to sandwich the disk DK1, and support the head HD1. The arm AM1is controlled by the control system A. An arm AM21and an arm AM22(when not particularly distinguished, collectively referred to as an arm AM2) are connected to the actuator AC2and a head HD2to sandwich the disk DK2, and support the head HD2. The arm AM2is controlled by the control system B in the same manner as the arm AM1. When not particularly distinguished, the arm AM1and the arm AM2are collectively referred to as an arm AM. A microactuator MA11and a microactuator MA12(when not particularly distinguished, collectively referred to as a microactuator MA1) are actuators that are installed to sandwich the disk DK1between suspensions (not illustrated) of the arm AM11and the arm AM12, are controlled by the control system A, and control a head HD11and a head HD12, respectively. More specifically, the microactuator MA1more finely controls the operation of the head HD1in a radial direction of the disk DK1than the control of the operation of the head HD1in the radial direction by the voice coil motor VCM. The microactuator MA1may be driven independently of the VCM. A microactuator MA21and a microactuator MA22(when not particularly distinguished, collectively referred to as a microactuator MA2) are controlled by the control system B in the same manner as the microactuator MA1. When not particularly distinguished, the microactuator MA1and the microactuator MA2are collectively referred to as a microactuator MA. The head HD11and the head HD12(when not particularly distinguished, collectively referred to as a head HD1) are installed to sandwich the disk DK1at the distal ends of the arm AM11and the arm AM12, respectively. The head HD11and the head HD12are controlled by the control system A, and read and write data of the upper surface and the lower surface of the disk DK1, respectively. A head HD21and a head HD22(when not particularly distinguished, collectively referred to as a head HD2) are installed to sandwich the disk DK2at the distal ends of the arms AM21and AM22, respectively. The head HD2is controlled by the control system B in the same manner as the head HD1. When not particularly distinguished, the head HD1and the head HD2are collectively referred to as a head HD. Each head HD is attached to a slider (not illustrated) attached to the suspension of the arm AM. The head HD is selected and operated by a head selection unit (also referred to as a read head selection unit310) to be described later. 
A time during which the DK moves by a rotation angle α (in a case where the rotation direction of the DK is clockwise) so that a target data sector TGT moves to the position of the head HD is referred to as a rotation time, and in particular, the rotation time after the head HD seeks to the track (referred to as a target track) having the target data sector TGT is referred to as a rotation waiting time. More specifically, when the SPM angle (or a servo sector number or the like) at the position on the disk where the head HD is currently present is α1 and the SPM angle (or a servo sector number or the like) of the target sector TGT is α2, the time required for the rotation of the SPM angle difference α (or a servo sector number or the like) between α1 and α2 corresponds to the rotation time or the rotation waiting time. In addition, a time from the start of the seek of the head HD to the arrival at the target data sector TGT is referred to as a command access time. When the head HD reaches the target track by the seek, in a case where the target data sector TGT has passed the position of the head HD in the rotation direction of the disk DK, the head HD waits for the target data sector TGT by the rotation of the disk DK. In this case, the command access time is obtained by adding the rotation time corresponding to the number of additional rotations or the rotation waiting time to the time for seeking from the current head position to the target data sector TGT. In basic command reordering, a command having the shortest time among these command access times is selected. The head HD includes a write head WH which writes data to the disk DK and a read head RH which reads data written to the disk DK. Hereinafter, a process of writing data to the disk DK may be referred to as a write process, and a process of reading data from the disk DK may be referred to as a read process. In addition, recording data in a predetermined recording region, reading data from a predetermined recording region, arranging the head HD at a predetermined position of the disk DK, writing data in a predetermined region of the disk DK, reading data from a predetermined region of the disk DK, and the like may be referred to as accessing. A read head RH11and a read head RH12(when not particularly distinguished, collectively referred to as a read head RH1) are installed in the head HD1. The read head RH11and the read head RH12are controlled by the control system A, and read data from the upper surface and the lower surface of the disk DK1, respectively. A read head RH21and a read head RH22(when not particularly distinguished, collectively referred to as a read head RH2) are installed in the head HD2and controlled by the control system B in the same manner as the read head RH1. When not particularly distinguished, the read head RH1and the read head RH2are collectively referred to as a read head RH. A write head WH11and a write head WH12(when not particularly distinguished, collectively referred to as a write head WH1) are installed in the head HD1. The write head WH11and the write head WH12are controlled by the control system A, and write data on the upper surface and the lower surface of the disk DK1. A write head WH21and a write head WH22(when not particularly distinguished, collectively referred to as a write head WH2) are installed in the head HD2and controlled by the control system B in the same manner as the write head WH1.
When not particularly distinguished, the write head WH1and the write head WH2are collectively referred to as a write head WH. FIG.2part (b) is a schematic diagram of the disk DK. The disk DK is a general magnetic disk, and a detailed description thereof will be omitted. The disk DK is, for example, a disk-type magnetic disk, and is a medium to which data is written by magnetism. The disk DK is attached to the spindle12and rotates by driving of the SPM13. A direction along the circumference of (the upper surface and the lower surface of) the disk DK is referred to as a circumferential direction, and a direction orthogonal to the circumferential direction of (the upper surface and the lower surface of) the disk DK is referred to as a radial direction. The disk DK is divided into a plurality of regions called tracks in the radial direction (or concentrically) around the spindle12. In addition, the disk DK is divided into a plurality of regions called sectors in the circumferential direction.FIG.2part (b) is an example in which a track TR and a sector SCT are illustrated one by one, and the target data sector TGT is illustrated in the sector SCT on the track TR. In the seek control, the arm AM or the like of the actuator block100is moved to move the head HD onto the track TR, and then the head HD is moved onto the target data sector TGT by the rotation of the disk DK. Thereafter, tracking control is performed by the microactuator MA or the like. The tracking control is a general technique, and details are not described. Returning toFIGS.1A and1B, a driver IC20A is a function of the control system A, and controls each function of the HDA10according to control from a MPU50A, a HDC60A, or the like. A driver IC20B functions in the control system B in the same manner as the driver IC20A. When not particularly distinguished, the driver IC20A and the driver IC20B are collectively referred to as a driver IC20. Note that in the driver IC20, the microactuator MA may not be provided, and in this case, an MA control unit230may not be provided. An SPM control unit210A is a function shared by the control systems A and B, and controls the rotation of the spindle motor SPM13of the HDA10. The SPM control unit210A is installed in the driver IC20A inFIG.1, but may be installed in the driver IC20B. A VCM control unit220A controls driving of the voice coil motor (VCM) by controlling a current (or voltage) supplied to the voice coil motor which controls the actuator block100A of the HDA10. A VCM control unit220B functions in the control system B in the same manner as the VCM control unit220A. When not particularly distinguished, the VCM control unit220A and the VCM control unit220B are collectively referred to as a VCM control unit220. A MA control unit230A controls driving of the microactuator MA1by controlling a current (or voltage) supplied to the microactuator MA1. A MA control unit230B functions in the control system B in the same manner as the MA control unit230A. When not particularly distinguished, the MA control unit230A and the MA control unit230B are collectively referred to as the MA control unit230. A head amplifier IC30A is, for example, a preamplifier, amplifies a read signal read from the disk DK1, and outputs the amplified read signal to a read/write (R/W) channel40A. The head amplifier IC30A is electrically connected to the head HD1(the head HD11, the head HD12). In addition, the head amplifier IC30A outputs a write current corresponding to a signal output from the R/W channel40A to the head HD1. 
A head amplifier IC30B functions in the control system B in the same manner as the head amplifier IC30A. When not particularly distinguished, the head amplifier IC30A and the head amplifier IC30B are collectively referred to as a head amplifier IC30. The head amplifier IC30includes the read head selection unit310and a read signal detection unit320. A read head selection unit310A selects the read head RH1which reads data from the disk DK1. A read head selection unit310B functions in the control system B in the same manner as the read head selection unit310A. When not particularly distinguished, the read head selection unit310A and the read head selection unit310B are collectively referred to as a read head selection unit310. A read signal detection unit320A detects a signal (read signal) read from the disk DK1by the read head RH1. A read signal detection unit320B functions in the control system B in the same manner as the read signal detection unit320A. When not particularly distinguished, the read signal detection unit320A and the read signal detection unit320B are collectively referred to as a read signal detection unit320. The R/W channel40A executes signal processing of read data transferred from the disk DK1to a host system2and write data transferred from the host system2in response to an instruction from the MPU50A. The R/W channel40A is electrically connected to the head amplifier IC30A, the MPU50A, the HDC60A, a write prohibition detector180, and the like. The R/W channel40B functions in the control system B in the same manner as the R/W channel40A. When not particularly distinguished, the R/W channel40A and the R/W channel40B are collectively referred to as an R/W channel40. A write prohibition unit410A designates prohibition (or stop) of write (or write operation) to the disk DK1by the head HD1, and outputs a control signal or the like to the head amplifier IC30A. A write prohibition unit410B functions in the control system B in the same manner as the write prohibition unit410A. When not particularly distinguished, the write prohibition unit410A and the write prohibition unit410B are collectively referred to as a write prohibition unit410. In the case of receiving a write prohibition determination execution signal generated by the write prohibition detector180, a shock sensor write prohibition determination unit411A determines whether to prohibit (or stop) the write (or the write operation) by the head HD1based on a vibration, an impact, or the like detected by a shock sensor170. For example, the shock sensor write prohibition determination unit411A may determine whether the vibration or the impact detected by the shock sensor170is larger than a predetermined value or equal to or smaller than the predetermined value based on the write prohibition determination execution signal. In a case where it is determined that the vibration or the impact is larger than the predetermined value, the shock sensor write prohibition determination unit411A determines the prohibition (or stop) of the write (or the write operation) of the head HD1. The shock sensor write prohibition determination unit411A generates and outputs a write prohibition determination signal for prohibiting (or stopping) the write (or the write operation) of the head HD1based on the determination result. 
On the other hand, in a case where it is determined that the vibration or the impact is equal to or less than the predetermined value, the shock sensor write prohibition determination unit411A may determine not to prohibit (or stop) the write (or the write operation) of at least one head HD1connected to the actuator AC1. A shock sensor write prohibition determination unit411B functions in the control system B in the same manner as the shock sensor write prohibition determination unit411A. When not particularly distinguished, the shock sensor write prohibition determination unit411A and the shock sensor write prohibition determination unit411B are collectively referred to as a shock sensor write prohibition determination unit411. A HDC write prohibition determination unit412A determines whether or not to prohibit (or stop) the write (or the write operation) of the head HD1based on the signal from the HDC60A, and outputs a control signal to the head amplifier IC30A based on the determination result. For example, in the case of receiving a write prohibition signal for prohibiting (or stopping) the write (or the write operation) of the head HD1from the HDC60A, the HDC write prohibition determination unit412A negates (deasserts) a write gate and controls the head amplifier IC30A to prohibit (or stop) the write (or the write operation) of the head HD1. A HDC write prohibition determination unit412B functions in the control system B in the same manner as the HDC write prohibition determination unit412A. When not particularly distinguished, the HDC write prohibition determination unit412A and the HDC write prohibition determination unit412B are collectively referred to as an HDC write prohibition determination unit412. The MPU50A is a micro processing unit (MPU), and outputs a control signal or the like to the driver IC20A based on a signal from the HDC60A or the like. The MPU50A outputs a control signal for seeking the head HD1to a predetermined position (for example, the target data sector TGT) of the disk DK1. Further, the MPU50A outputs a signal for writing data to a predetermined sector (data sector) or reading data from a predetermined sector (data sector). For example, the MPU50A positions the head HD1in the target data sector TGT, and outputs a signal for writing data to a predetermined sector (target data sector TGT) or reading data from a predetermined sector (target data sector TGT). A MPU50B functions in the control system B in the same manner as the MPU50A. When not particularly distinguished, the MPU50A and the MPU50B are collectively referred to as an MPU50. The HDC60A is a hard disk controller and includes a command control unit610A, a servo control unit620A, and a write operation determination unit630A. Each unit of the HDC60A, for example, the command control unit610A, the servo control unit620A, the write operation determination unit630A, and the like may be executed by a program such as firmware or software. In addition, the HDC60A may include these units as hardware such as a circuit. In addition, a part of the configuration of the HDC60A may be provided in the MPU50A. The HDC60A controls the driver IC20A to control the actuator block100A. The HDC60A controls read/write processing of data to the disk DK1and controls data transfer between the host system2and the R/W channel40A. The HDC60A is electrically connected to, for example, the R/W channel40A, the MPU50A, a volatile memory70A, a buffer memory80A, a nonvolatile memory90A, and the like. 
A HDC60B functions in the control system B in the same manner as the HDC60A. When not particularly distinguished, the HDC60A and the HDC60B are collectively referred to as a HDC60. The command control unit610A acquires the state of the actuator AC1and selects a command stored as a command queue1in the buffer memory80A. The state of the actuator AC1includes Power Mode indicating whether the actuator is operating, the number of standby commands of the command queue1scheduled to be processed by the actuator AC1, and Type_JerkSeek set as an operation parameter of the actuator AC1. The state of the actuator AC1may indicate a function of a control system (in this case, the control system A) including the actuator AC1or a state of the actuator block100(in this case,100A). A command control unit610B acquires the state of the actuator AC2and selects a command stored as a command queue2in the buffer memory80B. The state of the actuator AC2includes Power Mode indicating whether the actuator is operating, the number of standby commands of the command queue2scheduled to be processed by the actuator AC2, and Type_JerkSeek set as an operation parameter of the actuator AC2. The command control unit610B functions in the control system B in the same manner as the command control unit610A. When not particularly distinguished, the command control unit610A and the command control unit610B are collectively referred to as a command control unit610. An actuator state confirmation unit611A confirms the state of the actuator AC2and determines the operation mode of the actuator AC1which is a control target of its own control system A. An actuator state confirmation unit611B confirms the state of the actuator AC1and determines the operation mode of the actuator AC2which is a control target of its own control system B. The actuator state confirmation unit611B functions in the control system B in the same manner as the actuator state confirmation unit611A. When not particularly distinguished, the actuator state confirmation unit611A and the actuator state confirmation unit611B are collectively referred to as an actuator state confirmation unit611. An actuator state communication unit612A communicates with an actuator state communication unit612B via a controller communication unit190to exchange data. For example, the actuator state communication unit612A acquires the state of the actuator AC2which is a control target of the HDC60B. An actuator state communication unit612B functions in the control system B in the same manner as the actuator state communication unit612A. For example, the actuator state communication unit612B acquires the state of the actuator AC1which is a control target of the HDC60A. When not particularly distinguished, the actuator state communication unit612A and the actuator state communication unit612B are collectively referred to as an actuator state communication unit612. A command selection unit613A selects a command to be executed from the commands scheduled to be processed by the control system A stored in the command queue1in the buffer memory80A. A command selection unit613B selects a command to be executed from the commands scheduled to be processed by the control system B stored in the command queue2in the buffer memory80B. The command selection unit613B functions in the control system B in the same manner as the command selection unit613A. 
When not particularly distinguished, the command selection unit613A and the command selection unit613B are collectively referred to as a command selection unit613. A reordering table selection unit6131A selects a reordering table for determining a command to be processed next among the commands stored in the command queue1. A reordering table selection unit6131B selects a reordering table for determining a command to be processed next among the commands stored in the command queue2. The reordering table selection unit6131B functions in the control system B in the same manner as the reordering table selection unit6131A. When not particularly distinguished, the reordering table selection unit6131A and the reordering table selection unit6131B are collectively referred to as a reordering table selection unit6131. An estimated seek time limit command selection unit6132A confirms whether or not a command having a seek time (estimated time) longer than the end time (estimated time) of the command being processed by the actuator AC2is in the Command queue1. The estimated seek time limit command selection unit6132A selects a JIT seek as the operation characteristic of a seek acceleration, for example, based on the confirmation result. The JIT seek will be described later. The estimated seek time limit command selection unit6132B confirms whether or not a command having a seek time (estimated time) longer than the end time (estimated time) of the command being processed by the actuator AC1is in the Command queue2. The estimated seek time limit command selection unit6132B selects the JIT seek as the operation characteristic of the seek acceleration, for example, based on the confirmation result. The estimated seek time limit command selection unit6132B functions in the control system B in the same manner as the estimated seek time limit command selection unit6132A. When not particularly distinguished, the estimated seek time limit command selection unit6132A and the estimated seek time limit command selection unit6132B are collectively referred to as an estimated seek time limit command selection unit6132. The servo control unit620A controls the position of the head HD1. In other words, the servo control unit620A controls access by the head HD1to a predetermined region (for example, the target data sector TGT) of the disk DK1. The servo control unit620A includes a tracking control unit621A and a seek control unit622A. The servo control unit620B functions in the control system B in the same manner as the servo control unit620A. When not particularly distinguished, the servo control unit620A and the servo control unit620B are collectively referred to as a servo control unit620. The tracking control unit621A controls tracking of the head HD1to a predetermined track of the disk DK1. Tracking the head HD1to the predetermined track of the disk DK1may be simply referred to as tracking. The tracking includes following a predetermined path (for example, the predetermined track) when writing data to the disk DK1and following the predetermined path (for example, the predetermined track) when reading data from the disk DK1. A tracking control unit621B functions in the control system B in the same manner as the tracking control unit621A. When not particularly distinguished, the tracking control unit621A and the tracking control unit621B are collectively referred to as a tracking control unit621. The seek control unit622A performs seek control of the head HD1to the target track in the disk DK1. 
The seek control unit622A determines a seek orbit with respect to the head HD1. In addition, the seek control unit622A switches an operation mode of the seek control by Type_JerkSeek which is a seek operation mode set by the actuator state confirmation unit611A. A seek control unit622B functions in the control system B in the same manner as the seek control unit622A. When not particularly distinguished, the seek control unit622A and the seek control unit622B are collectively referred to as a seek control unit622. A low jerk seek control unit6221A executes seek control based on the operation characteristic of a low jerk in a case where Type_JerkSeek set by the actuator state confirmation unit611A is 1. A low jerk seek control unit6221B executes seek control based on the operation characteristic of a low jerk in a case where Type_JerkSeek set by the actuator state confirmation unit611B is 1. When not particularly distinguished, the low jerk seek control unit6221A and the low jerk seek control unit6221B are collectively referred to as a low jerk seek control unit6221. A high jerk seek control unit6222A executes seek control based on the operation characteristic of a high jerk in a case where Type_JerkSeek set by the actuator state confirmation unit611A is 2. A high jerk seek control unit6222B executes seek control based on the operation characteristic of a high jerk in a case where Type_JerkSeek set by the actuator state confirmation unit611B is 2. When not particularly distinguished, the high jerk seek control unit6222A and the high jerk seek control unit6222B are collectively referred to as a high jerk seek control unit6222. A JIT seek control unit6223A executes seek control based on the operation characteristic of the JIT seek in a case where Type_JerkSeek set by the actuator state confirmation unit611A is 0. A JIT seek control unit6223B executes seek control based on the operation characteristic of the JIT seek in a case where Type_JerkSeek set by the actuator state confirmation unit611B is 0. When not particularly distinguished, the JIT seek control unit6223A and the JIT seek control unit6223B are collectively referred to as a JIT seek control unit6223. A write operation determination unit630A determines the write operation of the head HD1. The write operation determination unit630A includes a position write operation determination unit631A and a speed write operation determination unit632A. The position write operation determination unit631A determines the write operation of the head HD1according to the position of the head HD1. The speed write operation determination unit632A determines the write operation of the head HD1according to the speed of the head HD1. A write operation determination unit630B functions in the control system B in the same manner as the write operation determination unit630A. When not particularly distinguished, the write operation determination unit630A and the write operation determination unit630B are collectively referred to as a write operation determination unit630. The volatile memory70A and the volatile memory70B (collectively referred to as the volatile memory70when not particularly distinguished) are semiconductor memories in which stored data is lost when power supply is cut off. The volatile memory70stores data and the like required for processing in each unit of the magnetic disk device1. The volatile memory70is, for example, a dynamic random access memory (DRAM) or a synchronous dynamic random access memory (SDRAM).
The volatile memory70A and the volatile memory70B may be work memories used in processing of the control system A and the control system B, respectively. The buffer memory80A and the buffer memory80B (collectively referred to as buffer memory80when not particularly distinguished) are semiconductor memories which temporarily record data and the like transmitted and received between the magnetic disk device1and the host system2. For example, the buffer memory80is a buffer which temporarily stores a command received by the magnetic disk device1from the host system2. Note that the buffer memory80may be physically integrated with the volatile memory70. The buffer memory80A may be physically integrated with the buffer memory80B. The buffer memory80is, for example, a DRAM, a static random access memory (SRAM), an SDRAM, a ferroelectric random access memory (FeRAM), a magnetoresistive random access memory (MRAM), or the like. The buffer memory80A is a storage destination of the command queue1(standby command1) which stores a command for the DK1of the control system A. The buffer memory80B is a storage destination of the command queue2(standby command2) which stores a command for the DK2of the control system B. The nonvolatile memory90A and a nonvolatile memory90B (collectively referred to as a nonvolatile memory90when not particularly distinguished) are semiconductor memories which record stored data even when power supply is cut off. The nonvolatile memory90is, for example, a NOR type or NAND type flash read only memory (flash ROM: FROM). A reordering table or the like may be stored in the nonvolatile memory90. The shock sensor170detects vibration and/or impact applied from the outside to the magnetic disk device1or the housing HS of the HDA10or the like of the magnetic disk device1. In the case of detecting vibration and/or impact, the shock sensor170outputs a signal (hereinafter, may be referred to as a vibration/impact detection signal) indicating that the vibration and/or impact has been detected. The shock sensor170is electrically connected to, for example, the write prohibition detector180. In the case of detecting the vibration and/or impact, the shock sensor170outputs a vibration/impact detection signal to the write prohibition detector180. The write prohibition detector180outputs a signal (may be referred to as a write prohibition determination execution signal) for executing the determination of prohibition of write (or write operation). In the case of receiving the vibration/impact detection signal, the write prohibition detector180outputs a write prohibition determination execution signal. The write prohibition detector180is electrically connected to, for example, the R/W channel40. In the case of receiving the vibration/impact detection signal, the write prohibition detector180outputs a write prohibition determination execution signal to the R/W channel40. The controller communication unit190controls transfer of information between a plurality of control systems. The controller communication unit190is electrically connected to, for example, the control system A and the control system B. The controller communication unit190includes an actuator state communication unit191. The actuator state communication unit191exchanges the actuator state between the HDC60A and the HDC60B. Note that, in the configuration illustrated inFIGS.1A and1B, the number of functions, the configuration of functions, and the like are not intended to be particularly limited. 
In addition, in the present embodiment, the configurations of the control systems A and B are described, but the configurations are not particularly limited. FIG.3is a block diagram illustrating a functional configuration of the servo control unit of the magnetic disk device according to the embodiment. In the servo control unit620, by setting any one of a target position6201, a target speed6202, and a target acceleration6203, a current indication value determination unit6204determines the current and the voltage to be applied to the actuator (the VCM, the MA, and the like). A current application unit6205moves the actuator AC by applying a current to the VCM, and demodulates the position information when the reproduction head reads the position information of the servo pattern on the magnetic disk DK. In addition, as in the drawing, there may be a mechanism in which, instead of using the position information read from the servo pattern on the magnetic disk DK as it is, a difference between the demodulated position and an estimated position estimated from the position, the speed, and the acceleration (current) is input to a state estimation unit6207, and the estimated position and an estimated speed are updated in each servo sample. Note that a plant6206indicates a function of the voice coil motor VCM or the microactuator MA used in a general HDD. FIG.4is a diagram illustrating an example of an influence on another actuator at the time of actuator activation of the magnetic disk device according to the embodiment. In the multi-actuator magnetic disk device, when any actuator AC performs a seek operation, vibration is applied to another actuator AC. The actuator AC which performs the seek operation is referred to as an aggressor, and the actuator AC which receives the vibration of the aggressor is referred to as a victim. A characteristic1011illustrates an example of a case where the aggressor operates under high jerk seek control. A characteristic TC11indicates an example of the time characteristic of the VCM acceleration of the aggressor-side head HD. A characteristic TC12indicates a jerk (the derivative of acceleration) with respect to the characteristic TC11, and indicates the temporal change (da/dt) of the acceleration a. A characteristic TC13indicates an example of the time characteristic of the head position error of the victim-side head HD when the aggressor-side head HD operates as the characteristic TC11. A characteristic1021illustrates an example of a case where the aggressor operates under low jerk seek control. A characteristic TC21indicates an example of the time characteristic of the VCM acceleration of the aggressor-side head HD. A characteristic TC22indicates a jerk (the derivative of acceleration) with respect to the characteristic TC21, and indicates the temporal change (da/dt) of the acceleration a. A characteristic TC23indicates an example of the time characteristic of the head position error of the victim-side head HD when the aggressor-side head HD operates as the characteristic TC21. TH11and TH12indicate thresholds for the characteristic TC12and the characteristic TC22. TH11is denoted by th_jerk, and TH12is denoted by −th_jerk. The characteristic1011indicates that the characteristic TC12may exceed the threshold TH11or may fall below the threshold TH12since the characteristic1011is a characteristic at the time of high jerk seek control.
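The contrast between the high jerk characteristic just described and the low jerk characteristic discussed next can be illustrated with a simple profile generator in which both profiles reach the same acceleration peak but the per-sample change in acceleration is clamped; the numeric values, units, and function name below are illustrative assumptions rather than values of the embodiment.

```python
import numpy as np

# Hedged illustration of TC11/TC12 (high jerk) versus TC21/TC22 (low jerk):
# both profiles reach the same acceleration peak, but the low jerk profile
# limits |da/dt| to the assumed threshold th_jerk.

def acceleration_profile(a_peak, hold_samples, jerk_limit, dt):
    """Ramp up to +a_peak, hold, ramp to -a_peak, hold, ramp back to zero,
    with the per-sample change in acceleration clamped to jerk_limit*dt."""
    targets = [a_peak] * hold_samples + [-a_peak] * hold_samples + [0.0]
    a, profile = 0.0, []
    max_step = jerk_limit * dt
    for target in targets:
        a += np.clip(target - a, -max_step, max_step)  # jerk-limited step toward target
        profile.append(a)
    return np.array(profile)

dt = 1.0 / 50e3                                              # assumed servo sample period
low = acceleration_profile(1.0, 200, jerk_limit=2e3, dt=dt)  # stays within +/- th_jerk
high = acceleration_profile(1.0, 200, jerk_limit=2e5, dt=dt) # exceeds +/- th_jerk
jerk_low = np.diff(low) / dt                                 # compare against th_jerk
jerk_high = np.diff(high) / dt
```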
In addition, in the characteristic1021, the characteristic TC22is controlled not to exceed the threshold TH11or fall below the threshold TH12since the characteristic1021is a characteristic at the time of low jerk seek control. TH21and TH22indicate thresholds for the characteristic TC13and the characteristic TC23. TH21is denoted by thPES, and TH22is denoted by −thPES. The characteristic1011indicates that the characteristic TC13may exceed the threshold TH21or may fall below TH22and indicates that the victim-side head position error is large since the characteristic1011is a characteristic at the time of high Jerk seek control. In addition, the characteristic1021indicates that the characteristic TC23is controlled not to exceed the threshold TH21or fall below TH22, and the victim-side head position error is controlled to be small since the characteristic1021is a characteristic at the time of low jerk seek control. In a case where the write operation and the read operation of the data sector are performed on the victim side as described above, there is a case where a process of prohibiting the write operation occurs or an error of the read operation occurs due to the variation of the head position. Therefore, the present embodiment may have a function of suppressing the influence of the vibration interference generated between the actuators AC. More specifically, when the aggressor-side head HD performs the seek operation, disturbance information such as position disturbance exerted on the victim-side head HD is estimated, and the estimated disturbance information is reflected in the operation of the microactuator MA of the victim-side head HD to control the victim-side head HD. FIG.5is a diagram illustrating a flow of data in a function of suppressing an influence of vibration interference between actuators in the magnetic disk device according to the embodiment. This function will be described as a function of the servo control unit620. Here, the actuator AC1of the control system A is described as the aggressor, and the actuator AC2of the control system B will be described as the victim. Note that in the present embodiment, both the control system A and the control system B have the same function, and thus the aggressor and the victim may be reversed by replacing both the control system A and the control system B. In the servo control unit620A, when the seek operation is performed, a controller6241A determines a VCM operation amount (control current) of the aggressor. When the VCM operation amount is input to a digital filter6244A (transmission characteristic Fxact) of mutual interference between the actuators AC, it is possible to estimate the position disturbance exerted on the victim-side effective reproduction head HD2. The transmission characteristic Fxactindicates a transfer characteristic of the digital filter. Since the transmission characteristic Fxactvaries between the section radial position of the aggressor-side effective head HD1or the disk DK1and the section radial position of the victim-side effective head HD2or the disk DK2, parameters for generating the transmission characteristic Fxactmay be provided in each configuration as in the digital filters6244A and6244B. Specifically, the control current of the VCM is input from the controller6241A to the VCM6243A and is simultaneously input to the digital filter6244A, and the output current (a current having a waveform like the TC13inFIG.4) of the digital filter6244A is input to the servo control unit620B of the control system B. 
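The data flow just described (the aggressor-side VCM control current fed into the digital filter 6244A, whose output is forwarded to the victim-side servo control unit 620B) can be sketched, purely for illustration, as a short feed-forward computation. The FIR coefficients standing in for the transmission characteristic Fxact are invented placeholders, not parameters from the disclosure.

```python
# Illustrative feed-forward sketch of the FIG. 5 flow described above: the
# aggressor-side VCM control current is passed through a digital filter that
# stands in for the inter-actuator transmission characteristic Fxact, and the
# filter output is handed to the victim-side control system, where (as the
# description continues) it is added to the victim microactuator (MA) current.
# The FIR coefficients are invented placeholders, not values from the disclosure.

from collections import deque

class FxactFilter:
    """Simple FIR stand-in for the mutual-interference filter (e.g., 6244A)."""
    def __init__(self, coeffs):
        self.coeffs = coeffs
        self.history = deque([0.0] * len(coeffs), maxlen=len(coeffs))

    def step(self, aggressor_vcm_current):
        self.history.appendleft(aggressor_vcm_current)
        return sum(c * x for c, x in zip(self.coeffs, self.history))

# In practice the coefficients would depend on the radial positions of both heads.
fxact_a_to_b = FxactFilter(coeffs=[0.02, 0.05, 0.03, 0.01])

def victim_ma_current(victim_controller_output, aggressor_vcm_current):
    # The feed-forward term estimated from the aggressor seek current is summed
    # with the victim controller's own output before driving the MA.
    return victim_controller_output + fxact_a_to_b.step(aggressor_vcm_current)
```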
In the servo control unit620B, the output current of the digital filter6244A is added to the control current output from the controller6241B to be input to a microactuator MA6242B. The servo control unit620B adds the output current of the digital filter6244A to the microactuator MA6242B of the victim-side effective reproduction head to compensate for the estimated position disturbance exerted on the victim-side effective reproduction head HD and operates the victim-side effective reproduction head. As a result, the position error PES of the victim-side effective reproduction head HD with respect to the target track can be reduced. In a case where the write operation and the read operation of the data sector of the disk DK2are performed on the victim-side, when the head position error exceeds a head position error threshold ±thPESrange, a write prohibition process is performed in the write operation, and it is determined that a read error occurs in the read operation, and the read operation is stopped. In the present embodiment, as indicated by the time characteristic TC22inFIG.4, the operation of the aggressor-side actuator AC1at the time of seeking is set to a low jerk seek, and the jerk is limited within the range of ±thjerk. As a result, the position error of the tracking head HD2of the victim-side actuator AC2can be reduced to fall within the ±thPESrange, and the data sector can be read and written. FIG.6is a diagram illustrating a reordering table according to the embodiment. A table RTD1indicates a reordering table for the low jerk seek, and a table RTD2indicates a reordering table for the high jerk seek. Data STD1indicates a seek time during the low jerk seek control, and data STD2indicates a seek time during the high jerk seek control. The reordering tables (RTD1, RTD2) and the characteristics of the seek time (STD1, STD2) may be stored in the nonvolatile memory90and developed in the volatile memory70when the magnetic disk device1is activated. In addition, when selection is made by the reordering table selection unit6131, the data may be stored in the volatile memory70or the buffer memory80. The actuator AC of the present embodiment enables at least a seek control having a low Jerk characteristic falling within the ±thjerkrange indicated by the characteristic TC22ofFIG.4and a high Jerk characteristic exceeding at least the ±thjerkrange indicated by the characteristic TC12ofFIG.4. In this control, for example, when the head HD1is controlled by the actuator AC1of the aggressor, it is desirable to cause peak values of the VCM acceleration to match with each other (from the viewpoint of seek time) as indicated by the characteristics TC11and TC21, regardless of whether the seek control is performed by the low jerk characteristic or the seek control is performed by the high jerk characteristic. Note that in the seek control of the characteristic TC12and the characteristic TC22, at least one of the VCM current peak and the acceleration peak may be the same or different. When the actuator AC2does not perform at least a seek operation, the actuator AC1performs a seek with a high jerk exceeding the ±thjerkrange. As a result, the seek time of the actuator AC1can be shortened, a command can be accessed in a short time, and a command access performance can be improved. 
On the other hand, in the case of a low jerk falling within the range of ±thjerk, the seek time becomes long as indicated by STD1inFIG.6, and the command access performance (the number of commands that can be accessed per unit time) may deteriorate. FIG.7is a timing chart in which the magnetic disk device according to the embodiment executes the seek operation, and illustrates an example of an overall flow from the reception of a command transmitted from the host system2to the seek operation. Here, it is assumed that the command transmitted from the host system2is a command to the disk DK1processed by the control system A. The host system2outputs a data read or write request (referred to as a command) Cmd21to the magnetic disk device1(SC21). When receiving the command Cmd21transmitted from the host system2, the magnetic disk device1stores the command in a free portion of the command queue1(CQ1-1) on the buffer memory80A of the magnetic disk device1. Note that CQ1-1to CQ1-5inFIG.7indicate the same command queue1and indicate time transition of the state. The command control unit610A selects a command Cmd11to be executed in SC101before the SC21, and issues a seek request for accessing data on the DK1designated by the command Cmd11to the servo control unit620A (SC102). The servo control unit620A selects a pattern of a seek orbit within the predicted time among seek orbits corresponding to the disk radius movement distances of the head HD1to the target data sector TGT according to the time (estimated time) when the head HD1reaches the target data sector TGT of the data designated by the command request Cmd11(SC201). As a result, writing and reading can be performed in the target data sector TGT. The servo control unit620A performs seek control of the head HD1with respect to the target data sector TGT in the selected pattern of seek orbit (SC202). When the head HD1moves to the track having the target data sector TGT by the seek control, the head HD1issues a seek completion notification to the command control unit610A (SC203). When the command control unit610A waits for the rotation of the disk DK1and detects the target data sector TGT (SC104), the command control unit controls the head amplifier IC30A to read or write the data designated by the read head RH1or the write head WH1(SC105). According to the above procedure, it is possible to read or write data to the disk DK1by the Cmd11. For example, the HDC60A of the control system A notifies the host system2that the Cmd11has been completed (SC106). In SC201to SC106described above, while the head HD1or the like is processing the Cmd11, the command control unit610A selects a command to be executed next to the Cmd11from the commands stored in CQ1-2(SC103). In the SC103, the state of the other actuator AC2is confirmed, and the jerk setting and the command Cmd12are selected according to the state of the actuator AC2. When the processing of the Cmd11is completed in SC106, the processing of the Cmd12is executed in the seek operation from SC201in the same manner as SC106(SC107, SC204, SC205). In addition, a new command Cmd22from the host system2is also stored in the free portion of the command queue1(CQ1-4), and the command control unit610A selects a command to be executed next to Cmd12from the commands stored in CQ1-5in the same manner as SC103(SC108). Thereafter, the magnetic disk device1repeats the same procedure in response to a request command from the host system2. 
FIG. 8 is a flowchart in which the magnetic disk device according to the embodiment executes command selection, and is a flowchart for the command control unit 610A to select a command from the command queue 1 in the command selection (reordering) in SC101, SC103, and SC108 in FIG. 7. The flowchart will be described as processing in the actuator AC1 and the control system A which controls the actuator AC1. In the command control unit 610A, the actuator state confirmation unit 611A confirms the state of the actuator AC2 of the control system B (step S1). The reordering table selection unit 6131A selects a reordering table based on the state of the actuator AC2 confirmed in step S1 (step S2). The command selection unit 613A selects a command from the command queue 1 based on the reordering table selected in step S2 (step S3). In step S3, for example, a command having the shortest access time (seek time + rotation waiting time) in a command group is selected. FIG. 9 is a flowchart in which the magnetic disk device according to the embodiment sets the type of the jerk seek control, and corresponds to the process of S1 of FIG. 8. The actuator state communication unit 612A acquires a power mode, which is the state information of the actuator AC2, and the number of held commands held in the command queue 2 from the actuator state communication unit 612B of the other control system B (step S101). The actuator state confirmation unit 611A confirms whether the power mode acquired in step S101 is Active (step S102). In a case where the power mode of the actuator AC2 of the control system B is not Active (No in step S102), the actuator state confirmation unit 611A sets 2, indicating the high jerk seek control, in its own parameter Type_JerkSeek (step S107). A case where the power mode is not Active is, for example, a case where the power mode is Standby, Idle, or Sleep. That is, in a case where the power mode of the actuator AC2 is not Active, the HDC 60A performs seek control of the actuator AC1 with a high jerk. On the other hand, in a case where the power mode of the actuator AC2 of the control system B is Active (Yes in step S102), the actuator state confirmation unit 611A confirms the number of held commands of the command queue 2 acquired in step S101 (step S103). In a case where the number of commands in the command queue 2 is 0 (No in step S103), the actuator state confirmation unit 611A sets 2, indicating the high jerk seek control, in its own parameter Type_JerkSeek (step S107). In a case where the number of commands of the actuator AC2 is 1 or more (Yes in step S103), the actuator state confirmation unit 611A confirms Type_JerkSeek set in the actuator AC2 (step S104). In a case where Type_JerkSeek = 2 (Yes in step S104), the actuator state confirmation unit 611A sets 0, indicating the low jerk seek control, in its own parameter Type_JerkSeek (step S105). In addition, in a case where Type_JerkSeek is other than 2 (No in step S104), the actuator state confirmation unit 611A sets 1, which also indicates the low jerk seek control, in its own parameter Type_JerkSeek (step S106). The above setting procedure of Type_JerkSeek can also be summarized as follows. That is, in a case where the power mode of the actuator AC2 of the other control system B is Standby or Idle, the high jerk seek control (Type_JerkSeek = 2) is set. In a case where the power mode of the other actuator AC2 is Active and the number of commands of the command queue 2 related to the other actuator AC2 is 0, the high jerk seek control (Type_JerkSeek = 2) is set.
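The setting procedure of FIG. 9, summarized here and continued below, reduces to a small decision over the other actuator's power mode, queue depth, and Type_JerkSeek. The following sketch is illustrative only; it follows the encoding used in the text (2 for high jerk, 0 and 1 for low jerk), and the function and constant names are not from the disclosure.

```python
# Sketch of the FIG. 9 decision flow for the parameter Type_JerkSeek of one
# control system, based on the state reported by the other control system.
# Encoding follows the text: 2 = high-jerk seek; 0 and 1 = low-jerk seek
# (0 when the other actuator is itself set to high jerk).

HIGH_JERK = 2
LOW_JERK_OTHER_HIGH = 0
LOW_JERK = 1

def set_type_jerk_seek(other_power_mode, other_queue_depth, other_type_jerk_seek):
    if other_power_mode != "Active":          # Standby / Idle / Sleep (step S102: No)
        return HIGH_JERK                      # step S107
    if other_queue_depth == 0:                # step S103: No
        return HIGH_JERK                      # step S107
    if other_type_jerk_seek == HIGH_JERK:     # step S104: Yes
        return LOW_JERK_OTHER_HIGH            # step S105
    return LOW_JERK                           # step S106

# Examples matching the summary in the text:
assert set_type_jerk_seek("Idle", 0, LOW_JERK) == HIGH_JERK
assert set_type_jerk_seek("Active", 3, HIGH_JERK) == LOW_JERK_OTHER_HIGH
assert set_type_jerk_seek("Active", 3, LOW_JERK) == LOW_JERK
```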
In a case where the power mode of the other actuator AC2is Active, the number of commands of the command queue2related to the other actuator AC2is 1 or more, and Type_JerkSeek of the other actuator AC2is 2, the low Jerk seek control (Type_JerkSeek=0) is set. In a case where the power mode of the other actuator AC2is Active, the number of commands of the command queue2related to the other actuator AC2is 1 or more, and Type_JerkSeek of the other actuator AC2is other than 2, the low Jerk seek control (Type_JerkSeek=1) is set. FIG.10is a flowchart in which the magnetic disk device according to the embodiment sets the type of the reordering table, and corresponds to the process of S2ofFIG.8. The reordering table selection unit6131A confirms Type_JerkSeek set by the actuator state confirmation unit611A in the flow ofFIG.9(step S201). In a case where Type_JerkSeek is a low Jerk seek of 0 or 1 (‘0, 1’ in step S201), the reordering table selection unit6131A acquires a reordering table for the low Jerk seek from the volatile memory70A and sets the reordering table in the servo control unit620A (step S202). In a case where Type_JerkSeek is a high jerk seek of 2 (‘2’ in step S201), the reordering table for the high jerk seek is acquired from the volatile memory70A and set in the servo control unit620A (step S203). FIG.11is a flowchart in which the magnetic disk device according to the embodiment executes the command selection, and corresponds to the process of S3ofFIG.8. The command selection unit613A confirms Type_JerkSeek set by the actuator state confirmation unit611A in the flow ofFIG.9(step S301). In a case where Type_JerkSeek set by the actuator state confirmation unit611A is not 0 (No in step S301), the command selection unit613A performs a normal command selection process (step S304). That is, in step S304, the command selection unit613A selects a command with a short access time from a command group in the command queue1of the control system A as in the related art. The servo control unit620A processes the command selected in step S304. More specifically, in the servo control unit620A, the low jerk seek control unit6221A performs seek control of the head HD1with the low jerk seek pattern in the case of Type_JerkSeek=1, and the high jerk seek control unit6221A performs seek control of the head HD1with the high jerk seek pattern in the case of Type_JerkSeek=2. In step S301, in a case where Type_JerkSeek is 0 (0 indicates low jerk seek control. However, the other actuator AC2is set to the high jerk seek control) (Yes in step S301), the command selection unit613A confirms the end time of the command currently being executed on the actuator AC2side and the access time of each command in the command queue1of the control system A (step S302). In step S302, in a case where there is no command in the command queue1that requires a seek time longer than the end time of the command being executed on the other actuator AC2side (No in step S302), the command selection unit613A performs the normal command selection process (step S304). That is, a command with a short access time is selected from the command group in the command queue1for the control system A as in the related art (step S304). In step S304, at this time, the servo control unit620A desirably performs a seek with the JIT seek in a time obtained by adding a rotation time of one turn. Accordingly, power consumption due to the seek can be reduced. 
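As a non-limiting sketch of the reordering described for FIG. 6, FIG. 10, and step S304, the following code selects the queued command with the shortest access time (seek time plus rotation waiting time), looking the seek time up in a table that depends on the jerk mode in effect. The table contents, revolution time, and command fields are assumptions made for illustration.

```python
# Sketch of reordering with a jerk-dependent seek-time table: the table mapping
# seek distance to seek time differs between low-jerk and high-jerk control,
# and the command with the shortest access time (seek time + rotation waiting
# time) is selected from the queue. All values below are illustrative.

REV_TIME_MS = 8.3     # one disk revolution (assumed, roughly 7200 rpm)

def seek_time_ms(distance_tracks, table):
    # 'table' is a list of (max_distance, seek_time_ms) rows, shortest first.
    for max_dist, t in table:
        if distance_tracks <= max_dist:
            return t
    return table[-1][1]

def rotation_wait_ms(current_angle_ms, target_angle_ms, arrival_ms):
    return (target_angle_ms - (current_angle_ms + arrival_ms)) % REV_TIME_MS

def select_command(queue, head_track, current_angle_ms, table):
    """Pick the queued command with the smallest seek plus rotational latency."""
    def access_time(cmd):
        s = seek_time_ms(abs(cmd["track"] - head_track), table)
        return s + rotation_wait_ms(current_angle_ms, cmd["angle_ms"], s)
    return min(queue, key=access_time)

LOW_JERK_TABLE  = [(1000, 2.0), (10000, 4.5), (100000, 9.0)]   # illustrative only
HIGH_JERK_TABLE = [(1000, 1.2), (10000, 3.0), (100000, 6.5)]   # illustrative only

queue = [{"track": 1200, "angle_ms": 3.0}, {"track": 90, "angle_ms": 7.5}]
best = select_command(queue, head_track=100, current_angle_ms=1.0,
                      table=HIGH_JERK_TABLE)
```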
In step S302, in a case where the command that requires a seek time longer than the end time of the command on the other actuator AC2side is in the command queue1(Yes in step S302), the command selection unit613A selects a command with a short access time from the commands in the command queue1that requires a seek time longer than the end time of the command on the other actuator AC2side (step S303). In step S303, at this time, the servo control unit620A desirably performs a seek with the JIT seek. Accordingly, power consumption due to the seek can be reduced. The above procedure can be summarized as follows. There is a problem that the control system A cannot access data due to the influence of vibration by the control system B while the actuator AC2of the control system B performs the high jerk seek operation. In order to avoid this problem, the control system A performs control so that the head HD1accesses data (data read/write process) after the high jerk seek operation by the control system B ends. For example, in the processing of the command selected from the command queue1in the normal command selection process (corresponding to step S304), the control system A may execute the read/write process operation by the head HD1after waiting for one rotation of the disk DK1after moving the head HD1to the target track by the seek control. Note that in a case where the high Jerk seek operation by the control system B is not ended even after waiting for one rotation of the disk DK1, the control system A may perform the data read/write process after waiting for a further rotation time of the disk DK1. In steps S303and S304, as a method of selecting a command with a short access time from the command group in the command queue1for the control system A, a method may be adopted which selects a command of a first destination with a short access time in the path in consideration of not only the first destination but also following second destination and third destination and the like. The command selection unit613A executes the command selected by the above flow. The servo control unit620A controls the driver IC20, the actuator AC1, and the like based on the command selected by the command selection unit613A. FIG.12is a flowchart in which the magnetic disk device according to the embodiment executes the seek control, and illustrates processing of the command selected in step S303ofFIG.11. The seek control is control to move the head HD to the track (target track) of the target data sector designated by the command. The JIT seek control unit620A of the servo control unit6223A selects a pattern of a seek orbit in time for the data sector (target data sector TGT) (step S11). In the pattern selection of the seek orbit, for example, in the case of the same access performance, it is preferable that the power consumption by the seek operation is low, and in step S11, the JIT seek control unit6223A selects, for example, the pattern of the seek orbit with the lowest power consumption (also referred to as the JIT seek). FIG.13Ais a diagram illustrating timings at which two control systems of the magnetic disk device according to the embodiment execute the seek control. FIG.13Apart (a) illustrates a state SA1of the control system A and a state SB1of the control system B on the same time axis, and corresponds to the states of the control systems A and B in Yes in S301ofFIG.11. 
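The branch of FIG. 11 taken when Type_JerkSeek is 0 can likewise be sketched as follows; the helper names and the one-revolution deferral constant are illustrative assumptions, with access_time standing for the seek-plus-rotation estimate sketched above.

```python
# Illustrative sketch of the FIG. 11 branch for Type_JerkSeek = 0 (this actuator
# is limited to low-jerk seeks because the other actuator is performing a
# high-jerk seek). Commands whose data access falls after the other actuator's
# expected seek end time are preferred (step S303); otherwise normal selection
# is used and the read/write is deferred by about one disk revolution (step
# S304). The revolution time is an assumed value.

REV_TIME_MS = 8.3   # one disk revolution (assumed)

def select_command_low_jerk(queue, other_seek_end_ms, access_time):
    """Return (selected command, extra wait in ms before accessing its data)."""
    late_enough = [c for c in queue if access_time(c) > other_seek_end_ms]
    if late_enough:                              # step S302: Yes -> step S303
        return min(late_enough, key=access_time), 0.0
    best = min(queue, key=access_time)           # step S302: No  -> step S304
    return best, REV_TIME_MS                     # wait out the high-jerk seek
```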
In the control system B in the state SB1of the control system B, it is indicated that a command (referred to as a command B) for a target data sector TGT-B is being processed, and the control system B is in the middle of seek control in the high jerk seek at a time T1. In the control system A in the state SA1of the control system A, it is indicated that while the control system B performs the seek control in the high jerk seek, the seek control by the low jerk can be performed on the actuator AC1, but the read/write control for the target data sector is not possible. The control system B ends the high jerk seek control at a time T3, and the control system A can perform read/write control on the target data sector. FIG.13Apart (b) illustrates a state SA2of the control system A and a state SB2of the control system B on the same time axis, and corresponds to the states of the control systems A and B in Yes in step S302ofFIG.11. At the time T1of the state SA2of the control system A, the command selection unit613A compares an estimated seek control end time T5for a target data sector TGT-A of the command (referred to as a command A) to be processed by the control system A with an estimated time T3at which the seek control by the high jerk seek control of the control system B is completed, and the command selection unit613A of the control system A selects the JIT seek for the seek control for the target data sector TGT-A. FIG.13Apart (c) illustrates a state SA3of the control system A and a state SB3of the control system B on the same time axis, and corresponds to the states of the control systems A and B in a case where the command is selected in S304after No in step S302ofFIG.11and the seek operation is performed with the waveform of the corresponding JIT seek pattern. At the time T1of the state SA3of the control system A, the command selection unit613A compares an estimated seek control end time T2for the target data sector TGT-A of the command (referred to as the command A) to be processed by the control system A with the estimated time T3at which the seek control by the high jerk seek control of the control system B is completed, and the command selection unit613A of the control system A selects the JIT seek for the seek control for the target data sector TGT-A. However, in the case ofFIG.13Apart (c), the start time T2of the read/write control to the target data sector TGT-A by the control system A temporally overlaps the high jerk seek control of the control system B, and there is a large possibility that an error occurs in the read/write control to the TGT-A. In order to avoid this error, for example, the control system A may execute the read/write control to the target data sector TGT-A after waiting for the end of the high jerk seek control of the control system B. FIG.13Apart (d) corresponds to the states of the control systems A and B in the case of performing the read/write control to the target data sector TGT-A after the control system A waits for the end of the high jerk seek control of the control system B in the case ofFIG.13Apart (c).FIG.13Apart (d) illustrates an example in which the control system A waits for one cycle of the rotation time of the disk. The control system A starts the read/write control of the target data sector TGT-A at a time T6when one cycle of the rotation time of the disk has further elapsed from the start time T2of the first read/write control of the target data sector TGT-A. 
The control system A selects the JIT seek in which the seek control to the target data sector TGT-A ends at the time T6. As a result, the control system A can execute the read/write control to the target data sector TGT-A without being affected by the high jerk seek control of the control system B. FIG.13Bis a diagram illustrating a relationship between operation states of two control systems of the magnetic disk device according to the embodiment. The tracking control is control for positioning the head HD in the target track, and may be considered as a state when the seek (including the high jerk seek, the low jerk seek, and the like) is not being executed. For example, inFIG.13A, the portion (data read drive time) of the TGT (TGT-B, TGT-A), the time after the portion (data read drive time) of the TGT (TGT-B, TGT-A), and the time before the start of the seek indicate the state of “tracking control”. Using the operation parameter Type_JerkSeek set inFIG.9, the actuator block100or the actuator AC may have, for example, Type_JerkSeek=−1 in the state of tracking control. In addition, as a state in which the head HD is reading or writing data, another value may be assigned to Type_JerkSeek. A state STT1indicates an operation state permitted to one actuator block (referred to as the actuator block100A) when the other actuator block (referred to as the actuator block100B) is executing the seek in the high jerk seek control. That is, when the actuator block100B executes the seek in the high jerk seek control, the actuator block100A performs control such that the tracking control is possible, the data read/write of the head HD1is not possible, the low jerk seek control is possible, and the high jerk seek control is not possible. A state STT2indicates that the tracking control, the data read/write of the head HD1, the low jerk seek control, and the high jerk seek control are possible when the actuator block100B is not executing the seek in the high jerk seek control. FIG.14is a diagram illustrating an example of relationships of an acceleration and a position of the seek control in the magnetic disk device according to the embodiment. A TC100indicates an example of the time characteristic (time to seek acceleration) of the seek acceleration of the head HD1, a TC200indicates an example of the time characteristic (time to seek position) of the seek position of the head HD1, the characteristics TC101and TC201indicate characteristics in the case of the fastest seek, and the characteristics TC102and TC202indicate characteristics in the case of the JIT seek. A target data sector TGT is assumed to be one sector on one track on the disk DK1. The fastest seek is the fastest seek pattern for causing the head HD1to reach the target data sector TGT, and is, for example, a case where the head HD1is moved by the high jerk seek control as in TC11illustrated inFIG.4. The JIT seek is a seek in which control is performed with the pattern of the seek orbit optimized to reduce power consumption of the magnetic disk device1by the seek control. More specifically, the JIT seek is the pattern of the seek orbit optimized to reduce power consumption or the like among the seek patterns in which the seek is completed immediately before the target data sector. 
In a case where the actuator state confirmation unit 611A performs the seek with the characteristic TC101 of the fastest seek pattern that meets the Type_JerkSeek condition set in the flow of FIG. 9, the head reaches the track (target track) having the target data sector TGT earlier by ΔTseek time (REF202). In step S11 of FIG. 12, the seek pattern whose seek time is longer by the amount corresponding to ΔTseek, that is, the JIT seek, is selected. For example, in the characteristic TC102 of the JIT seek in FIG. 14, the acceleration level in the acceleration and deceleration sections is made lower than in the characteristic TC101 of the fastest seek pattern (the voltage level applied to the VCM is lowered), and the seek distance to the target track is adjusted by the length of a zero acceleration section (it is sufficient if the acceleration is near zero), also referred to as a constant speed section. Reducing the acceleration level has an effect of reducing the power consumption P of the disk device 1. The power consumption P is determined by the current Ivcm flowing through the VCM and a circuit resistance R including the VCM, and in a case where the acceleration level is lowered, the current Ivcm flowing through the VCM is lowered, so that the power consumption P is lowered: P = R × Ivcm². Note that in the above example, the power consumption is decreased by lowering the acceleration level, but the acceleration section and the deceleration section may instead be shortened, and adjustment may be made by the length of the zero acceleration section (it is sufficient if the acceleration is near zero), also referred to as the constant speed section, to reduce the power consumption. In the adjustment of the seek control of the characteristic TC102, adjustment is performed by any combination of a time or acceleration level in the acceleration section, a time or acceleration level in the deceleration section, and a time in the constant speed section of the actuator AC1. After selecting the pattern of the seek orbit such as an acceleration characteristic T101 in step S11 of FIG. 12, the seek control unit 622A executes the seek control according to the selected seek orbit pattern (seek control setting). More specifically, the seek control unit 622A controls the actuator AC1 from the low jerk seek control unit 6221A or the JIT seek control unit 6223A in accordance with its own parameter Type_JerkSeek set by the actuator state confirmation unit 611A, for example, when Type_JerkSeek = 0. According to the above procedure, the actuator AC1 can be operated in consideration of the state of the other actuator AC2, the seek time can be shortened so that the command can be accessed in a short time, and the command access performance can be improved.
(Modification)
In the embodiment, the example of the magnetic disk including two control systems for controlling two disks DK and actuators AC has been described, but the number of disks DK, actuators AC, and control systems may be two or more. For example, the spindle 12 may be provided with two or more disks DK, and each may be provided with the actuator block 100. In addition, there may be two or more spindles 12, each of which may be provided with two or more disks DK and the actuator block 100 for controlling the disks, or may be provided with two or more control systems.
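As a purely numeric illustration of the relation P = R × Ivcm² discussed above (all values below are invented for the example and are not from the disclosure), lowering the acceleration level and absorbing the released time in a constant speed section covers the same seek distance with less I²R dissipation:

```python
# Numeric illustration of the JIT-seek power argument: for the same seek
# distance, a lower acceleration level (hence lower VCM current) plus a longer
# constant speed section dissipates less I^2*R energy, at the cost of arriving
# later, within the slack available before the target sector comes around.
# Resistance, current scaling, and the profile times are invented values.

R_VCM = 8.0    # VCM circuit resistance [ohm] (assumed)
K_ACC = 1.0    # acceleration produced per unit of VCM current (assumed)

def seek_profile(i_level, t_acc, t_coast):
    """Symmetric accelerate/coast/decelerate profile driven at current i_level."""
    a = K_ACC * i_level
    v_peak = a * t_acc
    distance = a * t_acc**2 + v_peak * t_coast      # accel + decel + coast
    total_time = 2 * t_acc + t_coast
    energy = R_VCM * i_level**2 * (2 * t_acc)       # current flows in accel/decel
    return distance, total_time, energy

# Fastest seek: full current, no coast section.
d_fast, t_fast, e_fast = seek_profile(i_level=1.0, t_acc=3e-3, t_coast=0.0)
# JIT-style seek: 60% current, coast section chosen to cover the same distance.
d_jit, t_jit, e_jit = seek_profile(i_level=0.6, t_acc=3e-3, t_coast=2e-3)

print(abs(d_fast - d_jit) < 1e-12)   # True: same seek distance
print(t_jit > t_fast)                # True: arrives later (uses available slack)
print(e_jit < e_fast)                # True: lower I^2*R energy
```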
In a case where the magnetic disk device 1 includes two or more actuators AC (or actuator blocks 100), for example, "the other actuator AC (or the actuator block 100)" in steps S102, S103, and S104 of FIG. 9, FIG. 13B, or the like may be a plurality of actuators AC (or actuator blocks 100) other than the actuator AC (or actuator block 100) of interest. FIG. 15 is a configuration diagram of a magnetic disk device according to a modification. The magnetic disk device of the present modification is similar to the magnetic disk device 1 of FIG. 1 except that the HDA 10 is an HDA 10-2. The HDA 10-2 is an example in which two actuator blocks 100 (similar to FIG. 2A) are attached in the horizontal direction (the case of the HDA 10 is defined as a vertical direction) of one spindle (SPM) 12. Unlike the HDA 10 of FIG. 1, in the HDA 10-2, the actuator blocks 100A and 100B are attached to two coaxial BRs for the VCM, respectively. In the magnetic disk device of the present modification, when the actuator blocks 100A and 100B are controlled by two control systems A and B (similar to FIG. 1) in the same manner as the control illustrated in the embodiment, respectively, it is possible to perform command access in consideration of the influence of the microactuator. Features described in the present embodiment are extracted as follows.
(A-1) A magnetic disk device 1 which includes two or more independently drivable actuator blocks 100 and performs seek control with a low jerk in which a jerk that is a derivative of acceleration is limited, in which, in a state where an actuator block 100B is not accessing a data sector of a disk, an actuator block 100A accesses the data sector of the disk by seek control with a high jerk.
(A-2) The magnetic disk device according to (A-1), in which a state where the actuator block 100B is not accessing the data sector of the disk is a case where a power mode of the actuator block 100B is not Active or a state where a command queue 2, in which a command that is an access request from a host system 2 to the disk is stored, is in an empty state.
(A-3) The magnetic disk device according to (A-1), in which a nonvolatile memory holds a low jerk seek control parameter for executing the jerk-limited seek control with the low jerk and a high jerk seek control parameter for executing the seek control with the high jerk.
(A-4) The magnetic disk device according to (A-3), in which the low jerk seek control parameter and the high jerk seek control parameter stored in the nonvolatile memory are developed in a volatile memory at a time of activation.
(A-5) The magnetic disk device according to (A-1), in which a correspondence table of a seek time with respect to a seek distance for performing command reordering is provided in each of the jerk-limited seek control with the low jerk and the seek control with the high jerk.
(A-6) The magnetic disk device according to (A-5), in which the correspondence table is switched when the jerk-limited seek control with the low jerk is switched to the seek control with the high jerk.
(A-7) The magnetic disk device according to any one of (A-1) to (A-6), in which, when the actuator block 100A performs seek control while the actuator block 100B is performing the seek control with the high jerk, in a case where one or more commands which are accessible to a data sector controlled by the actuator block 100A at a time later than a seek end time of the actuator block 100B are in a command queue 1, a command to be accessed next is selected from the commands in the command queue 1.
(A-8) The magnetic disk device according to any one of (A-1) to (A-7), in which, when the actuator block 100A performs seek control while the actuator block 100B is performing the seek control with the high jerk, in a case where there is no command in the command queue 1 which is accessible to the data sector at the time later than the seek end time of the actuator block 100B, the actuator block 100A selects a command to be accessed next from the command queue 1, adds a one-turn rotation time of the disk after the seek end time of the actuator block 100B, and accesses the command.
(A-9) The magnetic disk device according to any one of (A-1) to (A-8), in which, after a target data sector is determined in the seek control of the actuator block 100A, the seek control of the actuator block 100A is adjusted such that the seek of the actuator block 100A ends immediately before a time of accessing the target data sector.
(A-10) The magnetic disk device according to (A-9), in which, in adjustment of the seek control of the actuator block 100, the adjustment is performed by any combination of a time or an acceleration level in an acceleration section of an actuator AC, a time or an acceleration level in a deceleration section, and a time in a constant speed section.
(A-11) The magnetic disk device according to any one of (A-1) to (A-10), in which, in the seek control with the high jerk and the seek control with the low jerk by the actuator block 100, peak levels of either acceleration or current in the seek controls are matched with each other.
By adopting the above (A-1), in a state where a certain actuator block is not accessing a data sector, a seek which allows the target data sector to be reached more quickly can be performed.
By adopting the above (A-2), the (A-1) can be performed when the power mode of the HDD is Idle or Standby or when the command queue to the HDD requested from the host is in an empty state.
By adopting the above (A-3), it is possible to perform the jerk-limited seek control with the low jerk and the seek control with the high jerk.
By adopting the above (A-4), in a configuration in which the overhead due to the processing time on the FW is small, it is possible to perform the jerk-limited seek control with the low jerk and the seek control with the high jerk.
By adopting the above (A-5), the command control side can perform the command reordering corresponding to the seek control of the high jerk.
By adopting the above (A-6), the command control side can perform the command reordering corresponding to the seek control of the high jerk. In a case where this is not adopted, even when the seek control with the high jerk is performed, rotation waiting occurs and an access performance is not improved.
By adopting the above (A-7), it is possible to adjust a timing such that writing and reading to and from the data sector are not performed at the time when interference vibration due to the high jerk seek of a certain actuator block is applied.
By adopting the above (A-8), it is possible to adjust the timing such that writing and reading to and from the data sector are not performed at the time when interference vibration due to the high jerk seek of a certain actuator block is applied.
By adopting the above (A-9), it is possible to reduce power consumption in the same access performance.
By adopting the above (A-10), it is possible to perform seek control adjustment in which the seek is ended immediately before the time of accessing the target data sector.
By adopting the above (A-11), even when the current peak or the acceleration peak is the same, the seek time can be changed by changing the jerk. According to at least one embodiment and modification described above, it is possible to provide the magnetic disk device which improves the performance of a multi-actuator. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. In addition, the processes illustrated in the flowcharts, the sequence charts, and the like may be realized by software (programs and the like) operated by hardware such as an IC chip and a digital signal processing processor (DSP), a computer including a microcomputer, or the like, or a combination of the hardware and the software. In addition, the device of the present invention is also applied to a case where the claims are expressed as a control logic, a case where the claims are expressed as a program including an instruction for executing a computer, and a case where the claims are expressed as a computer-readable recording medium describing the instruction. In addition, names and terms used are not limited, and even other expressions are included in the present invention as long as they have substantially the same content and the same purpose.
78,037
11862198
DETAILED DESCRIPTION The following description and the drawings sufficiently illustrate example embodiments to enable those skilled in the art to practice them. Other example embodiments may incorporate structural, logical, electrical, process, and other changes. Examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some example embodiments may be included in, or substituted for those of other example embodiments. Example embodiments set forth in the claims encompass all available equivalents of those claims. One of the current trends that can be observed with artists interacting with their fans is to ask fans to submit footage that they have filmed during a concert for later use by professional editors in concert videos. One example for this is a recent appeal issued by a performance act to their fans to film an event in New York City and provide the material later. This can be seen as a reaction by the artists to the more and more frequent usage of filming equipment such as mobile phones or pocket cameras during shows and the futility of trying to quench this trend by prohibiting any photo equipment. Some example embodiments may provide the technical apparatus and/or system to automatically synchronize multiple video clips taken from the same event, and by enabling a consumer to create his or her own personal concert recording by having access to a manifold of video clips, including the ability to add own, personal material. This makes use of the fact that many people carry equipment with them in their daily life that is capable of capturing short media or multimedia clips (photo, video, audio, text), and will use this equipment during events or shows to obtain a personal souvenir of the experience. Many people are willing to share content that they have created themselves. This sharing of content is not restricted to posting filmed media clips, but includes assembling media clips in an artistic and individual way. To leverage these observed trends, some example embodiments use media fingerprinting to identify and synchronize media clips, In some example embodiments, the media synchronization system may be used to synthesize a complete presentation of an event from individual, separately recorded views of the event. An example of this is to synthesize a complete video of a concert from short clips recorded by many different people in the audience, each located in a different part of the performance hall, and each recording at different times. In these example embodiments, there may be sufficient overlap between the individual clips that the complete concert can be presented. In some example embodiments, concert attendees may take on the role of camera operators, and the user of the system may take on the role of the director in producing a concert video. In some example embodiments, technologies such as audio fingerprinting and data mining are used to automatically detect, group, and align clips from the same event. While some example embodiment may be used with public events like concerts, these example embodiments may also be used with private and semi-private events like sports and parties. For example, a group of kids may each record another kid doing tricks on a skateboard. Later, those separate recordings could be stitched together in any number of different ways to create a complete skate video, which could then be shared. 
Just as personal computers became more valuable when email became ubiquitous, video recording devices like cameras and phones will become more valuable when effortless sharing and assembly of recorded clips becomes possible. Some example embodiments may play a key role in this as storage and facilitator of said content assembly. Some example embodiments may be used by moviemakers who enjoy recording events and editing and assembling their clips, along with those of other moviemakers, into personalized videos, which they then share. Some example embodiments may be used by movie viewers who enjoy watching what the moviemakers produce, commenting on them, and sharing them with friends. Both groups are populated with Internet enthusiasts who enjoy using their computers for creativity and entertainment. Some example embodiments may be used by the event performers themselves who may sanction the recordings and could provide high-quality, professionally produced clips for amateurs to use and enhance, all in an effort to promote and generate interest in and awareness of their performance. Some example embodiments may be used for recording concerts and allowing fans to create self-directed movies using clips recorded professionally and by other fans. FIG.1illustrates a hypothetical concert100with a band102playing on a stage104, with several audience members106.1-106.3making their own recordings with their image capture devices108.1-108.3, for example, cameras (e.g., point and shoot cameras) and phone cameras, and with a professional camera106.4recording the entire stage104. A main mixing board110may be provided to record the audio for the concert100. The professional camera106.1may capture the entire stage104at all times while the fan cameras108.1-108.3may capture individual performers102.1-102.4, or the non-stage areas of the performance space like the audience and hall, or provide alternate angles, or move about the performance space. Some example embodiments may be suitable for use with public events like concerts, and may also be used for private and semi-private events like kids doing tricks on skateboards. For example,FIG.2illustrates one person202riding a skateboard through a course204while his friends206.1-206.3film him with their image capture devices, for example, cameras (e.g., point and shoot cameras) and phone cameras208.1-208.3. There may be a stereo210playing music to, inter alia, set the mood. In these example embodiments, the clips captured by the cameras208.1-208.3may be pooled, synchronized, and edited into a good-looking movie, for example, that may be shared on the Internet. In one example embodiment, as the skater moves past a camera, that camera's footage may be included in the final movie. Because the cameras may also record audio, each camera208.1-208.3may simultaneously record the same music played by the stereo (or any other audio equipment playing music at the event). Accordingly, in an example embodiment, audio fingerprinting is used to synchronize the clips without the need for conventional synchronization (e.g., synchronizing video based on related frames). In these example embodiments, the content itself may drive the synchronization process. When no music is playing during recording, some example embodiments may synchronize the clips using other audio provided the audio was sufficiently loud and varying to allow for reasonable quality audio fingerprints to be computed, although the scope of the disclosure is not limited in this respect. 
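A minimal sketch of such content-driven synchronization is shown below; it aligns two clips by cross-correlating coarse audio energy envelopes, which merely stands in for a real audio fingerprint match, and the toy signal, frame size, and helper names are assumptions made for illustration.

```python
# Minimal sketch of content-driven synchronization: estimate the time offset
# between two clips that recorded the same sound by cross-correlating coarse
# energy envelopes. A production system would match audio fingerprints instead;
# this stand-in only illustrates the alignment step. Pure Python, no deps.

import math, random

def energy_envelope(samples, frame):
    return [sum(x * x for x in samples[i:i + frame])
            for i in range(0, len(samples) - frame, frame)]

def estimate_offset_frames(env_a, env_b, max_lag):
    """Lag (in frames) at which env_b best lines up with env_a."""
    mean_a = sum(env_a) / len(env_a)
    mean_b = sum(env_b) / len(env_b)
    def score(lag):
        pairs = [(env_a[i] - mean_a, env_b[i - lag] - mean_b)
                 for i in range(len(env_a)) if 0 <= i - lag < len(env_b)]
        return sum(a * b for a, b in pairs) / len(pairs) if pairs else float("-inf")
    return max(range(-max_lag, max_lag + 1), key=score)

# Toy example: two "recordings" of the same sound, clip B starting 5 frames late.
random.seed(0)
sound = [random.uniform(-1, 1) * (1.0 + math.sin(i / 150.0)) for i in range(5000)]
env_a = energy_envelope(sound[0:4000], frame=100)
env_b = energy_envelope(sound[500:4500], frame=100)
print(estimate_offset_frames(env_a, env_b, max_lag=20))   # expected: 5
```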
For the purposes of this disclosure, an audio fingerprint includes any acoustic or audio identifier that is derived from the audio itself (e.g., from the audio waveform). FIG. 3 is a functional diagram of a media synchronization system 300 in accordance with some example embodiments. In FIG. 3, four example primary components of the media synchronization system 300 are illustrated. These example components may include, but are not limited to, a media ingestion component or module 302, a media analysis component or module 304, a content creation component or module 306, and a content publishing component or module 308. While it may be logical to process captured media sequentially from the media ingestion module 302 to the content publishing module 308, this is not a requirement, as a user likely may jump between the components many times as he or she produces a finished movie. The following description describes some example embodiments that utilize a client-server architecture. However, the scope of the disclosure is not limited in this respect, as other alternative architectures may be used. In accordance with some example embodiments, the media ingestion module 302 of the media synchronization system 300 may be used to bring source clips into the system 300, and to tag each clip with metadata to facilitate subsequent operations on those clips. The source clips may originate from consumer or professional media generation devices 310, including: a cellular telephone 310.1, a camera 310.2, a video camcorder 310.3, and/or a personal computer (PC) 310.4. Each user who submits content may be assigned an identity (ID). Users may upload their movie clips to an ID assignment server 312, attaching metadata to the clips as they upload them, or later as desired. This metadata may, for example, include the following:
Event Metadata: Name (e.g., U2 concert); Subject (e.g., Bono); Location (e.g., Superdome, New Orleans); Date (e.g., 12/31/08); Specific seat number or general location in the venue (e.g., section 118, row 5, seat 9); Geographic coordinates (e.g., 29.951 N, 90.081 W); General Comments (e.g., Hurricane Benefit, with a particular actor).
Technical Metadata: User ID; Timestamp; Camera settings; Camera identification; Encoding format; Encoding bit rate; Frame rate; Resolution; Aspect ratio.
Cinematic Metadata: Camera location in the event venue (e.g., back row, stage left, etc.); Camera angle (e.g., close up, wide angle, low, high, etc.); Camera technique (e.g., Dutch angle, star trek/batman style, handheld, tripod, moving, etc.); Camera motion (e.g., moving left/right/up/down, zooming in or out, turning left/right/up/down, rotating clockwise or counter-clockwise, etc.); Lighting (e.g., bright, dark, back, front, side, colored, etc.); Audio time offset relative to video.
Community Metadata: Keywords; Ratings (e.g., audio quality, video quality, camerawork, clarity, brightness, etc.).
Upon arrival at the ID assignment server 312, a media ID may be assigned and the media may be stored in a database along with its metadata. At a later time, for example, users may review, add, and change the non-technical metadata associated with each clip. While clips from amateurs may make up the bulk of submissions, in some example embodiments, audio and video clips recorded professionally by the performers, their hosting venue, and/or commercial media personnel may be used to form the backbone of a finished movie. In these example embodiments, these clips may become the reference audio and/or video on top of which all the amateur clips are layered, and may be labeled as such.
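Purely as an illustration of how the ingestion step might represent an uploaded clip together with the metadata categories listed above (the field names and types below are not defined by the disclosure):

```python
# Sketch of a clip record as the ingestion step might store it, grouping the
# metadata categories listed above. Field names and types are illustrative,
# not a schema defined by the disclosure.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EventMetadata:
    name: str = ""                 # e.g., "U2 concert"
    subject: str = ""              # e.g., "Bono"
    location: str = ""             # e.g., "Superdome, New Orleans"
    date: str = ""                 # e.g., "12/31/08"
    seat: str = ""                 # e.g., "section 118, row 5, seat 9"
    geo: Optional[tuple] = None    # e.g., (29.951, -90.081)
    comments: str = ""

@dataclass
class TechnicalMetadata:
    user_id: str = ""
    timestamp: float = 0.0
    camera_id: str = ""
    encoding_format: str = ""
    bit_rate_kbps: int = 0
    frame_rate: float = 0.0
    resolution: str = ""
    aspect_ratio: str = ""

@dataclass
class ClipRecord:
    media_id: str                          # assigned by the ID assignment server
    uri: str                               # where the (transcoded) clip is stored
    event: EventMetadata = field(default_factory=EventMetadata)
    technical: TechnicalMetadata = field(default_factory=TechnicalMetadata)
    cinematic: dict = field(default_factory=dict)   # camera location/angle/motion, ...
    community: dict = field(default_factory=dict)   # keywords, ratings, ...
    is_reference: bool = False             # professionally produced backbone clip
```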
In some example embodiments, reference audio may be provided directly off the soundboard (e.g., the main mixing board108shown inFIG.1) and may represent the mix played through the public address (PA) system at a concert venue. In certain embodiments, individual instruments or performers provide the reference audio. In some example embodiments, reference video may be provided from a high-quality, stable camera that captures the entire stage, or from additional cameras located throughout the venue and operated by professionals (e.g., the professional camera106.4). While several example embodiments are directed to the assembly of video clips into a larger movie, some example embodiments may be used to assemble still photos, graphics, and screens of text and any other visuals. In these example embodiments, still photos, graphics, and text may be uploaded and analyzed (and optionally fingerprinted) just like movie clips. Although these example embodiments may not need to use the synchronization features of the system300, pure audio clips could be uploaded also. These example embodiments may be useful for alternate or higher-quality background music, sound effects, and/or voice-overs, although the scope of the disclosure is not limited in this respect. In accordance with some example embodiments, the media analysis module304of the media synchronization system300may be used to discover how each clip relates to one or more other clips in a collection of clips, for example, relating to an event. After ingestion of the media into the system300, clips may be transcoded into a standard format, such as Adobe Flash format. Fingerprints for each clip may be computed by a fingerprinting sub-module314and added to a recognition server316. In some embodiments, the recognition server includes a database. The primary fingerprints may be computed from the audio track, although video fingerprints may also be collected, depending on the likelihood of future uses for them. In some example embodiments, additional processing may be applied as well (e.g., by the recognition server316and/or the content analysis sub-module318). Examples of such additional processing may include, but are not limited to, the following:Face, instrument, or other image or sound recognition;Image analysis for bulk features like brightness, contrast, color histogram, motion level, edge level, sharpness, etc.;Measurement of (and possible compensation for) camera motion and shake;Tempo estimation;Event onset detection and synchronization;Melody, harmony, and musical key detection (possibly to join clips from different concerts from the same tour, for instance);Drum transcription;Audio signal level and energy envelope;Image and audio quality detection to recommend some clips over others (qualities may include noise level, resolution, sample/frame rate, etc.);Image and audio similarity measurement to recommend some clips over others (features to analyze may include color histogram, spectrum, mood, genre, edge level, motion level, detail level, musical key, etc.);Beat detection software to synchronize clips to the beat;Image interpolation software to synchronize clips or still images (by deriving a 3-D model of the performance from individual video clips, and a master reference video, arbitrary views may be interpolated, matched, and synchronized to other clips or still images); orSpeech recognition. 
After initial processing, the fingerprints for a clip may be queried against the internal recognition server to look for matches against other clips. If a clip overlaps with any others, the nature of the overlap may be stored in a database for later usage. The system 300 may be configured to ignore matches of the clip to itself, regardless of how many copies of the clip have been previously uploaded. In some example embodiments, the system 300 may maintain a "blacklist" of fingerprints of unauthorized media to block certain submissions. This blocking may occur during initial analysis, or after the fact, especially as new additions to the blacklist arrive. In an example embodiment, a group detection module 320 is provided. Accordingly, clips that overlap may be merged into groups. For example, if clip A overlaps clip B, and clip B overlaps clip C, then clips A, B, and C belong in the same group. Suppose there is also a group containing clips E, F, and G. If a new clip D overlaps both C and E, then the two groups may be combined with clip D to form a larger group A, B, C, D, E, F, and G. Although many overlaps may be detected automatically through fingerprint matching, there may be times when either fingerprint matching fails or no clip (like D in the example above) that bridges two groups has been uploaded into the system 300. In this case, other techniques may be used to form a group. Such techniques may include analysis of clip metadata, or looking for matches on, or proximity in, for example: Event name and date; Event location; Clip timestamp; Clip filename; Submitter user ID; Camera footprint; Chord progression or melody; or Image similarity. In an example embodiment, clips that do not overlap anything may be included in the group. Such clips include establishing shots of the outside of the venue, people waiting in line or talking about the performance, shots to establish mood or tone, and other non-performance activity like shots of the crowd, vendors, set-up, etc. These clips may belong to many groups. In some example embodiments, the system 300 may be configured to allow users to indicate which groups to merge. Since not all users may group clips in the same way, care may be taken to support multiple simultaneous taxonomies. For example, clips associated with the same submitter user ID and/or camera footprint may be grouped together. The temporal offset of one clip from that camera for a given event (relative to other clips or a reference time base) may then be applied to all clips in the group. This temporal offset may also be applied to still images from that camera. In some example embodiments, the system 300 may be configured to allow users of the system 300 who all upload clips from the same event to form a group for collaboration, communication, and/or criticism. Automatic messages (e.g., email, SMS, etc.) may be generated to notify other group members if new clips are uploaded or a finished movie is published. In some example embodiments, the system 300 may be configured to automatically detect, inter alia, the lead instrument, primary performer, or player focus. While this may be accomplished through image or sound recognition, an alternative heuristic is to notice that, for example, more footage may be available for the guitarist during the solo passage.
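The merging of overlap groups described above (clips A, B, C and E, F, G collapsing into one group once clip D overlaps both) is a textbook connected-components problem; a non-limiting union-find sketch, with placeholder clip names, is shown below.

```python
# Sketch of the group-merging logic described above (clips A-B-C and E-F-G
# collapse into one group once clip D overlaps both), implemented as a
# standard union-find over detected overlaps. Clip names are placeholders.

class ClipGroups:
    def __init__(self):
        self.parent = {}

    def _find(self, clip):
        self.parent.setdefault(clip, clip)
        while self.parent[clip] != clip:
            self.parent[clip] = self.parent[self.parent[clip]]   # path halving
            clip = self.parent[clip]
        return clip

    def add_overlap(self, clip_x, clip_y):
        """Record that fingerprint matching found clip_x and clip_y overlapping."""
        self.parent[self._find(clip_x)] = self._find(clip_y)

    def group_of(self, clip):
        return self._find(clip)

groups = ClipGroups()
for x, y in [("A", "B"), ("B", "C"), ("E", "F"), ("F", "G")]:
    groups.add_overlap(x, y)
print(groups.group_of("A") == groups.group_of("G"))   # False: two separate groups
groups.add_overlap("D", "C")
groups.add_overlap("D", "E")
print(groups.group_of("A") == groups.group_of("G"))   # True: merged via clip D
```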
In these example embodiments, when a lot of footage is available for a scene, it may indicate that the scene may be a solo scene or other focus of the performance, and most media generation devices310may be focused on the soloist or primary performer at that moment. In some example embodiments, the content creation module306of the system300is used to build a finished movie from source clips contained in a media database322. In various example embodiments, after upload and analysis, the user may select clips to include in the final movie. This may be done by a clip browsing and grouping sub-module324that allows a user to select clips from among clips uploaded by the user, clips uploaded by other users, and/or clips identified through user-initiated text metadata searches. A metadata revision sub-module326allows the user to edit the metadata of a clip. In some embodiments, the media database322contains references (e.g., pointers, hyperlinks, and/or uniform resource locators (URLs)) to clips stored outside the system300. If the selected clips are part of any group of clips, the clip browsing and grouping sub-module324may include the other clips in the group in providing the user with a working set of clips that the user may then assemble into a complete movie. As the movie is built, the movie (or a portion of it) may be previewed to assess its current state and determine what work remains to be done. For example, a graphical user interface may be provided by a web interface to allow a user to access and manipulate clips. User involvement, however, is not absolutely necessary, and certain embodiments may build a finished movie automatically (e.g., without user supervision). In some example embodiments, movie editing tools and features may be provided in a movie editing sub-module328. Movie editing tools and features include one or more of the following:Simple cuts;Multitrack audio mixing;Audio fade in and out;Video fade in and out;Audio crossfading;Video dissolving;Wipes and masking;Picture-in-picture;Ducking (automatically lowering other audio during a voice-over or for crowd noise);Titles and text overlays;Chromakeyed image and video overlays and underlays;Video speed-up, slow-motion, and freeze-frame;Audio and video time stretching and shrinking;Video and audio dynamic range compression;Video brightness, contrast, and color adjustment;Color to black & white or sepia conversion;Audio equalization;Audio effects like reverberation, echo, flange, distortion, etc.;Audio and video noise reduction;Multichannel audio mastering (e.g., mono, stereo, 5.1, etc.);Synchronized image interpolation between two or more cameras for its own sake (morphing) or to simulate camera motion;“Matrix”-style effects; and/orSubtitles and text crawls. Because some example embodiments include a basic video editor, there may be essentially no limit to the number of features that may be made available in a user interface (e.g., a web-based user interface). Any available video editing technique or special effect may be integrated into the system300. In some example embodiments, the content creation module306of the system300may be implemented as a web application, accessed through a browser. Since people may be reluctant to install software, a web-based tool may allow for a wider audience, not just due to the cross-platform nature of web applications, but due to the fact that visitors may quickly begin using it, rather than downloading, installing, and configuring software. 
A web application may also be easier to maintain and administer, since platform variability is significantly reduced versus PC-based applications, although the scope of the disclosure is not limited in this respect. Web-based video editing may place great demands on network bandwidth and server speed. Therefore, some example embodiments of the content creation module306may be implemented on a PC; however, the scope of the disclosure is not limited in this respect. As embedded devices such as MP3 players, portable game consoles, and mobile phones become more capable of multimedia operations, and as network bandwidth increases, these platforms become more likely targets for the user interface. In some example embodiments, the media analysis module304and/or the content creation module306may be implemented, in part or entirely, on a server or on a client device. A central storage server connected to the Internet or a peer-to-peer network architecture may serve as the repository for the user generated clips. A central server system, an intermediary system (such as the end user's PC), or the client system (such as the end user's mobile phone) may be a distributed computational platform for the analysis, editing, and assembly of the clips. In some example embodiments, all the movie synthesis may occur on the server and only a simple user interface may be provided on the client. In these example embodiments, non-PC devices like advanced mobile phones may become possible user platforms for utilization of these example embodiments. These example embodiments may be particularly valuable since these devices are generally capable of recording the very clips that the system300may assemble into a movie. A feature that spans both the content creation module306and the content publishing module308would be the generation of “credits” at the end of the finished movie. These may name the director and also others who contributed clips to the final movie. In some example embodiments, the system300may be configured to automatically generate or manually add these credits. In some example embodiments, the credits may automatically scroll, run as a slide show, or be totally user-controlled. The content publishing module308of the media synchronization system300ofFIG.3may be used to share a finished movie with the world. A movie renderer330generates the finished movie. When the movie is complete, the user may publish it on the system's web site332, publish it to another video sharing site, and/or use it as a clip for another movie. Sharing features, such as RSS feeds, distribution mailing lists, and user groups, may be provided. Visitors may be allowed to leave comments on the movies they watch, email links to them to friends, embed the movies in their blogs and personal web pages, and submit a movie's permalink to shared bookmark and ratings sites. Commentary338, transactions, click counts, ratings, and other metadata associated with the finished movie may be stored in a commentary database334. To respect privacy, some clips and finished movies may be marked private or semi-private, and users may be able to restrict who is allowed to watch their movies. A movie viewing sub-module336may display the finished movie and offer access to movie editing tools328. In some example embodiments, future users may continue where earlier users left off, creating revisions and building on each other's work. While others may derive from one user's movie, only the original creator of a movie may make changes to the original movie.
In an example embodiment, all other users work only on copies that develop separately from the original. A basic version control system is optionally provided to facilitate an “undo” feature and to allow others to view the development of a movie. Because various example embodiments of the system300may control the movie creation process and store the source clips, to save space, rarely watched finished movies may be deleted and recreated on-the-fly should someone want to watch one in the future. In addition, while common videos may be edited and displayed at moderate and economical bit rates, premium versions may be automatically generated from the source clips at the highest quality possible, relative to the source clips. If suitable business arrangements may be made, source clips may be pulled from, and finished movies published to, one or more popular video sharing sites. Alternatively, one or more of the example embodiments described herein may be incorporated directly into web sites as a new feature, although the scope of the disclosure is not limited in this respect. Some example embodiments may provide plug-ins that include features of the system300for popular (and more powerful) video editing systems, so that people may use their preferred editors but work with clips supplied by the example system300. In this example scenario, the synchronization information that the system300automatically determines may be associated with the clips as metadata for future use by other editing systems. Example embodiments may be used for the creation of composite mash-up videos, which is done by the moviemakers. Example embodiments may also be used for the consumption of the videos created in the first application, which is done by the movie watchers. Example embodiments may be used to create composite mash-up videos for the following events, and many more. Essentially any event where people are often seen camera-in-hand would make a great subject for a video created using example embodiments. Events include the following:Large-scale concerts;Small-scale club gigs;Dancing and special events at nightclubs;Parties;Religious ceremonies:Weddings,Baptisms,Bar/Bat Mitzvahs;Amateur and professional sports:Skateboarding,Snowboarding,Skiing,Soccer,Basketball,Racing,Other sports;Amusement park attractions:Animal performances,Human performances,Rides;Parades;Street performances;Circuses:Acrobats,Magicians,Animals,Clowns;School and extracurricular events:Dance recitals,School plays,Graduations;Holiday traditions; and/orNewsworthy events:Political rallies,Strikes,Demonstrations,Protests. Some reasons to create a video with aid of the system may include:Pure creativity and enjoyment;Sharing;Sales;Mash-up video contests;Fan-submitted remixes and parodies;Promotion and awareness-raising;Multi-user-generated on-site news reporting; and/orDocumenting flash mobs. Contests and other incentives may be created to generate interest and content. Videos created using example embodiments may be enjoyed through many channels, such as:The system site itself;Video sharing sites;Social networking sites;Mobile phones;Performing artist fan sites;Schools;Personal web pages;Blogs;News and entertainment sites;Email;RSS syndication;Broadcast and cable television; and/orSet-top boxes. Since the operating service controls the delivery of the movie content, advertisements may be added to the video stream to generate revenue. Rights holders for the clips may receive a portion of this income stream as necessary. 
In some example embodiments, the four primary components of the system may be distributed arbitrarily across any number of different machines, depending on the intended audience and practical concerns like minimizing cost, computation, or data transmission. Some example system architectures are described below, and some of the differences are summarized in Table 1. In Table 1, each operation may correspond to one illustrated inFIG.3.

Table 1
Operation                     Single Machine   Client-Centric   Server-Centric   Peer-To-Peer
ID Assignment                 User's PC        Server           Server           Server
Fingerprinting                User's PC        User's PC        Server           Distributed
Clip Recognition              User's PC        Server           Server           Distributed
Group Detection               User's PC        Server           Server           Distributed
Content Analysis              User's PC        User's PC        Server           Distributed
Clip Browsing and Grouping    User's PC        User's PC        User's PC        User's PC
Metadata Revision             User's PC        User's PC        User's PC        User's PC
Movie Editing                 User's PC        User's PC        User's PC        User's PC
Media Database                User's PC        Server           Server           Distributed
Movie Viewing                 User's PC        User's PC        User's PC        User's PC
Movie Rendering               User's PC        User's PC        Server           Distributed
Web Site                      N/A              Server           Server           Server
Commentary DB                 N/A              Server           Server           Server
Commentary                    N/A              User's PC        User's PC        User's PC

Although the table describes hard lines drawn between the architectures, the scope of the disclosure is not limited in this respect, as actual implementations may comprise a mix of elements from one or more architectures.FIG.6, described in more detail below, illustrates an example of a system architecture. Some example embodiments may be configured to run entirely on a single client machine. However, a single user may not have enough overlapping video to make use of the system's automatic synchronization features. Specialized users, like groups of friends or members of an organization, may pool their clips on a central workstation on which they would produce their movie. The final movie may be uploaded to a web site, emailed to others, or burned to DVD or other physical media. In some example embodiments, a client-centric implementation may push as much work to the client as possible. In these example embodiments, the server may have minimal functionality, including:a repository of media clips that client machines draw from and that may be displayed on a web site;a fingerprint matching service to detect clip overlap; and/ora central authority for assigning unique IDs to individual clips. The client may handle everything else, including:fingerprinting;content analysis;video editing UI;video and audio processing; and/orfinal movie rendering. These example embodiments may be scaled to handle very large numbers of simultaneous users easily. In other example embodiments, a server-centric implementation may rely on server machines to handle as much work as possible. The client may have minimal functionality, for example, including:data entry;movie editing tool(s); and/ormovie viewing. The server may perform most everything else, for example:fingerprinting;content analysis;video and audio processing; and/orfinal movie rendering. A potential advantage of these example embodiments is that control over the functionality and performance is centralized at the server. Faster hardware, faster software, or new features may be deployed behind the scenes as the need arises without requiring updates to client software. If the client is web-based, even the look, feel, and features of the client user interface may be controlled by the server.
Another potential advantage is that the user's system may be extremely low-powered: a mobile phone, tablet PC, or set-top box might be sufficient. In some example embodiments, a distributed architecture may be provided in which there is no central storage of media clips. In these example embodiments, source clips may be stored across the client machines of each member of the user community. Unless they are implemented in a distributed fashion as well, in an example embodiment there may be a central database mapping clip IDs to host machines, and a centralized fingerprint recognition server to detect clip overlap. Like the client-centric example embodiments, in these distributed example embodiments, the client may implement all signal processing and video editing. Finished movies may be hosted by the client as well. To enhance availability, clips and finished movies may be stored on multiple machines in case individual users are offline. A potential advantage of these distributed example embodiments is that the host company needs a potentially minimal investment in hardware, although that would increase if a central clip registry or fingerprint recognition server needed to be maintained. FIG.4illustrates an example movie editing user interface in which media clips may be positioned relative to each other on a timeline, as determined by their temporal overlap. For example, where a user is editing a movie of a concert, media clips may be aligned in a manner that will preserve the continuity of the music, despite multiple cuts among different scenes and/or camera angles, when the finished movie is presented. In another example, where a user is editing a movie of a lecture, media clips may be aligned in a manner that will preserve the continuity of the lecturer's speech, despite multiple cuts among different scenes and/or camera angles, when the finished movie is presented. In yet another example, where a user is editing a movie of a crime scene, media clips may be aligned in a manner that will preserve the continuity of time code (e.g., local time) from one or more security cameras, despite multiple cuts among different scenes and/or camera angles, when the finished movie is presented. In other example embodiments, alignment of audio and/or text data may be based upon video fingerprinting. Users may be free to adjust this alignment, but they may also rely on it to create well-synchronized video on top of a seamless audio track or time code track. Also, since fingerprint-derived match positions may not be accurate to the millisecond, some adjustment may be necessary to help ensure that the beat phase remains consistent. Due to the differing speeds of light and sound, video of a stage captured from the back of a large hall might lead the audio by a noticeable amount. Some example embodiments may compensate for these differing speeds of light and sound. In some example embodiments, on a clip where the video and audio are out of synchronization, an offset value may be associated with the clip to make the clip work better in assembled presentations (e.g., movies). Like most professionally produced movies, the image and sound need not be from the same clip at the same time. In some example embodiments, the system300may be configured to present audio without the corresponding image for a few seconds, for instance, to create a more appealing transition between scenes.
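Two small calculations suggested by this passage can be sketched as follows: positioning clips on a shared timeline from fingerprint-derived offsets, and estimating how far the image from a distant camera leads its own sound. The 343 m/s value is the approximate speed of sound in air; the clip names, durations, and offsets are made up for illustration and do not come from the disclosure.

```python
SPEED_OF_SOUND_M_S = 343.0      # approximate speed of sound in air

def video_lead_seconds(distance_to_stage_m):
    """Seconds by which the image leads the on-camera sound at this distance."""
    return distance_to_stage_m / SPEED_OF_SOUND_M_S

def place_on_timeline(clips, offsets):
    """clips: {name: duration_s}; offsets: {name: start_s relative to a reference time base}."""
    return sorted((offsets[name], offsets[name] + duration, name)
                  for name, duration in clips.items())

print(round(video_lead_seconds(60.0), 3))     # ~0.175 s for a camera 60 m from the stage
print(place_on_timeline({"camA": 30.0, "camB": 45.0}, {"camA": 12.5, "camB": 0.0}))
# [(0.0, 45.0, 'camB'), (12.5, 42.5, 'camA')]
```

An offset like the 0.175 s above is the kind of per-clip correction value that could be stored with a clip so that its audio and video line up in assembled presentations.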
Alternatively, some example embodiments of the system300may be configured to drop a sequence of video-only clips in the middle of a long audio/video clip. Some example embodiments of the system300may also be configured to mix in sounds of the hall or the crowd along with any reference audio that might be present. Different devices may record the audio with different levels of fidelity. To avoid distracting jumps in audio quality, and for general editing freedom, an example embodiment allows cross-fading between audio from multiple clips. In an example embodiment, the system300may be configured to use a reference audio track, if available. Analogous video effects, like dissolves, are provided in an example embodiment. In some example embodiments, the system300includes logic that judges audio and video by duration and quality, and recommends the best trade-off between those two parameters. In some example embodiments, the system300may be configured to allow users to assign ratings to the quality of a clip. Because it may be quite likely that there may be gaps in the coverage of an event, the system300may be configured to provide pre-produced (e.g., canned) effects, wipes, transitions, and bumpers to help reduce or minimize the disruption caused by the gaps, and ideally make them appear to be deliberate edits of the event, and not coverings for missing data. Some example embodiments may provide a user interface to allow clips to be dragged to an upload area410, whereupon they are transmitted to a central server and processed further. In these example embodiments, as clips are uploaded a dialog box may be displayed to allow metadata to be entered. Clips may then be searched for in a clip browser430. Clips discovered in the browser may be dragged to an editing timeline440. If a newly dragged clip overlaps with other clips in the timeline, the system300may automatically position the new clip to be synchronized with existing clips. Some example embodiments allow users to manipulate the editing timeline to choose which clip is displayed at any point in the final movie, and/or to apply special effects and other editing techniques. As the final movie is edited, the user interface may allow its current state to be viewed in a preview window420. In some example embodiments, at any time a clip may be opened to revise its associated metadata. FIG.5is a block diagram of a processing system500suitable for implementing one or more example embodiments. The processing system500may be almost any processing system, such as a personal computer or server, or a communication system including a wireless communication device or system. The processing system500may be suitable for use as any one or more of the servers or client devices (e.g., PCs) described above that are used to implement some example embodiments, as well as any one or more of the client devices, including wireless devices, that may be used to acquire video and audio. The processing system500is shown by way of example to include processing circuitry502, memory504, Input/Output (I/O) elements506and network interface circuitry508. The processing circuitry502may include almost any type of processing circuitry that utilizes a memory, and may include one or more digital signal processors (DSPs), one or more microprocessors and/or one or more micro-controllers. The memory504may support processing circuitry502and may provide a cache memory for the processing circuitry502.
I/O elements506may support the input and output requirements of the system500and may include one or more I/O elements such as a keyboard, a keypad, a speaker, a microphone, a video capture device, a display, and one or more communication ports. A NIC508may be used for communicating with other devices over wired networks, such as the Internet, or wireless networks using an antenna510. In some example embodiments, when the processing system500is used to capture video and audio and operates as a video capture device, the processing system500may include one or more video recording elements (VRE)512to record and/or store video and audio in a high quality format. Examples of wireless devices may include personal digital assistants (PDAs), laptop and portable computers with wireless communication capability, web tablets, wireless telephones, wireless headsets, pagers, instant messaging devices, MP3 players, digital cameras, and other devices that may receive and/or transmit information wirelessly. FIG.6illustrates an example system architecture in accordance with some example embodiments. The system architecture600may be suitable to implement one or more of the example architectures described above in Table 1. The system architecture600includes one or more user devices602which may be used to receive video and other information from the video capture devices (VCDs)604. The VCDs604may include any device used to capture video information. A user device602may communicate with other user devices602as well as one or more servers608and one or more databases610over a network606. In some example embodiments, the databases610may include the media database discussed above and/or the commentary database discussed above, although the scope of the disclosure is not limited in this respect as these databases may be stored on one or more of the user devices602. The servers608may include, among other things, the recognition server316discussed above as well as server equipment to support the various operations of the system300discussed by way of example above, although the scope of the disclosure is not limited in this respect as these operations may be performed on one or more of the user devices602. The processing system500may be suitable for use to implement the user devices602, the VCDs604and/or the servers608. The user devices602may correspond to the user's PC described above. In some example embodiments, consumers/multiple users may contribute multimedia material (video, audio, image, text . . . ) to a common repository/pool (e.g., a specific web site or, in a P2P environment, a specific pool of end user computers), and the method and system of these embodiments may then take the media clips and automatically align them, either spatially or temporally, using clues within the submitted media or from a reference medium. The aligned media clips can then be selected, edited and arranged by consumers/multiple users to create an individual media experience, much like an artistic collage. Although the example system architecture600and the system300are illustrated by way of example as having several separate functional elements, one or more of the functional elements may be combined and may be implemented by combinations of software-configured elements, such as processing elements including digital signal processors (DSPs), and/or other hardware elements.
For example, some elements may comprise one or more microprocessors, DSPs, application specific integrated circuits (ASICs), radio-frequency integrated circuits (RFICs) and combinations of various hardware and logic circuitry for performing at least the functions described herein. In some example embodiments, the functional elements of the system may refer to one or more processes operating on one or more processing elements. FIG.7is a flow chart of a method700for synthesizing a multimedia event in accordance with some example embodiments. The operations of method700may be performed by one or more user devices602(seeFIG.6) and/or servers608(seeFIG.6). Operation702includes accessing media clips received from a plurality of sources, such as video capture devices of users. Operation704includes assigning an identifier to each media clip. The operation704may be performed by the media ingestion module302(seeFIG.3). In some embodiments, operation704is omitted. Operation706includes performing an analysis of the media clips to determine a temporal relation between the media clips. The operation706may be performed by the media analysis module304(seeFIG.3). Operation708includes combining the media clips based on their temporal relation to generate a video. In some embodiments, the combining is performed automatically. In certain embodiments, the combining is performed under the supervision of a user. The operation708may be performed by the content creation module306(seeFIG.3). Operation710includes publishing the generated video (e.g., publishing the video to a web site). The operation710may be performed by the content publishing module302(seeFIG.3). For example, the content publishing module302(seeFIG.3) may publish the presentation to a public network (e.g., the Internet), a nonpublic network (e.g., a closed network of video gaming devices), a mobile device (e.g., a cellular phone), and/or a stationary device (e.g., a kiosk or museum exhibit). Although the individual operations of method700are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems or similar devices that may manipulate and transform data represented as physical (e.g., electronic) quantities within a processing system's registers and memory into other data similarly represented as physical quantities within the processing system's registers or memories, or other such information storage, transmission or display devices. Furthermore, as used herein, a computing device includes one or more processing elements coupled with computer-readable memory that may be volatile or non-volatile memory or a combination thereof. Example embodiments may be implemented in one or a combination of hardware, firmware, and software. Example embodiments may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by at least one processor to perform the operations described herein. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). 
For example, a machine-readable medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and others.
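As a compact sketch of the overall flow of method700(FIG.7) described above, the following wires the operations together; the callables are placeholders standing in for the ingestion, analysis, creation, and publishing modules rather than implementations of them, and the URL in the usage example is a made-up placeholder.

```python
def synthesize_event_movie(sources, analyze, combine, publish, assign_id=None):
    clips = [clip for source in sources for clip in source]      # operation 702: access media clips
    if assign_id is not None:
        clips = [assign_id(clip) for clip in clips]              # operation 704: assign identifiers (optional)
    temporal_relations = analyze(clips)                          # operation 706: analyze temporal relations
    movie = combine(clips, temporal_relations)                   # operation 708: combine into a video
    return publish(movie)                                        # operation 710: publish the result

url = synthesize_event_movie(
    sources=[["clipA", "clipB"], ["clipC"]],
    analyze=lambda clips: {},                                    # pretend no overlaps were found
    combine=lambda clips, relations: "+".join(clips),
    publish=lambda movie: "https://example.com/movies/" + movie,
)
print(url)   # https://example.com/movies/clipA+clipB+clipC
```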
45,287
11862199
DETAILED DESCRIPTION Systems and methods in accordance with various embodiments of the present disclosure may overcome one or more of the aforementioned and other deficiencies experienced in conventional approaches for editing a video on computing devices (e.g., servers, workstations, desktops, laptops, tablets, smart phones, media players, wearable devices, etc.). In some embodiments, a computing device can receive a video, such as by a camera built into the computing device capturing the video or the computing device receiving the video from another electronic device (e.g., as an attachment to an electronic message, as a download from a remote storage source, as a transferred file from a USB device, etc.). The computing device can display a graphical user interface (GUI) for editing the video, such as to add, edit, and/or remove text, drawings, virtual objects (e.g., stickers, Bitmoji, emoticons, etc.), uniform resource locators (URLs), and/or other data to the video. In some embodiments, the GUI can also allow the computing device to incorporate certain cuts, transitions, or other video effects, such as dissolves (e.g., fade-ins and fade-outs), wipes (e.g., a video frame or set of video frames replacing another frame or set of frames by traveling from one side of the frame to another or with a special shape), close-ups and long shots, L-cuts and J-cuts (e.g., an audio segment playing before the matching video and vice versa), and other types of edits into the video or a portion of the video. In one embodiment, the GUI can include user interface elements for selecting a clip of the video and re-sampling the clip using one or more sampling patterns to generate a new clip. For example, the GUI can include a video scrubber (sometimes also referred to as a video slider) comprising a selection bar and a selection slider (sometimes also referred to as a handle). The selection bar may represent the full length of the video, and the selection slider may represent the portion of the video (or video clip) which the computing device applies a specific cut, transition, or other video effect (e.g., a sampling pattern). The computing device can compute one or more histograms that define the number of frames of the original clip to sample over various time intervals to generate the new video clip. In addition or alternatively, the computing device can identify the function y=f(x) corresponding to the sampling pattern(s), where y can represent the number of frames to sample and x can represent time. The computing device can calculate the area between the line or curve of the function f(x) and the x-axis to determine the number of frames y of the original clip to sample to create the new video clip. The computing device can play the new clip as a preview of how another computing device may present the new clip. The computing device can also generate other previews by applying the cut, transition, or other video effect to a different segment of the video when the selection slider moves to a different location along the selection bar. In addition, the computing device can send the new clip to other computing devices. In this manner, the computing device can provide for advanced video editing techniques using a minimal number of gestures and other inputs. The present disclosure describes various other functions and advantages below in accordance with the various embodiments. 
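The "area under the sampling curve" idea mentioned above can be sketched as follows, reading f(x) as a per-unit-time sampling density so that its definite integral over an interval approximates the number of frames y to draw from the original clip. The trapezoid rule and the example patterns are illustrative choices, not the application's implementation.

```python
def frames_to_sample(f, t0, t1, steps=1000):
    """Approximate the definite integral of f over [t0, t1] with the trapezoid rule."""
    dt = (t1 - t0) / steps
    area = 0.5 * (f(t0) + f(t1))
    for i in range(1, steps):
        area += f(t0 + i * dt)
    return round(area * dt)

constant_30fps = lambda t: 30.0            # sampling at the source frame rate
ramp = lambda t: 15.0 + 15.0 * t           # sampling density that increases over time
print(frames_to_sample(constant_30fps, 0.0, 2.0))   # 60 frames for a 2 s clip
print(frames_to_sample(ramp, 0.0, 0.5))             # 9 frames early in the clip
print(frames_to_sample(ramp, 1.5, 2.0))             # 21 frames late in the clip
```

Where the computed count is below the source frame count for the interval, the clip is effectively subsampled; where it is above, frames must be repeated or merged, as discussed below.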
FIGS.1A and1Bshow examples of graphical user interfaces100and150, respectively, of a camera application executing on computing device102and displayed on touchscreen104. Graphical user interfaces100and150are but one example of a set of user interfaces for providing advanced video editing techniques and other embodiments may include fewer or more elements. For example, other embodiments may utilize user interfaces without graphical elements (e.g., a voice user interface). An example of an implementation of the camera application is SNAPCHAT® provided by SNAP™ Inc. of Los Angeles, California but the present disclosure may also be applicable to social media and social networking applications, instant messengers, file sharing and file hosting services, video conferencing and web conferencing applications, and team collaboration tools, among others. In this example, the camera application may present graphical user interface100in response to computing device102capturing a video or computing device102receiving the video from another electronic device and opening the video within the camera application, an electronic communication client application (e.g., email client, Short Message Service (SMS) text message client, instant messenger, etc.), a web browser/web application, a file manager or other operating system utility, a database, or other suitable application. Graphical user interface100includes video icon106which may indicate a state of the camera application, such as the camera application currently operating in a video editing mode. In some embodiments, video icon106may also be associated with an interface for sending the video, a portion of the video, an edited version of the video, or an edited clip of the video to local storage, remote storage, and/or other computing devices. Graphical user interface100also includes various icons that may be associated with specific functions or features of the camera application, such as text tool icon108, drawing tool icon110, virtual object editor icon112, scissors tool icon114, paperclip tool icon116, and timer icon118, save tool icon120, add tool icon122, and exit icon124. Selection of text tool icon108, such as by computing device102receiving a touch or tap from a physical pointer or a click from a virtual pointer, can cause computing device102to display a text editing interface to add, remove, edit, format (e.g., bold, underline, italicize, etc.), color, and resize text and/or apply other text effects to the video. In response to receiving a selection of drawing tool icon110, computing device102can present a drawing editor interface for selecting different colors and brush sizes for drawing in the video; adding, removing, and editing drawings in the video; and/or applying other image effects to the video. Scissors tool icon114can be associated with a cut, copy, and paste interface for creating “stickers” or virtual objects that computing device102can incorporate into the video. In some embodiments, scissors tool icon114can also be associated with features such as “Magic Eraser” for deleting specified objects in the video, “Tint Brush” for painting specified objects in different colors, and “Backdrop” for adding, removing, and/or editing backgrounds in the video. Paperclip tool icon116can be associated an interface for attaching websites (e.g., URLs), search queries, and similar content in the video. Timer icon118can be associated with an interface for setting how long the video can be accessible to other users. 
Save tool icon120can be associated with an interface for saving the video to a personal or private repository of photos, images, and other content (referred to as “Memories” in the Snapchat application). Add tool icon122can be associated with an interface for adding the video to a shared repository of photos, images, and other content (referred to as “Stories” in the Snapchat application). Selection of exit icon124can cause computing device102to exit the video editing mode and to present the last user interface navigated to in the camera application. Graphical user interface100also includes video presentation mode icon126for changing the manner of how the camera application presents the video. For instance, the camera application may support video presentation modes such as a “Play Once” mode in which the camera application may play the video one time, a “Loop” mode in which the camera application may continuously play the video in a loop, and an “Enhanced Clip” mode in which the camera application may edit a specified segment of the video (or video clip) to sample the segment according to one or more sampling patterns and play the segment. In an embodiment, the camera application may switch between these different presentation modes depending on the number of times computing device102detects selection of video presentation mode icon126. For example, the camera application may initially present the video in “Play Once” mode and on every third selection of video presentation mode icon126thereafter, present the video in “Loop” mode after the first selection of video presentation mode icon126and every third selection thereafter, and present the video in “Enhanced Clip” mode after the second selection of video presentation mode icon126and every third selection thereafter. FIG.1Bshows graphical user interface150, which computing device102may display upon the camera application entering the “Enhanced Clip” video presentation mode. Graphical user interface150includes selection bar152for representing the video and selection slider154for representing the video clip from which the camera application samples to generate the special cut, transition, or other video effect. The left side of selection slider154can mark the beginning of the video clip and the right side of selection slider154can mark the end of the video clip. In some embodiments, the length of selection bar152may correspond to the full length of the video and the length of selection slider154may correspond to the length of the video clip. For example, if the full length of the video is 4 s and the length of the video clip is 2 s, then selection slider154would be 50% in length relative to the length of selection bar152. In some embodiments, the camera application may specify a single value for the length of the video clip (e.g., 1.5 s, 2 s, 3 s, etc.). In other embodiments, the camera application can provide a default length for the video clip but may enable customization of the video clip length via a selection of a predetermined length, such as an absolute value (e.g., 2.5 s) or a relative value (e.g., 25% of the full length), from a set of predetermined lengths not exceeding the full length of the video, an alphanumeric input not less than zero and not exceeding the full length of the video, a touch gesture with respect to selection slider154(e.g., a pinching/un-pinching gesture to resize selection slider154), a voice command, and/or a combination of these approaches and/or other gestures/inputs. 
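Two of the user-interface calculations just described admit a very small sketch: cycling the presentation mode with each tap of video presentation mode icon126, and sizing selection slider154relative to selection bar152from the clip and video lengths. The function names are invented for illustration.

```python
MODES = ("Play Once", "Loop", "Enhanced Clip")

def presentation_mode(tap_count: int) -> str:
    """Mode shown after tap_count selections of the mode icon."""
    return MODES[tap_count % len(MODES)]

def slider_fraction(clip_len_s: float, video_len_s: float) -> float:
    """Length of the selection slider as a fraction of the selection bar."""
    return clip_len_s / video_len_s

print([presentation_mode(n) for n in range(4)])   # cycles back to 'Play Once' on the third tap
print(slider_fraction(2.0, 4.0))                  # 0.5 -> the slider spans 50% of the bar
```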
Graphical user interface150also includes label156for representing the sampling pattern(s) the camera application will apply to the original clip to create the new video clip. In some embodiments, video presentation mode icon126may include graphic158to represent the sampling pattern(s). In other embodiments, label156may incorporate graphic158or the camera application may display graphic158elsewhere within graphical user interface150. In this example, the “Enhanced Clip” mode involves applying the “Bounce” sampling pattern to the video clip corresponding to selection slider154. Applying the Bounce effect to the video clip simulates the clip bouncing back and forth in a curve-eased, speed-ramped loop. In an embodiment, the camera application can sample the video clip at a rate that results in speeding up the first half of the new clip up to 65% of the length of the original clip and sampling the original clip in reverse at the same sped-up rate to form a loop such that the full length of the new clip is 130% of the length of the original clip. In traditional speed ramping, a conventional video editor drops frames (to simulate speeding up a portion of a video) or adds frames (to simulate slowing down a portion of the video) and samples the video at a constant rate to achieve a linear increase or decrease in speed, respectively. In the example ofFIG.1B, however, the camera application may sample the original clip at a non-linear rate consistent with a specified sampling pattern.FIGS.2A,2B, and2Cillustrate examples of sampling patterns the camera application can apply to the original video clip to generate the new video clip. In particular,FIG.2Ashows front-half200of the sampling pattern,FIG.2Bshows back-half230of the sampling pattern, andFIG.2Cshows the entirety of sampling pattern260. In these examples, the x-axes represent time and the y-axes represent the number of frames to sample from the original clip to create the new video clip. For example, if the original clip has a standard frame rate (e.g., 30 frames per second (fps)), and the front half of the new clip is the same length as the original clip but sampled according to front-half200, then the camera application can generate the portion of the new clip from t=0.5 s to t=0.6 s by taking one sample from the original clip over this same time period (or subsampling/downsampling the original clip) because the value of front-half200of the sampling pattern from t=0.5 s to t=0.6 s (e.g., point202) is approximately 1. Using similar reasoning, the camera application can generate the portion of the front-half of the new clip from t=1.4 s to t=1.5 s by taking 5 samples from the original clip, including if necessary, repeating some of the frames and/or merging a pair of the frames (or interpolating the original clip) because the value of front-half200of the sampling pattern from t=1.4 s to t=1.5 s (e.g., point204) is approximately 5. In some embodiments, prior to sampling, subsampling/downsampling, and/or upsampling/interpolating frames of the original clip to generate a portion of the new clip, the camera application may remove duplicate (or de-duplicate or de-dupe) frames of the original clip. This can result in a smoother transition between frames and/or provide a more interesting visual effect because there may be more differences from frame-to-frame. The units of time (e.g., tenths of seconds) used in the above example are illustrative and other embodiments may use smaller units of time (e.g., milliseconds, microseconds, etc.) 
or greater units of time (e.g., seconds, minutes, etc.). In addition, other embodiments may use videos having a smaller frame rate (e.g., 24 fps) while still other embodiments may use videos having greater frame rates (e.g., 48 fps, 60 fps, 120 fps, 240 fps, etc.). In some embodiments, the camera application may also enable customization of the frame rate for the new clip. In the example ofFIGS.2B and2C, back-half230of sampling pattern260is symmetrical to front-half200. Other embodiments may use different speeds (e.g., 35% of the original clip) and/or different sampling patterns for the middle or back portion(s) of sampling pattern260.FIGS.3A-3Fillustrate various examples of sampling patterns that the camera application can utilize for the first portion, middle portion(s), and/or last portion of sampling pattern260. In particular,FIG.3Ashows sampling pattern300, a linear function (e.g., f(x)=x);FIG.3Bshows sampling pattern310, a step function (e.g., f(x)=Σ_{i=0}^{n} α_i χ_{A_i}(x) for all real numbers x, where n≥0, the α_i are real numbers, the A_i are intervals, and χ_A is the indicator function of a set A: χ_A(x)=1 if x∈A and χ_A(x)=0 if x∉A);FIG.3Cshows sampling pattern320, a square root function (e.g., f(x)=√x);FIG.3Dshows sampling pattern330, a sinusoidal function (e.g., f(x)=sin x);FIG.3Eshows sampling pattern340, a triangle wave (e.g., f(x)=(8/π²) Σ_{n=1,3,5,…}^{∞} ((−1)^((n−1)/2)/n²) sin(nπx/L), with period 2L); andFIG.3Fshows sampling pattern350, a square wave (e.g., f(x)=(4/π) Σ_{n=1,3,5,…}^{∞} (1/n) sin(nπx/L), with period 2L). In various embodiments, the camera application may use any number of sampling patterns and any types of sampling patterns. The camera application may use various approaches to apply a specified sampling pattern to the original clip to generate the new clip. In some embodiments, the camera application can determine the function y=f(x) where x represents time, and y represents the number of samples to extract from the original clip for creating the new clip. In other embodiments, the camera application can compute a histogram, such as shown in histograms400,430, and460ofFIGS.4A,4B, and4C, respectively. The camera application can evaluate the histogram at a particular interval to determine the number of frames to sample from the original clip for determining the corresponding interval of the new clip. For example, the camera application may sub-sample from a portion of the original clip to compute the corresponding portion in the new clip if the number of frames in the histogram is less than the frame rate of the original clip over the corresponding interval (such as at point402). Likewise, the camera application may interpolate frames of the original clip (e.g., repeat frames, merge pairs of frames, etc.) to determine the corresponding portion in the new clip if the number of frames in the histogram is greater than the frame rate of the original clip over the corresponding interval (such as at point404). In still other embodiments, the camera application can determine the number of frames to sample from the original clip as the area between the function f(x) corresponding to sampling pattern260and the x-axis. In other words, the number of frames to sample from the original clip over the interval x to x′ is the definite integral of the function f(x) between x and x′ (e.g., ∫_x^{x′} f(x) dx). In some embodiments, the camera application may also alter the new clip by speeding up or slowing down the original clip by a factor k.
For example, k=½ speeds up the original clip by a factor of 2, or by 100%, so that the new clip is 50% of the length of the original clip, while k=3 slows down the original clip by a factor of 3, or by 200%, so that the new clip is 3 times the length of the original clip. In some embodiments, the camera application may de-duplicate frames of the original clip prior to sampling, subsampling/downsampling, and/or upsampling/interpolating frames of the original clip. As discussed, this can simulate a smoother transition between frames of the new clip and/or provide more noticeable differences between the frames. Returning toFIG.1B, some embodiments may allow for customization of the sampling pattern to apply to a selected portion of a video (or a video clip). For example, when computing device102detects a continuous touch of label156for a predetermined period of time (e.g., 2.5 s), the computing device may display a user interface element for selecting a different sampling pattern (e.g., a selection list) and/or display a new user interface or user interface element for receiving a drawing of a new sampling pattern. In addition or alternatively, the camera application may include a settings interface for customizing various parameters for editing a video clip. These parameters may include a length of the video clip as discussed elsewhere herein. The parameters may also include one or more sampling patterns to apply to the video clip. For example, the camera application may enable selection of a single sampling pattern that can operate as a front-half of the sampling pattern for the new video clip and the reverse of which may operate as a back-half of the sampling pattern for the new video clip, as shown inFIGS.2A,2B, and2C. As another example, the camera application may allow for selection of two or more sampling patterns that the computing device can apply sequentially to the entirety of the original video clip to result in a new clip that is n*L in length, where n is the number of sampling patterns and L is the length of the original clip. As yet another example, the camera application may support selection of two or more sampling patterns that the computing device can apply to portions of the original video clip such that the sum of the lengths of each application of a selected sampling pattern to a portion of the original clip is equal to the length of the original clip (e.g., L=Σ(f(x) for x=0 to t1 + g(x) for x=t2 to t3 + . . . )). In some embodiments, the camera application may also support customization of a presentation mode (e.g., playing the video clip once or in a loop), an adjustment factor for adjusting the length of a portion of the new clip corresponding to each sampling pattern relative to the length of the original clip, and the frame rate for the new clip.
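The Bounce-style resampling described with FIGS.2A,2B, and2C can be pictured with the hedged sketch below: the new clip plays a sped-up pass over the source frames lasting roughly 65% of the original duration, followed by the same frames in reverse, giving a loop of roughly 130% of the original length. Frame selection here is uniform rather than curve-eased, so it only approximates the speed-ramped pattern in the figures, and frames are plain list items for simplicity.

```python
def bounce(frames, speed_fraction=0.65):
    """Speed the clip up to ~speed_fraction of its length, then append the reverse."""
    n_out = max(1, round(len(frames) * speed_fraction))
    # Pick n_out frame indices spread evenly across the source clip.
    idx = [round(i * (len(frames) - 1) / max(1, n_out - 1)) for i in range(n_out)]
    forward = [frames[i] for i in idx]
    return forward + forward[::-1]

clip = list(range(60))                        # a 2 s clip at 30 fps, frames numbered 0..59
looped = bounce(clip)
print(len(looped), len(looped) / len(clip))   # 78 frames, 1.3x the original length
```

Note that the turning-point frame appears twice where the forward and reversed halves meet; a real implementation might drop one copy, or de-duplicate frames beforehand as described above.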
Process500may begin at step502, in which the computing device receives a video or similar data (e.g., an ordered sequence of images). The computing device can receive the video from a built-in camera capturing the video or the computing device receiving the video from another device (e.g., as an attachment to an email or other electronic communication, as a download from the Internet, as a transmission over a local communication channel (e.g., Wi-Fi, Bluetooth, near field communication (NFC), etc.)). Process500can proceed to step504in which the computing device receives a selection of a clip of the video. In an embodiment, the computing device may display a video scrubber for selecting the clip, such as shown inFIG.2B, and/or other user interface elements (e.g., a pair of markers to mark the beginning and the end of the clip). In another embodiment, the computing device may select a clip by default (e.g., select a clip beginning at the start of the video) as well as support user customization of the clip length via gesture (e.g., pinching and un-pinching gesture with respect to selection slider154), alphanumeric entry, voice command, or other suitable input. The computing device can also provide a settings interface to modify the clip length using one or more of these approaches. In addition, the clip length is not necessarily smaller than the received video. In some embodiments, the length of the selected clip may be equal to the length of the video. At step506, the computing device may receive one or more adjustment factors k for determining the length of the new clip relative to the length of the original clip. The computing device may select a default adjustment factor (e.g., k=1) but support customization of the adjustment factor(s). For example, k=¾ may speed up the original clip such that the length of the new clip is 75% of the length of the original clip while k=4 may slow down the original clip such that the length of the new clip is 4 times the length of the original clip. The computing device may apply the same adjustment factor or different adjustment factors if the computing device applies a sampling pattern to the original clip more than once. For example, in an embodiment, the computing device may use a first adjustment factor k=0.65 for a first half of the new clip and a second adjustment factor k=0.35 for a second half of the new clip. At step508, the computing device can receive one or more sampling patterns to apply to the selected video clip. The computing device may select a default sampling pattern, such as the Bounce pattern illustrated inFIG.2C, but the computing device can also support selection of any number of other sampling patterns. For example,FIGS.3A-3Fillustrate various examples of sampling patterns that the computing device may apply to the original clip. In addition, the computing device may support importation of a new sampling pattern or provide a drawing interface for creating a new sampling pattern. The computing device may use any number of sampling patterns and may apply each sampling pattern to the entirety of the original clip, to a portion of the original clip, or both. For example, in an embodiment, the computing device may apply a first sampling pattern to the entirety of the original clip, a second sampling pattern to a first half of the original clip, and a third sampling pattern to a second half of the original clip. 
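The parameters gathered in steps504,506, and508can be pictured as a single edit specification. The structure below is a hedged sketch with invented field names and defaults that mirror the passage (a clip defaulting to the start of the video, an adjustment factor defaulting to k=1, and a slot for one or more sampling patterns); it is not the application's actual data model.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ClipEditSpec:
    clip_start_s: float = 0.0            # step 504: selected clip (defaults to the video start)
    clip_len_s: float = 2.0
    adjustment_factors: List[float] = field(default_factory=lambda: [1.0])           # step 506
    patterns: List[Callable[[float], float]] = field(
        default_factory=lambda: [lambda t: 30.0])   # step 508: placeholder constant-rate pattern

default_spec = ClipEditSpec()
two_halves = ClipEditSpec(adjustment_factors=[0.65, 0.35])   # the two-half example from the passage
print(default_spec.adjustment_factors, two_halves.adjustment_factors)   # [1.0] [0.65, 0.35]
```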
Process500may continue to step510in which the computing device determines the number of frames to sample from the first clip for each interval of time over the length of the second clip. For example, if the first clip is 2 s in length with a frame rate of 30 fps, the computing device can divide the first clip into 60 intervals and determine individual numbers of frames (e.g., n0 frames for the first interval, n1 frames for the second interval, n2 frames for the third interval, etc.) to sample from the first clip to determine the frames for the second clip. An approach for determining the numbers of frames to extract from the original clip is for the computing device to determine the function y=f(x) corresponding to the sampling pattern, where x represents time and y represents the number of frames to sample. The computing device can determine the number of frames to sample by evaluating f(x) for each value of x (e.g., increments of 0.01 s, 0.05 s, 0.1 s, 1 s, etc.). Another approach for determining the frames to retrieve from the original clip can involve the computing device generating a histogram corresponding to the sampling pattern and evaluating the histogram per unit of time. In some embodiments, the sum over all bins of the histogram is equal to the product of the frame rate of the new clip and the length of the new clip (and possibly an adjustment factor for lengthening or shortening the length of the new clip relative to the length of the original clip). For example, given histogram400ofFIG.4A, a frame rate of 30 fps for the new clip, 2 s for the length of the new clip, and k=1, the computing device can generate the portion of the new clip from t=0.5 s to t=0.6 s by taking one sample from the original clip over this same time period (or subsampling the original clip), and the computing device can generate the portion of the new clip from t=1.4 s to t=1.5 s by taking 5 samples from the original clip, including, if necessary, repeating some of the frames and/or merging a pair of the frames (or interpolating the original clip). Yet another approach for determining the number of frames to sample from the original clip is to determine the definite integral of f(x) corresponding to the sampling pattern (e.g., y=∫_x^{x′} f(x) dx) and solve for y per unit of time. At step512, the computing device can extract frames of the original clip using the number of frames to sample determined in step510. Then, at step514, the computing device can assemble the new clip from the frames extracted from the original clip. This can include subsampling frames of the original clip during intervals in which the evaluation of f(x), the histogram, the definite integral of f(x), or other suitable approach indicates that the number of frames to sample for the new clip is less than the number of available frames at the corresponding interval of the original clip. This can also include interpolating frames of the original clip (e.g., repeating frames, merging frames, etc.) during intervals in which the evaluation of f(x), the histogram, the definite integral of f(x), or other suitable approach indicates that the number of frames to sample for the new clip is greater than the number of available frames at the corresponding interval of the original clip. Process500may conclude at step514in which the computing device presents the new clip, such as to provide a preview of the new clip by displaying the new clip on a display screen of the computing device.
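Steps510through514can be sketched end to end as follows: derive a per-interval frame budget by integrating a sampling-pattern function over each interval (one of the approaches described above), then subsample or interpolate, here by repeating frames, to assemble the new clip. The ramp pattern, interval size, and list-based frames are illustrative assumptions; real interpolation might merge neighboring frames rather than repeat them.

```python
def interval_budgets(f, clip_len_s, interval_s, steps=100):
    """Approximate the integral of f over each interval of the clip (midpoint rule)."""
    budgets = []
    t = 0.0
    while t < clip_len_s - 1e-9:
        dt = interval_s / steps
        area = sum(f(t + (i + 0.5) * dt) for i in range(steps)) * dt
        budgets.append(round(area))
        t += interval_s
    return budgets

def resample(frames, frame_rate, budgets, interval_s):
    per_interval = int(frame_rate * interval_s)     # source frames available per interval
    out = []
    for i, want in enumerate(budgets):
        chunk = frames[i * per_interval:(i + 1) * per_interval]
        if not chunk or want <= 0:
            continue
        # Evenly pick `want` indices into the chunk: indices repeat when want > len(chunk)
        # (interpolation by repetition) and skip frames when want < len(chunk) (subsampling).
        out.extend(chunk[(j * len(chunk)) // want] for j in range(want))
    return out

src = list(range(60))                                # 2 s source clip at 30 fps
ramp = lambda t: 30.0 * t                            # sampling density that grows over time
budgets = interval_budgets(ramp, clip_len_s=2.0, interval_s=0.5)
print(budgets)                                       # [4, 11, 19, 26]: few frames early, many late
new = resample(src, frame_rate=30, budgets=budgets, interval_s=0.5)
print(len(new) / 30)                                 # 2.0: same overall length, redistributed in time
```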
In some embodiments, the computing device may also send the new clip to one or more other computing devices, such as devices associated with friends and other contacts of the user associated with the computing device. In some embodiments, the computing device may send the entire video to the other computing device(s) and metadata for recreating the new clip on the other device(s) (e.g., clip start time, clip end time and/or clip length, clip frame rate, one or more sampling patterns, sampling order for each sampling pattern (e.g., forward sampling or reverse sampling), the order to apply the sampling patterns, one or more adjustment factors for adjusting the length of a portion of the new clip corresponding to each sampling pattern relative to the length of the original clip, etc.). This can enable the other computing device to display the new clip as intended by the user associated with the first computing device but also allow the users of the other computing devices to generate their own clips from the original video. FIG.6shows an example of a system, network environment600, in which various embodiments of the present disclosure may be deployed. For any system or system element discussed herein, there can be additional, fewer, or alternative components arranged in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated. Although network environment600is a client-server architecture, other embodiments may utilize other network architectures, such as peer-to-peer or distributed network environments. In this example, network environment600includes content management system602. Content management system602may be based on a three-tiered architecture that includes interface layer604, application logic layer606, and data layer608. Each module or component of network environment600may represent a set of executable software instructions and the corresponding hardware (e.g., memory and processor) for executing the instructions. To avoid obscuring the subject matter of the present disclosure with unnecessary detail, various functional modules and components that may not be germane to conveying an understanding of the subject matter have been omitted. Of course, additional functional modules and components may be used with content management system602to facilitate additional functionality that is not specifically described herein. Further, the various functional modules and components shown in network environment600may reside on a single server computer, or may be distributed across several server computers in various arrangements. Moreover, although content management system602has a three-tiered architecture, the subject matter of the present disclosure is by no means limited to such an architecture. Interface layer604includes interface modules610(e.g., a web interface, a mobile application (app) interface, a representational state transfer (REST) application programming interface (API) or other API, etc.), which can receive requests from various client computing devices and servers, such as client devices620executing client applications (not shown) and third-party servers622executing third-party application(s)624. In response to the received requests, interface modules610communicate appropriate responses to requesting devices via wide area network (WAN)626(e.g., the Internet). For example, interface modules610can receive requests such as HTTP requests, or other Application Programming Interface (API) requests.
Client devices620can execute web browsers or apps that have been developed for a specific platform to include any of a wide variety of mobile computing devices and mobile-specific operating systems (e.g., IOS™, ANDROID™, WINDOWS® PHONE). Client devices620can provide functionality to present information to a user and communicate via WAN626to exchange information with content management system602. In some embodiments, client devices620may include a camera app such as SNAPCHAT® that, consistent with some embodiments, allows users to exchange ephemeral messages that include media content, including video messages or text messages. In this example, the camera app can incorporate aspects of embodiments described herein. The ephemeral messages are deleted following a deletion trigger event such as a viewing time or viewing completion. In such embodiments, a device uses the various components described herein within the context of any of generating, sending, receiving, or displaying aspects of an ephemeral message. Client devices620can each comprise at least a display and communication capabilities with WAN626to access content management system602. Client devices620may include remote devices, workstations, computers, general purpose computers, Internet appliances, hand-held devices, wireless devices, portable devices, wearable computers, cellular or mobile phones, personal digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, laptops, desktops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, network PCs, mini-computers, and the like. Data layer608includes database servers616that can facilitate access to information storage repositories or databases618. Databases618are storage devices that store data such as member profile data, social graph data (e.g., relationships between members of content management system602), and other user data and content data, such as videos, clips, sampling patterns, and the like. Application logic layer606includes video modules614, for supporting various video features discussed herein, and application logic modules612, which, in conjunction with interface modules610, can generate various user interfaces with data retrieved from various data sources or data services in data layer608. Individual application logic modules612may be used to implement the functionality associated with various applications, services, and features of content management system602. For instance, a camera application can be implemented using one or more application logic modules612. The camera application can provide a messaging mechanism for users of client devices620to send and receive messages that include text and media content such as pictures and video. Client devices620may access and view the messages from the camera application for a specified period of time (e.g., limited or unlimited). In an embodiment, a particular message is accessible to a message recipient for a predefined duration (e.g., specified by a message sender) that begins when the particular message is first accessed. After the predefined duration elapses, the message is deleted and is no longer accessible to the message recipient. Of course, other applications and services may be separately embodied in their own application logic modules612. 
FIG.7shows an example of a content management system700including client application702(e.g., running on client devices620ofFIG.6) and application server704(e.g., an implementation of application logic layer606). In this example, the operation of content management system700encompasses various interactions between client application702and application server704over ephemeral timer interface706, collection management interface708, and annotation interface710. Ephemeral timer interface706is a subsystem of content management system700responsible for enforcing the temporary access to content permitted by client application702and server application704. To this end, ephemeral timer interface1014can incorporate a number of timers that, based on duration and display parameters associated with content, or a collection of content (e.g., messages, videos, a SNAPCHAT® story, etc.), selectively display and enable access to the content via client application702. Further details regarding the operation of ephemeral timer interface706are provided below. Collection management interface708is a subsystem of content management system700responsible for managing collections of media (e.g., collections of text, images, video, and audio data). In some examples, a collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. Collection management interface708may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of client application702. In this example, collection management interface708includes curation interface712to allow a collection manager to manage and curate a particular collection of content. For instance, curation interface712can enable an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, collection management interface708can employ machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation may be paid to a user for inclusion of user generated content into a collection. In such cases, curation interface712can automatically make payments to such users for the use of their content. Annotation interface710is a subsystem of content management system700that provides various functions to enable a user to annotate or otherwise modify or edit content. For example, annotation interface710may provide functions related to the generation and publishing of media overlays for messages or other content processed by content management system700. Annotation interface710can supply a media overlay (e.g., a SNAPCHAT® filter) to client application702based on a geolocation of a client device. As another example, annotation interface710may supply a media overlay to client application702based on other information, such as, social network information of the user of the client device. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. 
The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device. For example, the media overlay can include text that can be overlaid on top of a photograph taken by the client device. In yet another example, the media overlay may include an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, annotation interface710can use the geolocation of the client device to identify a media overlay that includes the name of a merchant at the geolocation of the client device. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in a database (e.g., database618ofFIG.6) and accessed through a database server (e.g., database server616). In an embodiment, annotation interface710can provide a user-based publication platform that enables users to select a geolocation on a map, and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. Annotation interface710can generate a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In another embodiment, annotation interface710may provide a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, annotation interface710can associate the media overlay of a highest bidding merchant with a corresponding geolocation for a predefined amount of time. FIG.8shows an example of data model800for a content management system, such as content management system700. While the content of data model800is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures, such as an object database, a non-relational or "not only" SQL (NoSQL) database, a highly distributed file system (e.g., HADOOP® distributed file system (HDFS)), etc. Data model800includes message data stored within message table814. Entity table802stores entity data, including entity graphs804. Entities for which records are maintained within entity table802may include individuals, corporate entities, organizations, objects, places, events, etc. Regardless of type, any entity regarding which the content management system700stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). Entity graphs804store information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, activity-based, or based on other characteristics. Data model800also stores annotation data, in the example form of filters, in annotation table812. Filters for which data is stored within annotation table812are associated with and applied to videos (for which data is stored in video table810) and/or images (for which data is stored in image table808). Filters, in one example, are overlays that are displayed over an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by client application702when the sending user is composing a message.
Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by client application702, based on geolocation information determined by a GPS unit of the client device. Another type of filter is a data filter, which may be selectively presented to a sending user by client application702, based on other inputs or information gathered by the client device during the message creation process. Examples of data filters include the current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device, the current time, or other data captured or received by the client device. Other annotation data that may be stored within image table808can include "lens" data. A "lens" may be a real-time special effect and sound that may be added to an image or a video. As discussed above, video table810stores video data which, in one embodiment, is associated with messages for which records are maintained within message table814. Similarly, image table808stores image data associated with messages for which message data is stored in entity table802. Entity table802may associate various annotations from annotation table812with various images and videos stored in image table808and video table810. Story table806stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a SNAPCHAT® story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in entity table802). A user may create a "personal story" in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of client application702may include an icon that is user selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a "live story," which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a "live story" may constitute a curated stream of user-submitted content from various locations and events. In some embodiments, users whose client devices have location services enabled and are at a common location event at a particular time may be presented with an option, via a user interface of client application702, to contribute content to a particular live story. The live story may be identified to the user by client application702based on his or her location. The end result is a "live story" told from a community perspective. A further type of content collection is known as a "location story", which enables a user whose client device is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus).
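As a loose illustration of the tables of data model800described above (entity table802, annotation table812, video table810, image table808, and story table806), the following Python sketch models them as in-memory records. A real deployment would use a relational or NoSQL store as noted above, and every field name here is an assumption.

```python
# Minimal, illustrative sketch of the tables of data model 800 as in-memory
# records; the field names are assumptions drawn from the description above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:                 # entity table 802
    entity_id: str
    entity_type: str
    relationships: List[str] = field(default_factory=list)  # entity graph 804

@dataclass
class Annotation:             # annotation table 812 (filters, lenses)
    annotation_id: str
    kind: str                 # e.g. "geo-filter", "data filter", "lens"

@dataclass
class Video:                  # video table 810
    video_id: str
    annotation_ids: List[str] = field(default_factory=list)

@dataclass
class Image:                  # image table 808
    image_id: str
    annotation_ids: List[str] = field(default_factory=list)

@dataclass
class Story:                  # story table 806 (personal, live, location)
    story_id: str
    creator_entity_id: str
    message_ids: List[str] = field(default_factory=list)
```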
FIG.9shows an example of a data structure of a message900that a first client application (e.g., client application702ofFIG.7) may generate for communication to a second client application or a server application (e.g., server application704). The content of message900is used to populate the message table814stored within data model800and may be accessible by client application702. Similarly, the content of message900is stored in memory as "in-transit" or "in-flight" data of the client device or application server. Message900is shown to include the following components:
Message identifier902: a unique identifier that identifies message900;
Message text payload904: text, to be generated by a user via a user interface of a client device and that is included in message900;
Message image payload906: image data, captured by a camera component of a client device or retrieved from memory of a client device, and that is included in message900;
Message video payload908: video data, captured by a camera component or retrieved from a memory component of a client device and that is included in message900;
Message audio payload910: audio data, captured by a microphone or retrieved from the memory component of a client device, and that is included in message900;
Message annotations912: annotation data (e.g., filters, stickers or other enhancements) that represents annotations to be applied to message image payload906, message video payload908, or message audio payload910of message900;
Message duration914: a parameter indicating, in seconds, the amount of time for which content of the message (e.g., message image payload906, message video payload908, message audio payload910) is to be presented or made accessible to a user via client application702;
Message geolocation916: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter values may be included in the payload, each of these parameter values being associated with respective content items included in the content (e.g., a specific image within message image payload906, or a specific video in message video payload908);
Message story identifier918: identifier values identifying one or more content collections (e.g., "stories") with which a particular content item in message image payload906of message900is associated. For example, multiple images within message image payload906may each be associated with multiple content collections using identifier values;
Message tag920: each message900may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in message image payload906depicts an animal (e.g., a lion), a tag value may be included within message tag920that is indicative of the relevant animal.
Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition;
Message sender identifier922: an identifier (e.g., a messaging system identifier, email address or device identifier) indicative of a user of a client device on which message900was generated and from which message900was sent;
Message receiver identifier924: an identifier (e.g., a messaging system identifier, email address or device identifier) indicative of a user of a client device to which message900is addressed.
The values or data of the various components of message900may be pointers to locations in tables within which the values or data are stored. For example, an image value in message image payload906may be a pointer to (or address of) a location within image table808ofFIG.8. Similarly, values within message video payload908may point to data stored within video table810, values stored within message annotations912may point to data stored in annotation table812, values stored within message story identifier918may point to data stored in story table806, and values stored within message sender identifier922and message receiver identifier924may point to user records stored within entity table802. FIG.10shows an example of data flow1000in which access to content (e.g., ephemeral message1002, and associated multimedia payload of data) and/or a content collection (e.g., ephemeral story1004) may be time-limited (e.g., made ephemeral) by a content management system (e.g., content management system700). In this example, ephemeral message1002is shown to be associated with message duration parameter1006, the value of which determines an amount of time that ephemeral message1002will be displayed to a receiving user of ephemeral message1002by a client application (e.g., client application702). In one embodiment, where client application702is a SNAPCHAT® application client, ephemeral message1002is viewable by a receiving user for up to a maximum of 10 seconds, a duration that may be customized by the sending user to a shorter value. Message duration parameter1006and message receiver identifier1024may be inputs to message timer1012, which can be responsible for determining the amount of time that ephemeral message1002is shown to a particular receiving user identified by message receiver identifier1024. For example, ephemeral message1002may only be shown to the relevant receiving user for a time period determined by the value of message duration parameter1006. Message timer1012can provide output to ephemeral timer interface1014(e.g., an example of an implementation of ephemeral timer interface706), which can be responsible for the overall timing of the display of content (e.g., ephemeral message1002) to a receiving user. Ephemeral message1002is shown inFIG.10to be included within ephemeral story1004(e.g., a personal SNAPCHAT® story, an event story, a content gallery, or other content collection). Ephemeral story1004may be associated with story duration parameter1008, a value of which can establish a time-duration for which ephemeral story1004is presented and accessible to users of content management system700. In an embodiment, story duration parameter1008may be the duration of a music concert and ephemeral story1004may be a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator) may specify the value for story duration parameter1008when performing the setup and creation of ephemeral story1004.
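Returning to the message structure ofFIG.9, a minimal sketch of the components of message900as a single record might look as follows; as described above, several of the values would in practice be pointers into the image, video, annotation, story, and entity tables rather than inline data. The field names and types are assumptions for illustration.

```python
# Illustrative sketch of the components of message 900; several of these values
# would be pointers into the image, video, annotation, story, and entity tables
# rather than inline data. All field names are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Message:
    message_id: str                          # message identifier 902
    text: str = ""                           # message text payload 904
    image_ref: Optional[str] = None          # message image payload 906
    video_ref: Optional[str] = None          # message video payload 908
    audio_ref: Optional[str] = None          # message audio payload 910
    annotation_refs: List[str] = field(default_factory=list)              # 912
    duration_s: int = 10                     # message duration 914
    geolocations: List[Tuple[float, float]] = field(default_factory=list) # 916
    story_ids: List[str] = field(default_factory=list)                    # 918
    tags: List[str] = field(default_factory=list)                         # 920
    sender_id: str = ""                      # message sender identifier 922
    receiver_id: str = ""                    # message receiver identifier 924
```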
In some embodiments, each ephemeral message1002within ephemeral story1004may be associated with story participation parameter1010, a value of which can set forth the duration of time for which ephemeral message1002will be accessible within the context of ephemeral story1004. For example, a particular ephemeral message may "expire" and become inaccessible within the context of ephemeral story1004, prior to ephemeral story1004itself expiring in terms of story duration parameter1008. Story duration parameter1008, story participation parameter1010, and message receiver identifier924each provide input to story timer1016, which can control whether a particular ephemeral message of ephemeral story1004will be displayed to a particular receiving user and, if so, for how long. In some embodiments, ephemeral story1004may also be associated with the identity of a receiving user via message receiver identifier1024. In some embodiments, story timer1016can control the overall lifespan of ephemeral story1004, as well as ephemeral message1002included in ephemeral story1004. In an embodiment, each ephemeral message1002within ephemeral story1004may remain viewable and accessible for a time-period specified by story duration parameter1008. In another embodiment, ephemeral message1002may expire, within the context of ephemeral story1004, based on story participation parameter1010. In some embodiments, message duration parameter1006can still determine the duration of time for which a particular ephemeral message is displayed to a receiving user, even within the context of ephemeral story1004. For example, message duration parameter1006can set forth the duration of time that a particular ephemeral message is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message inside or outside the context of ephemeral story1004. Ephemeral timer interface1014may remove ephemeral message1002from ephemeral story1004based on a determination that ephemeral message1002has exceeded story participation parameter1010. For example, when a sending user has established a story participation parameter of 24 hours from posting, ephemeral timer interface1014will remove the ephemeral message1002from ephemeral story1004after the specified 24 hours. Ephemeral timer interface1014can also remove ephemeral story1004either when story participation parameter1010for each ephemeral message1002within ephemeral story1004has expired, or when ephemeral story1004itself has expired in terms of story duration parameter1008. In an embodiment, a creator of ephemeral message story1004may specify an indefinite story duration parameter. In this case, the expiration of story participation parameter1010for the last remaining ephemeral message within ephemeral story1004will establish when ephemeral story1004itself expires. In an embodiment, a new ephemeral message may be added to the ephemeral story1004, with a new story participation parameter to effectively extend the life of ephemeral story1004to equal the value of story participation parameter1010. In some embodiments, responsive to ephemeral timer interface1014determining that ephemeral story1004has expired (e.g., is no longer accessible), ephemeral timer interface1014can communicate with content management system700(and, for example, specifically client application702) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message story to no longer be displayed within a user interface of client application702.
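A simplified sketch of the expiry decisions described above might look like the following; it is not the patent's implementation, and the function and parameter names simply mirror message duration parameter1006, story participation parameter1010, and story duration parameter1008.

```python
# Simplified sketch of the expiry checks described above; names mirror the
# message duration (1006), story participation (1010), and story duration
# (1008) parameters, and are not a defined API.
import time

def message_visible(first_viewed_at, duration_s, now=None):
    """Message timer 1012: a message stays visible to the receiving user for
    duration_s seconds after it is first accessed."""
    now = time.time() if now is None else now
    return first_viewed_at is not None and (now - first_viewed_at) < duration_s

def message_in_story(posted_at, participation_s, now=None):
    """Story participation: how long a message remains accessible within the
    story (e.g. 24 hours from posting)."""
    now = time.time() if now is None else now
    return (now - posted_at) < participation_s

def story_accessible(created_at, story_duration_s, message_records, now=None):
    """A story is removed when its own duration lapses or when every message's
    participation window has expired; message_records holds (posted_at,
    participation_s) pairs."""
    now = time.time() if now is None else now
    within_duration = (now - created_at) < story_duration_s
    any_live = any(message_in_story(p, s, now) for p, s in message_records)
    return within_duration and any_live
```

Under these assumptions, a story stays accessible only while its own duration has not lapsed and at least one of its messages is still within its participation window, matching the removal conditions described above.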
Similarly, when ephemeral timer interface706determines that message duration parameter1006for ephemeral message1002has expired, ephemeral timer interface1014may cause client application702to no longer display an indicium (e.g., an icon or textual identification) associated with ephemeral message1002. FIG.11shows an example of a software architecture, software architecture1100, which may be used in conjunction with various hardware architectures described herein.FIG.11is merely one example of a software architecture for implementing various embodiments of the present disclosure and other embodiments may utilize other architectures to provide the functionality described herein. Software architecture1100may execute on hardware such as computing system1200ofFIG.12, that includes processors1204, memory/storage1206, and I/O components1218. Hardware layer1150can represent a computing system, such as computing system1200ofFIG.12. Hardware layer1150can include one or more processing units1152having associated executable instructions1154A. Executable instructions1154A can represent the executable instructions of software architecture1100, including implementation of the methods, modules, and so forth ofFIGS.1A,1B,2A,2B,2C,3A,3B,3C,3D,3E,3F,4A,4B,4C, and5. Hardware layer1150can also include memory and/or storage modules1156, which also have executable instructions1154B. Hardware layer1150may also include other hardware1158, which can represent any other hardware, such as the other hardware illustrated as part of computing system1200. In the example ofFIG.11, software architecture1100may be conceptualized as a stack of layers in which each layer provides particular functionality. For example, software architecture1100may include layers such as operating system1120, libraries1116, frameworks/middleware1114, applications1112, and presentation layer1110. Operationally, applications1112and/or other components within the layers may invoke API calls1104through the software stack and receive a response, returned values, and so forth as messages1108. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide a frameworks/middleware layer1114, while others may provide such a layer. Other software architectures may include additional or different layers. Operating system1120may manage hardware resources and provide common services. In this example, operating system1120includes kernel1118, services1122, and drivers1124. Kernel1118may operate as an abstraction layer between the hardware and the other software layers. For example, kernel1118may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. Services1122may provide other common services for the other software layers. Drivers1124may be responsible for controlling or interfacing with the underlying hardware. For instance, drivers1124may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. Libraries1116may provide a common infrastructure that may be utilized by applications1112and/or other components and/or layers. 
Libraries1116typically provide functionality that allows other software modules to perform tasks in an easier fashion than to interface directly with the underlying operating system functionality (e.g., kernel1118, services1122, and/or drivers1124). Libraries1116may include system libraries1142(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, libraries1116may include API libraries1144such as media libraries (e.g., libraries to support presentation and manipulation of various media format such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D in a graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. Libraries1116may also include a wide variety of other libraries1146to provide many other APIs to applications1112and other software components/modules. Frameworks1114(sometimes also referred to as middleware) may provide a higher-level common infrastructure that may be utilized by applications1112and/or other software components/modules. For example, frameworks1114may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. Frameworks1114may provide a broad spectrum of other APIs that may be utilized by applications1112and/or other software components/modules, some of which may be specific to a particular operating system or platform. Applications1112include camera application1134, built-in applications1136, and/or third-party applications1138. Examples of representative built-in applications1136include a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third-party applications1138may include any built-in applications1136as well as a broad assortment of other applications. In an embodiment, third-party application1138(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® PHONE, or other mobile operating systems. In this example, third-party application1138may invoke API calls1104provided by operating system1120to facilitate functionality described herein. Applications1112may utilize built-in operating system functions (e.g., kernel1118, services1122, and/or drivers1124), libraries (e.g., system libraries1142, API libraries1144, and other libraries1146), or frameworks/middleware1114to create user interfaces to interact with users of the system. Alternatively, or in addition, interactions with a user may occur through a presentation layer, such as presentation layer1110. In these systems, the application/module “logic” can be separated from the aspects of the application/module that interact with a user. Some software architectures utilize virtual machines. In the example ofFIG.11, this is illustrated by virtual machine1106. A virtual machine creates a software environment where applications/modules can execute as if they were executing on a physical computing device (e.g., computing system1200ofFIG.12). 
Virtual machine1106is hosted by a host operating system (e.g., operating system1120). The host operating system typically has a virtual machine monitor1160, which may manage the operation of virtual machine1106as well as the interface with the host operating system (e.g., operating system1120). A software architecture executes within virtual machine1106, and may include operating system1134, libraries1132, frameworks/middleware1130, applications1128, and/or presentation layer1126. These layers executing within virtual machine1106can operate similarly or differently to corresponding layers previously described. FIG.12shows an example of a computing device, computing system1200, in which various embodiments of the present disclosure may be implemented. In this example, computing system1200can read instructions1210from a computer-readable medium (e.g., a computer-readable storage medium) and perform any one or more of the methodologies discussed herein. Instructions1210may include software, a program, an application, an applet, an app, or other executable code for causing computing system1200to perform any one or more of the methodologies discussed herein. For example, instructions1210may cause computing system1200to execute process500ofFIG.5. In addition or alternatively, instructions1210may implement the camera application ofFIGS.1A and1B, generate the sampling patterns2A,2B,2C,3A,3B,3C,3D,3E, and3F or the histograms ofFIGS.4A,4B, and4C; application logic modules612or video modules614ofFIG.6; camera application1134, and so forth. Instructions1210can transform a general, non-programmed computer, such as computing system1200into a particular computer programmed to carry out the functions described herein. In some embodiments, computing system1200can operate as a standalone device or may be coupled (e.g., networked) to other devices. In a networked deployment, computing system1200may operate in the capacity of a server or a client device in a server-client network environment, or as a peer device in a peer-to-peer (or distributed) network environment. Computing system1200may include a switch, a controller, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any electronic device capable of executing instructions1210, sequentially or otherwise, that specify actions to be taken by computing system1200. Further, while a single device is illustrated in this example, the term “device” shall also be taken to include a collection of devices that individually or jointly execute instructions1210to perform any one or more of the methodologies discussed herein. Computing system1200may include processors1204, memory/storage1206, and I/O components1218, which may be configured to communicate with each other such as via bus1202. 
In some embodiments, processors1204(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include processor1208and processor1212for executing some or all of instructions1210. The term "processor" is intended to include a multi-core processor that may comprise two or more independent processors (sometimes also referred to as "cores") that may execute instructions contemporaneously. AlthoughFIG.12shows multiple processors1204, computing system1200may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. Memory/storage1206may include memory1214(e.g., main memory or other memory storage) and storage1216(e.g., a hard-disk drive (HDD) or solid state device (SSD)), which may be accessible to processors1204, such as via bus1202. Storage1216and memory1214store instructions1210, which may embody any one or more of the methodologies or functions described herein. Storage1216may also store video data1250, including videos, clips, sampling patterns, and other data discussed in the present disclosure. Instructions1210may also reside, completely or partially, within memory1214, within storage1216, within processors1204(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by computing system1200. Accordingly, memory1214, storage1216, and the memory of processors1204are examples of computer-readable media. As used herein, "computer-readable medium" means an object able to store instructions and data temporarily or permanently and may include random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., electrically erasable programmable read-only memory (EEPROM)) and/or any suitable combination thereof. The term "computer-readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions1210. The term "computer-readable medium" can also include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions1210) for execution by a computer (e.g., computing system1200), such that the instructions, when executed by one or more processors of the computer (e.g., processors1204), cause the computer to perform any one or more of the methodologies described herein. Accordingly, a "computer-readable medium" refers to a single storage apparatus or device, as well as "cloud-based" storage systems or storage networks that include multiple storage apparatus or devices. The term "computer-readable medium" excludes signals per se. I/O components1218may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components included in a particular device will depend on the type of device.
For example, portable devices such as mobile phones will likely include a touchscreen or other such input mechanisms, while a headless server will likely not include a touch sensor. In some embodiments, I/O components1218may include output components1226and input components1228. Output components1226may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. Input components1218may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In some embodiments, I/O components1218may also include biometric components1230, motion components1234, environmental components1236, or position components1238among a wide array of other components. For example, biometric components1230may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure bio-signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. Motion components1234may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. Environmental components1236may include illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. Position components1236may include location sensor components (e.g., a Global Position System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. I/O components1218may include communication components1240operable to couple computing system1200to WAN1232or devices1220via coupling1224and coupling1222respectively. For example, communication components1240may include a network interface component or other suitable device to interface with WAN1232. 
In some embodiments, communication components1240may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. Devices1220may be another computing device or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via USB). Moreover, communication components1240may detect identifiers or include components operable to detect identifiers. For example, communication components1240may include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via communication components1240, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. In various embodiments, one or more portions of WAN1232may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, WAN1232or a portion of WAN1232may include a wireless or cellular network and coupling1224may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, coupling1224may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. Instructions1210may be transmitted or received over WAN1232using a transmission medium via a network interface device (e.g., a network interface component included in communication components1240) and utilizing any one of several well-known transfer protocols (e.g., HTTP). Similarly, instructions1210may be transmitted or received using a transmission medium via coupling1222(e.g., a peer-to-peer coupling) to devices1220. 
The term “transmission medium” includes any intangible medium that is capable of storing, encoding, or carrying instructions1210for execution by computing system1200, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
78,000
11862200
In the drawings, the same reference numerals and letters identify the same items or components.
DETAILED DESCRIPTION OF SOME EMBODIMENTS
As aforesaid, it is one object of the invention to provide a method for the creation of controllable/manoeuvrable interactive audio-video contents of the live-action type for mobile user terminals, as a sequence of video clips, through the use of a plurality of sensors and commands for managing, controlling and manipulating a frame in the video clip, which affect the timeline (time evolution) and the frame-rate (speed) of the video clip. In the present context, the interactive audio-video contents of the "live-action" type are meant to be "live-action" or "real-action" contents, i.e. films played by "real" actors, as opposed to films created through animation (drawing, computer graphics, stop-motion, etc.). Thanks to the invention described herein, it is possible to enjoy "live-action" video narration that can instantly and seamlessly show the results of the video-clip composition actions, with no pauses, loading or interruptions, as a sequence of video segments not known a priori, i.e. a fluid and continuous filmic narration, modified and controlled in real time by the user/spectator, with no image jumps when switching between successive video segments in the nodes, as will be described hereinafter. The basic idea of the invention is, therefore, to provide a method for the creation of controllable/manoeuvrable interactive audio-video contents of the live-action type and a video editor that allows creating independent audio-video contents encapsulated into suitable APPs or readable by a video player capable of recognizing and appropriately reacting to the controls and commands issued by the user through any mode of interaction available in his/her mobile terminal, modifying in real time the succession of the video segments and hence the contents of the filmic narration. MANOEUVRABLE INTERACTIVE VIDEO refers to a filmic narration wherein the time succession of the scenes (also called montage or direction) is not defined a priori by the author of the video, but is built in real time as a function of the interactions and selections (INTERACTION COMMANDS) made by the spectator (or user) during fruition. The MANOEUVRABLE INTERACTIVE VIDEO (FIG.1) is composed of a set of video narrations or VIDEO SEGMENTS101. VIDEO SEGMENTS are joined together at narrative points called NODES103. A VIDEO CLIP (FIG.5) is the NON-INTERACTIVE or NON-INTEROPERABLE or NON-MANOEUVRABLE filmic narrative element of a MANOEUVRABLE INTERACTIVE VIDEO102. A NODE103is the point of interconnection among different VIDEO SEGMENTS. The NODE is also the INTERACTIVE or INTEROPERABLE or MANOEUVRABLE filmic narrative element of a MANOEUVRABLE INTERACTIVE VIDEO. A VIDEO CLIP is the time-successive aggregation of video takes or contents, called SEQUENCES—seeFIG.8.1:
Video clip = Sequence1 + Sequence2 + Sequence3 + . . . + Sequencen-1 + Sequencen
At the end of each video clip there is a node sequence, or Sequencen, or NODE. A node sequence is a filmic take characterized by a series of [time markers], defined as follows—seeFIG.8.2:
TLi: Loop start time
TLf: Loop end time
Tf1 . . . Tfn: Forward time 1 . . . Forward time n
Tb1 . . . Tbn: Backward time 1 . . . Backward time n
Wait interval or LOOP821refers to the narration between the markers TLi822and TLf823. The markers Tfi826,827and Tbi824,825are referred to as exit points.
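A minimal sketch of these definitions as data structures (assuming Python for illustration) might look as follows; it also anticipates the per-node matching table between command types and evolution types discussed further below. All field names are assumptions, not terms from the patent.

```python
# Minimal sketch of the structures described above: video segments joined at
# nodes, where the node sequence carries the loop markers (TLi, TLf) and the
# forward/backward exit markers (Tf1..Tfn, Tb1..Tbn). Names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class NodeSequence:
    loop_start: float                     # TLi
    loop_end: float                       # TLf
    forward_exits: List[float] = field(default_factory=list)   # Tf1 .. Tfn
    backward_exits: List[float] = field(default_factory=list)  # Tb1 .. Tbn
    # command -> (direction, exit marker): the per-node matching table between
    # command types and evolution types described further below.
    transition_table: Dict[str, Tuple[str, float]] = field(default_factory=dict)
    # exit marker -> identifier of the video segment whose start (828) is
    # connected to that exit point; a node with no exits is a narration ending.
    next_segment_at_exit: Dict[float, str] = field(default_factory=dict)

@dataclass
class VideoSegment:
    segment_id: str
    sequences: List[str]                  # Sequence1 .. Sequence(n-1)
    node: NodeSequence                    # Sequencen; endings have no exits
```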
To each exit point, the start828of a VIDEO SEGMENT is connected, seeFIG.1. A node sequence may be the termination of several VIDEO SEGMENTS109. From one node sequence, several VIDEO SEGMENTS109may start. The node sequences without exit points are called narration endings105-108. Given the above definitions, it is assumed that it is per se known how each video clip can be created, which is made up of a sequence of known video and audio frames in a per se known digital format, e.g. 2D or 3D. With reference toFIG.1, in a per se known manner a MULTISTORY is predetermined, i.e. a database of video segments and the multipath network of interconnections among them (or quest tree), which may allow the composition of an interactive audio-video content, consisting of one of the possible clip sequences made possible by the clip interconnections in the multipath network, starting from the start instant of a first clip START104up to the end instant of one of the N possible final clips (narration endings), referred to in the figure as END1, . . . ENDn105-108. The lines in the network symbolize the evolution in time of each clip, while the nodes symbolize the transitions from one clip to another. Several video clips may meet at one node and/or several lines may start from one node, meaning that it is possible to switch from one clip to one or more other clips according to the specific mode of the invention described below. Entry into the node or into the node sequence occurs at the instant TLi822, i.e. the start point of the wait interval, in which the evolution of the clip occurs automatically, cyclically and continuously forwards and backwards (rewind) between TLi (822) and TLf (823), seeFIG.2. Within this automatic cyclic evolution (from instant TLi to instant TLf), the system is in Loop (201), waiting to receive a command for evolving towards another clip, through any one of the commands of interaction between the mobile terminal and the user; the command may arrive at any instant within the loop, according to the decision of the user, who chooses the instant for exiting the loop, thus obtaining a soft transition from one clip to the next one. Optionally, the system may evolve automatically towards another clip if no commands are received from the user within a MAXIMUM TIME. Optionally, the wait interval of a node sequence can be reproduced only once (no loop). This permits the creation of situations where, if the user interacts with the correct "interactive instruction" within the wait interval, then the narration will continue following the main narrative flow; otherwise, i.e. if no command or a wrong command is issued, different video segments will be linked. Optionally, the system may automatically handle the timeline and frame-rate of the loop (e.g. for slowing down the scene, . . . ) while waiting for a command from the user. The types of available commands202are many and can be issued through specific user actions, such as the following:
a plurality of sensors, such as touch-pad, microphone, gyroscope, camera, . . .
a plurality of gestures, such as swipe, pinch, . . .
a plurality of combinations of the above sensors and gestures, . . .
or issued through [software commands]203, e.g. generated by timers (e.g. the maximum time of permanence in the wait interval) or as a consequence of other commands generated by the user (e.g. parallel multistories, wherein a user command issued for a first multistory also results in a software command affecting the second multistory).
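The wait-interval behaviour just described, i.e. cycling forwards and backwards between TLi and TLf until an interaction command arrives or an optional MAXIMUM TIME elapses, could be sketched as follows, reusing the NodeSequence structure from the previous sketch; poll_command stands in for the terminal's sensor and gesture handling and is an assumption.

```python
# Simplified sketch of the wait interval (loop 201): play back and forth
# between TLi and TLf until a user command arrives or an optional maximum
# wait time elapses. poll_command is an assumed stand-in for the terminal's
# sensor/gesture handling; it returns a command name or None.
import time

def run_wait_loop(node, poll_command, frame_rate=24.0, max_wait_s=None):
    """Ping-pong between node.loop_start (TLi) and node.loop_end (TLf)."""
    t = node.loop_start
    direction = 1                            # +1 forwards, -1 backwards (rewind)
    started = time.time()
    while True:
        # display_frame(t) would render the frame at time t here
        command = poll_command()             # e.g. a swipe, tap, shake, or None
        if command is not None:
            return command
        if max_wait_s is not None and time.time() - started > max_wait_s:
            return "timeout"                 # optional automatic evolution
        time.sleep(1.0 / frame_rate)         # pace the loop at the frame rate
        t += direction / frame_rate
        if t >= node.loop_end:
            t, direction = node.loop_end, -1
        elif t <= node.loop_start:
            t, direction = node.loop_start, 1
```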
These commands are already known and available, for example, in some types of mobile telephone terminals, such as smartphones, being transformed in a known manner into electric and/or electronic control signals in the terminal. The user terminal comprises an interactive display and/or one or more sensors, from which at least some of said commands can be derived, and/or one or more motion or voice or image or position detectors, from which at least some of said commands can be derived. Based on the specific command received, issued within the wait interval, the system determines how the transition from one clip to another clip should evolve. This means that, based on the type of command received in the loop (e.g. fast or slow shaking of the motion sensor), the time evolution of the node sequence will be modified by managing the flow speed and direction and the point where a jump to the next clip should occur within the node sequence. Therefore, based on the type of command issued, the system will decide how the node sequence should evolve (forwards, backwards, fast, slow, . . . ) and hence also the point of the node sequence (301,302,305,306) from which to go towards another clip, seeFIG.3(interactive instruction). For every single node within the system, a matching table is defined between command types and evolution types. There is a user interface that senses the command issued by the user and associates it with the type of reaction affecting the evolution of the clip. The available commands may depend on the node, and may therefore be different for each node sequence. Some node sequences may not be associated with any commands, and therefore may not contain a loop (narration endings105-108). Some node sequences may consist of the loop only, so that it will be possible to jump from a loop directly to a subsequent segment or loop110. With reference toFIG.2, the evolution of the VIDEO CLIP204ends in a WAIT INTERVAL or LOOP201. Based on the command COMM received (202,203), the table of possible command/transition combinations will determine the exit time marker and the next video segment. If the exit time marker is placed before the start of the loop, then the system will move backwards, by appropriately adjusting the timeline and frame-rate, up to the exit point, thus linking to the next video segment205. If the exit time marker is placed after the end of the loop, then the system will move forwards, by appropriately adjusting the timeline and frame-rate, up to the exit point, thus linking to the next video segment206. For example, if during the wait interval (loop) a swipe right command402is issued (FIG.4), then the node sequence will be executed in forward mode past the TLf marker (310) up to Tf1 (311); if a two-swipe right command404is issued, then the clip will go forwards past the marker Tf1 (311), up to the marker Tf2 (312), displaying the associated video segment. If during the wait interval (loop) a swipe left command401is issued (FIG.4), then the node sequence will be executed in backward mode past the TLi marker (309) up to Tb1 (308); if a two-swipe left command403is issued, then the clip will go backwards past the marker Tb1 (308), up to the marker Tb2 (307), displaying the associated video segment. INTERACTION COMMANDS can only be issued, and hence interpreted, during the execution of the [Wait interval]. Management commands are, on the contrary, commands not related to interaction, and can be issued at any instant during multistory fruition or development, e.g.
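A per-node matching table of the kind described above, following the swipe example, might be sketched like this; the marker labels, the figure numerals in the comments, and the segment identifiers are illustrative only.

```python
# Sketch of a per-node matching table between interaction commands and exit
# markers, mirroring the swipe example above: single or double swipes move the
# node sequence backwards or forwards to the corresponding exit point, where
# the next video segment is linked. Marker labels and IDs are illustrative.
transition_table = {
    "swipe_left":         ("backward", "Tb1"),  # rewind past TLi to Tb1 (308)
    "double_swipe_left":  ("backward", "Tb2"),  # rewind further to Tb2 (307)
    "swipe_right":        ("forward",  "Tf1"),  # play past TLf to Tf1 (311)
    "double_swipe_right": ("forward",  "Tf2"),  # play further to Tf2 (312)
}

def resolve_transition(command, table, exit_to_segment):
    """Return (direction, exit_marker, next_segment) for a recognised command,
    or None if the command has no entry in this node's table."""
    if command not in table:
        return None
    direction, marker = table[command]
    return direction, marker, exit_to_segment.get(marker)

# Example use: a swipe right leaves the loop forwards through Tf1 and links
# the video segment attached to that exit point.
print(resolve_transition("swipe_right", transition_table,
                         {"Tf1": "segment_B", "Tf2": "segment_C"}))
```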
in order to impose a rewind action following a wrong or unpleasant selection or to jump to a previous clip. According to a further variant, the same command issued at different time instants within the wait interval may execute the exit from the node sequence in different ways. The wait segment is divided into n time intervals ΔT (304), each associated with an interactive instruction. One Clip(n) will be associated with each ΔT—see FIG. 9.1. According to a further variant, it is possible to assign different commands to the same time interval ΔT within the wait interval in order to develop the node sequence in different ways. In a given time interval ΔT, a defined Clip(n) corresponds to each interactive instruction, see FIG. 9.2. According to a further variant, if the wait interval of a node sequence is a video taken at a frame-rate higher than 24 fps (e.g. 300 fps), the commands of the interactive instruction may increase or decrease the frame-rate of the node sequence. For example (see FIG. 10), upon the given interactive instruction (e.g. tap) at any point within the wait interval, the frame-rate decreases (slows down) to allow for better observation of the flight of the humming bird (1001) or, vice versa, the frame-rate increases (accelerates) to allow observing the humming bird in action (1002). For example (see FIG. 11), in the node sequence with a wait interval, upon the given interactive instruction (e.g. tap) in a given time interval, the frame-rate decreases (slows down) to allow increasing the precision of the jump and prevent falling (1102); in fact, should the given interactive instruction be executed in a wrong manner or out of sync, the player will not land precisely on the nearest bank, thus falling into the void (1106). As an alternative to the given interactive instruction (e.g. tap), in a given time interval the frame-rate increases (accelerates) to allow increasing the elevation of the jump to reach the opposite bank (1104); should the given interactive instruction be executed in a wrong manner or out of sync, the player will not take sufficient run-up and will fall into the void, thus not reaching the opposite bank (1107). Within the same wait segment there may be several interactive instructions, in different time intervals (1101-1105). According to a further variant, based on further types of commands (interaction and management commands) received, simultaneous side-by-side visualization of two or more MULTISTORIES is obtained, each one possibly having a timeline of its own, subject to different commands at different times. With reference to FIG. 12.1, two or more node sequences can be executed on the same display, whether in different layouts or superimposed. The interactive instructions assigned to combined node sequences may be:
a) mutually independent;
b) mutually interactive.
In case of simultaneous viewing of multiple multistories, a user command issued on one multistory may be associated with software commands capable of causing the parallel evolution of one or more node sequences of other multistories. Example of combined node sequences with independent interactive instructions: with reference to FIG. 12.2, according to the timeline highlighted in red, the humming bird can go into slow motion upon the assigned interactive instruction (e.g. tap); according to the timeline highlighted in green, the humming bird can fly off upon the assigned interactive instruction (e.g. swipe up).
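One way to picture the ΔT variant is a table keyed by (time interval, command): the interval in which the command lands inside the wait segment selects which clip follows, and a high-frame-rate wait interval can additionally be slowed down or sped up by scaling the playback rate. The following C++ sketch assumes this representation; the interval boundaries, the rate factors and the function names are illustrative choices, not taken from the patent.

```cpp
#include <map>
#include <optional>
#include <string>
#include <utility>

enum class Command { Tap, SwipeUp };

// Index of the ΔT interval that contains time t within [tli, tlf], split into n slots.
int deltaTIndex(double t, double tli, double tlf, int n) {
    double slot = (tlf - tli) / n;
    int idx = static_cast<int>((t - tli) / slot);
    return idx < 0 ? 0 : (idx >= n ? n - 1 : idx);
}

// (interval index, command) -> clip to link, in the spirit of FIG. 9.2.
using IntervalTable = std::map<std::pair<int, Command>, std::string>;

std::optional<std::string> clipFor(const IntervalTable& table, int interval, Command cmd) {
    auto it = table.find({interval, cmd});
    if (it == table.end()) return std::nullopt;   // wrong command or wrong ΔT: take the "miss" branch
    return it->second;
}

// For a wait interval shot at a high native frame-rate (e.g. 300 fps),
// a command may simply rescale playback: <1 slows down, >1 accelerates.
double playbackRate(Command cmd) {
    return cmd == Command::Tap ? 0.25 : 2.0;      // illustrative factors only
}

int main() {
    IntervalTable t = {{{0, Command::Tap}, "clip_slow_motion"},
                       {{1, Command::SwipeUp}, "clip_fly_off"}};
    int idx = deltaTIndex(12.5, 12.0, 16.0, 4);   // which ΔT slot the command fell into
    return clipFor(t, idx, Command::Tap) ? 0 : 1;
}
```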
The two node sequences combined together do not affect the respective timelines, frame-rates or interactive instructions by any means. The combined node sequences can be manoeuvred either simultaneously (at the same instant) or separately (at distinct instants); they will need different interactive instructions in the former case or, in the latter case, indifferent ones. Example of combined node sequences with complementary interactive instructions: with reference to FIG. 12.3, according to the timeline highlighted in red, the humming bird 1231 can go into slow motion upon the assigned interactive instruction (e.g. tap); at the same time, the timeline highlighted in green waits in Loop for the sequence highlighted in red to respond to the interactive instruction; once the interactive instruction of the sequence highlighted in red has been executed, the green sequence will execute the clip past the marker (in the drawing, the humming bird 1233 can reach the humming bird 1234, if the humming bird 1231 makes a precise landing). The two node sequences combined together affect the respective timelines, frame-rates or interactive instructions, because the evolution of one of them implies a different evolution of the other. The following will explain in more detail the operating sequence of the system/method with reference to FIGS. 6 and 7, which show the operating flow charts, FIGS. 8.1 and 8.2, which show the composition of a video clip, and FIGS. 3 and 4, which show some examples of interaction with a user terminal. With reference to the flow chart of FIG. 6: From a given App Store (e.g.: Apple Store or Google Play), the user downloads an .IPA file (or a file in an equivalent format) to his/her own device (smartphone or tablet) (block 61). The .IPA file (or file in an equivalent format) downloads to the memory of the device a library of [VIDEO CLIPS] and layouts/templates coded in computer languages (e.g.: C++) compatible with iOS, Android and other operating systems (block 62). By clicking on the icon of the .IPA file (or file in an equivalent format), the Application is executed (block 63). The initial interface is the menu of the Application, which includes, among others, the "START" button (or equivalent commands, e.g.: BEGIN, START, etc.) (block 64 and FIG. 9.1). The video player displays the first [VIDEO CLIP] or [INITIAL CLIP] (block 65 and FIG. 9.2). The flow continues from FIG. 6 to FIG. 7. With reference to the flow chart of FIG. 7 and to FIGS. 8.1, 8.2: The software, by means of computer code (e.g.: C++) compatible with the operating system of the device (smartphone or tablet), executes the assigned VIDEO SEGMENT, linking in succession the SEQUENCES of the VIDEO CLIP (block 70). The last sequence Sequencen, or Node Sequence, is connected to Sequencen-1 at the instant TLi (block 71), i.e. the frame of the Node Sequence identified by the time marker TLi will be linked—in succession—to the last frame of Sequencen-1. If the Node Sequence is a final sequence or [Narration ending], the procedure will end (END) (block 72); otherwise it will continue. If the procedure goes on, the video clip will move forwards and backwards in the time segment between the markers TLi and TLf [Wait interval], waiting for a command action from the user (block 73 and 303).
The software, by means of computer code (e.g.: C++) compatible with the operating system of the device (smartphone or tablet), may also appropriately adjust the running speed of the [Wait interval], slowing down or accelerating the frame-rate in order to give more realism to the wait situation (block 74). When the reception of a [command] is verified (block 75), the software, by means of computer code (e.g.: C++) compatible with the operating system of the device (smartphone or tablet), associates a given gesture of the touchscreen (e.g.: swipe, tap, rotate, etc.) or a given input of the sensors of the device (e.g.: gyroscope, volume, etc.) or a given software command with a given time direction (backwards or forwards relative to TLi or TLf) and/or with a given frame-rate of the video clip (acceleration or slowing down) and/or with a given combination of both factors (time direction + frame-rate) (block 77, FIG. 4). If absence of interactions is verified (block 75), then the loop between TLi and TLf will continue (block 76), and the operations will return to block 73 (303). In the presence of a command from the user or from the software, the procedure will exit the loop of the wait interval, moving forwards or backwards to the time marker Exit point connected to that user action or command (block 78, 307-308, 311-312). When the Exit point is reached, the software selects from the library (see point 2) the new VIDEO SEGMENT associated with the type of selection and/or command just executed (block 79). The video player displays the new VIDEO CLIP (block 80). The process starts again from the beginning (block 70). The result is a succession of VIDEO SEGMENTS, the evolution of which—MANOEUVRED by the user's actions—produces a narrative experience—characterized by the choices of the user him/herself—that is unique, original and engaging as a whole. The present invention can advantageously be implemented through a computer program VIDEO EDITOR, which comprises coding means for implementing one or more steps of the method when said program is executed by a computer. The following will list the steps of the process of using the method through the VIDEO EDITOR:
a) Given a library of (n) sequences, composed of all video sequences (including, therefore, all possible branches of the Multistory), the computer expert [OPERATOR] selects the sequences for composing the [VIDEO SEGMENTS], including the sequences transformed into node sequences.
b) On the timeline of the node segment, the computer expert sets two time markers that delimit the backward and forward loop Wait interval of the node sequence.
In this way, the node sequence will only be executed in backward and forward loop within the two time markers set on the timeline.
c) On the timeline within the Wait interval, the computer expert may set other additional time markers, as a function of the interaction gestures expected by the narrative development of the Multistory.
d) On the timeline of a video segment, the computer expert also sets any [exit time markers] and connection markers towards the next video segments, in accordance with the narrative construction of the Multistory.
e) The computer expert selects a given command readable by the mobile device (smartphone and/or tablet) relating to gestures and sensors of the device capable of sending executable inputs (e.g.: gesture on the touchscreen, voice command through the microphone, rotation of the gyroscope, etc.).
f) At each time marker set within the wait interval, the computer expert associates the previously selected command, so that upon that given command the node sequence will be executed past the markers delimiting the wait interval [TLi, TLf], up to the time markers connected with the associated command.
g) The computer expert selects from the library the video segments that will follow the executed node sequence based on the associated command; in this way, a given video segment(n) will correspond to the given command associated with the time marker and to the relevant "unlocked" part of the node sequence.
h) The computer expert repeats the same process using all the n sequences in the library, alternating video clips and node sequences so as to create the plurality of possible narrative directions of the Multistory [or "quest tree"].
i) Once the quest tree has been formed and closed, the expert exports the project as an .IPA or equivalent file readable by the App Stores (e.g.: Apple Store, Google Play, etc.).
It is therefore understood that the protection scope extends to said computer program VIDEO EDITOR as well as to computer-readable means that comprise a recorded message, said computer-readable means comprising program coding means for implementing one or more steps of the method when said program is executed by a computer. The above-described non-limiting example of embodiment may be subject to variations without departing from the protection scope of the present invention, comprising all equivalent designs known to a man skilled in the art. The elements and features shown in the various preferred embodiments may be combined together without however departing from the protection scope of the present invention. The advantages deriving from the application of the present invention are apparent, as described below by way of example. Soft switching from one clip to the next is obtained. In prior-art systems, in order to obtain different types of clip evolution, different clips are created, among which the user makes a selection. According to the present invention, on the contrary, the evolution of the clip itself is modified. In prior-art systems, overlays or hyperlinks are added to obtain interactions, which however distract from pure fruition of the video clip (the term "pure" referring herein to viewing the video clip with no additional elements). According to the present invention, on the contrary, the video clip is directly acted upon without requiring the use of any additional elements on the video clip. From the above description, those skilled in the art will be able to produce the object of the invention without introducing any further construction details.
23,495
11862201
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE INVENTION
Public safety personnel (for example, first responders, investigators, and the like) responding to an incident scene may document the incident scene and the objects of interest located at the incident scene. Public safety personnel may document the incident scene using portable electronic devices to record still images or video of the object of interest, to record audio or text describing the object of interest, or some combination of the foregoing. In some cases, objects of interest are removed from the incident scene to, for example, clean up the scene or to be used later as evidence in criminal trials or other official investigations or proceedings. Investigations of incident scenes may be stopped and restarted, and may be performed by multiple personnel at different times. An investigation of an incident scene may also be restarted after the incident scene has been fully documented and all objects of interest documented and removed. An investigator may wish to re-create an incident scene, or may wish to compare multiple incidents that occurred at the same scene. Current systems and methods for documenting objects of interest are inefficient in such investigations. Accordingly, because current systems and methods do not provide selection, recognition, and annotation of an object of interest from a live scene based on user input, systems and methods are provided herein for, among other things, displaying an image of an object of interest located at an incident scene. One example embodiment provides a system for displaying an image of an object of interest located at an incident scene. The system includes an image capture device, a display, a memory, and an electronic processor coupled to the image capture device, the display, and the memory. The electronic processor is configured to receive, from the image capture device, a first video stream of the incident scene. The electronic processor is configured to display the first video stream on the display. The electronic processor is configured to receive an input indicating a pixel location in the first video stream. The electronic processor is configured to detect the object of interest in the first video stream based on the pixel location. The electronic processor is configured to determine an object class for the object of interest. The electronic processor is configured to determine an object identifier for the object of interest. The electronic processor is configured to determine metadata for the object of interest including the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. The electronic processor is configured to receive an annotation input for the object of interest.
The electronic processor is configured to associate the annotation input and the metadata with the object identifier. The electronic processor is configured to store, in the memory, the object of interest, the annotation input, and the metadata. Another example embodiment provides a method for displaying an image of an object of interest located at an incident scene. The method includes receiving, from the image capture device, a first video stream of the incident scene. The method includes displaying the first video stream on the display. The method includes receiving an input indicating a pixel location in the first video stream. The method includes detecting the object of interest in the first video stream based on the pixel location. The method includes determining an object class for the object of interest. The method includes determining an object identifier for the object of interest. The method includes determining metadata for the object of interest including the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. The method includes receiving an annotation input for the object of interest. The method includes associating the annotation input and the metadata with the object identifier. The method includes storing, in a memory, the object of interest, the annotation input, and the metadata. For ease of description, some or all of the example systems presented herein are illustrated with a single exemplar of each of its component parts. Some examples may not describe or illustrate all components of the systems. Other example embodiments may include more or fewer of each of the illustrated components, may combine some components, or may include additional or alternative components. FIG. 1 is a block diagram of a system 100 for displaying an image of an object of interest located at an incident scene. In the example illustrated, the system 100 includes a portable electronic device 102, a server 104, a database 106, and a network 108. The portable electronic device 102 and the server 104 are communicatively coupled via the network 108. The network 108 is a communications network including wireless and wired connections. The network 108 may be implemented using a land mobile radio (LMR) network, and a cellular network (for example, a Long Term Evolution (LTE) network). However, the concepts and techniques embodied and described herein may be used with networks using other protocols, for example, Global System for Mobile Communications (or Groupe Special Mobile (GSM)) networks, Code Division Multiple Access (CDMA) networks, Evolution-Data Optimized (EV-DO) networks, Enhanced Data Rates for GSM Evolution (EDGE) networks, 3G networks, 4G networks, combinations or derivatives thereof, and other suitable networks, including future-developed network architectures. In some embodiments, communications with other external devices (not shown) occur over the network 108. The portable electronic device 102, described more particularly below with respect to FIG. 2, is a wireless communication device that includes hardware and software that enable it to communicate via the network 108. The portable electronic device 102 includes an image capture device (for example, a camera), and is capable of capturing, storing, analyzing, displaying, and transmitting captured images of the incident scene 110, including images of the object of interest 112.
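The association of metadata and annotations with an object identifier, summarized above, can be modelled as a small record keyed by that identifier. The C++ sketch below is one possible shape for such a record; the field names and types are assumptions for illustration and are not prescribed by the disclosure.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Metadata determined for a detected object of interest (object class, location,
// incident identifier, time stamp), plus optional class-specific fields.
struct ObjectMetadata {
    std::string objectClass;       // e.g. "vehicle"
    double      latitude  = 0.0;   // object location within the incident scene
    double      longitude = 0.0;
    std::string incidentId;        // incident identifier (e.g. from computer-aided dispatch)
    std::int64_t timestamp = 0;    // capture time, e.g. seconds since epoch
    std::map<std::string, std::string> classSpecific;  // e.g. {"color","red"}, {"plate","..."}
};

// Annotation inputs received from the user: text, audio, video or image references.
struct Annotation {
    std::string type;              // "text", "audio", "video", "image"
    std::string payload;           // text body or a reference to the stored media
};

// Everything stored under one object identifier.
struct ObjectRecord {
    std::string              objectId;   // unique identifier, e.g. a serial number
    ObjectMetadata           metadata;
    std::vector<Annotation>  annotations;
    std::string              imageRef;   // reference to the stored image/video of the object
};

// A local store keyed by object identifier, standing in for the device memory or the database 106.
using ObjectStore = std::map<std::string, ObjectRecord>;
```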
The portable electronic device 102 operates using, among other things, augmented reality technology, where live images are captured by the image capture device and displayed (for example, on a screen) with text, graphics, or graphical user interface elements superimposed on or otherwise combined with the live images. As described in detail below, the superimposed text or graphics may be used to record or convey information about the incident scene 110, the object of interest 112, or both. The incident scene 110 is the scene of an incident to which public safety personnel may respond (for example, the scene of a traffic accident or a crime scene). The incident scene 110 may be located indoors or outdoors. The object of interest 112 may be any object present at the incident scene, which object is related to the incident (for example, involved in or relevant to an investigation of the incident). Objects of interest may include, for example, automobiles (for example, in the case of a traffic accident) and weapons (for example, in the case of a crime scene). Objects of interest may also be tangible things not commonly thought of as objects, but which are still removable or transitory in nature (for example, fluids leaked from automobiles, debris from damaged property, blood stains, broken glass, skid marks, and fingerprints). In some embodiments, a person (for example, a crime or accident victim, persons gathered at the scene, and the like) may also be an object of interest. The incident scene 110 may include more than one object of interest 112. The server 104 is a computer server that includes an electronic processor (for example, a microprocessor, or other electronic controller), a memory, a network interface, and other various modules coupled directly, by one or more control or data buses, or a combination thereof. The memory may include read-only memory, random access memory, other non-transitory computer-readable media, or a combination thereof. The electronic processor is configured to retrieve instructions and data from the memory and execute, among other things, instructions to perform the methods described herein. The server 104 sends and receives data over the network 108 using the network interface. The server 104 reads and writes data to and from the database 106. As illustrated in FIG. 1, the database 106 may be a database housed on a suitable database server communicatively coupled to and accessible by the server 104. In alternative embodiments, the database 106 may be part of a cloud-based database system external to the system 100 and accessible by the server 104 and the portable electronic device 102 over one or more additional networks. In some embodiments, all or part of the database 106 may be locally stored on the server 104. In some embodiments, as described below, the database 106 electronically stores data on objects of interest (for example, the object of interest 112) and annotations for the objects of interest. In some embodiments, the server 104 and the database 106 are part of a computer-aided dispatch system. FIG. 2 is a diagram of an example of the portable electronic device 102. In the embodiment illustrated, the portable electronic device 102 includes an electronic processor 205, a memory 210, an input/output interface 215, a baseband processor 220, a transceiver 225, an antenna 230, a microphone 235, a camera 240, and a display 245.
The illustrated components, along with other various modules and components, are coupled to each other by or through one or more control or data buses that enable communication therebetween. The use of control and data buses for the interconnection between and exchange of information among the various modules and components would be apparent to a person skilled in the art in view of the description provided herein. The electronic processor 205 obtains and provides information (for example, from the memory 210 and/or the input/output interface 215), and processes the information by executing one or more software instructions or modules, capable of being stored, for example, in a random access memory ("RAM") area of the memory 210 or a read only memory ("ROM") of the memory 210 or another non-transitory computer readable medium (not shown). The software can include firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions. The electronic processor 205 is configured to retrieve from the memory 210 and execute, among other things, software related to the control processes and methods described herein. The memory 210 can include one or more non-transitory computer-readable media, and includes a program storage area and a data storage area. The program storage area and the data storage area can include combinations of different types of memory, as described herein. In the embodiment illustrated, the memory 210 stores, among other things, metadata 250 and annotation input 255 (both described in detail below), and an object classifier 260. As described in detail below, the object classifier 260 (for example, a Haar feature-based cascade classifier) may be executed by the electronic processor 205 to electronically detect and classify objects within images and video streams captured by the camera 240. The input/output interface 215 is configured to receive input and to provide system output. The input/output interface 215 obtains information and signals from, and provides information and signals to (for example, over one or more wired and/or wireless connections), devices both internal and external to the portable electronic device 102. The electronic processor 205 is configured to control the baseband processor 220 and the transceiver 225 to transmit and receive video and other data to and from the portable electronic device 102. The baseband processor 220 encodes and decodes digital data sent and received by the transceiver 225. The transceiver 225 transmits and receives radio signals to and from various wireless communications networks (for example, the network 108) using the antenna 230. The electronic processor 205, the baseband processor 220, and the transceiver 225 may include various digital and analog components, which for brevity are not described herein and which may be implemented in hardware, software, or a combination of both. Some embodiments include separate transmitting and receiving components, for example, a transmitter and a receiver, instead of a combined transceiver 225. The microphone 235 is capable of sensing sound, converting the sound to electrical signals, and transmitting the electrical signals to the electronic processor 205 via the input/output interface 215. The electronic processor 205 processes the electrical signals received from the microphone 235 to, for example, produce an audio stream.
The camera 240 is an image capture device for capturing images and video streams, including a portion of or the entire incident scene 110, by, for example, sensing light in at least the visible spectrum. The camera 240 communicates the captured images and video streams to the electronic processor 205 via the input/output interface 215. It should be noted that the terms "image" and "images," as used herein, may refer to one or more digital images captured by the camera 240, or processed by the electronic processor 205, or displayed on the display 245. Further, the terms "image" and "images," as used herein, may refer to still images or sequences of images (that is, a video stream). As illustrated, the camera 240 is integrated into the portable electronic device 102. In alternative embodiments, the camera 240 is separate from the portable electronic device 102, and communicates captured images to the portable electronic device 102 via a wired or wireless connection. For example, the camera 240 may be integrated into a body-worn camera or a vehicle's dash or roof mount camera, which communicates with the portable electronic device 102. In some embodiments, the camera 240 may be a stereoscopic camera, or the portable electronic device 102 may include a stereoscopic camera. In such embodiments, the portable electronic device 102 can capture three-dimensional information about the incident scene 110 and the object of interest 112. In some embodiments, three-dimensional information may be captured using radar sensors or infrared ranging sensors (not shown). The display 245 is a suitable display such as, for example, a liquid crystal display (LCD) touch screen, or an organic light-emitting diode (OLED) touch screen. The portable electronic device 102 implements a graphical user interface (GUI) (for example, generated by the electronic processor 205, from instructions and data stored in the memory 210, and presented on the display 245) that enables a user to interact with the portable electronic device 102. In some embodiments, the portable electronic device 102 operates or is integrated with a head-mounted display (HMD) or an optical head-mounted display (OHMD). In some embodiments, the portable electronic device 102 operates or is integrated with an LCD touch screen console display or heads up display (HUD) in a vehicle. As described in detail below, the portable electronic device 102 is capable of receiving and processing images captured by the camera 240, and displaying processed images in a graphical user interface on the display 245. Computerized image capturing and processing techniques are known, and will not be described in detail. In some embodiments, the portable electronic device 102 is a smart telephone. In other embodiments, the portable electronic device 102 may be a tablet computer, a vehicle's dash console, a smart watch, a portable radio, or another portable or mobile electronic device containing software and hardware enabling it to operate as described herein. Returning to FIG. 1, an investigator responding to the incident scene 110, using the portable electronic device 102, may wish to document the object of interest 112. Accordingly, FIG. 3 illustrates an example method 300 for selecting and annotating the object of interest 112 at the incident scene 110. The method 300 is described with respect to FIG. 4, which illustrates the incident scene 110 and a graphical user interface displayed on the portable electronic device 102. The method 300 is described as being performed by the portable electronic device 102 and, in particular, the electronic processor 205.
However, it should be understood that in some embodiments, portions of the method 300 may be performed by other devices, including, for example, the server 104. At block 302, the electronic processor 205 receives, from the camera 240, a first video stream 402 of the incident scene 110. At block 304, the electronic processor 205 controls the display of the first video stream 402 on the display 245. In some embodiments, the portable electronic device 102 continually captures and displays video streams of the incident scene 110, for example, as in an augmented reality display. At block 305, the electronic processor 205 receives an input indicating a pixel location in the first video stream 402 (for example, in a frame of the first video stream 402). The input may be in response to a touch, tap, or press on the display 245, which indicates one or more pixels at a pixel location in the first video stream 402. In embodiments where the portable electronic device 102 operates or is integrated with a head-mounted display (HMD) or an optical head-mounted display (OHMD), the input may be in response to a detected hand gesture, a detected eye movement, and the like. At block 306, the electronic processor 205 detects the object of interest 112 in the first video stream 402 based on the pixel location. For example, the electronic processor 205 may direct the object classifier 260 to detect an object within a limited area surrounding the pixel location. In some embodiments, the object classifier 260 continuously detects and classifies multiple objects in the first video stream 402, and the input and corresponding pixel location are used to select one of the detected objects. Accordingly, it should be noted that it is not a requirement to display the video stream 402 in order to detect an object or objects of interest. In some embodiments, multiple objects may possibly be selected due to object recognition ambiguity, partial overlap of objects in the scene, and close proximity of objects to each other. In these embodiments, the best match will be selected. It should be understood that multiple object selections may be retained for later selection by a user. In some embodiments, the electronic processor 205 determines a boundary 404 for the object of interest 112 based on the pixel location (for example, using edge analytics). At block 308, the electronic processor 205 determines an object class for the object of interest. For example, the electronic processor 205 may determine an object class using the object classifier 260. In the example illustrated, the object of interest 112 is a vehicle, which has been involved in a traffic accident. In this example, the object class is "vehicle." In some embodiments, the object class may be more or less specific (for example, "compact car," or "transportation"). At block 310, the electronic processor 205 determines an object identifier for the object of interest 112. The object identifier is an electronic identifier, for example, a serial number, which may be used to uniquely identify the object of interest 112 in the database 106. At block 312, the electronic processor 205 determines metadata 250 for the object of interest including the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. In some embodiments, the metadata 250 includes a user identifier for the user who selected the object of interest 112. The metadata may also include data based on the object class.
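As a concrete illustration of detecting an object around a tapped pixel, the sketch below crops a region of interest around the indicated pixel and runs a Haar feature-based cascade classifier on it, mirroring the role of object classifier 260. OpenCV is used here purely as an example library, and the window size and nearest-match rule are assumptions of this sketch; the patent does not tie the classifier to any particular toolkit.

```cpp
// Minimal sketch: detect a candidate object near a user-indicated pixel location
// using a Haar cascade (as object classifier 260 is described), via OpenCV.
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <optional>
#include <vector>

std::optional<cv::Rect> detectNearPixel(const cv::Mat& frame, cv::Point pixel,
                                        cv::CascadeClassifier& classifier,
                                        int halfWindow = 150) {
    // Limit the search to a window surrounding the tapped pixel.
    cv::Rect window(pixel.x - halfWindow, pixel.y - halfWindow,
                    2 * halfWindow, 2 * halfWindow);
    window &= cv::Rect(0, 0, frame.cols, frame.rows);    // clamp to the frame

    cv::Mat roiGray;
    cv::cvtColor(frame(window), roiGray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> hits;
    classifier.detectMultiScale(roiGray, hits);           // Haar cascade detection

    if (hits.empty()) return std::nullopt;
    // Pick the detection whose centre is closest to the tapped pixel ("best match").
    cv::Point local(pixel.x - window.x, pixel.y - window.y);
    cv::Rect best = hits.front();
    double bestDist = 1e18;
    for (const auto& r : hits) {
        cv::Point c(r.x + r.width / 2, r.y + r.height / 2);
        double d = std::hypot(double(c.x - local.x), double(c.y - local.y));
        if (d < bestDist) { bestDist = d; best = r; }
    }
    // Translate back to frame coordinates; this rectangle plays the role of boundary 404.
    return cv::Rect(best.x + window.x, best.y + window.y, best.width, best.height);
}
```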
In one example, metadata 250 for the object class "vehicle" may include the color, type (for example, sedan, compact car, truck, or sport utility vehicle), and the license plate of the vehicle (for example, as determined by an optical character recognition analysis of the first video stream 402). The incident identifier is a unique electronic identifier for the incident provided by, for example, a computer aided dispatch system. The time stamp may be, for example, the time and date when the first video stream 402 is captured. The object location is the location of the object of interest 112 within the incident scene 110. The object location may be based on the location of the portable electronic device 102 (for example, as reported by a global positioning system receiver) and the location of the object of interest 112 relative to the portable electronic device 102. In some embodiments, the relative location may be determined using image analysis, for example, by comparing portions of the incident scene near the object of interest 112 to items within the incident scene of a known size to determine a distance. In some embodiments, the relative location may be determined using a range imaging technique (for example, stereo triangulation). In some embodiments, the portable electronic device 102 may be equipped with a range-sensing device, such as, for example, an infrared transceiver for determining the relative location of the object of interest 112. In some embodiments, the location of the object of interest 112 within the incident scene 110 may be determined based on the distance (for example, in pixels) of particular points along the boundary 404 relative to particular points in the first video stream 402. In some embodiments, the metadata 250 includes information indicating the orientation of the object of interest 112 within the incident scene 110, relative to, for example, a fixed vector in the incident scene 110, a compass direction, or the vector representing the orientation of the portable electronic device 102. At block 314, the electronic processor 205 receives an annotation input 255 for the object of interest. The annotation input 255 may be received from a user of the portable electronic device 102 and may include, for example, text-based annotation, audio annotation, video annotation, and image annotation. At block 316, the electronic processor 205 associates the annotation input 255 and the metadata 250 with the object identifier, and, at block 318, stores (for example, in the memory 210) the object of interest 112 (for example, an image or video of, a reference to, or a description of the object of interest 112, or some combination of the foregoing), the annotation input 255, and the metadata 250. In some embodiments, the object of interest 112 (that is, an image of the object of interest 112), the annotation input 255, and the metadata 250 are communicated to the server 104 and stored in the database 106 in addition to, or in place of, being stored in the memory 210. As illustrated, the metadata 250 and the annotation input 255 are displayed by the graphical user interface of the portable electronic device 102 associated with the object of interest 112. In some embodiments, the annotation input 255 and the metadata 250 are not displayed directly. For example, an icon or icons representing the annotation input 255 and the metadata 250 may be displayed. Inputs received by the electronic processor 205 selecting the icons would allow a user of the portable electronic device 102 to access the annotation input 255 and the metadata 250.
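Where a stereoscopic camera is available, the relative distance mentioned above can be estimated with the standard stereo triangulation relation depth = focal length × baseline / disparity, and the object's lateral offset then follows from the pinhole model. The short sketch below illustrates that calculation only; the calibration numbers are placeholders and the helper is not part of the patent.

```cpp
#include <cmath>

struct StereoCalib {
    double focalPx;     // focal length in pixels
    double baselineM;   // distance between the two camera centres, in metres
    double cxPx;        // principal point (horizontal), in pixels
};

// Relative position (forward depth Z and lateral offset X) of a matched image
// point seen at column uLeft in the left image with disparity d = uLeft - uRight.
// Classic stereo triangulation: Z = f * B / d, X = (u - cx) * Z / f.
bool relativePosition(const StereoCalib& c, double uLeft, double disparityPx,
                      double* zMetres, double* xMetres) {
    if (disparityPx <= 0.0) return false;          // no valid match
    *zMetres = c.focalPx * c.baselineM / disparityPx;
    *xMetres = (uLeft - c.cxPx) * (*zMetres) / c.focalPx;
    return true;
}

int main() {
    StereoCalib cal{800.0, 0.12, 640.0};           // illustrative calibration only
    double z = 0.0, x = 0.0;
    relativePosition(cal, 700.0, 16.0, &z, &x);    // Z = 800 * 0.12 / 16 = 6 m from the device
    return (std::fabs(z - 6.0) < 1e-9) ? 0 : 1;
}
```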
In some embodiments, the electronic processor may receive a second input (for example, a tap or touch) selecting the object of interest, and in response to the second input, display an executable menu based on at least one of the incident identifier, the object identifier, and the metadata 250. For example, a second investigator may arrive on the incident scene 110 after the object of interest 112 has been annotated and stored. The second investigator, using another mobile electronic device, may view the incident scene 110 and touch the object of interest 112 on a display, which may pop up an executable menu allowing the second investigator to view and edit the metadata 250 or the annotation input 255 (for example, by interacting with a touch screen or a keypad, or by providing voice-to-text inputs). In such embodiments, the metadata 250 may be updated to indicate who edited the metadata 250, and when the edits were made. In some embodiments, all versions of the metadata 250 are stored in order to create an audit trail. The executable menu may trigger the display of more detailed annotations, for example, drilling down from a summary view (for example, "the vehicle contained 4 passengers") into individual data points (for example, the names and vital statistics for the passengers). As noted above, the object of interest 112, the annotation input 255, and the metadata 250 may be stored in the database 106. Accordingly, they may be made available for other users to access, using one or more computers or portable electronic devices. In such embodiments, each device synchronizes with the database 106, allowing each device to have access to the latest information regarding the incident scene 110 and the object of interest 112. In addition, the annotation input 255 and the metadata 250 may be available for viewing outside of an augmented reality display, for example, in a list format using a note-taking or other application that may or may not be tied to the object of interest 112. Embodiments of the system 100 include more than one portable electronic device 102. In such embodiments, the other portable electronic devices are able to see the annotations added according to the method 300. For example, the electronic processor 205 receives, from the image capture device, a video stream of the incident scene that includes the object of interest 112 (similar to the first video stream 402 of FIG. 4). The electronic processor 205 displays the video stream on the display 245, and, as described above, detects the object of interest in the video stream. When the object of interest 112 is detected, the electronic processor 205 retrieves, from the memory 210, the annotation input 255 and the metadata 250 based on, for example, the incident identifier and the object identifier, and displays the annotation input 255 and the metadata 250 for the object of interest. Returning to FIG. 1, the investigator may return to re-investigate the incident scene 110 after the initial investigation is complete and the real-world object of interest 112 has been removed (for example, the vehicle has been towed). Accordingly, FIG. 5 illustrates an example method 500 for displaying an image of the object of interest 112 at the incident scene 110. The method 500 is described with respect to FIGS. 6A and 6B, which illustrate the incident scene 110 and a graphical user interface displayed on the portable electronic device 102. The method 500 is described as being performed by the portable electronic device 102 and, in particular, the electronic processor 205.
However, it should be understood that in some embodiments, portions of the method 500 may be performed by other devices, including, for example, the server 104. At block 502, the electronic processor 205 receives, from the camera 240, a second video stream 602 of the incident scene 110. At block 504, the electronic processor 205 retrieves (for example, from the memory 210 or the database 106) the object of interest 112 based on the incident identifier. In some embodiments, the incident identifier is supplied by the investigator. In some embodiments, the incident identifier is determined automatically based on the location of the portable electronic device 102 as compared to the location of the incident scene 110. In some embodiments, the incident identifier is determined and received from a computer aided dispatch system. In some embodiments, when an object of interest is nearby but not in the video stream, the electronic processor 205 may display an indicator of which direction to point the portable electronic device 102 in order to bring the object of interest 112 into view. In some embodiments, the second video stream 602 is not captured. In such embodiments, the incident scene 110 may be reconstructed based on the location of the incident scene 110 (for example, as provided by the incident scene identifier) and a direction in which the portable electronic device 102 is pointed. At block 506, the electronic processor 205 superimposes or otherwise combines, on the second video stream 602, based on the object location, the object of interest 112 to create a superimposed video stream 604 of the incident scene 110. The investigator (or any other user) viewing the incident scene 110 on the portable electronic device 102 can now see the object of interest 112 as it appeared at the time of the capture of the first video stream 402. As illustrated in FIG. 6B, the portable electronic device 102 presents an augmented reality view of the incident scene 110. In some embodiments, the electronic processor 205 may use three-dimensional information captured for the object of interest 112 to display a three-dimensional model of the object of interest 112. In some embodiments, if three-dimensional information is incomplete, the model may be completed using data retrieved based on the object class or the metadata 250. For example, if the object class and the metadata 250 indicate that the object of interest 112 is a particular make, model, and year of vehicle, an automotive database might be queried to retrieve images and dimensional information for that vehicle. In some embodiments, the object of interest 112 may be displayed on a 2D map of the incident scene 110. As illustrated in FIG. 6B, the object of interest is displayed with an annotation indicator 606. The annotation indicator 606 indicates that the object of interest 112 has annotations associated with it. The annotation indicator 606 may be an icon, for example, including a title(s) and the type(s) of annotation(s) associated with the object of interest 112. At block 508, the electronic processor 205 receives a second input selecting the annotation indicator in the superimposed video stream 604. The second input may be, for example, a tap or touch on the display 245. At block 510, the electronic processor 205 retrieves the annotation input 255, the metadata 250, or both, based on the second input. For example, the electronic processor 205 retrieves, from the database 106, the annotation input 255 and the metadata 250 stored using the method 300, based on the object identifier for the object of interest 112.
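Superimposing the stored image of the object onto the live second video stream, as in block 506, amounts to copying (or alpha-blending) the saved object patch into the frame at the position derived from the stored object location. The OpenCV-based sketch below shows that compositing step only; the projection from the stored scene location to frame pixels is hidden behind a placeholder and is an assumption of this example.

```cpp
// Minimal compositing sketch for an augmented-reality overlay of a stored
// object image onto a live frame (cf. superimposed video stream 604).
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Placeholder: project the stored object location into pixel coordinates of the
// current frame (a real system would use device pose and scene geometry).
cv::Point projectToFrame(double /*lat*/, double /*lon*/) { return {320, 240}; }

void overlayObject(cv::Mat& frame, const cv::Mat& objectPatch,
                   double objLat, double objLon, double alpha = 0.8) {
    cv::Point centre = projectToFrame(objLat, objLon);
    cv::Rect dst(centre.x - objectPatch.cols / 2, centre.y - objectPatch.rows / 2,
                 objectPatch.cols, objectPatch.rows);
    dst &= cv::Rect(0, 0, frame.cols, frame.rows);        // clip to the frame
    if (dst.empty()) return;                               // object not in view

    // Blend the (clipped) patch over the live frame so the removed object
    // appears where it was at the time of the first video stream.
    cv::Mat src = objectPatch(cv::Rect(0, 0, dst.width, dst.height));
    cv::Mat roi = frame(dst);
    cv::addWeighted(src, alpha, roi, 1.0 - alpha, 0.0, roi);
}
```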
In some embodiments, the annotation input 255 may be shown directly (for example, when the annotation consists of a short note). At block 512, the electronic processor 205 displays the annotation input 255 and the metadata 250 for the object of interest 112. In some embodiments, the metadata 250 includes a current location for the object of interest 112. For example, the metadata may indicate that a weapon from a crime scene has been stored in a particular evidence locker at the police station, or the metadata may indicate that a vehicle is in the police impound lot. In some instances, an investigator may return to re-investigate the incident scene 110 after the initial investigation is complete, but before the real-world object of interest 112 has been removed. Accordingly, FIG. 7 illustrates an example method 700 for highlighting a visual change in an image of an object of interest 112 located at an incident scene 110. The method 700 is described with respect to FIGS. 8A and 8B, which illustrate the incident scene 110 and a graphical user interface displayed on the portable electronic device 102. The method 700 is described as being performed by the portable electronic device 102 and, in particular, the electronic processor 205. However, it should be understood that in some embodiments, portions of the method 700 may be performed by other devices, including, for example, the server 104. At block 702, the electronic processor 205 and the camera 240 capture a second video stream 802 of the incident scene 110. At block 704, the electronic processor 205 displays the second video stream 802 on the display 245. At block 706, the electronic processor 205 locates, in the second video stream 802, the object of interest 112, as described above with respect to the method 300. At block 708, the electronic processor 205 determines the object identifier based on the object of interest 112, as described above. At block 710, the electronic processor 205 retrieves (for example, from the memory 210, the database 106, or both) the annotation input 255 and the metadata 250 based on the object identifier. At block 712, the electronic processor 205 displays the annotation input 255 and the metadata 250 for the object of interest 112. At block 714, the electronic processor 205 identifies a visual change 804 in the object of interest 112. For example, the electronic processor 205 may use image processing techniques to compare the object of interest 112 from the first video stream 402 with the object of interest 112 in the second video stream 802 to determine if any portions of the object of interest 112 have changed since the first video stream 402 was captured (for example, by comparing the time stamp to the current time). In some embodiments, the electronic processor 205 identifies a change in state (for example, the size, shape, location, color, the presence of smoke, a door or window is now open or closed, and the like) for the object of interest 112. In some embodiments, the electronic processor 205 identifies a change as something (for example, a license plate) missing from the object of interest. At block 716, the electronic processor 205 highlights, on the object of interest 112, the change. For example, as illustrated in FIG. 8B, the change may be shaded to highlight the area. In some embodiments, the electronic processor 205 displays, on the display 245, a timeline based on the incident scene 110 and the time stamp. For example, the timeline may display the time stamp on one end, the current time on the other end, and hash marks in between noting divisions of time (for example, hours).
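Identifying and shading a visual change between the stored view of the object and its current appearance can be done with a straightforward per-pixel difference followed by thresholding and tinting, which is the spirit of the comparison and highlighting described for this part of the method. The following OpenCV sketch is one simple realization under the assumption that the two object crops are already aligned and equally sized; the patent does not mandate this particular technique.

```cpp
// Highlight regions of the object that changed between the first (stored) view
// and the second (current) view, by shading the differing pixels.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

void highlightChange(const cv::Mat& storedView, cv::Mat& currentView,
                     int diffThreshold = 40) {
    CV_Assert(storedView.size() == currentView.size() &&
              storedView.type() == currentView.type());

    cv::Mat grayA, grayB, diff, mask;
    cv::cvtColor(storedView,  grayA, cv::COLOR_BGR2GRAY);
    cv::cvtColor(currentView, grayB, cv::COLOR_BGR2GRAY);
    cv::absdiff(grayA, grayB, diff);                        // per-pixel change
    cv::threshold(diff, mask, diffThreshold, 255, cv::THRESH_BINARY);

    // Shade the changed area (cf. visual change 804 in FIG. 8B) with a red tint.
    cv::Mat tint(currentView.size(), currentView.type(), cv::Scalar(0, 0, 255));
    cv::Mat shaded;
    cv::addWeighted(currentView, 0.5, tint, 0.5, 0.0, shaded);
    shaded.copyTo(currentView, mask);                       // apply only where changed
}
```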
In such embodiments, the electronic processor may receive an input selecting a selected time (for example, a tapping of one of the hash marks) on the timeline, and update the object of interest 112, the annotation input 255, and the metadata 250 based on the selected time. For example, any updates since the selected time to those items may not be displayed, or may be greyed out to indicate that they are not applicable to the currently-selected time. In some embodiments, the electronic processor 205 may display other information related to the incident scene 110. For example, the electronic processor 205 may display an incident scene perimeter, or the location(s) of other personnel at or relative to the incident scene 110. In some cases, an investigator may want to indicate whether a line of sight exists between two points in the incident scene 110. For example, in a crime scene, it may be advisable to know whether a line of sight exists between where a suspect was located and where a victim was wounded by gunfire. Accordingly, FIG. 9 illustrates an example method 900 for annotating the incident scene 110. The method 900 is described with respect to FIG. 10, which illustrates the incident scene 110 and a graphical user interface displayed on the portable electronic device 102. The method 900 is described as being performed by the portable electronic device 102 and, in particular, the electronic processor 205. However, it should be understood that in some embodiments, portions of the method 900 may be performed by other devices, including, for example, the server 104. At block 902, the electronic processor 205 receives an input corresponding to a first location 1002 at the incident scene 110. At block 904, the electronic processor 205 receives an input corresponding to a second location 1004 at the incident scene 110. The inputs received may be, for example, taps or touches on the display 245. A line of sight is an unobstructed path between the first location 1002 and the second location 1004. At block 906, the electronic processor 205 determines a line of sight 1006 based on the first location 1002 and the second location 1004. The line of sight 1006 is determined, for example, through image analysis and range imaging. At block 908, the electronic processor determines a distance 1008 between the first location 1002 and the second location 1004. In some embodiments, the distance 1008 is determined similarly to determining the relative location for the object of interest 112, as described above with respect to the method 300. At block 910, the electronic processor 205 displays, on the display 245, the line of sight 1006 and the distance 1008. It should be noted that the systems and methods described above refer to a single incident scene 110 and a single object of interest 112. However, the systems and methods apply to multiple incident scenes and multiple objects of interest. For example, selecting an object of interest, as described above, may apply to selecting one from several objects of interest displayed. It should also be noted that the systems and methods presented herein are applicable outside of the public safety field. For example, public or private utility workers may use the systems and methods described above to identify and annotate infrastructure objects (for example, utility poles, fire hydrants, transformers, control boxes, and the like).
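The distance and line-of-sight determination of blocks 906-910 reduces, once both tapped locations have been resolved to 3D points relative to the device (for example via the range imaging discussed earlier), to a Euclidean distance plus an occlusion check along the segment joining the points. The sketch below assumes a blockage-test callback is available; that callback and the sampling step are illustrative choices, not taken from the patent.

```cpp
#include <cmath>
#include <functional>

struct Point3 { double x, y, z; };   // metres, in the device/scene frame

double distanceBetween(const Point3& a, const Point3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// A line of sight exists if no sampled point along the segment is blocked.
// `isBlocked` would be backed by range imaging / scene geometry in a real system.
bool hasLineOfSight(const Point3& a, const Point3& b,
                    const std::function<bool(const Point3&)>& isBlocked,
                    int samples = 100) {
    for (int i = 1; i < samples; ++i) {
        double t = static_cast<double>(i) / samples;
        Point3 p{a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
        if (isBlocked(p)) return false;       // obstruction found along the segment
    }
    return true;                               // unobstructed path: line of sight 1006
}

int main() {
    Point3 suspect{0.0, 0.0, 0.0}, victim{3.0, 4.0, 0.0};
    auto freeSpace = [](const Point3&) { return false; };   // nothing blocks in this toy scene
    double d = distanceBetween(suspect, victim);             // 5 m (3-4-5 triangle)
    return (hasLineOfSight(suspect, victim, freeSpace) && std::fabs(d - 5.0) < 1e-9) ? 0 : 1;
}
```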
In another example, construction workers may use the systems and methods described above to identify and annotate objects of interest at construction sites (for example, by noting items needing attention for the next shift or crew coming in). In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has," "having," "includes," "including," "contains," "containing" or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a," "has . . . a," "includes . . . a," or "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially," "essentially," "approximately," "about" or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed. It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein.
Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
43,778
11862202
DETAILED DESCRIPTION
FIG. 2A shows a data storage device in the form of a disk drive 200, in accordance with aspects of this disclosure. Disk drive 200 comprises a head 202 configured to be actuated by a head actuator (VCM) 204 over a disk 206, and a spindle motor 208 configured to rotate disk 206. Spindle motor 208 comprises a plurality of windings and is powered by a drive voltage generated in response to a host voltage 210. Disk drive 200 further comprises control circuitry 212 configured to execute method 220 depicted in the flow diagram of FIG. 2B. During operation of disk drive 200, a power interruption or failure may sometimes occur, which is termed an emergency power off (EPO) event. During an EPO event, it is important that head 202 be parked before the air bearing between head 202 and disk 206 dissipates to prevent damage to head 202 and/or disk 206, such as by unloading head 202 onto a ramp near the outer diameter of disk 206. Disk drive 200 may need to perform other functions during an EPO event, such as egressing cached write data from a volatile semiconductor memory, such as dynamic random-access memory (DRAM), to a non-volatile semiconductor memory, such as flash memory. The drive voltage needs to be carefully managed during an EPO event to support these functions. In particular, when the host voltage falls below a threshold during a power interruption or failure, it is important to prevent reverse current from flowing to the host and thereby depleting the drive voltage. As shown in FIG. 3A (block diagram 301-a), control circuitry 212 comprises an isolation field effect transistor (ISOFET) 300-a configured to prevent excessive reverse current from flowing to the host during an EPO event. In one implementation, ISOFET 300-a is an n-channel metal-oxide semiconductor (NMOS) transistor M1 configured on a power large scale integrated circuit (PLSI) 302-a. ISOFET 300-a of FIG. 3A is connected to the 5 V host voltage (Host_5 V or H5V) and the 5 V drive voltage (Drive_5 V or D5V). In particular, Host 5 V is coupled to the drain of ISOFET 300-a, and Drive_5 V is coupled to the source of ISOFET 300-a. ISOFET 300-a is connected to additional circuitry as shown in FIG. 3A to prevent excessive reverse current from flowing to the host during an EPO event. Though not shown in FIG. 3A, an additional ISOFET is provided for the 12 V host and drive voltages and is configured in the same fashion as ISOFET 300-a of FIG. 3A. FIG. 3B shows a block diagram 301-b depicting an ISOFET 300-b provided for the 12 V host and drive voltages, according to various aspects of the disclosure. In this example, the ISOFET 300-b is configured in the same or similar fashion as ISOFET 300-a described in relation to FIG. 3A. For instance, in one implementation, ISOFET 300-b is an NMOS transistor M2 configured on the PLSI 302-b, where ISOFET 300-b is connected to the 12 V host voltage (Host 12 V or H12V) and the 12 V drive voltage (Drive 12 V or D12V). Similar to FIG. 3A, Host 12 V is coupled to the drain of ISOFET 300-b and Drive 12 V is coupled to the source of ISOFET 300-b. ISOFET 300 (e.g., ISOFET 300-a or ISOFET 300-b) is in an ON state during normal operation. When excessive reverse current is flowing through ISOFET 300, ISOFET 300 is turned OFF to prevent such excessive reverse current from flowing to the host and draining the drive voltage. The determination as to whether excessive reverse current is flowing through ISOFET 300 (e.g., ISOFET 300-a) is made by comparing the difference between the host and drive voltages with a voltage turnoff threshold.
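The turn-off decision just described compares the host-minus-drive voltage difference against a programmed threshold; since the reverse current is roughly (H5V − D5V)/Rdson, the voltage comparison acts as a proxy for a current limit. The small C++ sketch below models that decision in firmware-style code for illustration only; the names and the sign convention are assumptions of this example, and in the actual circuit the decision is made by the analog comparators of FIG. 3A rather than by software.

```cpp
// Model of the ISOFET turn-off decision: request turn-off (the ISO_Off signal)
// when the host-minus-drive voltage difference is more negative than the
// programmed turnoff threshold. Voltages in volts, resistance in ohms.
struct IsoFetMonitor {
    double vThreshold;   // programmed voltage turnoff threshold, negative (e.g. -0.006 V)
    double rdsOn;        // drain-source on resistance of the ISOFET

    // Reverse current implied by the voltage difference; negative values indicate
    // current flowing back toward the host ((H5V - D5V) / Rdson).
    double reverseCurrent(double hostV, double driveV) const {
        return (hostV - driveV) / rdsOn;
    }

    // True when the reverse-current magnitude implied by the measured difference
    // exceeds the intended turnoff level, i.e. (hostV - driveV) is below vThreshold.
    bool shouldTurnOff(double hostV, double driveV) const {
        return (hostV - driveV) < vThreshold;
    }
};

int main() {
    IsoFetMonitor m{-0.006, 0.05};                 // -6 mV threshold, 50 mOhm Rdson (illustrative)
    bool normal = m.shouldTurnOff(5.00, 5.00);     // no difference: stay on
    bool epo    = m.shouldTurnOff(4.98, 5.00);     // drive 20 mV above host: turn off
    return (!normal && epo) ? 0 : 1;
}
```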
In particular, the host voltage Host_5 V is coupled to the inverting input of comparator U1via resistor R1, and the drive voltage Drive_5 V is coupled to the non-inverting input of comparator U1via resistor R2. Resistor R3is coupled between the non-inverting input of comparator U1and ground. The output of comparator U1(the difference between the host and drive voltages, or H5V-D5V) is coupled to the inverting input of comparator U4, as well as back to the inverting input of comparator U1via resistor R4. Comparator U4compares the output of comparator U1(H5V-D5V) with a voltage turnoff threshold (TurnOff_Th or Vthreshold) supplied to the non-inverting input of comparator U4. Based on this comparison, an output signal 5 V_ISO_Off is generated and supplied to the gate of ISOFET300-a. When the difference between the host and drive voltages exceeds the voltage turnoff threshold, indicating that an EPO event such as a power failure or interruption has occurred, the resultant 5 V_ISO_Off signal is operative to turn ISOFET300-aoff, thereby preventing reverse current from flowing to the host. Similarly, inFIG.3B, the host voltage Host 12 V is coupled to the inverting input of comparator U10via resistor R5, and the drive voltage Drive 12 V is coupled to the non-inverting input of comparator U10via resistor R6. Resistor R7is coupled between the non-inverting input of comparator U10and ground. The output of comparator U10(the difference between the host and drive voltages) is coupled to the inverting input of comparator U14, as well as back to the inverting input of comparator U10via resistor R8. Comparator U14compares the output of comparator U10(Host 12 V-Drive 12 V) with a voltage turnoff threshold (TurnOff_Th) supplied to the non-inverting input of comparator U14. Based on this comparison, an output signal 12 V_ISO_Off is generated and supplied to the gate of ISOFET300-b. When the difference between the host and drive voltages exceeds the voltage turnoff threshold, the resultant 12 V_ISO_Off signal is operative to turn ISOFET300-boff, thereby preventing reverse current from flowing to the host. The total resistance between the drain and the source of ISOFET300(e.g., ISOFET300-a, ISOFET300-b) is the drain-source on resistance RDS(on), or Rdson. Thus, when the drive voltage falls below the host voltage, a reverse current flows through ISOFET300(e.g., ISOFET300-a, ISOFET300-b) towards the host due to the voltage difference. ISOFET300may have a wide range of Rdson values, typically between 25 mΩ and 100 mΩ . Because Rdson is variable in this manner, and because the difference between the drive and the host voltages relative to a voltage turnoff threshold is used to determine when ISOFET300is turned off (rather than, for example, a direct measurement of the reverse current flow), the reverse current level ((H5V-D5V)/Rdson) at which ISOFET300is turned off is variable. Ideally, ISOFET300would be turned off at a fixed turnoff current level rather than varying turnoff current levels. One aspect of this disclosure, implemented by method220(FIG.2B) and performed by control circuitry212, is to provide a more consistent turnoff current level at which ISOFET300(e.g., ISOFET300-a, ISOFET300-b) is turned off. With reference toFIG.2B, in step222, a desired turnoff current value (ITurnoffCurrent) at which ISOFET300is turned off is set. 
In one non-limiting example, the turnoff current for a 5 V ISOFET (ISOFET300-a) is approximately −150 mA, and the turnoff current for a 12 V ISOFET (ISOFET300-b) is approximately −200 mA. In step224, the Rdson of the ISOFET is determined. For non-limiting purposes of illustration, the typical Rdson range for a 5 V ISOFET (e.g., ISOFET300-a) is approximately 25 mΩ to 100 mΩ, and the typical Rdson range for a 12 V ISOFET (e.g., ISOFET300-b) is approximately 15 mΩ to 70 mΩ. In step226, an appropriate voltage turnoff threshold (Vthreshold) is selected in view of the Rdson of the ISOFET and the desired turnoff current (ITurnoffCurrent), that is, Vthreshold = ITurnoffCurrent * Rdson. In one implementation, the voltage turnoff threshold is selected from among multiple pre-programmed voltage turnoff thresholds, based on where the Rdson of the ISOFET falls within the typical range of Rdson values. In one non-limiting example, the voltage turnoff threshold is selected from among four voltage turnoff thresholds that are stored in two bits of the PLSI hardware to provide programmable flexibility. For the 5 V ISOFET, where the desired turnoff current is typically about −150 mA and the Rdson range is approximately 25 mΩ to 100 mΩ, the voltage turnoff threshold selections are −4 mV, −6 mV, −9 mV and −12 mV. The interplay between the Rdson range, turnoff current and voltage turnoff thresholds for the 5 V ISOFET is illustrated in Table 1 below.

TABLE 1: 5 V ISOFET, Turnoff Current (A) versus Rdson and Voltage Turnoff Threshold

Rdson Range (Ohms):              0.025 (Min)   0.05        0.075       0.1 (Max)
Voltage Turnoff Threshold (V)
  −0.004                         −0.16         −0.08       −0.053333   −0.04
  −0.006                         −0.24         −0.12       −0.08       −0.06
  −0.009                         −0.36         −0.18       −0.12       −0.09
  −0.012                         −0.48         −0.24       −0.16       −0.12

Thus, with reference to Table 1, a turnoff current of approximately −150 mA (−0.15 A) is desired for the 5 V ISOFET. When the Rdson of the 5 V ISOFET is determined to be at or near the typical range minimum of 25 mΩ (0.025 Ω), the voltage turnoff threshold of −4 mV (−0.004 V) is selected since it yields the closest turnoff current (−160 mA) to the desired turnoff current (−150 mA). When the Rdson is determined to be at or near 50 mΩ, either the voltage turnoff threshold of −6 mV (turnoff current = −120 mA) or the voltage turnoff threshold of −9 mV (turnoff current = −180 mA) may be selected. When the Rdson is determined to be at or near 75 mΩ, the voltage turnoff threshold of −12 mV (turnoff current = −160 mA) is selected. When the Rdson is determined to be at or near the typical range maximum of 100 mΩ, the voltage turnoff threshold of −12 mV (turnoff current = −120 mA) is selected. For the 12 V ISOFET, where the desired turnoff current is typically about −200 mA and the Rdson range is approximately 15 mΩ to 70 mΩ, the voltage turnoff threshold selections are −4 mV, −7 mV, −10 mV and −15 mV. The interplay between the Rdson range, turnoff current and voltage turnoff thresholds for the 12 V ISOFET is illustrated in Table 2 below.

TABLE 2: 12 V ISOFET, Turnoff Current (A) versus Rdson and Voltage Turnoff Threshold

Rdson Range (Ohms):              0.015 (Min)   0.033       0.05        0.07 (Max)
Voltage Turnoff Threshold (V)
  −0.004                         −0.266667     −0.121212   −0.08       −0.057143
  −0.007                         −0.466667     −0.212121   −0.14       −0.1
  −0.01                          −0.666667     −0.30303    −0.2        −0.142857
  −0.015                         −1.0          −0.454545   −0.3        −0.214286

Thus, with reference to Table 2, a turnoff current of approximately −200 mA is desired for the 12 V ISOFET. When the Rdson of the 12 V ISOFET is determined to be at or near the typical range minimum of 15 mΩ, the voltage turnoff threshold of −4 mV is selected since it yields the closest turnoff current (−267 mA) to the desired turnoff current (−200 mA).
When the Rdson is determined to be at or near 33 mΩ, the voltage turnoff threshold of −7 mV (turnoff current = −212 mA) is selected. When the Rdson is determined to be at or near 50 mΩ, the voltage turnoff threshold of −10 mV (turnoff current = −200 mA) is selected. When the Rdson is determined to be at or near the typical range maximum of 70 mΩ, the voltage turnoff threshold of −15 mV (turnoff current = −214 mA) is selected. As can be seen from Tables 1 and 2, if the Rdson of the ISOFET can be roughly measured, a more consistent ISOFET turnoff current can be obtained by selecting the right voltage turnoff threshold. In this regard, according to further aspects of this disclosure, methods are provided for roughly measuring the Rdson of the ISOFET such that an appropriate voltage turnoff threshold selection can be made. In particular, according to aspects of this disclosure, both the 5 V and 12 V currents can be read through an analog-to-digital converter (ADC). Two current levels can be used for both the 5 V and 12 V lines to cancel and remove error contributions from the ADC and other circuit errors. For the 12 V line, for example, two levels of VCM current towards the OD (outer diameter; no VCM movement) can be commanded. In one implementation, two current levels of 500 mA and 1 A of VCM current towards the OD are commanded. Multiple current levels for the VCM towards the OD may already be present in a load calibration routine, for example. Thus, extra ADC commands of H12V and D12V may be added during a load calibration sequence without adding any extra time for spin up. For the 5 V line, for example, the read channel may be turned on or off, which creates a significant change in the 5 V current. In addition, voltages on both sides of the 5 V ISOFET (H5V/D5V) and the 12 V ISOFET (H12V/D12V) can be read through the ADC as well. By using H5V-D5V (or H12V-D12V), voltage ADC error is eliminated because the error is common to both readings. Once these voltage readings are obtained at both current levels, Rdson can be calculated, and the voltage turnoff threshold can then be calculated based on the desired turnoff current. FIG.4is a flow diagram of a method400for measuring Rdson values and calculating voltage turnoff thresholds, in accordance with aspects of this disclosure. In step402, command current1is set for both the 5 V and 12 V lines. In one implementation, command current1is set as read channel OFF for the 5 V line and is set as 500 mA of VCM current towards the OD for the 12 V line. In step404, for both the 5 V and 12 V lines, the host and drive ADC voltages are measured at command current1, and the 5 V and 12 V ADC currents are measured at command current1. Thus, the following values are obtained in step404:

5 V ISOFET:
H5V1 = Host 5 V voltage at command current1 (read channel OFF)
D5V1 = Drive 5 V voltage at command current1 (read channel OFF)
I1(5 V) = 5 V current at command current1 (read channel OFF)

12 V ISOFET:
H12V1 = Host 12 V voltage at command current1 (500 mA VCM current)
D12V1 = Drive 12 V voltage at command current1 (500 mA VCM current)
I1(12 V) = 12 V current at command current1 (500 mA VCM current)

In step406, command current2is set for both the 5 V and 12 V lines. In one implementation, command current2is set as read channel ON for the 5 V line and is set as 1 A of VCM current towards the OD for the 12 V line. In step408, for both the 5 V and 12 V lines, the host and drive ADC voltages are measured at command current2, and the 5 V and 12 V ADC currents are measured at command current2.
Thus, the following values are obtained in step408:

5 V ISOFET:
H5V2 = Host 5 V voltage at command current2 (read channel ON)
D5V2 = Drive 5 V voltage at command current2 (read channel ON)
I2(5 V) = 5 V current at command current2 (read channel ON)

12 V ISOFET:
H12V2 = Host 12 V voltage at command current2 (1 A VCM current)
D12V2 = Drive 12 V voltage at command current2 (1 A VCM current)
I2(12 V) = 12 V current at command current2 (1 A VCM current)

In step410, Rdson is calculated for both the 5 V and 12 V ISOFETs as follows:

5 V ISOFET:
Rdson5V = {[(H5V2 + Verror) − (D5V2 + Verror)] − [(H5V1 + Verror) − (D5V1 + Verror)]} / [(I2(5 V) + CurrentError) − (I1(5 V) + CurrentError)]
Rdson5V = [(H5V2 − D5V2) − (H5V1 − D5V1)] / [I2(5 V) − I1(5 V)]
Rdson5V = (ΔV2 − ΔV1) / (I2 − I1)

12 V ISOFET:
Rdson12V = {[(H12V2 + Verror) − (D12V2 + Verror)] − [(H12V1 + Verror) − (D12V1 + Verror)]} / [(I2(12 V) + CurrentError) − (I1(12 V) + CurrentError)]
Rdson12V = [(H12V2 − D12V2) − (H12V1 − D12V1)] / [I2(12 V) − I1(12 V)]
Rdson12V = (ΔV2 − ΔV1) / (I2 − I1)

In step412, the voltage turnoff thresholds for the 5 V and 12 V ISOFETs are calculated as follows, using a −150 mA turnoff current for the 5 V ISOFET and a −200 mA turnoff current for the 12 V ISOFET:

5 V ISOFET:
Vthreshold(5 V) = ITurnoffCurrent(5 V) * Rdson5V
Vthreshold(5 V) = −0.150 * Rdson5V

12 V ISOFET:
Vthreshold(12 V) = ITurnoffCurrent(12 V) * Rdson12V
Vthreshold(12 V) = −0.200 * Rdson12V

In step414, the voltage turnoff threshold closest to the calculated Vthreshold is selected. Thus, for the 5 V ISOFET, the closest of −4 mV, −6 mV, −9 mV and −12 mV to the calculated Vthreshold(5 V) is selected, and for the 12 V ISOFET, the closest of −4 mV, −7 mV, −10 mV and −15 mV to the calculated Vthreshold(12 V) is selected. Aspects of this disclosure advantageously provide a more consistent ISOFET turnoff current level, which in turn allows for a more accurate EPOR model in which the time at which the ISOFET will turn off is more precisely known. Without the teachings of this disclosure, which consider the Rdson value of the ISOFET in setting the voltage turnoff threshold, the ISOFET could be turned off prematurely in cases where the Rdson value is very high. Conversely, in cases where the Rdson value is very low, the ISOFET is susceptible to being turned off too late, leading to a drain of too much power from the drive supply to the host. In addition, the firmware may be implemented in various ways in order to monitor the drive temperature. In this aspect, the firmware can monitor the drive temperature via the drive temperature sensor (e.g., drive temperature sensor350inFIGS.3A and3B); if the drive temperature changes, the ISOFET Rdson value will consequently also change. In one embodiment, the threshold setting is updated to a new level based upon a mathematical formula for resistance change with temperature. In another embodiment, the calibration sequence is rerun to obtain a more exact Rdson value and to choose a new threshold in view of the temperature change. Any suitable control circuitry may be employed to implement the flow diagrams in the above examples, such as any suitable integrated circuit or circuits. For example, the control circuitry may be implemented within a read channel integrated circuit, or in a component separate from the read channel, such as a disk controller, or certain operations described above may be performed by a read channel and others by a disk controller.
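As a rough illustration of steps 402 through 414, the two-current-level measurement and threshold selection can be sketched in Python as follows. This is not the patent's firmware: the ADC readings in the example are made up, and the preset threshold values are simply the non-limiting examples given above.

# Rough sketch (not the patent's firmware) of steps 402-414: measure the
# host/drive voltages and line currents at two command-current levels,
# difference them so common ADC errors cancel, compute Rdson, and snap the
# resulting Vthreshold to the nearest pre-programmed setting.

PRESETS_5V = (-0.004, -0.006, -0.009, -0.012)     # volts
PRESETS_12V = (-0.004, -0.007, -0.010, -0.015)    # volts

def measure_rdson(h1, d1, i1, h2, d2, i2):
    """Step 410: Rdson = (dV2 - dV1) / (I2 - I1), with common offsets cancelling."""
    return ((h2 - d2) - (h1 - d1)) / (i2 - i1)

def select_threshold(rdson, desired_turnoff_current, presets):
    """Steps 412-414: Vthreshold = Iturnoff * Rdson, then pick the nearest preset."""
    vth = desired_turnoff_current * rdson
    return min(presets, key=lambda v: abs(v - vth))

# Hypothetical 12 V line readings at 500 mA and 1 A of commanded VCM current:
rdson_12v = measure_rdson(h1=12.000, d1=11.974, i1=0.80,
                          h2=12.000, d2=11.959, i2=1.30)
print(round(rdson_12v, 3))                               # ~0.03 ohm
print(select_threshold(rdson_12v, -0.200, PRESETS_12V))  # -0.007

For the made-up readings above, the computed Rdson of roughly 30 mΩ leads to the −7 mV preset, which is consistent with the Table 2 selection for an Rdson near 33 mΩ.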
In one example, the read channel and disk controller are implemented as separate integrated circuits, and in an alternative example they are fabricated into a single integrated circuit or system on a chip (SOC). In addition, the control circuitry may include a suitable preamp circuit implemented as a separate integrated circuit, integrated into the read channel or disk controller circuit, or integrated into a SOC. In one example, the control circuitry comprises a microprocessor executing instructions, the instructions being operable to cause the microprocessor to perform the flow diagrams described herein. The instructions may be stored in any computer-readable medium. In one example, they may be stored on a non-volatile semiconductor memory external to the microprocessor, or integrated with the microprocessor in a SOC. In another example, the instructions are stored on the disk and read into a volatile semiconductor memory when the disk drive is powered on. In yet another example, the control circuitry comprises suitable logic circuitry, such as state machine circuitry. A disk drive may include a magnetic disk drive, an optical disk drive, etc. In addition, while the above examples concern a disk drive, this disclosure is not limited to a disk drive and can be applied to other data storage devices and systems, such as magnetic tape drives, solid state drives, hybrid drives, etc. In addition, some embodiments may include electronic devices such as computing devices, data server devices, media content storage devices, etc. that comprise the storage media and/or control circuitry as described above. The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain method, event, or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described tasks or events may be performed in an order other than that specifically disclosed, or multiple may be combined in a single block or state. The example tasks or events may be performed in serial, in parallel, or in some other manner. Tasks or events may be added to or removed from the disclosed implementations. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments. While certain implementation examples have been described, these examples are presented by way of example only, and are not intended to limit the scope of this disclosure. Thus, nothing in the foregoing description is intended to imply that any feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the embodiments disclosed herein.
20,649
11862203
DETAILED DESCRIPTION Generally, approaches to a mass data storage library utilizing disk cartridges housing disk media are described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described herein. It will be apparent, however, that the embodiments of the invention described herein may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention described herein. Introduction Terminology References herein to “an embodiment”, “one embodiment”, and the like, are intended to mean that the particular feature, structure, or characteristic being described is included in at least one embodiment of the invention. However, instances of such phrases do not necessarily all refer to the same embodiment. The term “substantially” will be understood to describe a feature that is largely or nearly structured, configured, dimensioned, etc., but with which manufacturing tolerances and the like may in practice result in a situation in which the structure, configuration, dimension, etc. is not always or necessarily precisely as stated. For example, describing a structure as “substantially vertical” would assign that term its plain meaning, such that the sidewall is vertical for all practical purposes but may not be precisely at 90 degrees throughout. While terms such as “optimal”, “optimize”, “minimal”, “minimize”, “maximal”, “maximize”, and the like may not have certain values associated therewith, if such terms are used herein the intent is that one of ordinary skill in the art would understand such terms to include affecting a value, parameter, metric, and the like in a beneficial direction consistent with the totality of this disclosure. For example, describing a value of something as “minimal” does not require that the value actually be equal to some theoretical minimum (e.g., zero), but should be understood in a practical sense in that a corresponding goal would be to move the value in a beneficial direction toward a theoretical minimum. Context Recall that a vast magnetic disk “library” containing a significantly large number of magnetic recording disks is considered an ultimate low-cost solution to the challenges associated with archival data storage. Usage patterns of such a disk library are envisioned as similar to a tape library, including primarily sequential write operations with no standard block size (from the host perspective) along with occasional (low duty cycle) large, largely sequential, library-wide read operations. As such, random seeks are not common and enterprise-grade performance is not of primary concern. Thus, the command interface to the library and specifically to the media drives (as exposed to the host) in the library need not rely on a standard HDD command set, but rather may mimic and therefore be more compatible with the streaming commands used by tape drives in tape libraries. It follows that the capacity requirement and operational functionality of such modified drives are less strict than with a conventional HDD, which could provide more design freedom resulting in cost and reliability benefits, for example. Such an “archival interface” favors sequential writes, variable media capacity, and more efficient disk defect handling, for non-limiting examples. 
In view of the foregoing, front-loaded head wear would be expected in view of the large writes to first populate the library, especially if write-verify operations are employed such as in the context of shingled magnetic recording (SMR), in which the data tracks are written to disk sequentially in a partially overlapping manner similar to shingles on a roof. Hence, head replacement capability is desirable (i.e., swapping out drives), as well as general flexibility with respect to inserting new media (and possibly moving media among compatible libraries), adding more drives, reconfiguring robotics, and the like, such as in response to changing workloads. Data may be striped on the upper and lower surfaces of the disk media and two independent heads may alternate between write and verify, where the write verify operation is built into the functionality of the library. As such, verify is performed on data after adjacent tracks have been written, to account for the signal degradation caused by SMR recording (e.g., the drive will rewrite downstream any data chunks that fall below a specified quality threshold, such as due to a defect of the disk media which may be indicated by degradation of the signal-to-noise (SNR) ratio), and this write verify increases data reliability and lifetime in an archival data storage system by guaranteeing a minimum data recording quality. This is enabled at least in part by use of the inherent caching available in disk drives, in contrast with tape drives, such that the verify operation can wait for adjacent tracks to be written and are more stable at that point, thus leading to higher data reliability. This operational behavior may also reduce media cost by eliminating the need to scan for media defects in the factory. A disk cartridge library system is considered scalable, for example, in that the number of media, drives, and robots, i.e., the constituent components, are all readily scalable. Further, the capacity is expandable, such as by adding additional columns of cartridge storage bays to the system. The library is serviceable, for example, in that cartridges that may become dirty can be readily removed and new cartridges are easily added to the system. Also, the library can be readily shipped, built, and upgraded in a modular manner as constituent components and modules can be packaged, transported, maintained separately and independently. The library is reliable in that there is no single point of failure, as the blast radius due to a failure is effectively limited to a single medium, drive or robot, which are each readily replaceable as discussed, and therefore a failure does not extend to or encompass additional components. In the various approaches of the disk cartridge library, the conventional HDD as described in reference toFIG.1is modified so that the magnetic medium120is made removable from the rest of the HDD, and the other HDD components are modified to (1) accommodate the loading and unloading of the medium and (2) provide other functionalities needed to support the recording and retrieval of data in the library environment. One possible approach to such a data storage library utilizing magnetic recording disk media involves use of disk cartridges housing multiple disk media for use in storing and accessing data stored thereon by a read-write device. 
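The write-verify behavior described above, in which a data chunk is verified only after its shingled neighbor has been written and any chunk that falls below a specified quality threshold is rewritten downstream, can be sketched conceptually as follows. This is a conceptual sketch only: the write, read-quality, and rewrite callbacks and the numeric threshold are assumptions for illustration, not part of the described library.

# Conceptual sketch only; the callbacks and the numeric quality threshold are
# assumptions, not the library's actual interface or firmware.
MIN_QUALITY = 0.9  # hypothetical minimum acceptable read-back quality

def shingled_write_verify(chunks, write, read_quality, rewrite_downstream):
    """Write chunks sequentially; verify each chunk only after the adjacent
    (overlapping) chunk has been written, rewriting weak chunks downstream."""
    for i, chunk in enumerate(chunks):
        write(chunk)
        if i > 0 and read_quality(chunks[i - 1]) < MIN_QUALITY:
            rewrite_downstream(chunks[i - 1])
    if chunks and read_quality(chunks[-1]) < MIN_QUALITY:
        rewrite_downstream(chunks[-1])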
However, such a disk cartridge library may present challenges with respect to maintaining “clean” environment(s) necessary for successful, reliable and long-standing data operations (generally, read and write operations) involving clean magnetic recording disk media, which may need to be stored and transferred around within the library in “dirty” environment(s). The term “clean” is used herein to refer generally to a typical largely sealed magnetic-recording environment utilizing read-write transducers (or “heads”) “flying” within very small distances over a corresponding disk surface, such as inside a hard disk drive, by creating and maintaining a substantially and relatively low, controlled contaminant particle count, i.e., a “contaminant-controlled” environment. By contrast, a “dirty” environment refers to an environment in which a relatively high, relatively uncontrolled particle count is or may be present, i.e., a “less-contaminant-controlled” environment, including uncontrolled, relative to a clean contaminant-controlled environment. Because modern hard disk drives (HDDs) fly the read-write head so very close to the disk surface, the presence of surface contaminants attached to either the head slider and/or the disk can cause undesirable flying height changes which increases the likelihood of head-disk contact (or “crash”) and thus read-write (I/O) errors. Conventional HDDs operate in a clean environment, i.e., a sealed operating environment relatively free of contaminant particles, outgases, and the like, which is typically maintained after manufacturing by utilizing one or more internal filters. Breather and/or other HDD filters often are designed and configured to serve multiple functions, such as absorbing contaminants, adsorbing contaminants, controlling humidity, and the like. Magnetic Disk Cartridges for Data Storage Library A data storage library employing disk cartridges (also, “disk cartridge library”) may be configured and operated such that magnetic disk media and read-write drive (or “media drive”) interior/internal environments are maintained “clean” (“contaminant-controlled”) while modular rack components are “dirty” (“less-contaminant-controlled” relative to clean environments). With various approaches to a disk cartridge library, magnetic disk media (e.g., “hard disks”) that are typically in conventional hard disk drives are housed in disk cartridges organized in a library. Under the use of robotic automation, cartridges are retrieved and disk media are extracted from the cartridges for access by media drives for reading and writing operations. After access, media are returned to cartridges, which are returned to the library for storage. FIG.2Ais a perspective view illustrating a magnetic recording disk cartridge, according to an embodiment. Disk cartridge200comprises multiple internally-clean isolated compartments202each configured to house an internally-clean disk tray204configured to house a clean magnetic recording disk medium206. Thus, each disk cartridge200is considered and maintained internally-clean, while being externally-dirty (i.e., outer surfaces may be dirty) so that each disk cartridge200can be transported around and within a larger data storage library. Similarly, each disk tray204is considered and maintained internally-clean because it is stored in an internally-clean compartment202of the disk cartridge200, while the only surface of the disk tray204that may be dirty is a faceplate205, as it faces an environment external to the disk cartridge200. 
Faceplate205comprises a tray locking mechanism205aand a pin-receiving feature205b. According to an embodiment, tray locking mechanism205acomprises a locking bolt205a-1or structure positioned in a slot205a-2and configured to move in the slot into and out of a corresponding locking bolt receiver205a-3(“receiver205a-3”). According to an embodiment, per-disk system metadata, customer metadata, and like information may be stored in cartridge-based memory, such as onboard non-volatile flash memory. FIG.2Bis a perspective view illustrating a disk tray corresponding to the disk cartridge ofFIG.2A, according to an embodiment. Here, disk medium206is shown exploded from the corresponding disk tray204, which is shown here in isolation outside of the compartment202(FIG.2A) and disk cartridge200(FIG.2A) structure, and shown without its faceplate205for clarity. Each disk tray204may have a circular cutout204awith sloped walls which hold disk medium206by its outer edge while stored and during transfer between clean environments, according to an embodiment. As described in more detail elsewhere herein, each disk tray204along with its corresponding disk medium206is configured for automated extraction from a corresponding compartment202of a corresponding disk cartridge200, at least in part via the pin-receiving feature205b(FIG.2A). It is contemplated that disk trays204are loaded, and possibly replaced or swapped out, into a compartment202of a disk cartridge200in a clean environment, such as a cleanroom. Once loaded into a clean compartment202of a disk cartridge200, the disk cartridge200can be moved around outside of a clean environment because, as explained, the outer surfaces of the disk cartridge200and the faceplate205of each disk tray204can be dirty and still maintain operational capability within a disk cartridge library (generally, “data storage system”) as described in more detail herein throughout. Disk cartridges200may be added to or removed from a disk cartridge library via designated import-export locations in a library rack. FIG.2Cis a perspective view illustrating a high-density magnetic recording disk cartridge, according to an embodiment. High-density disk cartridge250also comprises multiple internally-clean isolated compartments252each configured to house an internally-clean disk tray254configured to house a clean magnetic recording disk medium256. Here too, each high-density disk cartridge250is considered and maintained internally-clean, while being externally-dirty so that each high-density disk cartridge250can be transported around and within a larger data storage library. Similarly, each disk tray254is considered and maintained internally-clean because it is stored in an internally-clean compartment252of the high-density disk cartridge250, while the only surface of the disk tray254that may be dirty is a faceplate255(e.g., configured similar to or the same as faceplate205of disk tray204of disk cartridge200ofFIG.2A), as it faces an environment external to the high-density disk cartridge250. Disk Tray Extractor Mechanism FIG.3Ais a perspective view illustrating a magnetic recording media drive with disk tray extractor mechanism, according to an embodiment. Media drive300(may also be referred to as “read-write device”) is configured for use in a disk cartridge library as described herein, according to an embodiment. According to an embodiment, media drive300comprises a drive bay302positioned adjacent to a disk cartridge bay304. 
The illustration ofFIG.3Ais simplified for clarity by foregoing the depiction of the common read-write operational components (e.g., a head slider housing the read-write transducer, an actuator, a spindle motor, etc.). However, the physical and operational description of a digital data storage device (DSD) such as a hard disk drive (HDD), or a modified version of an HDD such as may be employed in drive bay302, is set forth in reference toFIG.1. Media drive300comprises a disk tray extractor mechanism306(“extractor mechanism306”) at a physical interface of the drive bay302and the disk cartridge bay304, and may be considered constituent to media drive300according to illustrated embodiment. Extractor mechanism306enables transporting disk media (see, e.g., disk medium206ofFIGS.2A-2B) between storage and active use by “opening” the cartridge without contaminating the clean environments associated with the disk medium206, disk tray204(FIGS.2A-2B), disk cartridge200compartment202(FIGS.2A-2B), and media drive300. Stated otherwise, extractor mechanism306maintains isolation of the clean drive bay302from the dirty disk cartridge bay304of media drive300. Extractor mechanism306comprises a seal mechanism comprising a movable, translatable seal plate306a, a set of pins306b(e.g., “locking pins” or “alignment pins”) configured to extend through the seal plate306aand to move in a certain direction while extended through the seal plate306a, and a shroud306csurrounding seal plate306a. To remove a disk tray204, the extractor mechanism306transitions through a sequence of positions described in reference toFIGS.3B-3E. FIG.3Bis a perspective view illustrating an idle position for the extractor mechanism of the media drive ofFIG.3A, according to an embodiment. The illustrated idle position for extractor mechanism306shows the seal plate306aflush with shroud306c, with the pins306bin a recessed position. FIG.3Cis a perspective view illustrating an engaged position for the extractor mechanism of the media drive ofFIG.3A, according to an embodiment. The illustrated engaged position for extractor mechanism306now shows the pins306bin an extended position, whereby seal plate306awould be aligned with a disk tray (see, e.g., disk tray204). FIG.3Dis a perspective view illustrating a locked position for the extractor mechanism of the media drive ofFIG.3A, according to an embodiment. The illustrated locked position for extractor mechanism306now shows the pins306bin a moved and locked position, depicted here as shifted inward, thereby unlocking the tray locking mechanism205a(FIG.2A). Tray locking mechanism205ais unlocked by sliding locking bolt205a-1(FIG.2A) within the slot205a-2(FIG.2A) and out of the receiver205a-3(FIG.2A). FIG.3Eis a perspective view illustrating an extracting position for the extractor mechanism of the media drive ofFIG.3A, according to an embodiment. The illustrated extracting position for extractor mechanism306now shows the pins306band seal plate306arecessing into the clean internal area of media drive300. It is noteworthy that the seal plate306acovers the faceplate205(FIG.2A) of disk tray204, thereby physically, structurally, mechanically isolating the dirty faceplate205from the clean internal portion of the disk tray204and the other clean areas. Shown here also is that the shroud306ccovers the adjacent cartridge surfaces when the disk tray204is extracting/extracted. While different components, environments, spaces, etc. 
may be referred to herein as either “clean” or “dirty”, embodiments do not absolutely require that those referred to as “dirty” are necessarily contaminant-uncontrolled, as described embodiments can be implemented to maintain and control the level of contamination of disk media and read-write device(s) for successful, reliable, available data read-write operations. Disk Tray Extracting Sequence FIGS.4A-4Irepresent an automated disk tray extracting sequence, whereby a disk tray is extracted from a disk cartridge into a media drive/media drive bay by the media drive itself.FIG.4Ais a perspective view illustrating a magnetic recording media drive with disk tray extractor, according to an embodiment. Like media drive300(FIG.3A), media drive400depicted here comprises a media drive bay402positioned adjacent to a disk cartridge bay404. While media drive400may populate two bays of a modular storage library rack401, media drive400may be configured as an integral/integrated component including both the media drive bay402(which may itself be referred to as a “media drive402”) and the disk cartridge bay404.FIG.4Afurther depicts a disk cartridge200partially loaded into the disk cartridge bay404. According to an embodiment, a robotic machine of the disk cartridge library inserts the disk cartridge200into the disk cartridge bay404of media drive400.FIG.4Bis a perspective view illustrating the media drive ofFIG.4Awith inserted disk cartridge, according to an embodiment. Thus, depicted here is disk cartridge200fully inserted and housed in the disk cartridge bay404of media drive400, with the faceplate205side of each disk tray204facing the extractor mechanism306(FIGS.3A-3D) of media drive bay402of media drive400. Here, as withFIG.2A, disk cartridge200is depicted with an arbitrary number of five compartments202each housing a corresponding disk tray204. FIG.4Cis a perspective view illustrating the media drive ofFIG.4Awith aligned seal mechanism, according to an embodiment. According to an embodiment, media drive400is configured to raise disk cartridge200to precisely align seal plate306awith the faceplate205of a requested disk tray204-3(e.g., the third tray in this example).FIG.4Dis a perspective view illustrating the media drive ofFIG.4Awith idle tray extractor, according to an embodiment. Here, the disk cartridge200and the other disk trays204other than disk tray204-3are omitted for clarity.FIG.4Ddepicts the particular disk tray204-3now aligned to an idle disk tray extractor mechanism306(see alsoFIG.3B). FIG.4Eis a perspective view illustrating the media drive ofFIG.4Awith locking pins engaged, according to an embodiment. Here, extractor mechanism306has now engaged the disk tray204-3by extending pins306bthrough the faceplate205(see alsoFIG.2A) and the pin-receiving feature205b(FIG.2A) of disk tray204-3(see alsoFIG.3C).FIG.4Fis a perspective view illustrating the media drive ofFIG.4Awith locking pins locked, according to an embodiment. Here, pins306bare shifted (for a non-limiting example, moved inward) to secure/hold the disk tray204-3and to unlock the disk tray204-3from the disk cartridge200(FIG.4B). As described in reference toFIG.3D, disk tray204-3is unlocked from disk cartridge200via tray locking mechanism205aby sliding locking bolt205a-1(FIG.2A) within the slot205a-2(FIG.2A) and out of the receiver205a-3(FIG.2A). FIG.4Gis a perspective view illustrating the media drive ofFIG.4Awith tray being extracted, according to an embodiment. 
Here, disk tray204-3is shown being pulled by the media drive400from the dirty disk cartridge bay404into the clean media drive bay402, through the shroud306cwith the seal plate306aattached to and covering the faceplate205of disk tray204-3.FIG.4His a perspective view illustrating the media drive ofFIG.4Awith tray being extracted, according to an embodiment. Here, the disk cartridge200and the other disk trays204other than disk tray204-3are now shown, thus depicting the clean disk tray204-3“merging” with the clean environment of the media drive bay402while the other disk trays204remain isolated in their respective internally-clean compartment202(see alsoFIG.3B). Here again the seal plate306acovers the faceplate205of disk tray204-3, thereby physically, structurally, mechanically isolating the dirty faceplate205from the clean portion of the disk tray204and the other clean areas. FIG.4Iis a perspective view illustrating the media drive ofFIG.4Awith tray extracted and inside media drive, according to an embodiment. Once the disk tray204-3is fully inside the media drive bay402, media drive400mounts the disk medium206-3onto a spindle (not shown here; see, e.g., spindle124ofFIG.1) and raises the disk medium206-3above the level of the disk tray204-3where it can be spun up and accessed. While different components, environments, spaces, etc. may be referred to herein as either “clean” or “dirty”, embodiments do not absolutely require that those referred to as “dirty” are necessarily contaminant-uncontrolled, as described embodiments can be implemented to maintain and control the level of contamination of transportable and mountable/removable disk media and read-write device(s) for successful, reliable, available data read-write operations, regardless of the degree of cleanliness or dirtiness of other components, environments, spaces, etc. High-Density Cartridge Arrangement FIG.5is a perspective view illustrating a high-density magnetic recording disk cartridge and media drive system, according to an embodiment. High-density disk cartridge250(“HD disk cartridge250”) was introduced in reference toFIG.2C, and comprises a larger number of internally-clean isolated compartments252(FIG.2C) than does disk cartridge200ofFIG.2A, each for housing a disk tray254(FIG.2C) for housing a magnetic recording disk medium256(FIG.2C). Consequently, multiple media drives500can be employed in a given drive bay of a library rack401(arbitrarily, space for five shown here with one depicted largely in phantom). FIG.6is a perspective view illustrating a media robot for a disk cartridge library, according to an embodiment. Implementation and use of a HD cartridge250is at least in part enabled by the implementation and use of an internally-clean robotic machine600for extracting from HD cartridge250(FIGS.2C,5) and loading disk trays504(FIG.5), or disk media506(FIG.5) directly, into a media drive500(FIG.5). According to an embodiment, robotic machine600comprises a disk transfer mechanism similar to or the same as disk tray extractor mechanism306(see, e.g.,FIGS.3A-3E,4B-4H). Therefore and according to an embodiment, robotic machine600is configured to extract a disk tray204(FIGS.2A-2C) from a compartment252(FIG.2C) or slot of HD cartridge250into a clean compartment of robotic machine600utilizing extractor mechanism306, remove the disk medium206(FIGS.2A-2B) from the disk tray204, insert the disk medium206into a media drive500, and reinsert the disk tray204into HD cartridge250to restore the clean seal of internally-clean HD cartridge250. 
Alternatively and according to an embodiment, robotic machine600is configured to extract a disk tray204(FIGS.2A-2C) from HD cartridge250and insert the entire disk tray204with disk medium206into a media drive500, and cover the empty compartment of HD cartridge250to restore the clean seal to internally-clean HD cartridge250in the case in which the robotic machine600has to move to another bay of rack401while the disk tray204is still extracted from HD cartridge250. In any case, each HD cartridge250may remain in place indefinitely in a cartridge bay504(FIG.5) adjacent to the bank of media drives500(FIG.5) and does not necessarily need to be returned to a library storage bay as often, if at all. Furthermore, HD cartridge250can be larger and of larger capacity (more disk media) than disk cartridge200(FIGS.2A-2B) because it is not inserted into the cartridge bay304,404of media drive300,400, where vertical clearance is needed to align a particular disk tray204with the extractor mechanism306of media drive300,400(thus cartridge height is less constrained and media drive dimensions not limited by cartridge size). Still further, multiple disk media206from a given disk cartridge250may be in place and in use in a corresponding media drive500at any given time. Method of Transferring a Magnetic Recording Disk Medium FIG.7is a flow diagram illustrating a method of transferring a magnetic recording disk medium, according to an embodiment. At block702, a media drive having a clean internal environment extends a set of locking pins through a dirty faceplate of an internally-clean disk tray housed in an externally-dirty disk cartridge and supporting a clean magnetic recording disk medium, including covering the dirty faceplate with a seal plate, through which the set of locking pins extend, to physically isolate the dirty faceplate from the clean internal portion of the disk tray and corresponding compartment and the clean internal environment of the media drive. For example, media drive300,400(FIGS.3A,4A) having a clean, contaminant-controlled internal environment extends a set of locking pins306b(FIGS.3a-3E,4E-4F) through a dirty, less-contaminant-controlled faceplate205(see, e.g.,FIGS.2A,4A,4D) of an internally-clean contaminant-controlled disk tray204-3(FIGS.4C-4I) housed in an externally-dirty less-contaminant-controlled disk cartridge200(FIGS.2A,4A-C,4H-4I), and supporting a clean contaminant-controlled magnetic recording disk medium206(FIGS.2A-2B),206-3(FIGS.4D-4G),256(FIG.2C), including covering the dirty faceplate205with a seal plate306a(FIGS.3A-3E,4G-4H), through which the set of locking pins306b(FIGS.3A-3E,4E-4F) extend, to physically isolate the dirty faceplate205from the clean contaminant-controlled internal portion of the disk tray204-3and corresponding contaminant-controlled compartment202(FIGS.2A,4A-4B,4H) and the clean contaminant-controlled internal environment of the media drive300,400. In the case of HD cartridge250, according to an embodiment the robotic machine600(FIG.6) extends locking pins306bthrough the faceplate255(FIG.2C) of the disk tray254(FIG.2C) housed in the HD disk cartridge250(FIGS.2C,5). At block704, the media drive moves the set of locking pins to unlock the disk tray from the disk cartridge and to hold the disk tray. For example, media drive300,400moves the set of locking pins306binward to unlock the disk tray204-3from the disk cartridge200,250and to hold the disk tray204-3. 
In the case of HD cartridge250, according to an embodiment the robotic machine600moves the set of locking pins306binward to unlock the disk tray254from the HD disk cartridge250and to hold the disk tray254. At block706, the media drive pulls the disk tray with the disk medium from the disk cartridge completely into the clean internal environment of the media drive through a shroud covering disk cartridge surfaces around the dirty faceplate. For example, media drive300,400pulls the disk tray204-3with the disk medium206from the disk cartridge200,250completely into the clean contaminant-controlled internal environment of the media drive300,400through a shroud306c(FIGS.3A-3E,4D-4G) covering disk cartridge200,250surfaces around the dirty less-contaminant-controlled faceplate205. According to an embodiment, pulling the disk tray204-3from the disk cartridge200,250includes pulling the seal plate306acovering the faceplate205of the disk tray204-3. In the case of HD cartridge250, according to an embodiment the robotic machine600pulls the disk tray254with the disk medium256(FIG.2C) from the HD disk cartridge250, completely into the clean internal environment of the robotic machine600through a shroud306ccovering HD disk cartridge250surfaces around the dirty faceplate255. Here further, the robotic machine600may insert the disk medium256into media drive500for one or more data operations. Physical Description of Illustrative Operating Context(s) Embodiments may be implemented to use digital data storage devices (DSDs) such as hard disk drive (HDDs). Thus, in accordance with an embodiment, a plan view illustrating a conventional HDD100is shown inFIG.1to aid in describing how a conventional HDD typically operates, keeping in mind the modifications described herein. FIG.1illustrates the functional arrangement of components of the HDD100including a slider110bthat includes a magnetic read-write head110a. Collectively, slider110band head110amay be referred to as a head slider. The HDD100includes at least one head gimbal assembly (HGA)110including the head slider, a lead suspension110cattached to the head slider typically via a flexure, and a load beam110dattached to the lead suspension110c. The HDD100also includes at least one recording medium120rotatably mounted on a spindle124and a drive motor (not visible) attached to the spindle124for rotating the medium120. The read-write head110a, which may also be referred to as a transducer, includes a write element and a read element for respectively writing and reading information stored on the medium120of the HDD100. The medium120or a plurality of disk media may be affixed to the spindle124with a disk clamp128. According to embodiments described herein, disk media are not permanently affixed to a spindle (such as spindle124) but are inserted into a read-write device where they can be temporarily/removably mounted onto a spindle and held thereon for facilitating read/write operations. The HDD100further includes an arm132attached to the HGA110, a carriage134, a voice-coil motor (VCM) that includes an armature136including a voice coil140attached to the carriage134and a stator144including a voice-coil magnet (not visible). The armature136of the VCM is attached to the carriage134and is configured to move the arm132and the HGA110to access portions of the medium120, all collectively mounted on a pivot shaft148with an interposed pivot bearing assembly152. 
In the case of an HDD having multiple disks, the carriage134may be referred to as an “E-block,” or comb, because the carriage is arranged to carry a ganged array of arms that gives it the appearance of a comb. An assembly comprising a head gimbal assembly (e.g., HGA110) including a flexure to which the head slider is coupled, an actuator arm (e.g., arm132) and/or load beam to which the flexure is coupled, and an actuator (e.g., the VCM) to which the actuator arm is coupled, may be collectively referred to as a head-stack assembly (HSA). An HSA may, however, include more or fewer components than those described. For example, an HSA may refer to an assembly that further includes electrical interconnection components. Generally, an HSA is the assembly configured to move the head slider to access portions of the medium120for read and write operations. With further reference toFIG.1, electrical signals (e.g., current to the voice coil140of the VCM) comprising a write signal to and a read signal from the head110a, are transmitted by a flexible cable assembly (FCA)156(or “flex cable”, or “flexible printed circuit” (FPC)). Interconnection between the flex cable156and the head110amay include an arm-electronics (AE) module160, which may have an on-board pre-amplifier for the read signal, as well as other read-channel and write-channel electronic components. The AE module160may be attached to the carriage134as shown. The flex cable156may be coupled to an electrical-connector block164, which provides electrical communication, in some configurations, through an electrical feed-through provided by an HDD housing168. The HDD housing168(or “enclosure base” or “baseplate” or simply “base”), in conjunction with an HDD cover, provides a semi-sealed (or hermetically sealed, in some configurations) protective enclosure for the information storage components of the HDD100. Other electronic components, including a disk controller and servo electronics including a digital-signal processor (DSP), provide electrical signals to the drive motor, the voice coil140of the VCM and the head110aof the HGA110. The electrical signal provided to the drive motor enables the drive motor to spin providing a torque to the spindle124which is in turn transmitted to the medium120that is affixed to the spindle124. As a result, the medium120spins in a direction172. The spinning medium120creates a cushion of air that acts as an air-bearing on which the air-bearing surface (ABS) of the slider110brides so that the slider110bflies above the surface of the medium120without making contact with a thin magnetic-recording layer in which information is recorded. Similarly in an HDD in which a lighter-than-air gas is utilized, such as helium for a non-limiting example, the spinning medium120creates a cushion of gas that acts as a gas or fluid bearing on which the slider110brides. The electrical signal provided to the voice coil140of the VCM enables the head110aof the HGA110to access a track176on which information is recorded. Thus, the armature136of the VCM swings through an arc180, which enables the head110aof the HGA110to access various tracks on the medium120. Information is stored on the medium120in a plurality of radially nested tracks arranged in sectors on the medium120, such as sector184. Correspondingly, each track is composed of a plurality of sectored track portions (or “track sector”) such as sectored track portion188. 
Each sectored track portion188may include recorded information, and a header containing error correction code information and a servo-burst-signal pattern, such as an ABCD-servo-burst-signal pattern, which is information that identifies the track176. In accessing the track176, the read element of the head110aof the HGA110reads the servo-burst-signal pattern, which provides a position-error-signal (PES) to the servo electronics, which controls the electrical signal provided to the voice coil140of the VCM, thereby enabling the head110ato follow the track176. Upon finding the track176and identifying a particular sectored track portion188, the head110aeither reads information from the track176or writes information to the track176depending on instructions received by the disk controller from an external agent, for example, a microprocessor of a computer system. An HDD's electronic architecture comprises numerous electronic components for performing their respective functions for operation of an HDD, such as a hard disk controller (“HDC”), an interface controller, an arm electronics module, a data channel, a motor driver, a servo processor, buffer memory, etc. Two or more of such components may be combined on a single integrated circuit board referred to as a “system on a chip” (“SOC”). Several, if not all, of such electronic components are typically arranged on a printed circuit board that is coupled to the bottom side of an HDD, such as to HDD housing168. References herein to a hard disk drive, such as HDD100illustrated and described in reference toFIG.1, may encompass an information storage device that is at times referred to as a “hybrid drive”. A hybrid drive refers generally to a storage device having functionality of both a traditional HDD (see, e.g., HDD100) combined with solid-state storage device (SSD) using non-volatile memory, such as flash or other solid-state (e.g., integrated circuits) memory, which is electrically erasable and programmable. As operation, management and control of the different types of storage media typically differ, the solid-state portion of a hybrid drive may include its own corresponding controller functionality, which may be integrated into a single controller along with the HDD functionality. A hybrid drive may be architected and configured to operate and to utilize the solid-state portion in a number of ways, such as, for non-limiting examples, by using the solid-state memory as cache memory, for storing frequently-accessed data, for storing I/O intensive data, and the like. Further, a hybrid drive may be architected and configured essentially as two storage devices in a single enclosure, i.e., a traditional HDD and an SSD, with either one or multiple interfaces for host connection. Extensions and Alternatives In the foregoing description, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Therefore, various modifications and changes may be made thereto without departing from the broader spirit and scope of the embodiments. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. 
Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. In addition, in this description certain process steps may be set forth in a particular order, and alphabetic and alphanumeric labels may be used to identify certain steps. Unless specifically stated in the description, embodiments are not necessarily limited to any particular order of carrying out such steps. In particular, the labels are used merely for convenient identification of steps, and are not intended to specify or require a particular order of carrying out such steps.
39,055
11862204
DETAILED DESCRIPTION Embodiments provide a magnetic disk device and a method capable of efficiently reading data even when an off-track state occurs. In general, according to an embodiment, a magnetic disk device includes a magnetic disk including a track having a plurality of sectors, a motor configured to rotate the magnetic disk, a magnetic head, and a controller. The controller is configured to perform a first read operation of reading target sectors among the sectors of the track, with the magnetic head during a first revolution of the magnetic disk, detect an off-track state of the magnetic head during the first revolution of the magnetic disk, perform a first error correction with respect to data read from the target sectors during the first read operation, and perform a second read operation of selectively reading a part of the target sectors for which the off-track state has been detected or the first error correction is unsuccessful, with the magnetic head during a second revolution of the magnetic disk. Hereinafter, the magnetic disk device and method according to embodiments will be described in detail with reference to the accompanying drawings. It is noted that the disclosure is not limited to these embodiments. First Embodiment FIG.1is a schematic diagram illustrating an example of a configuration of a magnetic disk device1according to a first embodiment. The magnetic disk device1is connected to a host2. The magnetic disk device1can receive access commands such as a write command and a read command from the host2. The magnetic disk device1includes a magnetic disk11having a magnetic layer formed on a surface thereof. The magnetic disk device1writes data to the magnetic disk11or reads data from the magnetic disk11according to an access command. The writing and reading of the data are performed through a magnetic head22. Specifically, in addition to the magnetic disk11, the magnetic disk device1includes a spindle motor12, a ramp13, an actuator arm15, a voice coil motor (VCM)16, a motor driver integrated circuit (IC)21, the magnetic head22, a hard disk controller (HDC)23, a head IC24, a read write channel (RWC)25, a processor26, a RAM27, a flash read only memory (FROM)28, and a buffer memory29. The magnetic disk11is rotated at a predetermined rotation speed by the spindle motor12mounted coaxially. The spindle motor12is driven by the motor driver IC21. The processor26controls rotation of the spindle motor12and rotation of the VCM16via the motor driver IC21. The magnetic head22performs the writing or reading of the data to or from the magnetic disk11by a write head22wand a read head22rincluded therein. Further, the magnetic head22is attached to the tip of the actuator arm15. The magnetic head22is moved in the radial direction of the magnetic disk11by the VCM16driven by the motor driver IC21. When the rotation of the magnetic disk11is stopped, the magnetic head22is moved onto the ramp13. The ramp13is configured to retain the magnetic head22at a position separated from the magnetic disk11. During the reading, the head IC24amplifies and outputs a signal read from the magnetic disk11by the magnetic head22and supplies the signal to the RWC25. Further, during the writing, the head IC24amplifies a signal corresponding to data supplied from the RWC25and supplies the signal to the magnetic head22. The RWC25performs modulation including error correction coding on the data supplied from the HDC23and supplies the modulated data to the head IC24.
Further, the RWC25performs demodulation including error correction on data read from the magnetic disk11and supplied from the head IC24and outputs the demodulated data as digital data to the HDC23. The HDC23controls the transmission/reception of data to/from the host2, controls the buffer memory29, and the like via an I/F bus. In addition, the HDC23also includes a register31. Information stored in the register31will be described below. The buffer memory29is used as a buffer for data transmitted to and received from the host2. The buffer memory29is configured with, for example, a volatile memory capable of a high-speed operation. The type of the memory constituting the buffer memory29is not limited to a specific type. The buffer memory29may be configured with, for example, a dynamic random access memory (DRAM), a static random access memory (SRAM), or a combination thereof. The processor26is, for example, a central processing unit (CPU). The RAM27, the flash read only memory (FROM)28, and the buffer memory29are connected to the processor26. The FROM28stores firmware (e.g., program data), various operating parameters, and the like. It is noted that the firmware may be stored in the magnetic disk11. The RAM27is configured with, for example, a DRAM, an SRAM, or a combination thereof. The RAM27is used as an area where firmware is loaded by the processor26or as an area where various management parameters are cached or buffered. The processor26performs overall control of the magnetic disk device1according to the firmware stored in the FROM28. For example, the processor26loads the firmware from the FROM28to the RAM27and executes control of the motor driver IC21, the head IC24, the RWC25, the HDC23, and the like according to the loaded firmware. It is noted that the configuration including the RWC25, the processor26, and the HDC23may also be regarded as a controller30. In addition to these elements, the controller30may include other elements (e.g., the RAM27, the FROM28, the buffer memory29, the RWC25, and the like). Further, the firmware program may be stored in the magnetic disk11. Further, some or all of functions of the controller30may be implemented by a hardware circuit such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). FIG.2is a schematic diagram illustrating an example of a configuration of the magnetic disk11in the first embodiment. In the manufacturing process, servo information is written to the magnetic disk11, for example, by a servo writer or by self-servo writing (SSW). InFIG.2, the servo areas42disposed radially are illustrated as an example arrangement of the servo areas in which the servo information is written. The servo information includes sector/cylinder information, burst patterns, post codes, and the like. The sector/cylinder information may include a servo address (e.g., servo sector address) in the circumferential direction and a track position (e.g., track number) set in the radial direction of the magnetic disk11. The track number obtained from the sector/cylinder information is an integer value representing the position of the track, and the burst pattern represents an offset amount after the decimal point with respect to the position represented by the track number. The post code is a correction amount for correcting distortion of a shape of the track set on the basis of the burst pattern (more accurately, a combination of the sector/cylinder information and the burst pattern).
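As a concrete illustration of how the servo fields described above combine, the radial position of the magnetic head can be regarded as the sum of the integer track number, the fractional offset from the burst pattern, and the post-code correction. The following minimal sketch only illustrates that arithmetic; the function and parameter names are assumptions, not part of the firmware described in this disclosure.

```python
def radial_position(track_number: int, burst_offset: float, post_code: float) -> float:
    """Radial head position in track units, combining the servo fields described above.

    track_number : integer track position from the sector/cylinder information
    burst_offset : fractional offset (after the decimal point) from the burst pattern
    post_code    : correction amount for distortion of the track shape (post code)
    """
    # The burst pattern refines the integer track number, and the post code corrects
    # for distortion of the track shape set by the sector/cylinder information and
    # the burst pattern.
    return track_number + burst_offset + post_code


# Example: head over track 1200, offset by 0.18 of a track, with a -0.03 post-code correction.
print(radial_position(1200, 0.18, -0.03))  # -> 1200.15
```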
Data areas43in which data (i.e., data received from the host) can be written are provided between the servo areas42. One servo area42and one data area43following the servo area42constitute a servo sector44. A plurality of concentric tracks41are set in the radial direction of the magnetic disk11. A plurality of data sectors are provided in the data areas43along each track41. The writing and reading of the data are executed with respect to each data sector by the magnetic head22. The storage capacity of each data sector is freely set, but is basically uniform in the magnetic disk11. Each data sector may be located across a plurality of data areas43on one track41. Alternatively, only one data sector may be provided in each data area43on one track41. Alternatively, the plurality of data sectors may be provided in each data area43on one track41. Hereinafter, unless otherwise specified, a sector denotes a data sector. FIG.3is a schematic diagram illustrating an example of a configuration of one track41in the first embodiment. InFIG.3, the servo area42is not illustrated. Each sector is identified by a sector number. The sector of which sector number is x is referred to as sector #x. In the example illustrated inFIG.3, the track41has 11 sectors from sector #0 to sector #10. The data written to each sector includes an error correction code (ECC). The RWC25can execute sector-by-sector error correction with respect to data read from one sector using the error correction code included therein. The error correction code included in the data written to each sector is referred to as a first ECC. Further, the error correction using the first ECC is referred to as a first correction. The method of the first ECC is not limited to a specific method. In one example, as the first ECC, a low-density parity-check code is applied. The writing of data is executed, for example, as follows. First, data including the first ECC is written to each of the sector #0 to the sector #9 in the order of the sector numbers. Then, another error correction code is written to the sector #10 in the end of the track41. It is noted that the error correction code written to the sector #10 also includes the first ECC. The error correction code written to the sector #10 is used to protect the data (data #0 to data #9) written to the sector #0 to the sector #9 from occurrence of an error. That is, the error correction code written to the sector #10 is used to protect the data in units of the track41. The error correction code written to the sector #10 is referred to as a second ECC. Further, the error correction using the second ECC is referred to as a second correction. During the reading of the data from the track41protected with the second ECC, when there is a sector in which the first correction fails, the RWC25can acquire the expected data of that sector through the second correction. The method of the second ECC is not limited to a specific method. In one example, the second ECC is generated by bit-wise operation of XOR for data #0 to data #9. It is noted that, in the first embodiment, implementation of the second correction function is optional. For example, inFIG.3, the sector #10 in the end of the track41may store typical data in the same manner as other sectors. In the read operation, the controller30positions the magnetic head22on the track41(denoted as the target track41) provided with the sector (denoted as the target sector) of the read target, and then reads the data from the target sector.
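Before continuing with the read operation, the track-level protection just described can be made concrete with a short sketch of an XOR-style second ECC: generation over data #0 to data #9 and recovery of a single sector whose expected data is missing. This is an illustration of the principle only, under assumed names and a simplified fixed sector size; it is not the device's actual implementation.

```python
from functools import reduce

def make_second_ecc(sector_data: list[bytes]) -> bytes:
    """Bit-wise XOR of data #0 .. data #9, to be written to the last sector of the track."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), sector_data)

def recover_missing_sector(known: dict[int, bytes], second_ecc: bytes, missing: int) -> bytes:
    """Rebuild the one sector whose expected data was not obtained.

    known maps sector number -> error-free data for every protected sector except `missing`.
    """
    acc = bytearray(second_ecc)
    for number, data in known.items():
        if number != missing:
            for i, byte in enumerate(data):
                acc[i] ^= byte
    return bytes(acc)

# Example with 4-byte sectors: lose the data of sector #3 and rebuild it from the parity.
data = [bytes([n, n + 1, n + 2, n + 3]) for n in range(10)]
ecc = make_second_ecc(data)
known = {n: d for n, d in enumerate(data) if n != 3}
assert recover_missing_sector(known, ecc, missing=3) == data[3]
```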
However, in some cases, during the execution of the positioning control of the magnetic head22, the off-track state, that is, deviation of the magnetic head22from the target track41may occur due to various factors such as an impact or vibration applied to the magnetic disk device1from the outside. In some cases, if the off-track state occurs when the magnetic head22passes through the target sector, the data written to the track41adjacent to the target track41or old data remaining between the target track41and the track41adjacent to the target track41may be read. That is, in some cases, data different from the data written to the target sector may be read. As described above, the controller30(more accurately, the RWC25) can correct the error in the data written to each sector through the first correction. However, due to the occurrence of the off-track state, when the data written to the track41adjacent to the target track41or the old data remaining between the target track41and the track41adjacent to the target track41is read, the first correction may become successful (even though incorrect data is read). Therefore, in a case where the off-track state occurs when the magnetic head22passes the target sector, the data after the first correction is not always expected data even when the first correction on the data read from the target sector is successful. Therefore, the data obtained when the off-track state occurs cannot be used even when the first correction is successful. Herein, a technique to be compared with the first embodiment will be described. The technique to be compared with the first embodiment is referred to as a comparative example. In the comparative example, considered is a case where the plurality of sectors on one track are read targets, in other words, a case where the plurality of target sectors exist on the target track. According to the comparative example, the controller positions the magnetic head on the target track and then executes the read operation with respect to the plurality of target sectors in the order of the sector numbers. The controller executes the first correction with respect to data read from each target sector during the read operation. Typically, one rotation of a magnetic disk enables the read operation on all of the plurality of target sectors. However, when the off-track state occurs during the read operation on any of the plurality of target sectors, the controller executes the read operation again on all of the plurality of target sectors when the magnetic disk further makes one rotation. That is, even when the plurality of target sectors include a sector with respect to which the read operation is executed during the on-track state of the magnetic head and the first correction for the data read from the sector is successful, the read operation is executed again on all of the plurality of target sectors. Therefore, according to the comparative example, when the off-track state occurs, a large amount of time for acquiring the expected data from all the target sectors and a large amount of calculation resources for the first correction are required. On the other hand, according to the first embodiment, the controller30operates as follows. That is, first, the controller30executes the read operation on each of all the target sectors when the magnetic disk11makes one rotation. The read operation is followed by the first correction. 
When the off-track state occurs during the read operation on any of the target sectors, the controller30executes the read operation again selectively on one or more of the target sectors with respect to which the read operation is performed during the off-track state of the magnetic head22. This second or subsequent read operation executed selectively on the one or more target sectors is also referred to as a read retry operation. The controller30does not perform the read retry operation on the target sector with respect to which the read operation is performed during the state where the magnetic head22is not in the off-track state (that is, the on-track state of the magnetic head22) and the first correction for the read data is successful. Therefore, according to the first embodiment, when the off-track state occurs, an increase in time for acquiring the expected data from all target sectors and an increase in calculation resources for the error correction can be reduced compared with the comparative example. That is, the data can be read efficiently when the off-track state occurs. In the first embodiment, the controller30stores information indicating a sector in which the read operation has been performed during the off-track state of the magnetic head22. Then, the controller30selects the target sector of the read retry operation on the basis of the information. This information is referred to as off-track sector information311in the present disclosure. The off-track sector information311is stored in, for example, the register31as illustrated inFIG.4. Hereinafter, a sector for which the read operation has been performed during the off-track state of the magnetic head22is referred to as an off-track sector. Similarly, a sector for which the read operation has been performed during the on-track state of the magnetic head22is referred to as an on-track sector. Subsequently, the operation of the magnetic disk device1according to the first embodiment will be described. FIG.5is a flowchart illustrating an example of a read procedure for the target track41by the magnetic disk device1according to the first embodiment. First, the controller30executes a read operation on all target sectors in the target track41in the order of sector numbers during one rotation of the magnetic disk11(S101). FIG.6is a flowchart illustrating an example of a procedure of the read operation on one target sector by the magnetic disk device1according to the first embodiment. First, the RWC25reads data from the target sector (S201). Then, the RWC25executes the first correction on the data read from the target sector (S202). During the read operation of the target sectors, whether the off-track state occurs is periodically determined at very short time intervals. In one example, each time the magnetic head22passes through the servo area42, the controller30acquires a deviation amount of the current radial position of the magnetic head22from the center of the target track41on the basis of the servo information read from the servo area42. When the deviation amount is less than a predetermined value, the controller30determines that the magnetic head22is in the on-track state. When the deviation amount exceeds a predetermined value, the controller30determines that the magnetic head22is in the off-track state, that is, the off-track state occurs. It is noted that the method for determining whether the off-track state occurs is not limited to the above-mentioned example.
Further, the setting of the timing for determining whether the off-track state occurs is not limited to the above-mentioned example. Further, any component in the controller30can execute the determination as to whether the off-track state occurs. For example, whether the off-track state occurs may be determined by the RWC25or may be determined by the processor26. Hereinafter, determining that the off-track state occurs may be described as detecting the off-track state. The controller30executes the determination of whether the off-track state occurs, and when the off-track state is detected (S203: Yes), the controller30adds the sector number of the target sector to the off-track sector information311(S204). Then, when the first correction started in S202is not completed yet, the RWC25cancels the execution of the first correction (S205). The cancellation of the first correction denotes that the first correction is not performed when the first correction has not started yet, and the first correction is terminated when the first correction has been started (but not completed). After the process of S205, the read operation on the target sector ends. When the first correction is completed without detecting the off-track state during the read operation (S203: No), the controller30deletes the sector number of the target sector from the off-track sector information311(S206). When the sector number of the target sector is not included in the off-track sector information311, the process of S206is skipped. After the process of S206, the read operation on the target sector is completed. The series of operations illustrated inFIG.6are executed on each target sector in S101ofFIG.5. The description now returns toFIG.5. When the read operation on all the target sectors in the target track41is completed, the controller30sets each of on-track target sectors for which the first correction is successful as a non-target sector (S102). That is, the controller30excludes the sector with respect to which the read operation is executed during the on-track state of the magnetic head22and from which error-corrected data is acquired from the target of the read retry operation. The controller30specifies the off-track sector by, for example, referring to the off-track sector information311. Subsequently, the controller30determines whether the target sector remains (S103). When the target sector remains (S103: Yes), the control proceeds to S101, and the read operation, that is, the read retry operation is executed again for all remaining target sectors. When no target sector remains (S103: No), the reading from the target track41is completed. FIG.7is a diagram illustrating an example of a timing of each process during reading of the target track41of the magnetic disk device1according to the first embodiment. InFIG.7, the horizontal axis indicates the elapsed time. The sector in which the magnetic head22is located is drawn as a position of the magnetic head22. Further,FIG.7illustrates sectors for which the first correction is being executed. Further, for convenience of illustration, it is assumed that the target track41includes sector #0 to sector #6. Further,FIG.7illustrates a waveform of the read gate. The read gate is a signal indicating a timing of instructing the RWC25to capture a signal from the read head22r. The read gate is generated in the controller30(for example, the processor26or the HDC23) with reference to the timing at which the servo information is read and is supplied to the RWC25.
The RWC25captures the signal during a period indicated by the read gate. In the example illustrated inFIG.7, in relation to the waveform of the read gate, an H level indicates a period during which the signal capturing is performed, and an L level indicates a period during which the signal capturing is prohibited. In the example illustrated inFIG.7, it is assumed that the sector #0 to the sector #4 are the initial read targets. Therefore, during the period in which the magnetic disk11makes first one rotation, in the period from the timing t0 when the magnetic head22passes through the beginning of the sector #0 to the timing t1 when the magnetic head22passes through the end of the sector #4, the controller30maintains the read gate at the H level, and the signal output from the read head22rduring this period is captured by the RWC25as read data. The RWC25executes demodulation including the first correction with respect to the data read from the sector #0 to the sector #4. Herein, in the example illustrated inFIG.7, the off-track state is detected when the magnetic head22passes through the sector #0 and when the magnetic head22passes through the sector #1. Therefore, the controller30adds the sector numbers of the sector #0 and the sector #1 to the off-track sector information311through the process of S204illustrated inFIG.6. Further, the controller30cancels the first correction for the data read from the sector #0 and the first correction for the data read from the sector #1 through the process of S205illustrated inFIG.6. When the magnetic disk11makes first one rotation, the first correction is successful for the data read from the sector #2 to the sector #4. That is, the controller30succeeds in acquiring the expected data from the sector #2 to the sector #4 during the first one rotation of the magnetic disk11. After that, when the magnetic disk11makes another rotation, the controller30executes the read retry operation on the sector #0 and the sector #1 indicated by the off-track sector information311. The controller30does not execute the read retry operation on the sector #2 to the sector #4 from which the expected data have been acquired. Specifically, during the period from the timing t2 when the magnetic head22passes through the beginning of the sector #0 to the timing t3 when the magnetic head22passes through the end of the sector #1, the controller30maintains the read gate at the H level and allows the read gate to be changed from the H level to the L level at the timing t3. The signal output from the read head22rduring the period from the timing t2 to the timing t3 is captured by the RWC25and then demodulated. No off-track state is detected during the period from the timing t2 to the timing t3, and the first correction is successful for the data read during this period, that is, the data read from the sector #0 and the data read from the sector #1. That is, the controller30succeeds in acquiring the expected data from the sector #0 and the sector #1. In this way, the controller30executes the read operation and the first correction for each of target sectors when the magnetic disk11makes one rotation. After that, the controller30executes the read retry operation selectively on each of one or more of the target sectors that include off-track sectors and on-track sectors for which the first correction is unsuccessful. 
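The per-track flow just summarized (FIGS. 5 to 7) can be sketched in a few lines. The sketch below is a simplified illustration under assumed names; the callbacks stand in for the RWC/servo behavior described above and are not part of the disclosed firmware.

```python
def read_track(target_sectors, read_sector, is_off_track, first_correction):
    """Minimal sketch of the first-embodiment read flow (FIGS. 5 and 6).

    read_sector(n)        -> raw data read from sector n during the current revolution
    is_off_track(n)       -> True if the off-track state was detected while passing sector n
    first_correction(raw) -> corrected data, or None if the first correction fails
    """
    corrected = {}
    remaining = set(target_sectors)
    off_track_sector_info = set()          # analogous to the register-held off-track sector information

    while remaining:                       # one iteration per revolution of the disk
        for n in sorted(remaining):        # read the remaining targets in sector-number order
            raw = read_sector(n)
            if is_off_track(n):
                off_track_sector_info.add(n)   # S204: record the off-track sector
                continue                       # S205: do not perform / cancel the first correction
            off_track_sector_info.discard(n)   # S206
            data = first_correction(raw)
            if data is not None:
                corrected[n] = data
        # S102/S103: on-track sectors with a successful first correction leave the target set;
        # off-track sectors and failed on-track sectors are read again on the next revolution.
        remaining -= set(corrected)
    return corrected
```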
Therefore, as compared with the comparative example, an increase in the time for acquiring the data and an increase in calculation resources for the error correction can be reduced. That is, the data can be read efficiently when the off-track state occurs. Further, according to the first embodiment, the controller30does not perform, or terminates, the first correction with respect to a target sector when the off-track state occurs during the read operation on the target sector. Therefore, when the off-track state occurs, the calculation resource for the error correction can be further reduced. Second Embodiment In a second embodiment, a magnetic disk device in which a second correction function is implemented will be described. The magnetic disk device according to the second embodiment is referred to as a magnetic disk device1a. It is noted that, in the second embodiment, the description of the same configurations, functions, and operations as those of the first embodiment will be omitted or made brief. FIG.8is a flowchart illustrating an example of a procedure of the read operation on target sectors by the magnetic disk device1aaccording to the second embodiment. First, the controller30sets all sectors in the target track41as the target sectors when the magnetic disk11makes one rotation (S301). Then, the controller30executes a read operation on the target sectors in the target track41in the order of sector numbers (S302). In S302, a series of operations illustrated inFIGS.5and6are executed on each target sector. Further, S302includes reading of the second ECC written to the sector in the end of the target track41. Subsequently, the controller30executes the second correction using the data read through the process of S302(S303). Then, the controller30determines whether the first correction or the second correction is successful in all the target sectors (S304). For example, when there are one or more off-track sectors and there are one or more on-track sectors for which the first correction fails, the expected data cannot be acquired even through the second correction for the on-track sector for which the first correction fails. When there is a target sector for which both the first correction and the second correction fail (S304: No), the controller30sets each on-track sector as a non-target sector (S305). That is, the controller30excludes each on-track sector from the target of the read retry operation. Then, the control proceeds to S302, and the read operation, that is, the read retry operation, is executed again with respect to all remaining target sectors. When the first correction or the second correction is successful in all the target sectors (S304: Yes), the controller30removes all sector numbers from the off-track sector information311(S306). Then, the reading from the target track41is completed. It is noted that, when no sector number is included in the off-track sector information311, the process of S306is skipped. FIG.9is a diagram illustrating an example of a timing of each process during reading of the target track41of the magnetic disk device1aaccording to the second embodiment. Similarly toFIG.7,FIG.9illustrates the position of the magnetic head22and the waveform of the read gate. In addition,FIG.9illustrates the type of error correction executed and the result of the error correction. Further, it is assumed that the target track41includes sector #0 to sector #6 and the second ECC is written to the sector #6.
It is noted that, for ease of understanding, the “second ECC” is drawn as a sector representing the sector #6. In the example illustrated inFIG.9, all sectors in the target track41, that is, the sector #0 to the sector #6 are set as initial read targets. Therefore, when the magnetic disk11makes first one rotation, during the period from the timing t10 when the magnetic head22passes through the beginning of the sector #0 to the timing t11 when the magnetic head22passes through the end of the sector #6, the controller30maintains the read gate at the H level, and the signal output from the read head22rduring this period is captured by the RWC25as read data and then demodulated. The RWC25performs demodulation including the first correction with respect to the data read from the sector #0 to the sector #6. Herein, in the example illustrated inFIG.9, the off-track state occurs when the magnetic head22passes through the sector #0 and when the magnetic head22passes through the sector #1. Therefore, the controller30adds the sector numbers of the sector #0 and the sector #1 to the off-track sector information311through the process of S204illustrated inFIG.6. Further, the controller30cancels the first correction for the data read from the sector #0 and the first correction for the data read from the sector #1 through the process of S205illustrated inFIG.6. Further, the controller30succeeds in the first correction for the data read from the sector #2 to the sector #4 and the sector #6, and fails in the first correction for the data read from the sector #5. That is, the controller30succeeds in acquiring the expected data from the sector #2 to the sector #4 through the first one rotation of the magnetic disk11, and succeeds in acquiring the second ECC from the sector #6. After the magnetic disk11makes one rotation, the controller30executes the second correction by the process of S303illustrated inFIG.8. In the second correction, the error correction is executed on the data read from the sector #5 for which the first correction failed. However, in the example illustrated inFIG.9, there are sectors (i.e., the sector #0 and the sector #1) for which the first correction is cancelled. Since the data that can be used for the second correction is not obtained from these sectors, the expected data cannot be obtained from the sector #5 through the second correction. Therefore, the controller30fails in the second correction for the data read from the sector #5. Subsequently, the controller30executes the read retry operation with respect to the sector #0 and the sector #1 indicated by the off-track sector information311. The read retry operation is not executed on the sector #2 to the sector #6 for which the read operation has been executed in the on-track state of the magnetic head22. During the period from the timing t12 when the magnetic head22passes through the beginning of the sector #0 to the timing t13 when the magnetic head22passes through the end of the sector #1, the controller30maintains the read gate at the H level. At the timing t13, the read gate is changed from the H level to the L level. During the period from the timing t12 to the timing t13, the signal output from the read head22ris captured by the RWC25and then demodulated. During the period from timing t12 to timing t13, no off-track state is detected, and the first correction for the data read during this period, that is, the data read from the sector #0 and the data read from the sector #1 is successful. 
That is, the controller30succeeds in acquiring the expected data from the sector #0 to the sector #1. When the controller30acquires the data for which the first correction is performed from the sector #0 to the sector #1, the data of the sector #0 to the sector #6 are provided. As a result, the second correction can be performed. The controller30performs the second correction on the data read from the sector #5 and succeeds in acquiring the expected data of the sector #5. As a result, the controller30completes the acquisition of the expected data from all sectors in the track41. As described above, also in the second embodiment, similarly to the first embodiment, the execution of the read retry operation on the sector for which the read operation has been executed during the on-track state of the magnetic head22and the first correction is successful is omitted. Therefore, as compared with the comparative example, an increase in the time for acquiring the data and an increase in calculation resources for the error correction can be reduced. That is, the data can be read efficiently even when the off-track state occurs. FIG.10is a diagram illustrating another example of a timing of each process during reading of the target track41of the magnetic disk device1aaccording to the second embodiment. Similarly toFIG.7,FIG.10illustrates the position of the magnetic head22and the waveform of the read gate. In addition,FIG.10illustrates the type of error correction executed and the result of the error correction. Further, it is assumed that the sector #0 to the sector #6 are included in one track41and the second ECC is written to the sector #6. In the example illustrated inFIG.10, similarly to the example illustrated inFIG.9, all the sectors in the target track41, that is, the sector #0 to the sector #6 are set as initial read targets. Therefore, when the magnetic disk11makes first one rotation, during the period from the timing t20 when the magnetic head22passes through the beginning of the sector #0 to the timing t21 when the magnetic head22passes through the end of the sector #6, the controller30maintains the read gate at the H level, and the signal output from the read head22rduring this period is captured by the RWC25as read data and then demodulated. The RWC25executes demodulation including the first correction with respect to the data read from the sector #0 to the sector #6. Herein, in the example illustrated inFIG.10, the off-track state occurs when the magnetic head22passes through the sector #1. Therefore, the controller30adds the sector number of the sector #1 to the off-track sector information311. Further, the controller30cancels the first correction for the data read from the sector #1. Further, the controller30succeeds in the first correction for the data read from the sector #0 and the sector #3 to the sector #6. That is, the controller30succeeds in acquiring the expected data from the sector #0 and the sector #3 to the sector #5 and succeeds in acquiring the second ECC from the sector #6 by the first one rotation of the magnetic disk11. According to the second ECC, when the number of sectors from which data including no error has not been obtained is only one, this data can be obtained through the second correction regardless of whether the cause of not obtaining this data is the off-track state. The controller30executes the second correction through the process of S303illustrated inFIG.8after the magnetic disk11makes another rotation after the first one rotation. 
In the example illustrated inFIG.10, the only sector of the track41from which data including no error has not been obtained is the sector #1, which is an off-track sector. The controller30acquires the data written to the sector #1 without error through the second correction. As described above, according to the second embodiment, when the number of off-track sectors is one, and there is no target sector for which the read operation has been executed in the on-track state of the magnetic head22and the first correction failed, the controller30acquires the expected data of the off-track sector through the second correction. Therefore, it is possible to reduce the execution frequency of the read retry operation when the off-track state occurs. It is noted that, when the number of off-track sectors is two or more, the controller30cannot acquire the expected data of all the off-track sectors through the second correction. In such a case, the controller30executes the read retry operation with respect to the off-track sectors. Further, even when the number of off-track sectors is one, if there is a sector for which the read operation has been executed in the on-track state of the magnetic head22and the first correction failed, the controller30cannot acquire the expected data of all the off-track sectors through the second correction. Even in such a case, the controller30executes the read retry operation on the off-track sector. As described above, according to the second embodiment, when all sectors of the target track41are the target sectors, the controller30executes the read operation on all the target sectors when the magnetic disk11makes one rotation and executes the second correction after the read operation on all the target sectors. Then, when the off-track sector exists and the data written in the off-track sector can be acquired through the second correction, the controller30does not execute the read retry operation on the off-track sector. Therefore, even when the off-track state occurs, the data can be read efficiently. It is noted that, according to the above-mentioned examples, the controller30executes the read operation with respect to all target sectors of the target track41in the order of sector numbers (S302), and then executes the second correction using the data read through the process of S302(S303). When it is clear from the result of executing the process of S302that the second correction will fail, the controller30may cancel the second correction, that is, S303, and execute the process of S305. As described above, as a result of the process of S302, when the number of off-track sectors is two or more, or even when the number of off-track sectors is one and there is a sector for which the read operation has been executed in the on-track state of the magnetic head22and the first correction fails, the second correction will fail. In such a case, the controller30may cancel the execution of the second correction and may proceed to S305. In the example illustrated inFIG.9, during the period (timing t10 to t11) in which the magnetic disk11makes first one rotation, the number of off-track sectors exceeds one at the time point when the magnetic head22passes through the sector #1. Therefore, it becomes clear at this time point that the second correction will fail. The controller30may detect that the number of off-track sectors exceeds one, and thus, may cancel the first execution of the second correction.
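The early-cancellation criterion described above reduces to a simple count after the first revolution: the XOR-style second correction can rebuild at most one sector, so it is worth attempting only when no more than one sector lacks error-free data. A minimal sketch follows; the function name and inputs are assumptions for illustration only.

```python
def second_correction_can_succeed(off_track_sectors: set[int],
                                  on_track_first_correction_failures: set[int]) -> bool:
    """True only when at most one sector lacks error-free data after the first revolution."""
    unrecovered = off_track_sectors | on_track_first_correction_failures
    return len(unrecovered) <= 1

# In a case like FIG. 9, sectors #0 and #1 are off-track, so the second correction is known
# to fail and may be cancelled; the read retry operation targets #0 and #1 instead.
print(second_correction_can_succeed({0, 1}, set()))   # False
# In a case like FIG. 10, only sector #1 is off-track, so the second correction can recover it.
print(second_correction_can_succeed({1}, set()))      # True
```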
It is noted that, in the second embodiment, the interleaving technique may be applied to track-by-track error correction coding. FIG.11is a diagram illustrating another example of a timing of each process during reading of the target track41when the interleaving technique is applied to the magnetic disk device1aaccording to the second embodiment. As inFIG.7,FIG.11illustrates the position of the magnetic head22and the waveform of the read gate. In addition,FIG.11illustrates the type of error correction executed and the result of the error correction. Further, the sector #0 to the sector #6 are provided on one track41. Then, the second ECC #0 generated on the basis of the data written to the sector #0, the sector #2, and the sector #4 is written to the sector #5, and the second ECC #1 generated on the basis of the data written to the sector #1 and the sector #3 is written to the sector #6. In the example illustrated inFIG.11, similarly to the example illustrated inFIG.9, all the sectors in the target track41, that is, the sector #0 to the sector #6 are set as initial read targets. Therefore, when the magnetic disk11makes first one rotation, during the period from the timing t30 when the magnetic head22passes through the beginning of the sector #0 to the timing t31 when the magnetic head22passes through the end of the sector #6, the controller30maintains the read gate at the H level, and the signal output from the read head22rduring this period is captured into the RWC25as read data and then demodulated. The RWC25performs demodulation including the first correction with respect to the data read from the sector #0 to the sector #6. Herein, in the example illustrated inFIG.11, the off-track state occurs when the magnetic head22passes through the sector #0 and when the magnetic head22passes through the sector #1. Therefore, the controller30adds the sector number of the sector #0 and the sector number of the sector #1 to the off-track sector information311. Further, the controller30cancels the first correction for the data read from the sector #0 and the first correction for the data read from the sector #1. Further, the controller30succeeds in the first correction for the data read from the sector #2 to the sector #6. That is, the controller30succeeds in acquiring the expected data from the sector #2 to the sector #4, succeeds in acquiring the second ECC #0 from the sector #5 and the second ECC #1 from the sector #6 by the first one rotation of the magnetic disk11. As described above, the second ECC #0 is generated on the basis of the data written to the sector #0, the sector #2, and the sector #4. Therefore, when there is one off-track sector provided in the sector #0, the sector #2, and the sector #4 and there is no target sector for which the read operation has been executed in the on-track state and the first correction failed among the sector #0, the sector #2, and the sector #4, the controller30can acquire the expected data of this off-track sector through the second correction. Further, the second ECC #1 is generated on the basis of the data written to the sector #1 and the sector #3. Therefore, when there is one off-track sector in the sector #1 and the sector #3 and there is no target sector for which the read operation has been executed in the on-track state and the first correction failed among the sector #1 and the sector #3, the controller30can acquire the expected data of the off-track sector through the second correction. 
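Before turning to the concrete case of FIG.11, the interleaved arrangement just described can be summarized as independent parity groups, each able to rebuild one missing member. The sketch below is only an illustration under assumed names and uses the FIG.11 grouping as an example.

```python
# Interleaved track-level parity as in FIG. 11: second ECC #0 (written to sector #5)
# protects sectors #0, #2, #4, and second ECC #1 (written to sector #6) protects
# sectors #1 and #3. Each group can rebuild one missing data sector independently.
PARITY_GROUPS = {
    5: [0, 2, 4],   # ECC sector -> data sectors it covers
    6: [1, 3],
}

def recoverable_by_interleaved_ecc(unrecovered: set[int]) -> set[int]:
    """Return the unrecovered data sectors that the interleaved second ECCs can rebuild.

    A data sector is recoverable if it is the only unrecovered member of its parity
    group and the group's ECC sector itself was read without error.
    """
    recoverable = set()
    for ecc_sector, members in PARITY_GROUPS.items():
        missing = unrecovered & set(members)
        if len(missing) == 1 and ecc_sector not in unrecovered:
            recoverable |= missing
    return recoverable

# Sectors #0 and #1 off-track, but in different parity groups: both are recoverable
# by the second correction, so no read retry operation is needed.
print(recoverable_by_interleaved_ecc({0, 1}))   # {0, 1}
```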
In the case of the example illustrated inFIG.11, since there is one off-track sector in the sector #0, the sector #2, and the sector #4 and there is no target sector for which the read operation has been executed in the on-track state and the first correction failed among the sector #0, the sector #2, and the sector #4, the controller30succeeds in acquiring the expected data of the sector #0 through the second correction using the second ECC #0. Further, since there is one off-track sector in the sector #1 and the sector #3 and there is no target sector for which the read operation has been executed in the on-track state and the first correction failed among the sector #1 and the sector #3, the controller30succeeds in acquiring the expected data of the sector #1 through the second correction using the second ECC #1. As the foregoing illustrates, the second embodiment may be combined with the technique of interleaving. It is noted that the technique of interleaving may also be used in combination with the first embodiment. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the embodiments described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.
43,628
11862205
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). The present disclosure generally relates to a magnetic recording device having a magnetic recording head comprising a spintronic device. The spintronic device is disposed between a main pole and a trailing shield at a media facing surface. The spintronic device comprises a spin torque layer (STL) and a multilayer seed layer disposed in contact with the STL. The spintronic device may further comprise a field generation layer disposed between the trailing shield and the STL. The multilayer seed layer comprises an optional high etch rate layer, a heat dissipation layer comprising Ru disposed in contact with the optional high etch rate layer, and a cooling layer comprising Cr disposed in contact with the heat dissipation layer and the main pole. The high etch rate layer comprises Cu and has a high etch rate to improve the shape of the spintronic device during the manufacturing process. FIG.1is a schematic illustration of a magnetic recording device100, according to one implementation. The magnetic recording device100includes a magnetic recording head, such as a write head. The magnetic recording device100is a magnetic media drive, such as a hard disk drive (HDD). Such magnetic media drives may be a single drive/device or include multiple drives/devices. For ease of illustration, a single disk drive is shown as the magnetic recording device100in the implementation illustrated inFIG.1. The magnetic recording device100(e.g., a disk drive) includes at least one rotatable magnetic disk112supported on a spindle114and rotated by a drive motor118. The magnetic recording on each rotatable magnetic disk112is in the form of any suitable patterns of data tracks, such as annular patterns of concentric data tracks on the rotatable magnetic disk112. At least one slider113is positioned near the rotatable magnetic disk112. Each slider113supports a head assembly121. The head assembly121includes one or more magnetic recording heads (such as read/write heads), such as a write head including a spintronic device.
As the rotatable magnetic disk112rotates, the slider113moves radially in and out over the disk surface122so that the head assembly121may access different tracks of the rotatable magnetic disk112where desired data are written. Each slider113is attached to an actuator arm119by way of a suspension115. The suspension115provides a slight spring force which biases the slider113toward the disk surface122. Each actuator arm119is attached to an actuator127. The actuator127as shown inFIG.1may be a voice coil motor (VCM). The VCM includes a coil movable within a fixed magnetic field, the direction and speed of the coil movements being controlled by the motor current signals supplied by a control unit129. The head assembly121, such as a write head of the head assembly121, includes a media facing surface (MFS) such as an air bearing surface (ABS) that faces the disk surface122. During operation of the magnetic recording device100, the rotation of the rotatable magnetic disk112generates an air or gas bearing between the slider113and the disk surface122which exerts an upward force or lift on the slider113. The air or gas bearing thus counter-balances the slight spring force of suspension115and supports the slider113off and slightly above the disk surface122by a small, substantially constant spacing during operation. The various components of the magnetic recording device100are controlled in operation by control signals generated by control unit129, such as access control signals and internal clock signals. The control unit129includes logic control circuits, storage means and a microprocessor. The control unit129generates control signals to control various system operations such as drive motor control signals on a line123and head position and seek control signals on a line128. The control signals on line128provide the desired current profiles to optimally move and position slider113to the desired data track on rotatable magnetic disk112. Write and read signals are communicated to and from the head assembly121by way of recording channel125. In one embodiment, which can be combined with other embodiments, the magnetic recording device100may further include a plurality of media, or disks, a plurality of actuators, and/or a plurality of sliders. FIG.2is a schematic illustration of a cross sectional side view of a head assembly200facing the rotatable magnetic disk112shown inFIG.1or other magnetic storage medium, according to one implementation. The head assembly200may correspond to the head assembly121described inFIG.1. The head assembly200includes a media facing surface (MFS)212, such as an air bearing surface (ABS), facing the rotatable magnetic disk112. As shown inFIG.2, the rotatable magnetic disk112relatively moves in the direction indicated by the arrow232and the head assembly200relatively moves in the direction indicated by the arrow233. In one embodiment, which can be combined with other embodiments, the head assembly200includes a magnetic read head211. The magnetic read head211may include a sensing element204disposed between shields S1and S2. The sensing element204is a magnetoresistive (MR) sensing element, such as an element exerting a tunneling magneto-resistive (TMR) effect, a giant magneto-resistive (GMR) effect, an extraordinary magneto-resistive (EMR) effect, or a spin torque oscillator (STO) effect.
The magnetic fields of magnetized regions in the rotatable magnetic disk112, such as perpendicular recorded bits or longitudinal recorded bits, are detectable by the sensing element204as the recorded bits. The head assembly200includes a write head210. In one embodiment, which can be combined with other embodiments, the write head210includes a main pole220, a leading shield206, a trailing shield (TS)240, and a spintronic device230disposed between the main pole220and the TS240. The main pole220serves as a first electrode. Each of the main pole220, the spintronic device230, the leading shield206, and the trailing shield (TS)240has a front portion at the MFS. The main pole220includes a magnetic material, such as CoFe, CoFeNi, or FeNi, other suitable magnetic materials. In one embodiment, which can be combined with other embodiments, the main pole220includes small grains of magnetic materials in a random texture, such as body-centered cubic (BCC) materials formed in a random texture. In one example, a random texture of the main pole220is formed by electrodeposition. The write head210includes a coil218around the main pole220that excites the main pole220to produce a writing magnetic field for affecting a magnetic recording medium of the rotatable magnetic disk112. The coil218may be a helical structure or one or more sets of pancake structures. In one embodiment, which can be combined with other embodiments, the main pole220includes a trailing taper242and a leading taper244. The trailing taper242extends from a location recessed from the MFS212to the MFS212. The leading taper244extends from a location recessed from the MFS212to the MFS212. The trailing taper242and the leading taper244may have the same degree or different degree of taper with respect to a longitudinal axis260of the main pole220. In one embodiment, which can be combined with other embodiments, the main pole220does not include the trailing taper242and the leading taper244. In such an embodiment, the main pole220includes a trailing side and a leading side in which the trailing side and the leading side are substantially parallel. The TS240includes a magnetic material, such as FeNi, or other suitable magnetic materials, serving as a second electrode and return pole for the main pole220. The leading shield206may provide electromagnetic shielding and is separated from the main pole220by a leading gap254. In some embodiments, the spintronic device230is positioned proximate the main pole220and reduces the coercive force of the magnetic recording medium, so that smaller writing fields can be used to record data. In such embodiments, an electron current is applied to spintronic device230from a current source270to produce a microwave field. The electron current may include direct current (DC) waveforms, pulsed DC waveforms, and/or pulsed current waveforms going to positive and negative voltages, or other suitable waveforms. In other embodiments, an electron current is applied to spintronic device230from a current source270to produce a high frequency alternating current (AC) field to the media. In one embodiment, which can be combined with other embodiments, the spintronic device230is electrically coupled to the main pole220and the TS240. The main pole220and the TS240are separated in an area by an insulating layer272. The current source270may provide electron current to the spintronic device230through the main pole220and the TS240. 
For direct current or pulsed current, the current source270may flow electron current from the main pole220through the spintronic device230to the TS240or may flow electron current from the TS240through the spintronic device230to the main pole220depending on the orientation of the spintronic device230. In one embodiment, which can be combined with other embodiments, the spintronic device230is coupled to electrical leads providing an electron current other than from the main pole220and/or the TS240. FIGS.3A-3Billustrate media facing surface (MFS) views of spintronic devices300,350, respectively, disposed between a main pole302and a trailing shield304, according to various embodiments. Each of the spintronic devices300,350may independently be a STO, and as such, may be referred to herein as STO300and STO350. Both the STO300and the STO350may independently be utilized in the magnetic recording device100, such as in the head assembly121. Both the STO300and the STO350may independently be the spintronic device230ofFIG.2, the main pole302may be the main pole220ofFIG.2, and the trailing shield304may be the TS240ofFIG.2. InFIG.3A, the STO300comprises a seed layer306disposed on the main pole302, a spin torque layer (STL)308disposed on the seed layer306, a spacer layer310disposed on the STL308, a field generation layer (FGL)312disposed on the spacer layer310, and a second spacer layer or spin-blocking cap layer316disposed on the FGL312. As shown inFIG.3A, the trailing shield304may optionally comprise a notch314disposed in contact with the FGL312. The spintronic device350ofFIG.3Bis the same as the spintronic device300ofFIG.3A; however the spintronic device350does not comprise a FGL. Rather, the spacer layer310is disposed in contact with the trailing shield304or the notch314of the trailing shield304instead. The seed layer306may comprise a multilayer structure, as discussed below inFIGS.4A-4C. The STL308may comprise single layers or multilayer combinations of Ni, Fe, Co, binary or ternary alloys of Ni, Fe, Co, and half-metallic Heusler alloys, for instance Co2MnGe having a thickness in the y-direction of about 2 nm to about 12 nm. The spacer layers310and316may each individually comprise a long spin-diffusion length material such as Cu, Ag, or Cu and Ag alloys, or combinations thereof having a thickness in the y-direction of about 2 nm to about 8 nm. In some embodiments, the second spacer layer316may comprise Cr. The FGL312may comprise single layers or multilayer combinations of Ni, Fe, Co, binary or ternary alloys of Ni, Fe, Co, and half-metallic Heusler alloys, for instance Co2MnGe having a thickness in the y-direction of about 5 nm to about 15 nm. When an electric current is applied, the electrons may flow from the main pole302through the STO300, or the STO350, to the trailing shield304in the y-direction, as shown by the arrow labeled e-flow. FIGS.4A-4Billustrate MFS views of spintronic devices or STOs400,450, respectively, according to various embodiments. Each STO400,450may be, or be utilized with, the STO300ofFIG.3Aor the STO350ofFIG.3B. Each STO400,450may independently be utilized in the magnetic recording device100, such as in the head assembly121. Each STO400,450may independently be the spintronic device230ofFIG.2. As noted above with respect toFIG.3A, the spintronic devices400,450may not include the FGL312in some embodiments. As such, the FGL312ofFIGS.4A-4Bmay be optional. In each STO400,450, the seed layer306, the STL308, and the spacer layer310may be the seed layer306, the STL308, and the spacer layer310ofFIGS.3A-3B.
The FGL312in each STO400,450may be the FGL312ofFIG.3A. While each STO400,450is shown comprising the FGL312, the STOs400,450may not comprise the FGL, as shown inFIG.3B. In the STO400ofFIG.4A, the seed layer306is a multilayer structure comprising a first layer420disposed in contact with the STL308, a second layer422disposed in contact with the first layer420, a third layer424disposed in contact with the second layer422, a fourth layer426disposed in contact with the third layer424, and a fifth layer428disposed in contact with the fourth layer426and a main pole (shown inFIG.5A). The first layer420may be referred to as a first texture layer420or an anti-damping layer420, as the first layer420lowers damping in the STO400. The first layer420comprises NiAl, or a tantalum alloy containing an atomic percent content of tantalum in a range from 20% to 50%, such as in a range from 25% to 35%, and has a thickness in the y-direction (i.e., at the MFS) of about 2.5 nm to about 3.5 nm, such as about 3 nm. The second layer422may be referred to as a second texture layer422. The second layer422comprises Ru or a Ru alloy and has a thickness in the y-direction of about 2 nm to about 2.5 nm, such as about 2.3 nm. The third layer424may be referred to as an amorphous layer424. The third layer424comprises NiFeTa and has a thickness in the y-direction of about 2.5 nm to about 3 nm, such as about 2.7 nm. The first, second, and third layer420,422,424may be referred to as the functional seed layers of the STO400. In various embodiments, the functional seed layers ensure a good texture break with complete spin mixing between the main pole material and the STO, set up the preferred texture for growth of the STO, and reduce ferromagnetic damping of the STO to reduce the critical current Jc for STO reversal. Whereas a minimum seed layer thickness is required to provide its necessary functions, it is often advantageous to increase the thickness of the seed layer by adding non-functional layers under the functional ones. For instance, a thick seed layer can be shaped during STO device fabrication into a long tail that helps distribute the heat generated from device operation and improve the reliability of the STO. An example material for the tail under the functional layers of the seed is Ru. However, adding a tail under the STO can impact the shape of the STL in the STO and negatively impact the requirement of critical current Jc. To recover the lost performance, sufficient overmilling into the tail may be required, which limits the effectiveness of the tail at distributing the excess heat. Various embodiments described herein provide a tail with an improved multilayer structure that addresses the above issues. As shown, the tail includes the fourth layer426and the fifth layer428. The fourth layer426may be referred to as a heat dissipation layer426. The fourth layer426comprises, for instance, Cr, Ta, Ru, or combinations thereof, and has a thickness in the y-direction of about 6 nm to about 12 nm, such as about 6.5 nm. The fifth layer428may be referred to as a cooling layer428. The fifth layer428comprises a positive Seebeck coefficient material such as Cr, or alloys of Fe—Cr, Ni—Cr and Fe—W and has a thickness in the y-direction of about 1.5 nm to about 2.5 nm, such as about 2 nm. In combination, the heat dissipation layer426and the cooling layer428address the heat dissipation issue noted above.
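As a compact summary of the stack just described, the following sketch lists the STO400 seed layers from the STL side toward the main pole, using the representative materials and thicknesses quoted above. It is a descriptive data structure only; the variable names are assumptions and the thicknesses are single example values taken from the stated ranges.

```python
# STO400 multilayer seed (FIG. 4A), ordered from the STL toward the main pole.
# Thicknesses are representative values from the ranges given above, in nm.
STO400_SEED_STACK = [
    # (layer,                          example material,                    thickness_nm)
    ("first texture / anti-damping",   "NiAl or Ta alloy (20-50 at.% Ta)",  3.0),
    ("second texture",                 "Ru or Ru alloy",                    2.3),
    ("amorphous",                      "NiFeTa",                            2.7),
    ("heat dissipation (tail)",        "Cr, Ta, Ru, or combinations",       6.5),
    ("cooling (tail)",                 "Cr or Fe-Cr / Ni-Cr / Fe-W alloy",  2.0),
]

total_nm = sum(thickness for _, _, thickness in STO400_SEED_STACK)
print(f"total seed thickness ~ {total_nm:.1f} nm")   # ~16.5 nm with these example values
```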
In the STO450ofFIG.4B, the seed layer306is a multilayer structure comprising the first layer420disposed in contact with the STL308, the second layer422disposed in contact with the first layer420, the third layer424disposed in contact with the second layer422, a high etch rate (HER) layer458disposed in contact with the third layer424, a fourth layer456disposed in contact with the HER layer458, and the fifth layer428disposed in contact with the fourth layer456and a main pole (shown inFIG.5B). The HER layer458may be referred to as a sixth layer458. The HER layer458has a high etch rate relative to the third layer424below it, such as about twice the etch rate of the third layer424. The first layer420or the first texture layer420or an anti-damping layer420comprises NiAl, or a tantalum alloy containing an atomic percent content of tantalum in a range from 20% to 50%, such as in a range from 25% to 35%, and has a thickness in the y-direction (i.e., at the MFS) of about 2.5 nm to about 3.5 nm, such as about 3 nm. The second layer422or second texture layer422comprises Ru or a Ru alloy and has a thickness in the y-direction of about 2 nm to about 2.5 nm, such as about 2.3 nm. The third layer or amorphous layer424comprises NiFeTa and has a thickness in the y-direction of about 2.5 nm to about 3 nm, such as about 2.7 nm. As noted above, adding a tail under the STO can impact the shape of the STL in the STO and can negatively impact the requirement of the critical current Jc. Here, the HER layer458, or the sixth layer, has a high etch rate to improve the shape of the STO450(e.g., the STL and the functional seed layers) during the manufacturing process, allowing for reduction of the critical current Jc, as will be further shown inFIG.6. The HER layer458comprises a material with high sputter etch rate such as Cu and has a thickness in the y-direction of about 1.5 nm to about 2.5 nm, such as about 2 nm. The fourth layer456, or heat dissipation layer456, is similar to the fourth layer426ofFIG.4A. The fourth layer456comprises, for instance, Cr, Ta, Ru, or combinations thereof, but has a thickness in the y-direction of about 4 nm to about 5 nm, such as about 4.5 nm. The fifth layer428, or cooling layer428, comprises a positive Seebeck coefficient material such as Cr, or alloys of Fe—Cr, Ni—Cr and Fe—W and has a thickness in the y-direction of about 1.5 nm to about 2.5 nm, such as about 2 nm. The HER layer458, the fourth layer456, and the fifth layer428may be referred to as a tail of the STO450while the first, second, and third layer420,422,424may be referred to as the functional seed layers of the STO450. FIGS.5A-5Billustrate MFS views of spintronic devices or STOs400,450, ofFIGS.4A and4B, respectively, after being deposited and etched, according to various embodiments. The STO400ofFIG.5Acorresponds to the STO400ofFIG.4A, and the STO450ofFIG.5Bcorresponds to the STO450ofFIG.4B.FIGS.5A-5Billustrate the spintronic devices or STOs400,450after fabrication, whereasFIGS.4A-4Billustrate conceptual views of the spintronic devices or STOs400,450. As noted above with respect toFIG.3A, the spintronic devices400,450may not include the FGL312in some embodiments. As such, the FGL312ofFIGS.5A-5Bmay be optional. As noted above, the STO450ofFIGS.4B and5Bcomprises the HER layer458, or the sixth layer, which has a high etch rate to improve the shape of the STO450during the manufacturing process. As shown byFIGS.5A-5B, the STO450ofFIG.5Bhas a more well-defined shape than the STO400ofFIG.5A. 
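As a rough bookkeeping aid for the two seed stacks just described, the following is a minimal sketch in Python (illustrative only; the dictionary names and the use of the quoted mid-range "such as" thicknesses are assumptions, not part of this disclosure) that tallies the nominal seed-layer thicknesses of the STO400and the STO450.

# Minimal sketch (assumed names and nominal values, not from this disclosure):
# tally the seed-layer thicknesses quoted above, in nanometers.

sto400_seed_nm = {
    "first layer 420 (NiAl, anti-damping)": 3.0,
    "second layer 422 (Ru, texture)": 2.3,
    "third layer 424 (NiFeTa, amorphous)": 2.7,
    "fourth layer 426 (heat dissipation)": 6.5,
    "fifth layer 428 (Cr, cooling)": 2.0,
}

sto450_seed_nm = {
    "first layer 420 (NiAl, anti-damping)": 3.0,
    "second layer 422 (Ru, texture)": 2.3,
    "third layer 424 (NiFeTa, amorphous)": 2.7,
    "HER layer 458 (Cu, high etch rate)": 2.0,
    "fourth layer 456 (heat dissipation)": 4.5,
    "fifth layer 428 (Cr, cooling)": 2.0,
}

for name, stack in (("STO400", sto400_seed_nm), ("STO450", sto450_seed_nm)):
    print(f"{name} seed total: {sum(stack.values()):.1f} nm")

At the quoted nominal values both stacks tally to roughly the same total seed thickness, i.e., the HER layer458in the STO450is accommodated by thinning the heat dissipation layer rather than by thickening the overall seed, although the stated thickness ranges allow other combinations.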
For example, the STL308, the first layer420, the second layer422, the third layer424, and the HER layer458of the STO450each has a substantially equal width in the x-direction. The fourth layer456in the STO450has a greater width in the x-direction than the other layers312,310,308,420,422,424,458of the STO450, and the fifth layer428of the STO450has a greater width in the x-direction than the fourth layer456and the main pole302. Comparatively, in the STO400ofFIG.5A, the STL308has a greater width in the x-direction than the FGL312and the spacer layer310, the first layer420has a greater width in the x-direction than the STL308, the second layer422has a greater width in the x-direction than the first layer420, the third layer424has a greater width in the x-direction than the second layer422, the fourth layer426has a greater width in the x-direction than the third layer424, and the fifth layer428has a greater width in the x-direction than the fourth layer426. As such, the STO400has an STL308with more of a pyramid-like shape with sloped sides, and the STO450has a more overall rectangular shape with substantially straight sides in the y-direction from the STL308to the HER layer458. The more-defined shape of the STO450improves the reliability and performance while reducing the critical current (Jc) through the STO450, as shown inFIG.6. While the STO400has an improved reliability and performance compared to conventional STOs, the STO400has a higher critical current than the STO450. Thus, by including the HER layer458in the STO450to improve the shape of the STO450during the manufacturing process, increased reliability and performance are achieved while the critical current is reduced. FIG.6illustrates a graph600of normalized critical current (Jc) versus over milling depth in nm for the STO400ofFIGS.4A and5A, the STO450ofFIGS.4B and5B, and an STO comprising only functional seed layers without a tail, according to one embodiment. The over milling of the STOs defines the depth and/or shape of the taper of the overall structure, as discussed above inFIGS.5A-5B. The STO comprising only functional seed layers may comprise only the FGL312, the spacer layer310, the STL308, the first layer420, the second layer422, and the third layer424, where the third layer424is disposed in contact with a main pole. As shown inFIG.6, the STO400has a greater normalized critical current than the STO450, even when the STO450is over milled about 6 nm. Additionally, even when the STO450is over milled about 6 nm, the STO450has a lower normalized critical current than the STO comprising only functional seed layers. Thus, by including the HER layer458in the STO450, the normalized critical current is reduced compared to the STO400and the STO comprising only functional seed layers. Therefore, by including a multilayer seed layer within a spintronic device, the multilayer seed layer comprising a high etch rate layer, a heat dissipation layer, and a cooling layer, the overall shape of the etched spintronic device is improved. The improved shape of the spintronic device results in the spintronic device having increased reliability and performance while reducing the critical current.
In one embodiment, a magnetic recording head comprises a main pole, a trailing shield disposed adjacent to the main pole, and a spintronic device disposed between the main pole and the trailing shield, the spintronic device comprising: a field generation layer disposed adjacent to the trailing shield, a spacer layer disposed in contact with the field generation layer, a spin torque layer disposed in contact with the spacer layer, and a multilayer seed layer disposed in contact with the spin torque layer and the main pole, the multilayer seed layer comprising a heat dissipation layer and a cooling layer disposed in contact with the heat dissipation layer. The heat dissipation layer comprises Ru and the cooling layer comprises Cr. The heat dissipation layer has a thickness of about 6 nm to about 12 nm, and the cooling layer has a thickness of about 1.5 nm to about 2.5 nm. The spintronic device further comprises an anti-damping layer disposed in contact with the spin torque layer, a texture layer disposed in contact with the anti-damping layer, and an amorphous layer disposed in contact with the texture layer and the heat dissipation layer. The cooling layer is disposed in contact with the main pole. The anti-damping layer comprises NiAl having a thickness of about 2 nm to about 4 nm, the texture layer comprises Ru having a thickness of about 2 nm to about 2.5 nm, and the amorphous layer comprises NiFeTa having a thickness of about 2.5 nm to about 3 nm. The spintronic device further comprises a field generation layer disposed between the trailing shield and the spacer layer. A magnetic recording device comprises the magnetic recording head. In another embodiment, a magnetic recording head comprises a main pole, a trailing shield disposed adjacent to the main pole, and a spintronic device disposed between the main pole and the trailing shield, the spintronic device comprising: a first spacer layer disposed adjacent to the trailing shield, a spin torque layer disposed in contact with the first spacer layer, and a multilayer seed layer disposed in contact with the spin torque layer and the main pole, the multilayer seed layer comprising a high etch rate layer, a heat dissipation layer disposed in contact with the high etch rate layer, and a cooling layer disposed in contact with the heat dissipation layer and the main pole. The high etch rate layer comprises Cu having a thickness of about 1 nm to about 3 nm, the heat dissipation layer comprises Ru having a thickness of about 4 nm to about 5 nm, and the cooling layer comprises Cr having a thickness of about 1.5 nm to about 2.5 nm. The spintronic device further comprises a NiAl layer having a thickness of about 2 nm to about 4 nm disposed in contact with the spin torque layer, a Ru layer having a thickness of about 2 nm to about 2.5 nm disposed in contact with the NiAl layer, and a NiFeTa layer having a thickness of about 2.5 nm to about 3 nm disposed in contact with the Ru layer and the high etch rate layer. The spintronic device, the main pole, and the trailing shield are disposed at a media facing surface. The trailing shield comprises a notch, and the first spacer layer is disposed in contact with the notch. The spintronic device further comprises a field generation layer disposed between the trailing shield and the first spacer layer, and a second spacer layer or cap layer disposed between the field generation layer and the trailing shield. A magnetic recording device comprises the magnetic recording head. 
In yet another embodiment, a magnetic recording head comprises a main pole, a trailing shield disposed adjacent to the main pole, and a spintronic device disposed between the main pole and the trailing shield, the spintronic device comprising: a spin torque layer, a Cu layer disposed under the spin torque layer, a Ru layer disposed in contact with the Cu layer, and a Cr layer disposed in contact with the Ru layer and the main pole. The spintronic device further comprises a NiAl layer having a thickness of about 2 nm to about 4 nm disposed in contact with the spin torque layer, a Ru layer having a thickness of about 2 nm to about 2.5 nm disposed in contact with the NiAl layer, and a NiFeTa layer having a thickness of about 2.5 nm to about 3 nm disposed in contact with the Ru layer and the Cu layer. The Cu layer has a thickness of about 1 nm to about 3 nm, the Ru layer has a thickness of about 4 nm to about 5 nm, and the Cr layer has a thickness of about 1.5 nm to about 2.5 nm. The spintronic device further comprises a first spacer layer disposed in contact with the spin torque layer and the trailing shield. The spintronic device further comprises a first spacer layer disposed in contact with the spin torque layer, a field generation layer disposed between the trailing shield and the spacer layer, and a second spacer layer or cap layer disposed between the field generation layer and the trailing shield. A magnetic recording device comprises the magnetic recording head. While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
11862206
DETAILED DESCRIPTION In general, according to one embodiment, a magnetic recording/reproducing device is an assisted magnetic recording/reproducing device. The assisted magnetic recording/reproducing device includes a plurality of magnetic recording media each including a recording surface, a plurality of assisted magnetic recording heads each provided with the recording surface in order to perform assisted recording, and an assisting amount control part connected to the assisted magnetic recording heads in order to control an assisting amount of each assisted magnetic recording head corresponding to a recording capacity of the recording surface. Furthermore, according to an embodiment, a magnetic recording/reproducing method uses an assisted magnetic recording/reproducing device with a plurality of magnetic recording media each including a recording surface and a plurality of assisted magnetic recording heads each provided with the recording surface in order to perform assisted recording, and the method includes calculating an initial value of recording capacity of each recording surface from a constant assisting amount, acquiring a ratio of the initial value with respect to a sum of the initial values as a ratio of the recording capacity of each recording surface, and performing adjustment of an assisting amount of the magnetic head based on the ratio of each recording capacity. The adjustment of the assisting amount can be performed using an assisting amount adjustment part connected to the magnetic head, for example. Furthermore, the adjustment of the assisting amount includes calculation of a write time ratio, which is a ratio of adjusted write time with respect to total write time corresponding to the recording capacity, from the ratio of the recording capacity, and backward calculation of the assisting amount to be suitable for the write time ratio. Hereinafter, with reference to the drawings, a disk device as a magnetic recording/reproducing device of an embodiment will be explained. The disclosure is merely an example and is not limited by contents described in the embodiments described below. Modification which is easily conceivable by a person of ordinary skill in the art comes within the scope of the disclosure as a matter of course. In order to make the description clearer, the sizes, shapes and the like of the respective parts may be changed and illustrated schematically in the drawings as compared with those in an accurate representation. Constituent elements corresponding to each other in a plurality of drawings are denoted by the same reference numerals and their detailed descriptions may be omitted unless necessary. First Embodiment As a magnetic recording/reproducing device, a hard disk drive (HDD) of the first embodiment will be described. FIG.1is a perspective view of HDD of the embodiment shown in a disassembled manner in which the cover is removed. As inFIG.1, HDD100has a rectangular-shaped housing310. The housing310includes a rectangular box-shaped base12with an open top surface and a cover (top cover) not shown. The base12includes a rectangular bottom wall12aand side walls12berected along the periphery of the bottom wall, and is integrally molded from, for example, aluminum. The cover is formed of stainless steel, for example, in the shape of a rectangular plate, and is screwed onto the side wall12bof the base12such that the upper opening of the base12can be hermetically sealed. 
As inFIG.1, in the housing310, a plurality of magnetic disks arranged to be opposed to each other as magnetic disk1as disk-shaped magnetic recording media, and a spindle motor19to support and rotate the magnetic disk1are disposed. The spindle motor19is located on the bottom wall12a. Each magnetic disk1is formed in the shape of a disk of, for example, 95 mm (3.5 inches) in diameter, and includes a substrate formed of a nonmagnetic material, for example, glass and magnetic recording layers formed on the upper (first) and lower (second) surfaces of the substrate. InFIG.1, for example,61-1ais the recording surface on the first side. The magnetic disk1is fitted to a hub on a common spindle, not shown in the figure, and is further clamped by a clamping spring20. As a result, the magnetic disk1is supported in a position parallel to the bottom wall of the base12. The magnetic disk1is rotated by the spindle motor19in the direction of arrow B at a predetermined revolution. Inside the housing310, there are a plurality of magnetic heads10that record and resume information on the magnetic disk1, and an actuator assembly22that freely supports the magnetic head10with respect to the magnetic disk1. In addition, in the housing310, there are a voice coil motor (VCM)24that rotates and positions the actuator assembly22, ramp load mechanism25that holds the magnetic head10in the unloaded position apart from the magnetic disk1when the magnetic head10moves to the outermost periphery of the magnetic disk1, substrate unit (FPC unit)21on which electronic components such as conversion connectors are mounted, and spoiler70. A printed circuit board27is screwed to the outer surface of the bottom wall12aof the base12. The printed circuit board controls the operation of the spindle motor19and, structures the control unit that controls the operation of the VCM24and the magnetic head10through the board unit21. The actuator assembly22includes a bearing28fixed on the bottom wall12aof the base12, a plurality of arms32extending from the actuator block, which is not shown, in the bearing28, suspension assembly (may be referred to as head gimbal assembly: HGA)30attached to each arm32, and magnetic head10supported by the suspension assembly30. The suspension34includes its base fixed at the tip of the arm32by spot welding or gluing, and extends from the arm32. The magnetic head10is supported at the extending end of each suspension34. During recording, the suspensions34and magnetic heads10face each other with magnetic disk16therebetween. FIG.2is a schematic view of a part of the structure of the actuator assembly and the magnetic disk ofFIG.1. In this example, for explanation, the actuator assembly22is loaded onto the magnetic disk1. As a plurality of magnetic disks1, magnetic disks61-1,61-2,61-3, and61-4are disposed on a common spindle, which is not shown, to be rotatably in this order from the top to the bottom with respect to the bottom wall12aof base12, and are spaced at predetermined intervals, parallel to each other and supported to be parallel to the bottom of base12. The arms32are one more in number than the number of magnetic disks. Furthermore, the magnetic heads10include twice as many heads as the number of magnetic disks. In the actuator assembly22, multiple arms32-1,32-2,32-3,32-4, and32-5extend from one common actuator block29. The actuator block29is disposed rotatably in the bearing28. One suspension assembly30-1is attached to the upper end arm32-1and one suspension assembly30-8is attached to the lower end arm32-5. 
To the arms32-2,32-3,32-4, a pair of suspension assemblies30-2and30-3, pair of suspension assemblies30-4and30-5, and pair of suspension assemblies30-6and30-7are attached, respectively. Magnetic heads51-1,51-2,51-3,51-4,51-5,51-6,51-7, and51-8are supported at the tips of suspension assemblies30-1,30-2,30-3,30-4,30-5,30-6,30-7, and30-8. Thus, the magnetic heads51-1, and51-2are provided such that they face each other with the magnetic disk61-1therebetween. Furthermore, the magnetic heads51-3and51-4are provided such that they face each other with the magnetic disk61-2therebetween. Similarly, the magnetic heads51-5and51-6are provided such that they face each other with the magnetic disk61-3therebetween. In addition, the magnetic heads51-7and51-8are provided such that they face each other with the magnetic disk61-4therebetween. At that time, the magnetic heads51-2and51-3, the magnetic heads51-4and51-5, magnetic heads51-6and51-7are each back-to-back adjacent to each other. Thus, in the first embodiment, on both surfaces of a plurality of disks, for example, magnetic disks61-1,61-2,61-3, and61-4, there provided are recording surfaces61-1a,61-1b,61-2a,61-2b,61-3a,61-3b,61-4a, and61-4b. Corresponding to each of recording surfaces61-1a,61-1b,61-2a,61-2b,61-3a,61-3b,61-4a, and61-4b, there provided are a plurality of magnetic heads, for example, magnetic heads51-1,51-2,51-3,51-4,51-5,51-6,51-7, and51-8. FIG.3is a side view of the magnetic head10and the suspension. As inFIG.3, each magnetic head10is configured as a levitating head and includes a nearly rectangular-shaped slider42and a head44for recording and reproducing is provided at the outflow end (trailing end) of the slider42. The magnetic head10is fixed to a gimbal spring41at the tip of the suspension34. Each magnetic head10is subjected to a head load L toward the surface of the magnetic disk1due to the elasticity of the suspension34. As inFIG.2, each magnetic head10is connected to a head amplifier IC11and HDC13through the suspension34and line member (flexure)35fixed on the arm32. Next, the structure of the magnetic disk1and the magnetic head10will be described. FIG.4is a cross-sectional view illustrating the head44of the magnetic head10and the magnetic disk1in an enlarged manner. As inFIGS.3and4, the magnetic disk1includes, for example, a disk-shaped substrate101of about 2.5 inches (6.35 cm) in diameter, formed of a nonmagnetic material. On each surface of the substrate101, there is a soft magnetic layer102formed of a material exhibiting soft magnetic properties as a base layer, and on the upper layer, there is a magnetic layer103with magnetic anisotropy in the perpendicular direction of the disk surface, and on the upper layer, there is a protective layer104. The slider42of the magnetic head10is formed of sintered alumina and titanium carbide (Altic), for example, and the head44is formed by layering thin films. The slider42includes a rectangular disk-facing surface (air bearing surface (ABS)43facing the recording surface61-1aof the magnetic disk1. The slider42is levitated by air current C produced between the disk surface and the ABS43by the rotation of the magnetic disk1. The direction of the air current C coincides with the direction of rotation B of the magnetic disk1. The slider42is arranged such that the longitudinal direction of ABS43approximately coincides with the direction of the air current C with respect to the surface of the magnetic disk1. 
The slider42includes a leading end42alocated in the inflow side of air current C and a trailing end42blocated in the outflow side of air current C. In the ABS43of the slider42, a leading step, trailing step, side step, negative pressure cavity, and the like are formed, which are not shown. As inFIG.4, the head44includes a reproducing head54and a recording head (magnetic recording head)58formed in a thin-film process at the trailing end42bof the slider42, as separated magnetic heads. The reproducing head54and the recording head58covered by a protective insulating film76, except for the portion exposed to ABS43of the slider42. The protective insulating film76forms the outline of the head44. The reproducing head54includes a magnetic film55that exhibits a magnetoresistive effect, and shield films56and57arranged to hold the magnetic film55in the trailing and reading sides thereof. The lower edges of these magnetic film55, and shield films56and57are exposed to the ABS43of the slider42. The recording head58is located at the trailing end42bside of the slider42with respect to the reproducing head54. FIG.5is a schematic perspective view of the recording head58and the magnetic disk1, andFIG.6is a cross-sectional view illustrating the end part of the magnetic disk1side of the recording head58in an enlarged manner, taken along the track center.FIG.7is a cross-sectional view illustrating the recording head58ofFIG.6, in a partially enlarged manner. As inFIGS.4to6, the recording head58includes a main magnetic pole60formed of a highly saturated magnetizing material that generates a recording magnetic field perpendicular to the surface of the magnetic disk1, trailing shield (auxiliary pole)62placed in the trailing side of the main magnetic pole60and formed of a soft magnetic material provided to effectively close the magnetic path through a soft magnetic layer102immediately below the main magnetic pole60, recording coil64arranged to wind around the magnetic core (magnetic circuit) including the main magnetic pole60and the trailing shield62to flow magnetic flux to the main magnetic pole60when writing signals to the magnetic disk1, and flux control layer65arranged between the tip60ain the ABS43side of the main magnetic pole60and the trailing shield62to be flush with the ABS43. The main magnetic pole60, formed of a soft magnetic material, extends substantially perpendicular to the surface of the magnetic disk1and the ABS43. The lower end of the main magnetic pole60in the ABS43side includes that a narrowing portion60btapers toward ABS43and is narrowed in the track width direction in a rote shape, and a tip60aof a predetermined width extending from the narrowing portion60btoward the magnetic disk side. The tip, or lower end, of the tip60ais exposed to the ABS43of the magnetic head. The width of the tip60ain the track width direction substantially corresponds to the track width TW of the recording surface61-1aof the magnetic disk1. Furthermore, the main magnetic pole60also includes a shield side end surface60cthat extends substantially perpendicular to the ABS43and faces the trailing side. In one example, the end in the ABS43side of the shield side end surface60cextends inclining to the shield side (trailing side) with respect to the ABS43. The trailing shield62formed of a soft magnetic material is approximately L-shaped. 
The trailing shield62includes a tip62aopposed to the tip60aof the main magnetic pole60with a write gap WG therebetween, and a connection (back gap section)50that is apart from the ABS43and connected to the main magnetic pole60. The connection50is connected to the upper part of the main magnetic pole60via a non-conductor52, that is, is connected to the upper part which is farther back or upward from the ABS43. The tip62aof the trailing shield62is formed in an elongated rectangular shape. The lower end surface of the trailing shield62is exposed to the ABS43of the slider42. The leading side end surface (main pole side end surface)62bof the tip62aextends along the width direction of the tracks of the magnetic disk1and is inclined toward the trailing side with respect to the ABS43. This leading side end surface62bis opposed to the shield side end surface60cof the main magnetic pole60in the lower end of the main magnetic pole60(part of tip60aand narrowing portion60a) in an approximately parallel manner with the write gap WG therebetween. As inFIG.6, the flux control layer65has a function to suppress only the inflow of magnetic flux from the main magnetic pole60to the trailing shield62, that is, to oscillate the spin torque such that the permeability of the effective write gap WG becomes negative. In detail, the magnetic flux control layer65includes a conductive intermediate layer (first nonmagnetic conductive layer)65a, adjustment layer65b, and conductive cap layer (second nonmagnetic conductive layer)65c, which is conductive, and the aforementioned layers are layered from the main magnetic pole60side to the trailing shield62side, that is, the layers are layered sequentially along the running direction D of the magnetic head. The intermediate layer65a, adjustment layer65b, conduction cap layer65ceach have a film surface parallel to the shield side end surface60cof the main magnetic pole60, that is, film surface extending in the direction that intersects the ABS43. Note that, the intermediate layer65a, adjustment layer65b, and conduction cap layer65are not limited to the above example, and may be layered in the opposite direction, that is, from the trailing shield62side to the main magnetic pole60side. Furthermore, as inFIG.7, a protection layer68is disposed on the ABS43of the recording head58including the main magnetic pole60, flux control layer65, and trailing shield62. The intermediate layer65acan be formed of a metal layer of, for example, Cu, Au, Ag, Al, Ir, NiAl alloys that do not interfere with spin conduction. The intermediate layer65ais formed directly on the shield side end surface60cof the main magnetic pole60. The adjustment layer65bincludes a magnetic material including at least one of iron, cobalt, or nickel. As the adjustment layer, for example, an alloy material of FeCo with at least one additive of Al, Ge, Si, Ga, B, C, Se, Sn, and Ni, and at least one type of material selected from an artificial lattice group of Fe/Co, Fe/Ni, and Co/Ni can be used. The thickness of the adjustment layer may be, for example, 2 to 20 nm. The conduction cap layer65ccan be a nonmagnetic metal and a material that blocks spin conduction. The conduction cap layer65ccan be formed of, for example, at least one selected from a group of Ta, Ru, Pt, W, Mo, and Ir, or an alloy containing at least one thereof. The conduction cap layer65cis formed directly on the leading end surface62bof the trailing shield62. Furthermore, the conduction cap layer can be single or multi-layered. 
The intermediate layer65ais formed to be thick enough to transfer spin torque from the main magnetic pole60while sufficiently weakening the exchange interaction, for example, 1 to 5 nm. The conduction cap layer65cis formed to be thick enough to block the spin torque from the trailing shield62while still allowing the exchange interaction to be sufficiently weak, for example, 1 nm or greater. Because the orientation of the magnetization of the adjustment layer65bneeds to be opposite to the magnetic field by the spin torque from the main magnetic pole60, the saturation magnetic flux density of the adjustment layer65bshould be small. On the other hand, in order to effectively shield the magnetic flux by the adjustment layer65b, the saturation flux density of the adjustment layer65bshould be large. The magnetic field in the write gap WG is approximately 10 to 15 kOe, and thus, even if the saturation magnetic flux density of the adjustment layer65bis 1.5 T or higher, the improvement effect is unlikely to be enhanced. Therefore, the saturation magnetic flux density of the adjustment layer65bshould be 1.5 T or less, and more specifically, the adjustment layer65bis, preferably, formed such that the product of the film thickness of the adjustment layer65band the saturation magnetic flux density becomes 20 nmT or less. In order to focus the current flow in the direction perpendicular to the film surfaces of the intermediate layer65a, adjustment layer65b, and conduction cap layer65c, the flux control layer65is surrounded by an insulating layer, for example, protective insulating film76, except for the part in contact with the main magnetic pole60and the trailing shield62. The main magnetic pole60can be formed of a soft magnetic metal alloy with Fe—Co alloy as its main component. The main magnetic pole60also functions as an electrode for applying electric current to the intermediate layer65a. The trailing shield62can be formed of a soft magnetic metal alloy with a Fe—Co alloy as its main component. The trailing shield62also serves as an electrode for applying current to the conduction cap layer65c. The protective layer68is disposed to protect the ABS43, and formed of one or more materials, and can be a single layer or multiple layers. The protective layer has a surface layer formed of, for example, diamond-like carbon. Furthermore, a base layer formed of Si or the like can be disposed between the ABS43of the recording head58and the protective layer68. An additional base layer may be provided between the main magnetic pole60and the intermediate layer65a. For example, a metal such as Ta or Ru can be used as the base layer. The thickness of the base layer can be 0.5 to 10 nm, for example. Furthermore, it can be about 2 nm. Furthermore, an additional cap layer may be provided between the trailing shield62and the conduction cap layer65c. As the cap layer, at least one non-magnetic element selected from a group consisting of Cu, Ru, W, and Ta can be used. The thickness of the cap layer can be 0.5 to 10 nm, for example. Furthermore, it can be about 2 nm. In addition, CoFe can be used as a spin-polarized layer between the main magnetic pole and the intermediate layer. As inFIG.4, the main magnetic pole60and trailing shield62are each connected via line66to the connection terminal45, and further connected to the head amplifier and HDC, which are not shown, via line member (flexure)35inFIG.2.
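The design rule quoted above for the adjustment layer65b(saturation flux density of 1.5 T or less and a thickness-flux-density product of 20 nmT or less) can be expressed as a simple check; the sketch below is illustrative only, and the example numbers are assumptions rather than values from this disclosure.

# Illustrative check of the adjustment-layer design rule described above:
# Bs <= 1.5 T and thickness [nm] x Bs [T] <= 20 nm*T.

def adjustment_layer_ok(thickness_nm: float, bs_tesla: float,
                        max_bs_tesla: float = 1.5,
                        max_product_nm_t: float = 20.0) -> bool:
    """Return True if the layer satisfies both stated limits."""
    return (bs_tesla <= max_bs_tesla
            and thickness_nm * bs_tesla <= max_product_nm_t)

print(adjustment_layer_ok(10.0, 1.5))   # True: 15 nm*T, within both limits
print(adjustment_layer_ok(20.0, 1.2))   # False: 24 nm*T exceeds 20 nm*T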
A current circuit to apply an STO drive current (bias voltage) in series from the head amplifier IC through the main magnetic pole60, STO65, and trailing shield62is structured. The recording coil64is connected to the connection terminal45via line77, and is further connected to the head amplifier IC via the flexure35. When writing signals to the magnetic disk1, recording current is applied to the recording coil64from the recording current supply circuit of the head amplifier IC, which is not shown, and thus, the main magnetic pole60is excited, causing a magnetic flux to flow through the main magnetic pole60. The recording current supplied to the recording coil64is controlled by the HDC. According to the HDD configured as described above, driving the VCM24causes the actuator assembly22to be driven to rotate, and the magnetic head10is moved to a desired track of the recording surface61-1aof the magnetic disk1to be positioned. As inFIG.3, the magnetic head10is levitated by the air current C produced between the disk surface and the ABS43by the rotation of the magnetic disk1. During HDD operation, the ABS43of the slider42is facing the disk surface with a gap therebetween. In this state, read of recorded information is performed with respect to the magnetic disk1by the reproducing head54while write of information is performed by the recording head58. The head44of the magnetic head is optionally equipped with a first heater76aand a second heater76b. The first heater76ais located near the recording head58, for example, near the recording coil64and the main magnetic pole60. The second heater76bis located near the reproducing head54. The first heater76aand the second heater76bare each connected to the connection terminal45via lines, and further connected to the head amplifier IC11via the flexure35. The first and second heaters76aand76bare coiled, for example, and by being energized, generate heat and cause thermal expansion of the surrounding area. Thereby the ABS43near the recording head58and reproducing head54protrudes, bringing it closer to the magnetic disk1and lowering the levitation height of the magnetic head. By controlling the heat generation as above through adjusting the drive voltages supplied to the first and second heaters76aand76b, the levitation height of the magnetic head can be controlled. FIG.8schematically illustrates a magnetization state in the write gap WG with the flux control layer65functioning. In the above writing of information, as inFIGS.4and8, alternating current is applied from the power supply to the recording coil64, and thus, the recording coil64excites the main magnetic pole60, and a perpendicular recording magnetic field is applied from the main magnetic pole60to the recording layer103of the magnetic disk1immediately below thereof. Thus, information is recorded in the magnetic recording layer103at the desired track width. Furthermore, when applying a recording magnetic field to the magnetic disk1, the current is applied from another power supply through the line66, main magnetic pole60, flux control layer65, and trailing shield62. This current application causes spin torque from the main magnetic pole60to act on the adjustment layer65bof the magnetic flux control layer65, and the direction of magnetization of the adjustment layer65bis as shown by arrow105, directed to be opposite to the direction of the magnetic field (gap magnetic field) Hgap generated between the main magnetic pole60and the trailing shield62.
Such magnetization reversal causes the adjustment layer65bto block the magnetic flux (gap magnetic field Hgap) flowing directly from the main magnetic pole60to the trailing shield62. As a result, the magnetic field leaking from the main magnetic pole60into the write gap WG is reduced, and the degree of convergence of the magnetic flux from the tip60aof the main magnetic pole60to the magnetic recording layer103of the magnetic disk1improves. This improves the resolution of the recording magnetic field and increases the recording line density. Note that the above is a mode in which the magnetization of the magnetic flux control layer reverses due to the effect of spin torque, but it also includes a mode in which the magnetization of the magnetic flux control layer rotates simultaneously. By applying the high-frequency magnetic field generated by the simultaneous rotation to the magnetic recording layer103, the recording line density can be increased. FIG.9is a block diagram illustrating part of the functional structure of the magnetic recording/reproducing device of the embodiment. As in the figure, the magnetic recording/reproducing device100of the embodiment is an assisted magnetic recording/reproducing device including a plurality of magnetic recording medium1such as a magnetic recording medium having a recording surface61-1a, a plurality of assisted magnetic recording heads10disposed to correspond to each recording surface such as recording surface61-1a, and an assisting amount adjustment part130connected to the assisted recording magnetic heads10. The assisting amount adjustment part130adjusts the assisting amount of each assisted magnetic head10corresponding to the recording capacity of each recording surface. Note that, there are a plurality of magnetic recording medium1and assisted magnetic recording heads10while one magnetic recording medium1and one assisted magnetic recording head10are described for simplification. The assisting amount adjustment part130is, for example, connected to the assisted magnetic recording head10to perform the magnetic recording to the magnetic disk1, and may include a main control unit126which controls a change of the assisting amount of the magnetic head10, recording capacity calculator121which is connected to the main control unit126to calculate an initial value of the recording capacity of each recording surface such as recording surface61-1a, a sum of the initial values of the recording capacity of each recording surface, and a ratio of initial values of the recording capacity of each recording surface with respect to the sum (ratio of recording capacity), write time calculator122which calculates the ratio of adjusted write time (write time ratio) with respect to a write time (total write time) corresponding to the recording capacity from the ratio of the recording capacity, assisting amount determination unit123which determines the assisting amount corresponding to the write time ratio by backward calculation, determination unit124which determines update of the assisting amount setting value upon receiving the information from the assisting amount determination unit123, and instruction section127which instructs updating of the assisting amount setting value based on the determined assisting amount. 
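To make the division of labor just described more concrete, the following is a minimal sketch of the adjustment flow in Python. The function names, the assumption that a surface's share of total write time scales with its share of total capacity, and the power-form scaling used for the back-calculation are illustrative choices for this sketch, not elements of this disclosure.

# Minimal sketch (illustrative names, not from the disclosure) of the flow
# performed by the assisting amount adjustment part: the recording capacity
# calculator (121) derives per-surface capacity ratios, the write time
# calculator (122) turns them into write-time ratios, and the assisting
# amount determination unit (123) back-calculates a per-head assisting
# amount that the determination/instruction units (124, 127) then apply
# as the updated setting value.

def capacity_ratios(initial_capacities_gb):
    total = sum(initial_capacities_gb)
    return [c / total for c in initial_capacities_gb]

def write_time_ratios(cap_ratios):
    # Assumption of this sketch: a surface's share of total write time
    # scales with its share of the total recording capacity.
    mean = sum(cap_ratios) / len(cap_ratios)
    return [r / mean for r in cap_ratios]

def adjusted_assist(base_assist, write_time_ratio, strength=0.5):
    # Negative correlation with capacity: surfaces expected to be written
    # longer get a smaller assisting amount.  "strength" is a tuning knob
    # of this sketch, not a parameter defined in the disclosure.
    return base_assist / (write_time_ratio ** strength)

caps = [1000, 1000, 800, 800]            # example per-surface capacities, GB
wtr = write_time_ratios(capacity_ratios(caps))
settings = [adjusted_assist(300.0, r) for r in wtr]   # e.g. 300 mV STO bias
print([round(s, 1) for s in settings])

In this toy run the higher-capacity surfaces end up with a lower assisting amount than the lower-capacity surfaces, matching the negative correlation with recording capacity described below.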
To the main control unit126, an initial value storage unit128which stores the initial values of recording capacity of each recording surface and a memory unit129which contains an updated value memory unit129that stores the assisting amount determined in proportion to the write time ratio can be connected. By providing the memory unit125, for example, the recording capacity calculator121can calculate the total value of the initial values from the initial value of the recording capacity acquired from the initial value memory unit128and the ratio of the recording capacity. In the updated value storage unit129can store the updated value of the assisting amount set in proportion to the write time ratio. Based on the updated assisting amount, the application of current to the assisted recording element is adjusted for each recording surface to adjust the assisting amount, and assisted magnetic recording can be performed. For such assisted magnetic recording, for example, between the assisting amount adjustment part130and the magnetic head10, optionally, for example, a calculation unit (which is not shown) to calculate a change amount of current to be applied to the assisted recording element, for example, the magnetic flux control layer65ofFIG.8from a change amount between the updated value of the assisting amount acquired from the update value storage unit129and a constant assisting amount used to acquire the initial value, determination unit (not shown) that determines the current to be applied according to the amount of change in the current, and instruction unit (not shown) that instructs the magnetic head10to apply current to the assisted recording element upon receiving the determined current information can be connected. Furthermore, if need be, the recording capacity calculation unit121, write time calculation unit122, and assisting amount determination unit123, initial value storage unit128, and update value storage unit129, etc., can be installed in an external device such as PC connectable or communicable to the assisting amount adjustment part130instead of installed in the assisting amount adjustment part130. As assisted recording methods, microwave assisted magnetic recording (MAMR) method, heat assisted magnetic recording (HAMR) method, and energy assisted perpendicular magnetic recording (ePMR) method can be cited. FIG.10is a graphical representation of the relationship of recording density with respect to assist energy in the MAMR head. As in the figure, for example, in the MAMR method, when the voltage to the STO increases, the magnetic field becomes stronger, and the magnetic recording density increases, but the recording density saturates at a certain point. This recording density can be interpreted as the recording capacity per magnetic disk surface. The assisting amount used in the embedment is expressed differently depending on the assisted recording method, and in the MAMR method, the assisting amount may correspond to the element voltage applied to the STO. Even in the ePMR method which is assisted by magnetic field switching with an electric current, and the HAMR method, the relationship between the assisting amount with respect to the electric current or a laser diode power is similar. FIG.11is a cross-sectional view of an example of the structure of a magnetic head using the energy assisted recording method. 
For example, in the energy assisted recording method, as in the figure, the recording head158includes a main magnetic pole160formed of a highly saturated magnetization material that generates a recording magnetic field perpendicular to the surface of the magnetic disk, auxiliary pole162arranged in the trailing side of the main magnetic pole160and formed of a soft magnetic material, and conductive layer165arranged between the tip end in the ABS143side of the main magnetic pole160and the auxiliary pole162and flush with the ABS143. The recording head158has a similar structure as inFIG.7except for using the conductive layer165instead of the magnetic flux control layer65ofFIG.7. When energizing the main magnetic pole160, current is concentrated in the conductive layer165, generating a magnetic field to assist the magnetic recording. In that case, since the strength of the magnetic field is proportional to the amount of current, in the energy assisted perpendicular magnetic recording method, the assisting amount can be almost equivalent to the amount of current. Furthermore,FIG.12is a cross-sectional view of an example of the structure of a HAMR magnetic head. As in the figure, the HAMR magnetic head258includes a near-field optical element disposed between a main magnetic pole260with a coil and an auxiliary pole262, optical waveguide266that propagates light to the near-field optical element265, and laser diode267as a light source supplying the light of the optical waveguide266, and assists switching by heat generated by the evanescent light generated from the near-field optical element265. In this case, since the power from the laser diode267is proportional to the assisting amount, and in the HAMR method, the assisting amount can be equivalent to the laser diode power. For high recording density, maximizing the assisting amount may be considered in the assisted recording, however, increasing the assisting amount may cause electro-migration due to heat generation and over-current, which deteriorates the head lifetime. FIG.13is a graphical representation of the relationship of element lifetime with respect to the assisting element temperature in the MAMR head. Here, element lifetime refers to the total write time before the bit error rate in a magnetic recording/reproducing device becomes worse than the minimum acceptable limit. In the magnetic recording/reproducing device with assisted recording heads, the assisting amount of each magnetic head is determined in terms of both recording density and guaranteed operating time. FIG.14is a part of the manufacturing system of the magnetic recording/reproducing device of the embodiment, and is a flowchart of the operations of the system adjusting the recording capacity for the assembled magnetic recording/reproducing device. Usually, in the adjustment of the recording capacity on each surface of a magnetic disk, initially, with respect to the read/write capacity of the assembled magnetic recording/reproducing device, the initial adjustment of read/write conditions with the magnetic head corresponding to each recording surface of the magnetic recording medium is performed (S1). Next, format adjustment is performed for each recording surface (S2). Then, various measurements and overall inspections, such as defect inspections, are performed (S3). Then, the recording capacity as a magnetic recording/reproducing device is determined by summing the recording capacity of each magnetic disk surface (S4). 
Note that, the assembled magnetic recording/reproducing device here includes, for example, the magnetic recording/reproducing device in the state before the lid is attached. If there is a difference in recording capacity of each recording surface, the access frequency for writing will differ, resulting in a difference in magnetic head lifetime on each recording surface. For example, if a recording surface with a recording capacity of 1 TB and a recording surface with a recording capacity of 800 GB coexist in a single magnetic recording/reproducing device, the total write time for the 1 TB surface is approximately 1.25 times higher than the 800 GB surface. Therefore, the head degradation is likely to proceed in the high-capacity surface. Therefore, according to the embodiment, the assisting amount is lowered below the set value in the high recording capacity surface, and conversely, the assisting amount is raised in the low recording capacity surface, and thus, the adjustment evens out the head lifetime over the expected write hours. This allows balanced use of each magnetic head without degradation and prevents deterioration of device lifetime. The setting of the assisting amount may be performed as a part of the adjustment of the recording capacity of the magnetic recording/reproducing device ofFIG.14. For the setting of the assisting amount, the assisting amount adjustment part130provided with the magnetic recording/reproducing device of the embodiment can be used. The recording capacity calculation unit121of the assisting amount adjustment part130calculates the initial value of the recording capacity of each recording surface from a certain assisting amount, acquires the ratio of the recording capacity of each recording surface, calculates the assisting amount of each magnetic head based on the ratio of each recording capacity, and performs the adjustment of the assisting amount. The initial value of the recording capacity of each recording surface can be stored in the initial value storage unit128. The adjustment of the assisting amount may include, in the write time calculation unit122, calculating a write time ratio from the ratio of the recording capacity and backward calculating the assisting amount in proportion to the write time ratio in the assisting amount determination unit123(S5). The determination unit124then determines the update of the assisting amount setting value corresponding to the write time ratio. The instruction section127instructs updating the assisting amount setting value based on the determined assisting amount (S6). The update value of the assisting amount can be stored in the update value storage unit129. In the update of the assisting amount setting value, a formula or table relating resistance value, temperature, and element lifetime can be prepared in advance for the magnetic head. For example, if the device lifetime follows the general Arrhenius model, the device lifetime can be calculated by the following formula (1). L=A×exp(ΔEa/kT)  (1) where L is the device lifetime, A is a coefficient specific to the device or other equipment, ΔEa is the activation energy of device degradation, k is Boltzmann's constant, and T is the device temperature. The following formula (2) can also be used for the case where the device lifetime follows a power-law model. L=B×S^n  (2) where B is a device or other specific coefficient, S is voltage or laser diode power, and n is a device specific coefficient.
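Taking the power-law form of formula (2) at face value, the backward calculation from the write time ratio described in the next paragraph can be sketched as follows; the proportionality between target lifetime and write time ratio is an assumption of this illustration, not an equation stated in this description.

\[
B\,S_i^{\,n} = r_i \cdot B\,S_0^{\,n}
\quad\Longrightarrow\quad
S_i = S_0\, r_i^{\,1/n},
\qquad
r_i = \frac{T_{\mathrm{write},i}}{\overline{T}_{\mathrm{write}}}
\]

Here S_0 is the assisting amount used for the initial adjustment and r_i is the write time ratio of surface i; when n is negative (lifetime falling as the applied voltage or laser diode power rises), a surface with r_i greater than one receives a smaller assisting amount, consistent with lowering the assisting amount on the high recording capacity surfaces.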
Using these formulae, the available assisting amount can be backward-calculated from the write time ratio for each surface. Note that, this update process may be completed once, or it may be performed multiple times to increase accuracy. In this adjustment, the assisting amount is larger than the initial adjustment value for low recording capacity recording surfaces and smaller than the initial adjustment value for high recording capacity recording surfaces, and thus, the adjustment is negatively correlated with the recording capacity. In the magnetic recording/reproducing device of the embodiment, assisted recording is used as the magnetic recording method, and an assisting amount adjustment part is connected to the assisted magnetic recording head, and thereby, the individual assisting amount of each magnetic head can be adjusted. Even if there is a difference in the load of each magnetic head due to the difference in the recording capacity of each recording surface, the recording surface with a high recording capacity requires less assisting amount, and the recording surface with a low recording capacity requires more assisting amount, and the difference can be mitigated. In this way, the load on the recording head is distributed as evenly as possible on each recording surface, the lifetime of each magnetic head is adjusted more evenly, and each magnetic head is used in a balanced manner with as little degradation as possible, and thereby the lifetime of the magnetic recording/reproducing device itself can be prevented from deteriorating. According to the magnetic recording/reproducing method of the embodiment, assisted magnetic recording is used as the magnetic recording method, and the assisting amount of each individual magnetic recording head can be changed. The ratio of the recording capacity of each recording surface can be acquired, and the assisting amount of each assisted magnetic recording head can be adjusted based on the ratio of the recording capacity. In the adjustment of the assisting amount, the assisting amount is reduced for a recording surface with a high recording capacity and is increased for a recording surface with a low recording capacity, and thus, the load on each magnetic head can be equalized as much as possible. This has the effect of adjusting the lifetime of each magnetic head and using each magnetic head in a balanced manner with as little degradation as possible, thereby preventing the deterioration of the lifetime of the magnetic recording/reproducing device. Using the following examples, the embodiment will be explained specifically. EXAMPLES Example 1 The MAMR magnetic recording head was created as follows. First, on the main magnetic pole, which is mainly composed of FeCo, layers of the following materials and thicknesses, respectively, were placed using the DC magnetron sputtering method, from the first conductive layer, adjustment layer, and second conductive layer in this order. Thereby, the magnetic flux control layer1, which has the same configuration as the magnetic flux control layer65ofFIG.7, was obtained. The first conductive layer, adjustment layer, and second conductive layer were structured the same as the intermediate layer65a, adjustment layer65b, and conduction cap layer65cofFIG.7, respectively. The first conductive layer is, for example, a metal layer of Cu, Au, Ag, Al, Ir, NiAl alloy, etc., and is formed of a material that does not interfere with spin conduction.
The adjustment layer can be formed of a magnetic material containing at least one of iron, cobalt, or nickel. The magnetic material can be, for example, an alloy material of FeCo with an additive of at least one of Al, Ge, Si, Ga, B, C, Se, Sn, and Ni, or, at least one type of material selected from an artificial lattice group consisting of Fe/Co, Fe/Ni, and Co/Ni. A mask layer was formed to define the size in the stripe height direction on the magnetic flux control layer1, and then, the magnetic flux control layer was etched by ion beam etching (IBE) until the main magnetic pole was exposed. An insulating film SiOx (where x is an oxidization number) was deposited on the area around the magnetic flux control layer, and then the mask layer was removed. A mask layer to define the size in the track width direction was also created and etched in the same manner, and an insulating film SiOx was deposited on the peripheral portions of the element to process the magnetic flux control layer1. Next, NiFe was formed as a trailing shield on the conduction cap layer. Then, a Si base layer of 1 nm was sputtered onto the main magnetic pole in the ABS side, flux control layer, trailing shield, and insulating film. Then, on the Si base layer, a diamond-like carbon film was deposited by CVD method to obtain a protection layer having a thickness of 1.6 nm to achieve a magnetic recording head. In the same way, magnetic recording heads to be incorporated into an HDD with 18 heads and 9 magnetic disks were prepared, and 200 magnetic recording/reproducing devices in total were assembled. 100 of the 200 magnetic recording/reproducing devices obtained were categorized as comparative examples, and subjected to the initial adjustment of read/write conditions with the corresponding magnetic head for each recording surface of the magnetic recording media, format adjustment of each recording surface, various measurements, and full surface inspection for defects, for example. Then, the recording capacity as a magnetic recording/reproducing device was determined by summing the recording capacity of each magnetic disk surface. On the other hand, the remaining 100 devices were categorized as examples, and subjected to the same except for acquiring the ratio of recording capacity during the format adjustment of each recording surface, and calculating a setting value of the assisting amount of each magnetic head based on the ratio of each recording capacity. As a long-time current-carrying test, the obtained HDDs were set in an ambient temperature of 70° C., and the magnetic flux control layer was kept energized with a 300 mV applied voltage for 5000 hours. At this time, for Comparative Example 1, no adjustment of the assisting amount was made, and for Example 1, the amount of current to the flux control layer was adjusted as the assisting amount based on the setting value of the assisting amount. The bit error rate (BER) was measured before and after energizing. As a result, with respect to the BER value before the energizing test, there were multiple heads with increased BER at the time of 5000 hours. The following results were obtained when the number of pieces was counted by judging OK/NG at the cutoff value of 1×10^−1.7.

Energization test results (number of BER NG units)
Example 1: 5/100
Comparative Example 1: 15/100

The devices were disassembled and analyzed to find that many BER NGs occurred in the elements with high load (long write time) on the assist elements.
The results of this study showed that, compared to Comparative Example 1, Example 1 was able to prevent the deterioration of the lifetime of the assisted recording head on average, and to suppress the degradation of the recording head within a certain time period. Example 2 HAMR magnetic recording heads were prepared by the following method. First, the optical waveguide for near-field light on the main magnetic pole, which is mainly composed of FeCo, was prepared from Al2O3or Ta2O5having a high refractive index. The optical waveguide was connected to a laser diode in the light source unit. On the opposite side of the main magnetic pole, near-field optical elements were prepared using, for example, Au, Pd, Pt, Rh, or an alloy containing two or more of these elements. Furthermore, a heat sink layer formed of Cu was created near the main magnetic pole to create a thermally assisted magnetic recording head. A material with high magnetic anisotropy (Hk), mainly composed of FePt, was used for the magnetic recording layer of the magnetic recording medium used. Fifty of such heads were prepared, 25 of which were used in Example 2 and the remaining 25 in Comparative Example 2 to assemble magnetic recording/reproducing devices. The obtained magnetic recording/reproducing devices were subjected to a long time energizing test as in Example 1 except that the evaluation environment temperature was set to room temperature, and the near-field optical element was energized for 2000 hours. As to Comparative Example 2, the assisting amount was not adjusted, while for Example 2 the laser diode power was adjusted as the assisting amount based on the setting value of the assisting amount. The bit error rate (BER) was measured before and after energizing. As a result, with respect to the BER value before the energizing test, there were multiple heads with increased BER at the time of 2000 hours. The following results were obtained when the number of pieces was counted by judging OK/NG at the cutoff value of 1×10^−1.7.

Energization test results (number of BER NG units)
Example 2: 7/25
Comparative Example 2: 10/25

Furthermore, the devices were disassembled and analyzed to find that many BER NGs occurred in the elements with high load (long write time) on the assist elements. The results of this study showed that, compared to Comparative Example 2, Example 2 was able to prevent the deterioration of the lifetime of the assisted recording head on average, and to suppress the degradation of the recording head within a certain time period. Example 3 Magnetic heads of the energy assisted recording method were prepared in the same manner as in Example 1. In this Example, instead of the magnetic flux control layer65, Cu as a non-magnetic conductive band is embedded as the conductive layer165. Other than that, the structure was the same as that of Example 1. In this state, the magnetization switching of the head is assisted by the generation of a magnetic field due to the electric current, instead of the reversal and rotation of the magnetization of the flux control layer. 200 of such heads were prepared, with 100 as Example 3 and the remaining 100 as Comparative Example 3. The obtained devices were subjected to a long time energizing test as in Example 1 except that the evaluation environment temperature was set to room temperature, and evaluation time was set for 5000 hours. As a result, with respect to the BER value before the energizing test, there were multiple heads with increased BER at the time of 5000 hours.
The following results were obtained when the number of pieces was counted by judging OK/NG at the cutoff value of 1×10^−1.7.

Energization test results (number of BER NG units):
Example 3: 23/100
Comparative Example 3: 40/100

Furthermore, the devices were disassembled and analyzed, and it was found that many BER NGs occurred in the elements with a high load (long write time) on the assist elements. The results of this study showed that, compared to Comparative Example 3, Example 3 was able to prevent the deterioration of the lifetime of the assisted recording head on average, and to suppress the degradation of the recording head within a certain time period. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
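As a quick cross-check of the figures reported for the three energization tests, the following Python sketch (illustrative only; the pass/fail counts are copied from the results above) converts the 1×10^−1.7 BER cutoff to a plain number and compares the NG rate of each Example with that of its Comparative Example.

```python
# Illustrative tally of the energization test results quoted above.
# The BER cutoff of 1e-1.7 corresponds to a bit error rate of about 0.02.

ber_cutoff = 10 ** -1.7  # ~0.0200

results = {
    "Example 1": (5, 100),  "Comparative Example 1": (15, 100),
    "Example 2": (7, 25),   "Comparative Example 2": (10, 25),
    "Example 3": (23, 100), "Comparative Example 3": (40, 100),
}

print(f"BER cutoff: {ber_cutoff:.4f}")
for name, (ng, total) in results.items():
    print(f"{name}: {ng}/{total} NG ({100 * ng / total:.0f}%)")

# Relative reduction in NG rate for each adjusted Example vs. its Comparative Example.
for i in (1, 2, 3):
    ex_ng, ex_n = results[f"Example {i}"]
    cmp_ng, cmp_n = results[f"Comparative Example {i}"]
    reduction = 1 - (ex_ng / ex_n) / (cmp_ng / cmp_n)
    print(f"Example {i}: {reduction:.0%} lower NG rate than Comparative Example {i}")
```

With the counts above, each Example shows a markedly lower NG rate than its Comparative Example, consistent with the conclusion drawn in the description.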
49,763
11862207
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION The present disclosure generally relates to data storage devices, and more specifically, to a magnetic media drive employing a magnetic recording head. The head includes a main pole, a trailing shield, and a MAMR stack including at least one magnetic layer. The magnetic layer has a surface facing the main pole, and the surface has a first side at a media facing surface (MFS) and a second side opposite the first side. The length of the second side is substantially less than the length of the first side. By reducing the length of the second side, the area to be switched at a location recessed from the MFS is reduced as a current flowing from the main pole to the trailing shield or from the trailing shield to the main pole. With the reduced area of the magnetic layer, the overall switch time of the magnetic layer is decreased. The terms “over,” “under,” “between,” and “on” as used herein refer to a relative position of one layer with respect to other layers. As such, for example, one layer disposed over or under another layer may be directly in contact with the other layer or may have one or more intervening layers. Moreover, one layer disposed between layers may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first layer “on” a second layer is in contact with the second layer. Additionally, the relative position of one layer with respect to other layers is provided assuming operations are performed relative to a substrate without consideration of the absolute orientation of the substrate. FIG.1is a schematic illustration of a data storage device such as a magnetic media device. Such a data storage device may be a single drive/device or comprise multiple drives/devices. For the sake of illustration, a single disk drive100is shown according to one embodiment. As shown, at least one rotatable magnetic disk112is supported on a spindle114and rotated by a drive motor118. The magnetic recording on each magnetic disk112is in the form of any suitable patterns of data tracks, such as annular patterns of concentric data tracks (not shown) on the magnetic disk112. At least one slider113is positioned near the magnetic disk112, each slider113supporting one or more magnetic head assemblies121that may include a MAMR stack. As the magnetic disk112rotates, the slider113moves radially in and out over the disk surface122so that the magnetic head assembly121may access different tracks of the magnetic disk112where desired data are written. Each slider113is attached to an actuator arm119by way of a suspension115. The suspension115provides a slight spring force which biases the slider113toward the disk surface122. Each actuator arm119is attached to an actuator means127. The actuator means127as shown inFIG.1may be a voice coil motor (VCM). The VCM includes a coil movable within a fixed magnetic field, the direction and speed of the coil movements being controlled by the motor current signals supplied by control unit129. During operation of the disk drive100, the rotation of the magnetic disk112generates an air bearing between the slider113and the disk surface122which exerts an upward force or lift on the slider113. 
The air bearing thus counter-balances the slight spring force of suspension115and supports slider113off and slightly above the disk surface122by a small, substantially constant spacing during normal operation. The various components of the disk drive100are controlled in operation by control signals generated by control unit129, such as access control signals and internal clock signals. Typically, the control unit129comprises logic control circuits, storage means and a microprocessor. The control unit129generates control signals to control various system operations such as drive motor control signals on line123and head position and seek control signals on line128. The control signals on line128provide the desired current profiles to optimally move and position slider113to the desired data track on disk112. Write and read signals are communicated to and from write and read heads on the assembly121by way of recording channel125. The above description of a typical magnetic media device and the accompanying illustration ofFIG.1are for representation purposes only. It should be apparent that magnetic media devices may contain a large number of media, or disks, and actuators, and each actuator may support a number of sliders. FIG.2Ais a fragmented, cross sectional side view of a read/write head200facing the magnetic disk112according to one embodiment. The read/write head200may correspond to the magnetic head assembly121described inFIG.1. The read/write head200includes a MFS212, such as an air bearing surface (ABS), facing the disk112, a magnetic write head210, and a magnetic read head211. As shown inFIG.2A, the magnetic disk112moves past the write head210in the direction indicated by the arrow232and the read/write head200moves in the direction indicated by the arrow234. In some embodiments, the magnetic read head211is a magnetoresistive (MR) read head that includes an MR sensing element204located between MR shields S1and S2. In other embodiments, the magnetic read head211is a magnetic tunnel junction (MTJ) read head that includes a MTJ sensing device204located between MR shields S1and S2. The magnetic fields of the adjacent magnetized regions in the magnetic disk112are detectable by the MR (or MTJ) sensing element204as the recorded bits. The write head210includes a main pole220, a leading shield206, a trailing shield240, a coil218that excites the main pole220, and a MAMR stack230disposed between the main pole220and the trailing shield240. The main pole220may be a magnetic material such as a FeCo or FeCo(N) alloy. The leading shield206and the trailing shield240may be a magnetic material, such as FeCo or NiFe alloy. The coil218may have a “pancake” structure which winds around a back-contact between the main pole220and the leading shield206, instead of a “helical” structure shown inFIG.2A. The trailing shield240includes a notch270that is in contact with the MAMR stack230. The MAMR stack230is also in contact with the main pole220. A dielectric material254is disposed between the leading shield206and the main pole220. The dielectric material254is also disposed between the main pole220and the trailing shield240at a location recessed from the MFS212, as shown inFIG.2A. The dielectric material254is fabricated from any suitable dielectric material, such as aluminum oxide. FIG.2Bis a portion of the write head210ofFIG.2Aaccording to one embodiment. As shown inFIG.2B, the MAMR stack230includes a seed layer242, a magnetic layer244, and a spacer layer246. 
The seed layer242is fabricated from an electrically conductive material, such as a non-magnetic metal. In one embodiment, the seed layer242is fabricated from Ta, Cr, Cu, NiAl, Ru, Rh, or combination thereof. The magnetic layer244is fabricated from a magnetic material, such as NiFe, CoMnGe, CoFe, or combinations thereof. In one embodiment, the magnetic layer244is a STL. The spacer layer246is fabricated from a material such as copper (Cu) or silver tin alloy (AgSn). During operation, an electrical current having a minimum current density flows into the magnetic layer244via the main pole220or trailing shield240, and the magnetic layer244switches against the gap field. With the magnetization of the magnetic layer244pointing in the direction opposite to the gap field, the field strength of the main pole220and the down track gradient are enhanced, which in turn improves the recording performance. As shown inFIG.2B, the seed layer242is in contact with the main pole220, the magnetic layer244is in contact with the seed layer242, and the spacer layer246is in contact with the magnetic layer244. The magnetic layer244has a first surface260at the MFS212, a second surface262opposite the first surface260, a third surface264connecting the first surface260and the second surface262, and a fourth surface266opposite the third surface264. The third surface264faces the main pole220, and the fourth surface266faces the trailing shield240. The definition of the term “face” is extended to include a material located between a first element that is facing a second element and the second element. For example, the third surface264of the magnetic layer244faces the main pole220, and the seed layer242is located between the main pole220and the third surface264of the magnetic layer244. The third surface264and the fourth surface266both extend from the MFS212to a location recessed from the MFS212along the Z-axis, and extend in the cross-track direction, as indicated by the X-axis. FIG.3Ais a schematic top view of the magnetic layer244disposed over the main pole220, andFIG.3Bis a perspective view of the magnetic layer244disposed over the main pole220according to one embodiment. The seed layer242is omitted for better illustration. As shown inFIGS.3A and3B, the magnetic layer244has a stripe height SH ranging from about 60 nm to about 120 nm along the Z-axis. The magnetic layer244includes the surface266facing the trailing shield240(FIG.2A). The surface266includes a first side302at the MFS212, a second side304opposite the first side302, a third side306connecting the first side302and the second side304, and a fourth side308opposite the third side306. The first side302and the second side304extend in the cross-track direction, as indicated by the X-axis. The first side302has a first length L1, and the second side has a second length L2. The second length L2is substantially less than the first length L1. In one embodiment, the second length L2is about one percent to about 90 percent of the first length L1. In one embodiment, the surface266of the magnetic layer244has a trapezoidal shape, such as acute trapezoidal shape, right trapezoidal shape, obtuse trapezoidal shape, or isosceles trapezoidal shape. In one embodiment, the second side304is a point, and the surface266of the magnetic layer244has a triangular shape. The third side306of the surface266of the magnetic layer244forms an angle θ2with respect to a central axis310of the surface266. 
The fourth side308of the surface266of the magnetic layer244forms an angle θ1with respect to the central axis310of the surface266. In one embodiment, the angle θ1is substantially the same as the angle θ2. In another embodiment, the angle θ1is substantially different from the angle θ2. The angle θ1ranges from about zero degrees to about 20 degrees, and the angle θ2ranges from about zero degrees to about 20 degrees. The angles θ1and θ2may not be both zero degrees. In one embodiment, at least one of the angles θ1and θ2ranges from about greater than zero degrees to about 20 degrees. The surface266of the magnetic layer244may be substantially symmetric with respect to the central axis310, as shown inFIG.3A. In another embodiment, the surface266of the magnetic layer244may be substantially asymmetric with respect to the central axis310. The surface264of the magnetic layer244may have the same shape as the surface266and may be substantially parallel to the surface266. During operation, there is a higher current density at the location near the second side304than at the location near the first side302in the magnetic layer244as the current flows through the magnetic layer244from the main pole220. The main pole220has a first track width TW1at the MFS212and a second track width TW2at a location that is aligned with the second side304of the surface266of the magnetic layer244. The second track width TW2is substantially greater than the first track width TW1. The first track width TW1is substantially less than the length L1of the first side302of the surface266of the magnetic layer244. Thus, the current density in the magnetic layer244is higher near the second side304than near the first side302due to current crowding. With the higher current density, the portion of the magnetic layer244near the second side304will switch before the portion of the magnetic layer244near the first side302switches. As the stripe height SH of the magnetic layer244gets greater, for example, greater than 60 nm, the switch time of the magnetic layer244is increased. In order to reduce the switch time of the magnetic layer244, the length of the side304of the surface266of the magnetic layer244is reduced to be substantially less than that of the side302of the surface266of the magnetic layer244. With the reduced side304, the portion of the magnetic layer244near the second side304is smaller than the corresponding portion of a conventional magnetic layer having a cuboid shape. With a smaller portion near the side304of the magnetic layer244, the amount of magnetic material to be switched is reduced, leading to a decreased switching time of the magnetic layer244. FIG.4Ais a perspective view of the magnetic layer244ofFIG.3Aaccording to one embodiment. As shown inFIG.4A, the magnetic layer244includes the surface260at the MFS212(FIG.2A), the surface262opposite the surface260, the surface264facing the main pole220(FIG.2A), and the surface266facing the trailing shield240(FIG.2A). The surface266includes the side302at the MFS212(FIG.2A), the side304opposite the side302, the side306connecting the side302and side304, and the side308opposite the side306. The third side306forms the angle θ2with respect to the central axis310of the surface266, and the fourth side308forms the angle θ1with respect to the central axis310. The surface260includes the side302, a side408opposite the side302, a side410connecting the side302and the side408, and a side412opposite the side410. The side302and the side408may be substantially equal in length and substantially parallel.
The side410and the side412may be substantially equal in length and substantially parallel. In one embodiment, as shown inFIG.4C, the surface260has a rectangular shape. As shown inFIG.4B, the surface262includes the side304, a side402opposite the side304, a side404connecting the side304and the side402, and a side406opposite the side404. The side304and the side402may be substantially equal in length and substantially parallel. The side404and the side406may be substantially equal in length and substantially parallel. In one embodiment, as shown inFIG.4B, the surface262has a rectangular shape. The area of the surface262is substantially less than the area of the surface260. With the reduced area of the surface262, the portion of the magnetic layer244near the surface262is smaller than the conventional magnetic layer having a cuboid shape. With a smaller portion near the surface262of the magnetic layer244, the amount of magnetic material to be switched is reduced, leading to a decreased switching time of the magnetic layer244. FIGS.5A-5Care schematic illustrations of the shape of the surface266(or the surface264) of the magnetic layer244ofFIG.3Aaccording to embodiments. As shown inFIG.5A, the surface266includes the first side302, the second side304opposite the first side, a third side502connected to the second side304, a fourth side504opposite the third side502, a fifth side506connected to the third side502, and a sixth side508opposite the fifth side506. The length of the second side304is substantially less than the length of the first side302. The third side502forms an angle θ3with respect to the central axis310of the surface266, and the fourth side504forms an angle θ4with respect to the central axis310of the surface266. The angle θ3ranges from about 5 degrees to about 60 degrees, and the angle θ2ranges from about 5 degrees to about 60 degrees. As shown inFIG.5A, the surface266is substantially symmetric with respect to the central axis310. The surface264of the magnetic layer244may have the same shape as the surface266and may be substantially parallel to the surface266. FIG.5Bschematically illustrates the surface266being substantially asymmetric with respect to the central axis310. In this embodiment, the central axis310extends through the center of the second side304, but may not extend through the center of the first side302since the shape of the surface266is asymmetric with respect to the central axis310. The side306is substantially parallel to the central axis310, and the side308forms an angle θ5with respect to the central axis310. The length of the side304is substantially less than the length of the side302, as shown inFIG.5B. The surface264of the magnetic layer244may have the same shape as the surface266and may be substantially parallel to the surface266. The surface266shown inFIG.5Bhas a right trapezoidal shape. FIG.5Cschematically illustrates the surface266having a triangular shape. As shown inFIG.5C, the surface266includes the side302, a point510opposite the side302, the side306connected to the point510, and the side308opposite the side306. The side306forms an angle θ6with respect to the central axis310of the surface266, and the side308forms an angle θ7with respect to the central axis310of the surface266. The angle θ6ranges from about 5 degrees to about 30 degrees, and the angle θ7ranges from about 5 degrees to about 30 degrees. In one embodiment, the surface266is substantially symmetric with respect to the central axis310, and the angles θ6and θ7are substantially the same. 
In another embodiment, the surface266is substantially asymmetric with respect to the central axis310, and the angles θ6and θ7are substantially different. As shown inFIG.5C, the surface266is substantially symmetric with respect to the central axis310. The surface264of the magnetic layer244may have the same shape as the surface266and may be substantially parallel to the surface266. FIGS.6A and6Bare schematic illustrations of shapes of a surface601of a magnetic layer600that lead to increased switching time. As shown inFIG.6A, the surface601of the magnetic layer600has a first side602at the MFS212(FIG.2A) and a second side604opposite the first side602. The length of the second side604is substantially greater than the length of the first side602. The switching time of the magnetic layer600is greater than the switching time of the magnetic layer244. As shown inFIG.6B, the length of the second side604of the surface601of the magnetic layer600is substantially greater than the length of the first side602of the surface601of the magnetic layer600. Similarly, the switching time of the magnetic layer600is greater than the switching time of the magnetic layer244. FIG.7is a perspective view of the MAMR stack230of the write head ofFIG.2Aaccording to one embodiment. In one embodiment, the seed layer242and the spacer layer246have the same shape as the magnetic layer244(FIG.2B), and the shape of the MAMR stack230may be similar to the shape of the magnetic layer244, as shown inFIG.7. The MAMR stack230includes a first surface702at the MFS212(FIG.2A), a second surface704opposite the first surface702, a third surface706connecting the first surface702and the second surface704, and a fourth surface708opposite the third surface706. The third surface706may face the main pole220, and the fourth surface708may face the trailing shield240(FIG.2A). The area of the surface704is substantially less than the area of the surface702. The surface706includes a first side710at the MFS212(FIG.2A), a second side712opposite the first side710, a third side714connecting the first side710and the second side712, and a fourth side716opposite the third side714. The length of the first side710may be substantially the same as the length L1of the first side302(FIG.3A), and the length of the second side712may be substantially the same as the length L2of the second side304(FIG.3A). The third side714of the surface706of the MAMR stack230forms an angle θ8with respect to a central axis718of the surface706. The fourth side716of the surface706of the MAMR stack230forms an angle θ9with respect to the central axis718of the surface706. The angle θ8may be substantially the same as the angle θ1, and the angle θ9may be substantially the same as the angle θ2. As shown inFIG.7, the surface708of the MAMR stack230may have the same shape as the surface706and may be substantially parallel to the surface706. By reducing the area of the magnetic layer in a MAMR stack at the location recessed from the MFS, the amount of magnetic material of the magnetic layer to be switched at the location recessed from the MFS is reduced. With the reduced magnetic material, the switching time of the magnetic layer is reduced compared to the switching time of a conventional magnetic layer having a cuboid shape. While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
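To make the benefit of the tapered shape described above concrete, the following Python sketch compares the area of the trapezoidal surface (first side of length L1 at the MFS, second side of length L2 recessed by the stripe height SH) with the rectangular face of a conventional cuboid layer having the same L1 and SH. The specific numbers (L1 = 40 nm, SH = 100 nm) are hypothetical values chosen only to fall within the ranges given in the description; the point is how shrinking L2 reduces the magnetic material that must be switched near the recessed side.

```python
import math

# Hypothetical dimensions chosen within the ranges described above:
# stripe height SH of about 60-120 nm and L2 of about 1% to 90% of L1.
L1 = 40.0   # nm, length of the first side at the media facing surface
SH = 100.0  # nm, stripe height (depth recessed from the MFS)

def face_area(l1, l2, sh):
    """Area of the surface facing the trailing shield (trapezoid; rectangle if l2 == l1)."""
    return 0.5 * (l1 + l2) * sh

cuboid_area = face_area(L1, L1, SH)  # conventional cuboid reference
for frac in (0.9, 0.5, 0.1, 0.0):    # L2 as a fraction of L1 (0.0 -> triangular shape)
    L2 = frac * L1
    area = face_area(L1, L2, SH)
    # Taper angle of each tapered side for a symmetric trapezoid.
    theta = math.degrees(math.atan((L1 - L2) / (2.0 * SH)))
    print(f"L2 = {frac:>4.0%} of L1: area = {area:6.0f} nm^2 "
          f"({area / cuboid_area:.0%} of the cuboid face), taper angle ~{theta:.1f} deg")
```

With these assumed dimensions the taper angles come out between roughly 1 and 12 degrees, which is within the zero-to-20-degree range stated for the angles θ1 and θ2, while the face area, and hence the volume of magnetic material near the recessed side, drops well below that of the cuboid reference.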
21,059
11862208
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Magnetic reader embodiments described below relate to non-local spin valve (NLSV) sensors or readers that include one or more spin injectors (sometimes simply referred to herein an injector or injectors), a detector and a channel layer substantially extending from the spin injector(s) to the detector. The spin injector(s) inject electron spins into the channel layer. The spins are diffused down the channel layer to the detector. In some embodiments, the channel layer and the detector are substantially in a same plane to provide a large reduction in shield-to-shield spacing in the reader. Prior to providing additional details regarding the different embodiments, a description of an illustrative operating environment is provided below. FIG.1shows an illustrative operating environment in which certain specific embodiments disclosed herein may be incorporated. The operating environment shown inFIG.1is for illustration purposes only. Embodiments of the present disclosure are not limited to any particular operating environment such as the operating environment shown inFIG.1. Embodiments of the present disclosure are illustratively practiced within any number of different types of operating environments. It should be noted that like reference numerals are used in different figures for same or similar elements. It should also be understood that the terminology used herein is for the purpose of describing embodiments, and the terminology is not intended to be limiting. Unless indicated otherwise, ordinal numbers (e.g., first, second, third, etc.) are used to distinguish or identify different elements or steps in a group of elements or steps, and do not supply a serial or numerical limitation on the elements or steps of the embodiments thereof. For example, “first,” “second,” and “third” elements or steps need not necessarily appear in that order, and the embodiments thereof need not necessarily be limited to three elements or steps. It should also be understood that, unless indicated otherwise, any labels such as “left,” “right,” “front,” “back,” “top,” “bottom,” “forward,” “reverse,” “clockwise,” “counter clockwise,” “up,” “down,” or other similar terms such as “upper,” “lower,” “aft,” “fore,” “vertical,” “horizontal,” “proximal,” “distal,” “intermediate” and the like are used for convenience and are not intended to imply, for example, any particular fixed location, orientation, or direction. Instead, such labels are used to reflect, for example, relative location, orientation, or directions. It should also be understood that the singular forms of “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. It will be understood that, when an element is referred to as being “connected,” “coupled,” or “attached” to another element, it can be directly connected, coupled or attached to the other element, or it can be indirectly connected, coupled, or attached to the other element where intervening or intermediate elements may be present. In contrast, if an element is referred to as being “directly connected,” “directly coupled” or “directly attached” to another element, there are no intervening elements present. Drawings illustrating direct connections, couplings or attachments between elements also include embodiments, in which the elements are indirectly connected, coupled or attached to each other. 
FIG.1is a schematic illustration of a data storage device100including a data storage medium and a head for reading data from and/or writing data to the data storage medium. Data storage device100may be characterized as a hard disc drive (HDD). In data storage device100, head102is positioned above storage medium104to read data from and/or write data to the data storage medium104. In the embodiment shown, the data storage medium104is a rotatable disc or other magnetic storage medium that includes a magnetic storage layer or layers. For read and write operations, a spindle motor106(illustrated schematically) rotates the medium104as illustrated by arrow107and an actuator mechanism110positions the head102relative to data tracks114on the rotating medium104between an inner diameter108and an outer diameter109. Both the spindle motor106and actuator mechanism110are connected to and operated through drive circuitry112(schematically shown). The head102is coupled to the actuator mechanism110through a suspension assembly which includes a load beam120connected to an actuator arm122of the mechanism110for example through a swage connection. AlthoughFIG.1illustrates a single load beam120coupled to the actuator mechanism110, additional load beams120and heads102can be coupled to the actuator mechanism110to read data from or write data to multiple discs of a disc stack. The actuator mechanism110is rotationally coupled to a frame or deck (not shown) through a bearing124to rotate about axis126. Rotation of the actuator mechanism110moves the head102in a cross track direction as illustrated by arrow130. The head102includes one or more transducer elements (not shown inFIG.1) coupled to head circuitry132through flex circuit134. Details regarding elements of a head such as102are provided below in connection withFIG.2. FIG.2is a schematic diagram showing a cross-sectional view of portions of a recording head200and a data storage medium250taken along a plane substantially normal to a plane of a bearing surface (for example, an air bearing surface (ABS))202of recording head200. The recording head elements shown inFIG.2are illustratively included in a recording head such as recording head102ofFIG.1. Medium250is illustratively a data storage medium such as medium104inFIG.1. Those skilled in the art will recognize that recording heads and recording media commonly include other components. Embodiments of the present disclosure are not limited to any particular recording heads or media. Embodiments of the present disclosure may be practiced in different types of recording heads and media. Recording head200includes a write pole205, a magnetization coil210, a return pole215, a top shield218, a read transducer220, a bottom shield222and a wafer overcoat236. Storage medium250includes a recording layer255and an underlayer260. Storage medium250rotates in the direction shown by arrow265. Arrow265is illustratively a direction of rotation such as arrow107inFIG.1. In an embodiment, electric current is passed through coil210to generate a magnetic field. The magnetic field passes from write pole205, through recording layer255, into underlayer260, and across to return pole215. The magnetic field illustratively records a magnetization pattern270in recording layer255. Read transducer220senses or detects magnetization patterns in recording layer255, and is used in retrieving information previously recorded to layer255. In the embodiment shown inFIG.2, read transducer220is a NLSV sensor. 
NLSV sensor220includes a spin injector224, a detector226and a channel layer228. Top shield218and bottom shield222may also be considered to be a part of the NLSV sensor220. The spin injector224may include an electrically conductive, magnetic layer (not separately shown) that has a magnetization that is pinned in a direction (preferably perpendicular to the bearing surface202). Pinning of the magnetization of the pinned magnetic layer may be achieved by, for example, exchange coupling with a layer of anti-ferromagnetic material (not separately shown). Also, in some embodiments, a synthetic antiferromagnetic (SAF) structure may be utilized for the spin injector224. The detector226may include a magnetic, electrically conductive layer having a magnetization that is free to move in response to a magnetic field, and can therefore be referred to herein as a free layer (FL). Injector224and detector226may each be separated from channel layer228by a thin electrically insulating tunnel barrier layer238A,238B, respectively. The portion of NLSV sensor220proximate to the bearing surface202does not include relatively thick SAF and antiferromagnetic (AFM) stacks that are typically present in, for example, current perpendicular-to-plane (CPP) Tunnel Junction Magnetoresistive (TMR) sensors. Further, unlike conventional NLSV sensors in which both the injector and the detector are each on the top or the bottom of the channel layer, in NLSV sensor220, detector226is positioned in a same plane as channel layer228. The position of detector226in the same plane as channel layer228yields a spacing between top shield218and bottom shield222, denoted by SSS (shield-to-shield spacing), that is greater than the thickness of channel layer228by only approximately the thickness of an insulation layer235A that separates bottom shield222from detector226. Insulation layer235A is included to prevent shorting between detector226and channel layer228. An insulation layer235B separates top shield218from channel layer228. Electrical connector/contact227A may be provided between top shield218and detector226, and electrical connector/contact227B may be provided between bottom shield222and channel layer228. For allowing a detection current to flow to detector226, spin injector224and channel layer228are connected to a current source (not shown inFIG.2) via terminals240and242, respectively. Detector226and channel layer228are connected to a suitable voltage measuring device (not shown inFIG.2) via terminals244and246, respectively. First, the current from the current source is made to flow through the spin injector224and through a portion of the channel layer228. This flow of current causes electron spins to accumulate in channel layer228, which then diffuse through the channel layer228to the detector226. When the spins are transported to the detector226, an electric potential difference, which varies depending upon the magnetization of the detector226(which responds to an external magnetic field), appears between the detector226and the channel layer228(e.g., across barrier layer238B). The voltage measuring device detects the electric potential difference appearing between the detector226and the channel layer228. In this manner, the NLSV sensor220can be applied as an external magnetic field sensor for detecting bits stored on a magnetic data storage medium such as250. Different NLSV sensor embodiments are described below in connection withFIGS.3A-9B.
FIG.3Ais a bearing surface view of a NLSV sensor300in accordance with one embodiment.FIG.3Bis a side view of NLSV sensor300. Most elements of NLSV sensor300are substantially similar to the elements of NLSV sensor220ofFIG.2described above. Therefore, in the interest of brevity, a description of the substantially similar elements is not repeated in connection withFIGS.3A and3B. As can be seen inFIG.3B, spin injector324is a multi-layered structure that includes a SAF structure. Accordingly, spin injector324includes a pinning layer (e.g., an antiferromagnetic layer)326, a pinned layer328, a thin separation layer330, which may comprise a metal such as ruthenium (Ru) in some embodiments, and a reference layer332. The magnetic moments of the pinned layer328and the reference layer332are generally oriented normal to the bearing surface202and anti-parallel to each other. Spin injector324and channel layer228are connected to a current source302via terminals240and242, respectively. Detector226and channel layer228are electrically connected to a voltage measuring device304via terminals244and246, respectively. NLSV sensor300operates in a manner similar to NLSV sensor220described above in connection withFIG.2. FIG.4Ais a side view of a NLSV sensor400in accordance with one embodiment.FIG.4Bis a top view of NLSV sensor400. NLSV sensor400includes a spin injector424, a detector426and a channel layer428that extends from injector424to detector426. In the embodiment ofFIGS.4A and4B, detector426is above channel layer428. In an alternate embodiment, detector426may be below channel layer428. Tunnel barrier layer438A is included between injector424and channel layer428, and tunnel barrier438B is included between detector426and channel layer428. To mitigate the resistance that arises due to the inclusion of tunnel barrier438A, the spin injector may be made larger. Thus, as can be seen inFIG.4B, spin injector424is a relatively large area spin injector (e.g., substantially wider than detector426). The relatively large area spin injector424is employed to increase the number of injected spins and leverage a benefit of the tunnel junction of the injector424, thereby enhancing spin-selectivity and the spin-polarized current injected into the NLSV channel while simultaneously avoiding elevated resistance from the tunnel junction. The top view of NLSV sensor400is included inFIG.4Bto show the size of the large-area injector424compared to the detector426. As can be seen inFIG.4B, channel layer428includes a paddle region404, a flare region406, and a tip region408. Paddle region404illustratively has a width410, and tip region408illustratively has a width412. Flare region406has a first side414and a second side416that is not parallel to side414. Sides414and416start being spaced apart by width410and come closer together until they are spaced apart by width412(smaller than width410) as the sides meet tip region408. Flare region406therefore includes two sides414and416that are tapered going from paddle region404to tip region408. In some embodiments, width410may range from tens of nanometers to the micron scale. Also, in such embodiments, width412may be tens of nanometers or less. It should be noted that the dimensions of widths410and412are not limited to the examples provided herein and any suitable width dimensions may be used in different embodiments without departing from the scope of the disclosure. In the embodiment shown inFIGS.4A and4B, a geometry of injector424and tunnel barrier438A corresponds to a geometry of the paddle region404of channel layer428.
However, elements424,428and438A may be of any suitable shape and the shapes of these elements are not limited to the shapes shown inFIG.4B. FIG.5Ais a side view of a NLSV sensor500in accordance with one embodiment.FIG.5Bis a top view of NLSV sensor500. Most elements of NLSV sensor500are substantially similar to the elements of NLSV sensor400ofFIGS.4A and4Bdescribed above. Therefore, in the interest of brevity, a description of the substantially similar elements is not repeated in connection withFIGS.5A and5B. In NLSV sensor500, channel layer528does not include a tip region such as408in channel layer428ofFIGS.4A and4B. Instead, detector426is substantially coplanar with channel layer528, thereby reducing SSS in a manner shown and described above in connection withFIG.2. FIGS.6A and6Bare front and side views, respectively, of a 3-terminal NLSV sensor600in accordance with one embodiment. NLSV sensor600is substantially similar to NLSV sensor300ofFIGS.3A and3B. However, in NLSV sensor600, no terminal is connected to bottom shield222. Instead, a terminal602, coupled to channel layer228, serves as a common terminal to which both current source302and voltage measuring device304are connected. FIGS.7A and7Bare front and side views, respectively, of a 4-terminal NLSV sensor700in accordance with one embodiment. In addition to a first spin injector324of the type shown inFIG.3B, NLSV sensor700includes a second spin injector324′. As can be seen inFIG.7B, spin injector324′ is a multi-layered structure that includes a SAF structure. Accordingly, spin injector324′ includes a pinning layer (e.g., an antiferromagnetic layer)326′, a pinned layer328′, a thin separation layer330′, which may comprise a metal such as ruthenium (Ru) in some embodiments, and a reference layer332′. The magnetic moments of the pinned layer328′ and the reference layer332′ are generally oriented normal to the bearing surface202and anti-parallel to each other, with the magnetization of reference layer332′ oriented opposite to that of reference layer332. In the embodiment shown inFIGS.7A and7B, spin injectors324and324′ are connected to current source302via terminals702and704, respectively, and voltage measuring device304is connected to detector226and channel layer228via terminals706and708, respectively. In some embodiments, one or both of injectors324and324′ may be large area injectors, and the channel layer may have a geometry similar to channel layer428ofFIGS.4A and4B. FIG.8Ais a bearing surface view of a multi-NLSV or multi-sensor magnetic recording (MSMR) reader800in accordance with one embodiment.FIG.8Bis a side view of multi-NLSV reader800. As indicated earlier in connection with the description ofFIGS.2,3A and3B, NLSV sensors such as220and300have a narrow SSS proximate to a bearing surface such as202. Therefore, such a sensor is a suitable reader design to implement in a multi-sensor configuration where two or more NLSV sensors are stacked on top of each other within a single recording head. One example of a dual-sensor configuration is shown inFIGS.8A and8B, which are front and side views, respectively, of MSMR reader800. The embodiment of reader800inFIGS.8A and8Bincludes a top shield218, a bottom shield222, a middle shield802and NLSV sensors300A and300B interposed between top shield218and bottom shield222. NLSV sensor300A includes an injector324A, a detector226A, and a channel228A in a same plane as detector226A. Similarly, NLSV sensor300B includes an injector324B, a detector226B and a channel228B in a same plane as detector226B.
Isolation layers334A and334B are included on respective upper and lower sides of middle shield802. Elements327A and327B are electrical connectors/contacts. For allowing a detection current to flow to detector226A, spin injector324A and channel layer228A are connected to a first current source302A via terminals804A and806A, respectively. Detector226A and channel layer228A are connected to a first voltage measuring device304A via terminals808A and810A, respectively. Similarly, for allowing a detection current to flow to detector226B, spin injector324B and channel layer228B are connected to a second current source302B via terminals804B and806B, respectively. Detector226B and channel layer228B are connected to a second voltage measuring device304B via terminals808B and810B, respectively. Layers326A and326B are pinning layers (e.g., an antiferromagnetic layers), layers328A and328B are pinned layers, layers330A and330B are thin separation layers, and layers332A and332B are reference layers. FIG.9Ais a bearing surface view of a multi-NLSV or MSMR reader900in accordance with one embodiment.FIG.9Bis a side view of multi-NLSV reader900. Most elements of multi-NLSV reader900are substantially similar to the elements of multi-NLSV reader800ofFIGS.8A and8Bdescribed above. Therefore, in the interest of brevity, a description of the substantially similar elements is not repeated in connection withFIGS.9A and9B. In MSMR reader900, channel layers228A and228B are electrically coupled to middle shield802by electrical connectors/contacts902A and902B, respectively. This enables first and second voltage measuring devices304A and304B to be connected to a same terminal910. Remaining terminal (904A,906A,904B,906B,908A and908B) connections are similar to those described above in connection withFIGS.8A and8B. Elements334A,334B,335A and335B are insulators, and elements327A and327B are electrical connectors/contacts. In the multi-sensor configurations, FL-to-FL separation distances812and912are shown inFIGS.8A and9A, respectively. Reducing the FL-to-FL separation enables a multi-sensor reader to be implemented in a high linear density drive and across a wide skew range. Substantially high FL-to-FL separation reduction may be achieved by implementing NLSV-based magnetic readers with channels and detectors in a same plane because, as noted above, they eliminate the thicknesses of SAF and AFM stacks at the bearing surface that are typically present in, for example, CPP TMR readers. Additionally, the relatively thin and uniform mid-shield802ofFIGS.8B and9Bmay result in a further reduction in FL-to-FL separation. It should be noted thatFIGS.8A-9Bare illustrative embodiments of multi-sensor readers and, in other embodiments, more than two sensors may be employed. It is generally understood that the NLSV signal can be increased by the use of high RA (product of resistance and area) insulators at the interface between the injector-channel (e.g.,238A ofFIG.2) and detector-channel (e.g.,238B ofFIG.2). In practice, for HDD readers, there are constraints which entail the use of a novel, practical approach to increase the NLSV signal. 
Reasons for such constraints, examples of the constraints, and practical design approaches in view of the constraints are described below.

The detector voltage signal (e.g., the signal measured by voltage measuring device304ofFIG.3B), Vs, is defined as the non-local voltage given by the resistance signal multiplied by the injector current, Rs×I. A general expression for Rs for a one-dimensional (1-D) case is shown in Equation 1 below:

R_s = \frac{4 R_N \left( \frac{P_1}{1-P_1^2}\frac{R_1}{R_N} + \frac{p_F}{1-p_F^2}\frac{R_F}{R_N} \right) \left( \frac{P_2}{1-P_2^2}\frac{R_2}{R_N} + \frac{p_F}{1-p_F^2}\frac{R_F}{R_N} \right) e^{-L/l_N}}{\left( 1 + \frac{2}{1-P_1^2}\frac{R_1}{R_N} + \frac{2}{1-p_F^2}\frac{R_F}{R_N} \right) \left( 1 + \frac{2}{1-P_2^2}\frac{R_2}{R_N} + \frac{2}{1-p_F^2}\frac{R_F}{R_N} \right) - e^{-2L/l_N}}    (Equation 1)

where Rs is the signal resistance, RN is the spin accumulation resistance of the normal metal channel, RF is the spin accumulation resistance of the ferromagnetic electrodes, R1 is the resistance of the injector-channel interface, R2 is the resistance of the detector-channel interface, P1 is the injector-channel interfacial spin polarization, P2 is the detector-channel interfacial spin polarization, pF is the spin polarization of the ferromagnetic injector and detector, L is the lateral separation between the injector and the detector, and lN is the spin diffusion length in the normal metal channel.

For conditions where the injector interface resistance (R1) and the detector interface resistance (R2) are either both high relative to the channel spin accumulation resistance (RN) or both low relative to the electrode spin accumulation resistance (RF), the signal resistance can be simply expressed as:

Rs for high injector and detector RA (R1, R2 >> RN):
R_S = R_N P_1 P_2 e^{-L/l_N}    (Equation 2)

Rs for low injector and detector RA (R1, R2 << RF):
R_S = R_N \frac{4 p_F^2}{(1-p_F^2)^2} \frac{R_1 R_2}{R_N^2} \frac{e^{-L/l_N}}{1-e^{-2L/l_N}}    (Equation 3)

In many practical applications there are constraints, and RN is similar to R1 and R2. Examples of constraints are included below.
- Injector electrical reliability is limited by dielectric breakdown (e.g., of the material of barrier238A ofFIG.2), so Vbias (the voltage used to produce the injector current from the current source (e.g.,302ofFIG.3B)) may be maintained below a predetermined Vbias value (e.g., ~150 millivolts (mV) for MgO).
- Injector magnetic stability is limited by spin momentum transfer (SMT) at the injector-channel junction, so the junction current density may be maintained below a predetermined junction current density, Jlimit (e.g., ~1e8 amperes/square centimeter (A/cm2)).
- Injector heating may produce stray signals due to thermal spin injection, so the injector power may be maintained below a predetermined power limit, Plimit.

Although a goal may be to increase the injector current for the design, the above-noted constraints mean that the injector current and current density may not be increased without limit. Further, although the detector signal increases with the detector-channel junction resistance (e.g., the RA of layer238B ofFIG.2), it may not exceed an acceptable preamplifier impedance for high-frequency operation. Accordingly, one approach for tuning for high Vs is provided below.
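Before turning to that approach, the following minimal Python sketch evaluates Equation 1 and its two limiting forms (Equations 2 and 3) exactly as written above. The numerical inputs at the bottom are placeholder values chosen only for illustration; they are not values taken from the disclosure.

```python
import math

def rs_general(R_N, R_F, R1, R2, P1, P2, p_F, L, l_N):
    """Signal resistance Rs from Equation 1 (1-D non-local spin valve model)."""
    term1 = (P1 / (1 - P1**2)) * (R1 / R_N) + (p_F / (1 - p_F**2)) * (R_F / R_N)
    term2 = (P2 / (1 - P2**2)) * (R2 / R_N) + (p_F / (1 - p_F**2)) * (R_F / R_N)
    den1 = 1 + (2 / (1 - P1**2)) * (R1 / R_N) + (2 / (1 - p_F**2)) * (R_F / R_N)
    den2 = 1 + (2 / (1 - P2**2)) * (R2 / R_N) + (2 / (1 - p_F**2)) * (R_F / R_N)
    return 4 * R_N * term1 * term2 * math.exp(-L / l_N) / (den1 * den2 - math.exp(-2 * L / l_N))

def rs_high_ra(R_N, P1, P2, L, l_N):
    """Equation 2: R1, R2 >> RN (tunneling-dominated junctions)."""
    return R_N * P1 * P2 * math.exp(-L / l_N)

def rs_low_ra(R_N, R1, R2, p_F, L, l_N):
    """Equation 3: R1, R2 << RF (transparent junctions)."""
    return (R_N * (4 * p_F**2 / (1 - p_F**2)**2) * (R1 * R2 / R_N**2)
            * math.exp(-L / l_N) / (1 - math.exp(-2 * L / l_N)))

# Placeholder inputs (resistances in ohms, lengths in nanometers) purely for illustration.
print(rs_general(R_N=5.0, R_F=0.5, R1=50.0, R2=50.0,
                 P1=0.4, P2=0.4, p_F=0.3, L=200.0, l_N=200.0))
print(rs_high_ra(R_N=5.0, P1=0.4, P2=0.4, L=200.0, l_N=200.0))
```

The three helpers mirror Equations 1 through 3 one-to-one, so they can be used to check in which regime a given combination of junction resistances and channel spin resistance falls.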
An example design approach to tune for high Vs includes:

1) Injector (goal: increase the injected spin current while maintaining good stability (<= Jlimit) and limited heating (<= injector Plimit)):
- Set the injector junction bias to a predetermined Vbias value/limit.
- Increase the injector-channel junction spacer area or injector-channel barrier area as practical for design fabrication.
- Select the junction spacer or injector-channel barrier (materials, thickness) to reduce the injector junction RA with operating current density <= Jlimit and power (V2/R) < Plimit.
It should be noted that the resistance of the injector itself is generally insignificant compared to the resistance of the interface layer (e.g., barrier layer) between the injector and the channel layer. Thus, when injector junction resistance is used herein, it is essentially the resistance of the barrier layer between the injector and the channel layer.

2) Detector (goal: increase the voltage detected at an acceptable impedance for the preamplifier):
- Set the detector area (width×height) for a predetermined cross-track resolution and stability.
- Select the detector-channel junction spacer or detector-channel barrier (materials, thickness) to increase the detector junction RA with the detector junction resistance kept below Rlimit (the detector junction resistance limit).
It should be noted that the resistance of the detector itself is generally insignificant compared to the resistance of the barrier layer between the detector and the channel layer. Thus, when detector junction resistance is used herein, it is essentially the resistance of the barrier layer between the detector and the channel layer.

3) Channel (goal: increase spin accumulation by tuning the normal channel spin resistance to the injector and detector spin resistances):
- Choose an injector-detector channel spacing less than the channel spin diffusion length and as practical for design fabrication.
- Select the channel (material, thickness, geometry) to increase the signal Vs with reduced shield-to-shield spacing.
It should be noted that, in practice, the channel spin diffusion length is dependent upon material, process, and thickness.

In one non-limiting example embodiment, the NLSV sensor is designed in the order listed above. The injector is defined first, the detector is engineered in view of the designed injector, and the channel layer is tuned based on the determined injector and detector parameters. FIGS.10A-10Care detector voltage signal (Vs) versus channel thickness graphs1000,1010and1020, respectively, for different injector junction/detector junction resistance-area (RA) product values. InFIG.10A, plot1002is for a 10 nanometer (nm) injector width, plot1004is for a 25 nm injector width, and plot1006is for a 100 nm injector width. All plots consider a channel length of 200 nm with a spin diffusion length of 200 nm. The following parameter values were employed to obtain the plots1002,1004and1006:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
For:
- Rho N (resistivity of the channel layer material) = 15 microohm·centimeter (μohm·cm)
- Injector junction RA1 = 0.01 ohm·micrometer squared (ohm·μm2) (Jlimit = 1e9 A/cm2)
- Detector junction RA2 = 1.0 ohm·μm2 (R (resistance of the detector junction) ~10,000 ohm)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Channel thickness = 0.01 to 10 nm

InFIG.10B, plot1012is for a 10 nm injector width, plot1014is for a 25 nm injector width, and plot1016is for a 100 nm injector width.
The following parameter values were employed to obtain the plots1012,1014and1016:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
For:
- Rho N = 15 μohm·cm
- Injector junction RA1 = 0.1 ohm·μm2 (Jlimit = 1e8 A/cm2)
- Detector junction RA2 = 1 ohm·μm2 (R ~10,000 ohm)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Channel thickness = 0.01 to 10 nm

InFIG.10C, plot1022is for a 10 nm injector width, plot1024is for a 25 nm injector width, and plot1026is for a 100 nm injector width. The following parameter values were employed to obtain the plots1022,1024and1026:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
For:
- Rho N = 15 μohm·cm
- Injector junction RA1 = 0.1 ohm·μm2 (Jlimit = 1e8 A/cm2)
- Detector junction RA2 = 0.1 ohm·μm2 (R ~1000 ohm)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Channel thickness = 0.01 to 10 nm

InFIGS.10A-10C, Vs increases by tuning the channel resistance (e.g., Rho N/thickness) with the injector and detector resistance. Also, Vs increases with detector junction area. FIGS.11A-11Care Vs versus channel thickness graphs1100,1110and1120, respectively, for different channel resistivity values. InFIG.11A, plot1102is for a 10 nm injector width, plot1104is for a 25 nm injector width, and plot1106is for a 100 nm injector width. All plots consider a channel length of 200 nm with a spin diffusion length of 200 nm. The following parameter values were employed to obtain the plots1102,1104and1106:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
It should be noted that, in practice, Rho N is determined by material, quality and thickness. Values considered below are similar to bulk and thin-film literature reports, plus a high-rho case.
For:
- Rho N = 3 μohm·cm (~bulk Cu)
- Injector junction RA1 = 0.1 ohm·μm2 (Jlimit = 1e8 A/cm2)
- Detector junction RA2 = 0.1 ohm·μm2 (R ~1000 ohm)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Channel thickness = 0.01 to 10 nm

InFIG.11B, plot1112is for a 10 nm injector width, plot1114is for a 25 nm injector width, and plot1116is for a 100 nm injector width. The following parameter values were employed to obtain the plots1112,1114and1116:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
For:
- Rho N = 15 μohm·cm (~thin-film Cu)
- Injector junction RA1 = 0.1 ohm·μm2 (Jlimit = 1e8 A/cm2)
- Detector junction RA2 = 0.1 ohm·μm2 (R ~1000 ohm)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Channel thickness = 0.01 to 10 nm

InFIG.11C, plot1122is for a 10 nm injector width, plot1124is for a 25 nm injector width, and plot1126is for a 100 nm injector width. The following parameter values were employed to obtain the plots1122,1124and1126:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
For:
- Rho N = 75 μohm·cm (~high rho for a very thin film of Cu)
- Injector junction RA = 0.1 ohm·μm2 (Jlimit = 1e8 A/cm2)
- Detector junction RA = 0.1 ohm·μm2 (R ~1000 ohm)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Channel thickness = 0.01 to 10 nm

InFIGS.11A-11C, Vs increases by tuning the channel resistance (e.g., thickness) with the injector and detector resistance. Also, Vs increases with detector junction area. FIGS.12A-12Care detector Vs versus detector junction RA graphs1200,1210and1220, respectively, for different channel thickness values. InFIG.12A, which includes plots for a channel thickness of 10 nm, plot1202is for a 10 nm injector width, plot1204is for a 25 nm injector width, and plot1206is for a 100 nm injector width. InFIG.12B, which includes plots for a channel thickness of 5 nm, plot1212is for a 10 nm injector width, plot1214is for a 25 nm injector width, and plot1216is for a 100 nm injector width.
InFIG.12C, which includes plots for a channel thickness of 1 nm, plot1222is for a 10 nm injector width, plot1224is for a 25 nm injector width, and plot1226is for a 100 nm injector width. All plots consider a channel length of 200 nm with a spin diffusion length of 200 nm. The following parameter values were employed to obtain the plots1202-1226:
- Injector Vbias = 100 mV
- Detector width×height = 10 nm×10 nm
For:
- Rho N = 15 μohm·cm
- Injector RA1 = 0.1 ohm·μm2 (Jlimit = 1e8 A/cm2)
Vs:
- Injector area (width2) = 100 nm2, 625 nm2, 10000 nm2
- Detector junction RA2 = 0.01 to 10 ohm·μm2

InFIGS.12A-12C, Vs is tuned by increasing the detector RA and the injector area. The preferred design can be chosen to satisfy the selected detector resistance (e.g., detector resistance = detector RA/detector area). FIG.13is a flow diagram of a method1300in accordance with one embodiment. The method includes, at1302, selecting first design parameter values for a spin injector and for a first interface resistance between the spin injector and a channel layer. The method also includes, at1304, selecting second design parameter values for a detector and for a second interface resistance between the detector and the channel layer. The method further includes, at1306, selecting third design parameter values for the channel layer such that the third design parameter values comport with the first design parameter values and the second design parameter values. In one embodiment, the selection of the third design parameter values includes measuring different detector-channel voltage values for different thickness values for the channel layer between a predetermined low thickness value (e.g., 0.01 nm) and a predetermined high thickness value (e.g., 10 nm). In this embodiment, the method also includes selecting a thickness value of the different thickness values that provides the highest detector-channel voltage value of the measured different detector-channel voltage values. It should be noted that most of the above-described embodiments are shown with barrier layers between the injector and the channel and between the detector and the channel. However, in some embodiments, the injector-channel interface/junction itself and/or the detector-channel interface/junction itself may have resistance values that are suitable for the NLSV sensor, and therefore such embodiments may not employ barrier layers. Although various uses of the NLSV sensors are disclosed in the present disclosure, embodiments are not limited to the particular applications or uses disclosed in the disclosure. It is to be understood that even though numerous characteristics and advantages of various embodiments of the disclosure have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the disclosure, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts within the principles of the present disclosure to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. For example, the particular elements may vary depending on the particular application for the NLSV sensor while maintaining substantially the same functionality without departing from the scope and spirit of the present disclosure.
In addition, although the preferred embodiment described herein is directed to a particular type of NLSV sensor utilized in a particular data storage system, it will be appreciated by those skilled in the art that the teachings of the present disclosure can be applied to other data storage devices without departing from the scope and spirit of the present disclosure.
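Pulling together the design constraints, the figure parameters, and the thickness sweep of method1300 described above, the following Python sketch shows the kind of feasibility check and channel-thickness selection the approach implies. The helper relations used here (junction current density as Vbias/RA, junction resistance as RA divided by junction area, and the channel spin resistance modeled as resistivity times spin diffusion length divided by the channel cross-section) are standard engineering estimates introduced as assumptions for illustration, not equations quoted from the disclosure; likewise the polarizations, electrode spin resistance, and channel width below are assumed values. The limits mirror the example figures given above (Jlimit of 1e8 A/cm2, detector junction resistance of roughly 10,000 ohm).

```python
import math

# Values quoted in the description of FIGS. 10A-12C.
V_BIAS = 0.100            # V, injector junction bias
RA_INJ = 0.1e-12          # ohm*m^2  (0.1 ohm*um^2)
RA_DET = 1.0e-12          # ohm*m^2  (1.0 ohm*um^2)
INJ_AREA = 625e-18        # m^2      (25 nm x 25 nm injector)
DET_AREA = 100e-18        # m^2      (10 nm x 10 nm detector)
RHO_N = 15e-8             # ohm*m    (15 uohm*cm channel resistivity)
L = 200e-9                # m, injector-detector separation
L_N = 200e-9              # m, channel spin diffusion length
J_LIMIT = 1e12            # A/m^2    (1e8 A/cm^2)

# Assumed values, not taken from the disclosure (illustration only).
P1 = P2 = 0.4             # interfacial spin polarizations
P_F = 0.3                 # electrode spin polarization
R_F = 1.0                 # ohm, electrode spin resistance
CH_WIDTH = 10e-9          # m, assumed channel width

def rs_eq1(R_N, R1, R2):
    """Equation 1 (as reconstructed above) for the non-local signal resistance."""
    t1 = (P1 / (1 - P1**2)) * (R1 / R_N) + (P_F / (1 - P_F**2)) * (R_F / R_N)
    t2 = (P2 / (1 - P2**2)) * (R2 / R_N) + (P_F / (1 - P_F**2)) * (R_F / R_N)
    d1 = 1 + (2 / (1 - P1**2)) * (R1 / R_N) + (2 / (1 - P_F**2)) * (R_F / R_N)
    d2 = 1 + (2 / (1 - P2**2)) * (R2 / R_N) + (2 / (1 - P_F**2)) * (R_F / R_N)
    return 4 * R_N * t1 * t2 * math.exp(-L / L_N) / (d1 * d2 - math.exp(-2 * L / L_N))

# Step 1: injector feasibility (current density and power under the stated limits).
j_inj = V_BIAS / RA_INJ                     # A/m^2; 100 mV over 0.1 ohm*um^2 gives ~1e8 A/cm^2
i_inj = j_inj * INJ_AREA                    # injector current, A
print(f"J = {j_inj / 1e4:.1e} A/cm^2 (limit {J_LIMIT / 1e4:.0e}), I = {i_inj * 1e3:.2f} mA, "
      f"power = {V_BIAS * i_inj * 1e6:.0f} uW, J within limit: {j_inj <= J_LIMIT}")

# Step 2: detector junction resistance seen by the preamplifier.
r_det = RA_DET / DET_AREA                   # ohm; ~10 kohm for RA = 1 ohm*um^2 on a 10x10 nm junction
print(f"Detector junction resistance ~ {r_det:.0f} ohm")

# Step 3: sweep the channel thickness and pick the value giving the largest Vs = Rs * I,
# modeling the channel spin resistance as R_N = rho * l_N / (thickness * width) (assumption).
r1 = RA_INJ / INJ_AREA
best = max(
    (rs_eq1(RHO_N * L_N / (t * CH_WIDTH), r1, r_det) * i_inj, t)
    for t in (x * 0.5e-9 for x in range(1, 21))  # 0.5 nm to 10 nm
)
print(f"Best channel thickness ~ {best[1] * 1e9:.1f} nm, model Vs ~ {best[0] * 1e6:.1f} uV")
```

The printed voltage is only as good as the assumed polarizations and channel model, but the sketch reproduces the ordering of the steps described above: fix the injector bias against the current-density and power limits, set the detector junction resistance for the preamplifier, and then sweep the channel thickness to find the value that maximizes the detector-channel voltage.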
34,034
11862209
DETAILED DESCRIPTION A load beam is described herein. The load beam according to some embodiments of the present disclosure is part of suspension for a magnetic disk drive unit. The disk drive unit includes a spinning magnetic or optical disk, which contains a pattern of magnetic ones and zeroes on it that constitutes the data stored on the disk drive. The magnetic or optical disk is driven by a drive motor. The disk drive unit, according to some embodiments, includes a suspension with a load beam, a base plate, and a gimbal to which a head slider is mounted proximate the distal end of the gimbal. The proximal end of a suspension or load beam is the end that is supported, i.e., the end nearest to a base plate which is swaged or otherwise mounted to an actuator arm. The distal end of a suspension or load beam is the end that is opposite the proximal end, i.e., the distal end is the cantilevered end. The gimbal is coupled to a base plate, which in turn is coupled to a voice coil motor. The voice coil motor is configured to move the suspension arcuately in order to position the head slider over the correct data track on the magnetic disk. The head slider is carried on a gimbal, which allows the slider to pitch and roll so that it follows the proper data track on the spinning magnetic disk, allowing for such variations without degraded performance. Such variations typically include vibrations of the disk, inertial events such as bumping, and irregularities in the disk's surface. In some embodiments, the gimbal described herein is part of a dual stage actuation (DSA) suspension. The DSA suspension can include a base plate and a load beam. The load beam includes a gimbal. The gimbal can include mounted actuators and a gimbal assembly. The actuators are operable to act directly on the gimbaled assembly of the DSA suspension that is configured to include the read/write head slider. In some embodiments, the gimbal can include at least one actuator joint configured to receive an actuator. The gimbal, according to some embodiments, includes two actuator joints, located on opposing sides of the gimbal. Each actuator joint includes actuator mounting shelves. In some embodiments, each actuator spans the respective gap in the actuator joint. The actuators are affixed to the slider tongue by an adhesive. The adhesive can include conductive or non-conductive epoxy strategically applied at each end of the actuators. The positive and negative electrical connections can be made from the actuators to the gimbal by a variety of techniques. When the actuator is activated, it expands or contracts producing movements of the read/write head that is mounted at the distal end of suspension thereby changing the length of the gap between the mounting ends. In some embodiments, the suspension can be configured as a single-stage actuation suspension, a dual-stage actuation device, a tri-stage actuation device or other configurations. In some embodiments, the tri-stage actuation suspension includes actuators respectively located at the mount plate region and on the gimbal at the same time. Conceivably, any variation of actuators can be incorporated onto the suspension for the purposes of the examples disclosed herein. In other words, the suspension may include more or less components than those shown without departing from the scope of the present disclosure. The components shown, however, are sufficient to disclose an illustrative example for practicing the disclosed principles. 
As shown in greater detail inFIGS.1and2, the suspension10comprises a plurality of separate components that are mounted together. Suspension10includes a load beam12to which a flexure is mounted. The load beam12is a generally planar structure formed from a metal substrate, such as stainless steel. The load beam12includes a major surface14(e.g., a top or bottom surface of the load beam12) that is flat and extends over a large portion of the load beam12. The load beam12is generally rigid such that the different sections of the major surface14do not move relative to one another during normal operation of the suspension10. The major surface is interrupted by various features, such as a window38as shown inFIG.2. The load beam12can also include other windows. The windows are open on a first side (e.g., the top side) and a second side (e.g., the bottom side) of the load beam12by extending through the substrate of the load beam12. The windows can be used for alignment during assembly, the windows can lighten and/or strengthen the load beam12, and/or other components can extend through one or more of the windows. The load beam12includes a mounting region at its proximal end, to which a base plate is mounted. The mounting region and base plate are mounted to the actuator arm of a disk drive unit in a known manner. The load beam12further includes a rigid region at the distal portion of the load beam12and a spring region located proximal of the rigid region and distal of the mounting region. A flexure is mounted to the rigid region of the load beam12and provides a resilient connection between the load beam12and slider. The spring region of load beam12provides a desired gram load that opposes the force exerted upon the slider by the air bearing generated by a rotating disk. Toward this end, the spring region can include a preformed bend or radius that provides a precise gram load force. The gram load is transmitted to the flexure through the rigid region of the load beam12. A dimple9can extend between the rigid region of the load beam12and the flexure to provide a point of transfer for the gram load. In some embodiments, the load beam12includes side rails22,24. In some embodiments, the side rails22,24have high lateral stiffness to attain high torsion and sway frequency. In some embodiments, the side rails22,24are made of stainless steel. In some embodiments, the side rails22,24generally extend orthogonal from the load beam12. In some embodiments, the load beam12and the side rails22,24constitute a unitary piece. In some embodiments, the load beam12and the side rails22,24constitute a unitary piece of stainless steel. In some embodiments, a distal end of the load beam12includes a dustpan18and a lift tab16. In some embodiments, the dustpan18includes a proximal end26and a distal end28. In some embodiments, the lift tab16is disposed on a distal end28of the dustpan18. In other words, the lift tab16is distal of the dustpan18. In some embodiments, a proximal end26defines a dustpan forming line30between the dustpan18and the major surface14of the load beam12. In some embodiments, the side rails22,24also extend from the dustpan18. In some embodiments, the side rails22,24generally extend orthogonal from the dustpan18. In some embodiments, a distal end28of the load beam12further includes a tip weld20. In some embodiments, the tip weld20is disposed on the major surface14that is flat. The dustpan forming line in conventional load beams (without the slit) is disposed distal to the tip weld.
In some embodiments of the present disclosure, the dustpan forming line30extends through the tip weld20. In other words, the dustpan forming line30is shifted towards the dimple9relative to conventional load beams (without the slit). For some embodiments, the dustpan forming line30is shifted by 0.05 mm to 0.5 mm towards the dimple9relative to conventional load beams. As shown inFIG.2, the dustpan forming line30is shifted towards the dimple9by 0.1 mm relative to the dustpan forming line in conventional load beams (without the slit). In some embodiments, a distal end of the load beam12further includes a slit32disposed about the tip weld20. In some embodiments, the slit32is disposed on the major surface14that is flat. In some embodiments, the slit32is in the shape of a semicircle, as shown inFIG.2. In some embodiments, the slit32is a U-shape, as shown inFIG.4. In some embodiments, a convex portion of the slit32(e.g., in the semicircular shape or U-shape) is distal of the tip weld20. Due to the additional tip weld in the distal end of the load beam, the load beam dustpan forming line is shifted towards the lift tab (i.e., distal to the tip weld) in conventional load beams. To maintain the same length of the load beam, the dustpan forming angle is increased to achieve the targeted lift tab offset height. However, the increased dustpan forming angle in conventional load beams can easily cause the potential buckling in the load beam rails due to the excessive load beam material deformation. Without being bound to any particular theory, the improved load beam12alleviates the buckling issue of the side rails22,24of the load beam12. The slit32allows for the dustpan forming line30to be shifted towards the dimple9relative to conventional load beams (i.e., shifted away from the lift tab16). The slit32also allows for the dustpan forming angle θ to be decreased relative to conventional load beams (without the slit), thereby alleviating the buckling issue of the side rails at the dustpan forming line in conventional load beams. In some embodiments, the dustpan forming angle θ can be decreased from 2.0° to 8.0° relative to the dustpan forming angle θ in conventional load beams (without the slit) while a lift tab offsetting height h can be maintained. In some embodiments, the dustpan forming angle θ can be decreased from 4.0° to 8.0° relative to the dustpan forming angle θ in conventional load beams (without the slit). In other words, the dustpan forming angle θ of the exemplary embodiment ofFIG.2is less than the dustpan forming angle θ of conventional load beams (without the slit). As illustrated inFIG.3the slit32allows for the dustpan forming angle θ to be decreased to 24.7°, while the lift tab offsetting height h is maintained, which for some embodiments is 0.250 mm. In some embodiments, the dustpan forming angle θ can be between 15 to 25°, 18 to 25°, or 20 to 25°. Furthermore, the slit32is narrow in width and is disposed on the major surface14that is flat, which allows for the stiffness of the lift tab16to be maintained. FIG.4shows a second exemplary embodiment of the load beam212. As shown inFIG.4, the dustpan forming line230is shifted towards the dimple229relative to the dustpan forming line in conventional load beams (without the slit). In some embodiments, the dustpan forming line230is shifted towards the dimple229by more than the exemplary embodiment ofFIG.2(i.e., more than 0.1 mm) relative to the dustpan forming line in conventional load beams (without the slit). 
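One way to see why shifting the dustpan forming line30toward the dimple9permits a smaller forming angle is a simple rigid-bend approximation in which the lift tab offset height h is roughly the bend length d (from the forming line to the point where h is measured) times sin θ. The Python check below uses that assumed model together with the 0.250 mm offset height and 24.7° angle quoted above; it is a rough plausibility check only, not an analysis taken from the disclosure, and the 4.0° comparison angle is one value from the stated 2.0° to 8.0° range.

# Rough geometric check (assumed rigid-bend model, h ~= d * sin(theta)):
# a smaller forming angle needs a longer bend length d for the same offset h,
# which is what shifting the forming line toward the dimple provides.
import math

h = 0.250                                   # lift tab offset height, mm
theta_with_slit = math.radians(24.7)        # forming angle with the slit
theta_without = math.radians(24.7 + 4.0)    # e.g. 4 degrees larger without the slit
d_with_slit = h / math.sin(theta_with_slit)     # ~0.60 mm of effective bend length
d_without = h / math.sin(theta_without)         # ~0.52 mm
print(round(d_with_slit - d_without, 3))        # ~0.08 mm, comparable to the 0.1 mm shift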
In some embodiments, a distal end of the load beam212further includes a slit232disposed about the tip weld220. In some embodiments, the slit232is disposed on the major surface214that is flat. In some embodiments, the slit232is a U-shape, as shown inFIG.4. In some embodiments, a convex portion of the slit232is distal of the tip weld220. In some embodiments, the load beam212also includes a window238. In some embodiments, the dustpan forming angle θ can be decreased from 2.0° to 8.0° relative to the dustpan forming angle θ in conventional load beams (without the slit) while a lift tab offsetting height h can be maintained. In some embodiments, the dustpan forming angle θ can be decreased from 4.0° to 8.0° relative to the dustpan forming angle θ in conventional load beams (without the slit). In some embodiments, the dustpan forming angle θ can be decreased more than the exemplary embodiment ofFIG.2(i.e., more than 4.0°) relative to the dustpan forming angle θ in conventional load beams (without the slit). In other words, the dustpan forming angle θ of the exemplary embodiment ofFIG.4is less than the dustpan forming angle θ of the exemplary embodiment ofFIG.2(i.e., less than 24.7°) as well as the dustpan forming angle θ in conventional load beams (without the slit). In some embodiments, the dustpan forming angle θ can be between 15 to 25°, 18 to 25°, or 20 to 25°. Without being bound to any particular theory, the improved load beam212alleviates the buckling issue of the side rails222,224of the load beam212. The load beams according to embodiments described here are configured to be used with hard drive suspensions including those described herein. While multiple examples are disclosed, still other examples within the scope of the present disclosure will become apparent to those skilled in the art from the detailed description provided herein, which shows and describes illustrative examples. Features and modifications of the various examples are discussed herein and shown in the drawings. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive.
13,071
11862210
DETAILED DESCRIPTION OF THE INVENTION First Embodiment A disk drive suspension (to be referred to as suspension10hereinafter) according to the first embodiment will be described with reference toFIGS.1to11. The suspension10shown inFIG.1includes a base plate11, a load beam12, a flexure13and the like.FIG.1is a plan view of the suspension10as viewed from a side of the load beam12.FIG.2is a plan view of the suspension10as viewed from a side of the flexure13. The load beam12is made from a stainless steel plate and extends along a longitudinal direction of the suspension10. The direction indicated by a two-way arrow X1inFIG.1is the longitudinal direction of the load beam12. The direction indicated by a two-way arrow Y1inFIG.1is the width direction of the load beam12. The load beam12includes a base portion12a(shown inFIG.2), which is fixed to the base plate11. The thickness of the load beam12is, for example, 20 to 40 μm, but may be of other thicknesses. In the vicinity of the base portion12aof the load beam12, first piezoelectric elements15aand15b(shown inFIG.1) are disposed. In the vicinity of a distal end portion12bof the suspension10, second piezoelectric elements16aand16b(shown inFIG.2) are disposed. The piezoelectric elements15a,15b,16aand16bhave the function of moving the distal end portion12bof the suspension10in a sway direction (the direction indicated by the two-way arrow S1inFIG.1). The flexure13includes a metal base (metal plate)20and a wiring portion21. The metal plate20is made of a thin plate of stainless steel. The wiring portion21is located along the metal plate20. The thickness of the metal plate20is, for example, 20 μm (12 to 25 μm), but may be any other thickness. The thickness of the metal plate20is less than the thickness of the load beam12. As shown inFIG.2, the flexure13includes a flexure main body30, a flexure tail31, a gimbal portion32, and a pair of outrigger portions33and34. The flexure main body30is fixed to the load beam12. The flexure tail31extends behind the base plate11(in the direction indicated by R1inFIG.1). The gimbal portion32is formed near the distal end13aof the flexure13. On the gimbal portion32, a tongue35is formed. In the tongue35, a slider36, which functions as a magnetic head, is disposed. The outrigger portions33and34are formed from parts of the metal plate20. The outrigger portions33and34extend along the length direction of the flexure13from both side portions of the flexure main body30to respective both side portions of the gimbal portion32. The length direction of the flexure13is the length direction of the load beam12as well. The outrigger portions33and34each have an elongated slim shape and elastically support the tongue35and the like. The roots33aand34aof the outrigger portions33and34are connected to the flexure main body30. The metal plate20of the flexure13is secured to the load beam12by a plurality of weld portions41,42and43. The weld portions41,42and43are formed by laser spot welding. The first weld portion41is formed near the roots33aand34aof the outrigger portions33and34. The second weld portion42secures the flexure main body30to the load beam12. The third weld portion43secures the distal end13aof the flexure13to the load beam12. FIG.3is an enlarged plan view of a part of the suspension10shown inFIG.1.FIG.4is an enlarged plan view of a part of the suspension10shown inFIG.2.FIG.5is an enlarged plan view of the root33aand the weld portion41of one outrigger portion33, and the like. 
The root33aof the outrigger portion33is supported by the weld portion41. FIG.6is a cross-sectional view of a part of the suspension10(near the weld portion41) taken along line F6-F6inFIG.5.FIG.6shows a cross-section along the longitudinal direction of the load beam12in the thickness direction.FIG.7shows a cross-sectional view of a part near the weld portion41taken along line F7-F7inFIG.5.FIG.7shows a cross section along the width direction of the load beam12. FIGS.6and7show the root33aand the weld portion41of one (outrigger portion33) of the pair of outrigger portions33and34. The root34aand the weld portion41of the other outrigger portion34are configured to be similar to those of the root33aand the weld portion41of the outrigger portion33. For this reason, the outrigger portion33and the weld portion41will be described as representative hereafter. FIG.8is a perspective view of the load beam12. On respective sides of the load beam12, flange bend portions51and52are formed. The flange bend portions51and52extend in the longitudinal direction of the load beam12. The direction indicated by the two-way arrow X1inFIG.8is the longitudinal direction of the load beam12. The direction indicated by the two-way arrow Y1inFIG.8is the width direction of the load beam12. In a longitudinal part of the load beam12(between the base portion12aand the distal end portion12b), a sag bend portion55is formed. As shown inFIG.6, the sag bend portion55is formed by bending the longitudinal part of the load beam12at an angle θ1in the thickness direction. As shown inFIG.8, the sag bend portion55extends in the width direction of the load beam12. The load beam12with the sag bend portion55includes a first portion12A and a second portion12B bounded by the sag bend portion55. The first portion12A is located closer to the base portion12awith respect to the sag bend portion55. The second portion12B is located closer to the distal end portion12bwith respect to the sag bend portion55. The weld portion41is formed on the second portion12B of the load beam12in the vicinity of the sag bend portion55. The root33aof the outrigger portion33is secured to the load beam12by the weld portion41. The weld portion41supports the root33aof the outrigger portion33to the load beam12. In the load beam12, a slit portion60is formed. The slit portion60has a U-shaped in a plan view of the load beam12. The slit portion60is formed in a region W1(shown inFIG.3) that includes the weld portion41in the plan view of the load beam12. The region W1including the weld portion41is a part of the load beam12and includes the root33aof the outrigger portion33, the weld portion41, and a part of the sag bend portion55. The slit portion60includes an arc-shaped slit61and a pair of extension slits62and63. The arc-shaped slit61is formed into such a shape as to surround approximately a half of a circumference of the weld portion41. The extension slits62and63are connected to respective ends of the arc-shaped slit61. Inside the slit portion60, an outrigger support portion70is formed. In the outrigger support portion70, a weld portion41is formed. The arc-shaped slit61is formed in the second portion12B of the load beam12. The arc-shaped slit61of this embodiment is formed into an approximately semicircular shape around the weld portion41. The extension slits62and63extend from respective ends of the arc-shaped slit61in a direction away from the weld portion41. The extension slits62and63are formed along the longitudinal direction of the load beam12. 
The extension slits62and63extend from the second portion12B across the sag bend portion55to the first portion12A. Between the flange bend portion51and the slit portion60, a narrow portion71is formed. The narrow portion71is a part of the load beam12. The narrow portion71extends along the flange bend portion51and in the longitudinal direction of the load beam12. The slit portion60is formed in the load beam12. With this structure, the bending rigidity of the load beam12becomes lower in the vicinity of the slit portion60. However, since the flange bend portion51is provided near the narrow portion71, the necessary rigidity of the load beam12is obtained. The weld portion41is formed by irradiating a laser beam by a laser irradiation device from the side of the flexure13of the suspension. The region where the laser beam is focused is fused, and as the fused metal solidifies, the weld portion41is formed. The weld portion41has a front nugget41aand a rear nugget41b. The front nugget41ais exposed from the surface of the flexure13and has substantially a round shape. The rear nugget41bis exposed from the rear surface of the load beam12and has substantially a round shape. In other embodiments, the laser beam may be irradiated from the side of the load beam12of the suspension. The front nugget41ahas a diameter D1(shown inFIG.5), which is greater than a diameter of the rear nugget41b(shown inFIG.6). The diameter D1of the front nugget41ais, for example, 0.13 to 0.16 mm. When forming the weld portion41, a holding jig is used to support the load beam12. Reference symbol D2inFIG.5represents the distance from the center C1of the weld portion41to the slit portion60. If this distance D2is excessively short, it becomes difficult to secure the contact surface of the holding jig mentioned above. On the other hand, if the distance D2is excessively large, it is undesirable because part of the slit portion60may reach the flange bend portion51. As the distance D2is larger, the area of the outrigger support portion70becomes larger, and therefore the rigidity of the outrigger support portion70becomes excessive. For such a reason, the distance D2from the center C1of the weld portion41to the slit portion60should preferably be at least one and not more than three times the diameter D1of the front nugget41a. FIG.6shows a cross-sectional view along the longitudinal direction of the load beam12. As shown inFIG.6, viewing the load beam12from a side, the second portion12B is bent in the thickness direction of the load beam12with respect to the first portion12A. That is, the second portion12B is bent at an angle θ1in the thickness direction of the load beam12, at the boundary of the sag bend portion55. In contrast, the outrigger support portion70is bent at an angle θ2on the same side as the second portion12B of the load beam12. As shown inFIG.6, the outrigger support portion70extends in a direction different from that of the second portion12B of the load beam12. InFIG.6, let us suppose X2as a virtual line segment extending the first portion12A in the longitudinal direction of the load beam12. The angle that the outrigger support portion70makes with respect to this virtual line segment X2is θ2. The angle of the second portion12B to the virtual line segment X2is θ1. The angle θ2of the outrigger support portion70is smaller than the angle θ1of the second portion12B. The root33aof the outrigger portion33is fixed to the outrigger support portion70by the weld portion41.
Therefore, the root33aof the outrigger portion33is bent at an angle θ2, which corresponds to that of the outrigger support portion70. As shown inFIG.7, the outrigger support portion70is set to have different heights along the thickness direction with respect to the load beam12. FIG.9is a cross-sectional view schematically showing an example of a disk drive80. The disk drive80has a case81(only a part thereof is shown), disks82, a carriage84, a positioning motor85and the like. The disks82rotate around a spindle. The carriage84pivots around a pivot axis83. The motor85drives the carriage84. The case81is sealed by a lid. The carriage84includes a plurality of arm portions86. The base plate11of the suspension10is fixed to the distal end of each of the arm portions86. When a disk82rotates, an air bearing is formed between the slider36and the disk82. When the carriage84is pivoted by the motor85, the suspension10moves along the radial direction of disk82. Thus, the slider36is moved to a desired position on the disk82. FIG.10is an enlarged plan view of a part of the suspension10. The suspension10includes an outrigger portion33. For convenience of explanation,FIG.10is represented in left-to-right reverse as compared toFIG.4.FIG.11shows the relationship between position along line segments L1, L2and L3shown inFIG.10and height. As indicated by the line segment L3inFIG.11, the profile of the outrigger portion33is optimized according to the angle θ2(shown inFIG.6) of the outrigger support portion70. Second Embodiment FIG.12shows a cross-sectional view of the vicinity of an outrigger support portion70of a suspension10A according to the second embodiment. The outrigger support portion70of the suspension10A extends in the same direction as that of the first portion12A with regard to a cross-section taken along the longitudinal direction of the load beam12. The outrigger support portion70extends in a direction different from that of the second portion12B of the load beam12. The outrigger support portion70and the second portion12B make an angle θ1with respect to each other. FIG.13shows the relationship between the positions respectively corresponding to line segments L1, L2and L3shown inFIG.10and the heights. The line segment L3shown inFIG.13represents a profile of the outrigger portion33of the suspension10A in the second embodiment (shown inFIG.12). The outrigger portion33of the suspension10A has a profile according to the outrigger support portion70. In this embodiment, the heights of the locations respectively corresponding to the line segments L1and L2are measured with reference to the rear surface of the load beam12. InFIG.13, the heights correspond to the direction of the locations measured. As to the other structure and operation, since the suspension10A of the second embodiment shares the same configuration with the suspension10of the first embodiment (shown inFIGS.1to8), the items common to both are designated by the same reference symbols and the explanation thereof is omitted. Third Embodiment FIG.14shows a cross-sectional view of the vicinity of an outrigger support portion70of a suspension10B according to the third embodiment.FIG.14is a cross-section taken along the longitudinal direction of a load beam12. The outrigger support portion70of the suspension10B is bent at a negative angle θ3to an opposite side of a second portion12B. The outrigger portion33of the suspension10B has a profile according to the outrigger support portion70having a negative angle θ3. 
As to the other structure and operation, since the suspension10B of the third embodiment shares the same configuration with the suspension10of the first embodiment, the items common to both are designated by the same reference symbols and the explanation thereof is omitted. Fourth Embodiment FIG.15is a plan view of a suspension10C according to the fourth embodiment. An arc-shaped slit61of this suspension10C is formed in a first portion12A of the load beam12. Extension slits62and63extend from the first portion12A across the sag bend portion55to the second portion12B. As to the other structure and operation, since the suspension10C of the fourth embodiment shares the same configuration with the suspension10of the first embodiment, the items common to both are designated by the same reference symbols and the explanation thereof is omitted. Fifth Embodiment FIG.16is a plan view of a suspension10D according the fifth embodiment. A slit portion60of this suspension10D includes a first slit60A and a second slit60B. The first slit60A and the second slit60B are symmetrical to each other with respect to a sag bend portion55as the axis of symmetry. The first slit60A is formed in the first portion12A of the load beam12. The second slit60B is formed in the second portion12B of the load beam12. The slit portion60includes the first slit60A and the second slit60B. An outrigger support portion70with a weld portion41is formed inside the slit portion60. Note that the first slit60A and the second slit60B need not be completely symmetrical. For example, the first slit60A and the second slit60B may be slightly asymmetrical with respect to the sag bend portion55as a border. As to the other structure and operation, since the suspension10D of the fifth embodiment shares the same configuration with the suspension10of the first embodiment, the items common to both are designated by the same reference symbols and the explanation thereof is omitted. Sixth Embodiment FIG.17is a plan view of a suspension10E according to the sixth embodiment.FIG.18is a cross-sectional view of the suspension10E taken along line F18-F18inFIG.17. InFIG.17, an axis X3extends along the longitudinal direction of the load beam12. The suspension10E includes a pair of slit portions60that are line symmetrical with respect to the axis X3as the axis of symmetry. The slit portions60are formed in the first portion12A of the load beam12. The extension slits62and63extend along the width direction of the load beam12. An outrigger support portion70with a weld portion41is formed inside the slit portions60. As shown inFIG.18, a central portion of the cross-section along the width direction of the load beam12is slightly convex and curved to the opposite side of flange bend portions51and52. A pair of outrigger support portions70each extend in the width direction of the load beam12. As to the other structure and operation, since the suspension10E of the sixth embodiment shares the same configuration with the suspension10of the first embodiment, the items common to both are designated by the same reference symbols and the explanation thereof is omitted. In implementing the present invention, it goes without saying that the specific shape and configuration of the load beam and flexure that constitute the suspension, as well as the shape and arrangement of the sag bend portion, outrigger portion, slit portion, outrigger support portion, etc., can be changed as necessary. Additional advantages and modifications will readily occur to those skilled in the art. 
Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
17,926
11862211
DETAILED DESCRIPTION FIGS.2A and2Billustrate conceptual block diagrams of a top view and a side view of a data storage device in the form of a disk drive15, in accordance with aspects of the present disclosure. Disk drive15comprises control circuitry22, an actuator arm assembly19, and a plurality of hard disks16A,16B,16C,16D (“hard disks16”).FIG.2Cdepicts a flowchart for an example method80that control circuitry22of disk drive15may perform or execute in controlling the operations of disk drive15, in accordance with aspects of the present disclosure, including operations involved in assigning one or more logical tracks to physical tracks of two or more of the disk surfaces such that a respective logical track of the logical tracks comprises: at least a portion of sectors of a primary physical track of the physical tracks, the primary physical track being on the first disk surface; and at least a portion of sectors of a donor physical track of the physical tracks, the donor physical track being on the second disk surface; and performing, using the first head proximate to the first disk surface and the second head proximate to the second disk surface, a data access operation with at least one of the logical tracks. Each logical track of the logical tracks is assigned to and comprises at least a primary physical track and at least a portion of an auxiliary or donor physical track, in accordance with aspects of the present disclosure. The logical track may comprise the entirety of the primary physical track, or may comprise at least a portion of the sectors of the primary physical track, in various examples. The terms “auxiliary track” and “donor track” may be used synonymously for purposes of this disclosure. Actuator arm assembly19comprises a primary actuator20(e.g., a voice coil motor (“VCM”)) and a number of actuator arms40(e.g., topmost actuator arm40A, as seen in the perspective view ofFIGS.2A and2B). Each of actuator arms40comprises a suspension assembly42at a distal end thereof (e.g., example topmost suspension assembly42A comprised in topmost actuator arm40A, in the view ofFIGS.2A and2B). Each suspension assembly42may comprise one or more additional fine actuators, in some examples. Each of actuator arms40is configured to suspend one of read/write heads18(“heads18”) in close proximity over a corresponding disk surface17(e.g., head18A suspended by topmost actuator arm40A over topmost corresponding disk surface17A, head18H suspended by lowest actuator arm40H over lowest corresponding disk surface17H). Other examples may include any of a wide variety of other numbers of hard disks and disk surfaces, and other numbers of actuator arm assemblies, primary actuators, and fine actuators besides the one actuator arm assembly19and the one primary actuator20in the example ofFIGS.2A and2B. In various examples, disk drive15may be considered to perform or execute functions, tasks, processes, methods, and/or techniques, including aspects of example method80, in terms of its control circuitry22performing or executing such functions, tasks, processes, methods, and/or techniques.
Control circuitry22may comprise and/or take the form of one or more driver devices and/or one or more other processing devices of any type, and may implement or perform functions, tasks, processes, methods, or techniques by executing computer-readable instructions of software code or firmware code, on hardware structure configured for executing such software code or firmware code, in various examples. Control circuitry22may also implement or perform functions, tasks, processes, methods, or techniques by its hardware circuitry implementing or performing such functions, tasks, processes, methods, or techniques by the hardware structure in itself, without any operation of software, in various examples. Control circuitry22may comprise one or more processing devices that constitute device drivers, specially configured for driving and operating certain devices, and one or more modules. Such device drivers may comprise one or more head drivers, configured for driving and operating heads18. Device drivers may be configured as one or more integrated components of one or more larger-scale circuits, such as one or more power large-scale integrated circuit (PLSI) chips or circuits, and/or as part of control circuitry22, in various examples. Device drivers may also be configured as one or more components in other large-scale integrated circuits such as system on chip (SoC) circuits, or as more or less stand-alone circuits, which may be operably coupled to other components of control circuitry22, in various examples. Primary actuator20may perform primary, macroscopic actuation of a plurality of actuator arms40, each of which may suspend one of heads18over and proximate to corresponding disk surfaces17of disks16. The positions of heads18, e.g., heads18A and18H, are indicated inFIG.2A, although heads18are generally positioned very close to the disk surfaces, and are too small to be visible if depicted to scale inFIGS.2A and2B. Example disk drive15ofFIGS.2A and2Bcomprises four hard disks16. Other examples may comprise any number of disks, such as just one disk, two disks, three disks, or five or more disks. Hard disks16may also be known as platters, and their disk surfaces may also be referred to as media, or media surfaces. The four hard disks16comprise eight disk surfaces17A,17B,17C,17D,17E,17F,17G, and17H (“disk surfaces17”), with one disk surface17on each side of each hard disk16, in this illustrative example. Actuator assembly19suspends heads18of each actuator arm40over and proximate to a corresponding disk surface17, enabling each of heads18to write control features and data to, and read control features and data from, its respective, proximate disk surface17. In this sense, each head18of each actuator arm40interacts with a corresponding disk surface17. The term “disk surface” may be understood to have the ordinary meaning it has to persons skilled in the applicable engineering fields of art. The term “disk surface” may be understood to comprise both the very outer surface layer of a disk as well as a volume of disk matter beneath the outer surface layer, which may be considered in terms of atomic depth, or (in a greatly simplified model) the number of atoms deep from the surface layer of atoms in which the matter is susceptible of physically interacting with the heads. 
The term “disk surface” may comprise the portion of matter of the disk that is susceptible of interacting with a read/write head in disk drive operations, such as control write operations, control read operations, data write operations, and data read operations, for example. In the embodiment ofFIGS.2A and2B, each disk surface, e.g., disk surface17A as shown inFIG.2A, comprises a plurality of control features. The control features comprise servo wedges321-32N, which define a plurality of servo tracks34, wherein data tracks are defined relative to the servo tracks34, and which may be at the same or different radial density. Control circuitry22processes a read signal36emanating from the respective head, e.g., head18A, to read from disk surface17A, to demodulate the servo wedges321-32Nand generate a position error signal (PES) representing an error between the actual position of the head and a target position relative to a target track. A servo control system in the control circuitry22filters the PES from the servo wedges using a suitable compensation filter to generate a control signal38applied to actuator arm assembly19, including to control actuator20, which functions as a primary actuator, and which rotates actuator arm assembly19about an axial pivot in order to perform primary actuation of the corresponding heads radially over the disk surfaces17in a direction that reduces the PES, as well as to control any fine actuators, in various examples. Control circuitry22may also apply control signals to and receive sensor signals from heads18and/or any of various components of disk drive15, in various examples. In the example ofFIGS.2A and2B, actuator arm assembly19rotates actuator arms40about a common pivot. In another example, a first actuator arm assembly and/or VCM and a second actuator arm assembly and/or VCM, or other types of primary actuators, may each be configured to actuate respective actuator arm assemblies or sets of multi-actuator arms, either about a single common coaxial pivot, or about separate pivots, for example, mounted at different circumferential locations about the disks. Various examples may employ more than two actuator arm assemblies or primary actuators or multi-actuators, which may be actuated about a common pivot, or which may be comprised in multiple multi-actuators mounted at different circumferential locations about the disks. In some examples, a disk drive may comprise two or more VCMs stacked vertically together and rotating about a common axis, thereby configured to actuate actuator arms and heads across different disk surfaces independently of each other, from a common pivot axis. Examples in this configuration may be referred to as split actuators or dual actuators. In one example, a disk drive may have two VCMs in a vertical stack, each of which has its own independent actuator arm assembly and controls one half of the actuator arms and heads, with a first half of the actuator arms and heads in a first stack controlled by the first VCM and a second half of the actuator arms and heads in a second stack controlled by the second VCM. Actuator arm assembly19and/or any of these other examples may thus constitute and/or comprise an actuator mechanism, in various examples. 
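The position error signal loop described above (demodulating servo wedges321-32N, computing the PES against a target track, and filtering the PES into control signal38) can be sketched in a highly simplified form. The proportional-integral filter in the Python sketch below is a generic stand-in assumed for illustration; the disclosure does not specify the compensation filter used by control circuitry22.

# Highly simplified servo-loop sketch: head position estimate in, actuator
# control signal out. The PI compensation filter here is an assumed stand-in.
class ServoLoop:
    def __init__(self, kp=0.5, ki=0.05):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, demodulated_position, target_position):
        pes = target_position - demodulated_position   # position error signal
        self.integral += pes
        # Compensation filter (assumed PI form) producing the control signal
        # applied to the primary actuator and any fine actuators.
        return self.kp * pes + self.ki * self.integral

loop = ServoLoop()
control = loop.step(demodulated_position=10.02, target_position=10.00)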
In executing example method80ofFIG.2C(aspects of which will also be further explained below with reference to the further figures), control circuitry22may issue one or more commands to other components of disk drive15, receive information from one or more other components of disk drive15, and/or perform one or more internal operations, such as generating one or more driver currents for outputting to system components of disk drive15. In particular, one or more processing devices, such as control circuitry22, may assign one or more logical tracks to physical tracks of two or more of the disk surfaces such that a respective logical track of the logical tracks comprises: at least a portion of sectors of a primary physical track of the physical tracks, the primary physical track being on the first disk surface; and at least a portion of sectors of a donor physical track of the physical tracks, the donor physical track being on the second disk surface (82). The one or more processing devices, such as control circuitry22, are further configured to perform, using the first head proximate to the first disk surface and the second head proximate to the second disk surface, a data access operation with at least one of the logical tracks (84). In some examples, control circuitry22may assign logical tracks to physical tracks of the disk surfaces such that the logical tracks across at least a portion of track radii of the disk surfaces are assigned with selected numbers of sectors per logical tracks, independent of the numbers of sectors per the physical tracks. Control circuitry22may comprise an FLT module24, which may implement these functions. FLT module24may comprise any hardware and/or software and are not limited by any other definitions of the term “module” in any other software or computing context. Control circuitry22may further perform additional actions, methods, and techniques, in accordance with various aspects including as further described herein. Disk drive15in accordance with aspects of the present disclosure may also be referred to as an FLT disk drive15. Disk drive15increases the overall hard drive data input/output rate, relative to prior art disk drives. Overall hard drive data input/output rate may also be referred to simply as data rate, for purposes of this disclosure, with the understanding of referring generally to data input/output rate for the disk drive, e.g., between control circuitry22and disk surfaces17, and/or between host44and disk surfaces17. Disk drive15may apply substantially the same data rate across at least a substantial portion of disk surfaces17, and across the tracks thereon at at least a substantial portion of radii of the tracks, as the data rate of an otherwise comparable conventional disk drive at only the outer diameter of disks16, at the tracks of the largest radii and highest numbers of sectors per track, in various examples of this disclosure. Disk drive15may implement logical tracks, based on but distinct from the physical tracks, such that the sectors per logical track are effectively independent of the radii of the physical tracks and of the numbers of sectors per physical track at each radius of disks16, in various examples of this disclosure. Disk drive15implementing FLT technology of this disclosure may enable higher data rates than conventional disk drives but with a single back-end channel encoder/decoder. 
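As a concrete illustration of the logical-to-physical mapping that method80describes, the following Python sketch models a logical track as the whole of a primary physical track on one disk surface plus a fractional portion of a donor physical track on another surface. The class and field names are assumptions made for illustration only; they are not terms or structures taken from the disclosure.

# Minimal data-structure sketch (assumed names) of an FLT logical track:
# one primary physical track plus a fractional donor physical track on a
# different disk surface, accessed by different heads.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PhysicalTrackRef:
    surface: str            # e.g. "17A"
    head: str               # e.g. "18A"
    sectors: List[int]      # physical sector numbers used by this logical track

@dataclass
class LogicalTrack:
    primary: PhysicalTrackRef   # entire (or most of) the primary physical track
    donor: PhysicalTrackRef     # fractional portion of the donor physical track
    # Logical-sector order as (source, physical_sector) pairs, with source in
    # {"primary", "donor"}; both heads operate concurrently on the logical track.
    sector_order: List[Tuple[str, int]] = field(default_factory=list)

    def sectors_per_logical_track(self) -> int:
        return len(self.sector_order)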
Disk drives may implement what may be termed Multi-Band Output (MBO) technology, or multiple integer band, in which a disk drive logically stripes or bundles integer numbers of data tracks together, with multiple heads performing data access operations (e.g., read operations and/or write operations) on multiple tracks together, thus integrating the output of multiple bands (from the multiple tracks via the multiple heads), enabling substantially higher data rates than conventional disk drives. Multiple integer band technology may often be used in association with multiple actuators, in technology that may be referred to as Multiple-Actuator Multi-Band Output (MAMBO) technology. Multiple integer band technology, including MAMBO, may be referred to more generally as multiple integer band technology in this disclosure. FLT technology as described herein is analogously a multiple band technology that may be considered as a superset of multiple integer band technology, which may use at least one integer band of at least one primary track plus fractional portions of at least one donor track as parts of logical tracks. Control circuitry22may implement multiple band techniques, processes, and hardware elements, such as fine actuators, to keep all primary physical tracks associated with a single donor track in logical tracks physically closely aligned with each other, with fine control of the respective heads writing and reading those respective physical tracks on the respective disk surfaces, in various examples. A multiple integer band disk drive also requires multiple back-end channel decoders, which accordingly uses additional power and cost relative to a single back-end channel decoder. A multiple integer band drive may enable two or more heads to work in tandem to perform data access operations (e.g., read operations and/or write operations) in a shared logic block address (LBA) range, and double, triple, or quadruple the data rate of sequential data access, for examples that integrate the operations of two, three, or four heads and tracks, respectively (or otherwise multiplied for examples with more heads in tandem), compared with a conventional disk drive using independent heads. Disk drive15implementing FLT technology of this disclosure may thus, in some examples, be considered as an alternative technology to multiple integer band for increasing data rates, but without requiring multiple back-end channel decoders, in some FLT examples, such as by fitting one integer primary physical track plus one fractional donor track into one logical track. Disk drive15implementing FLT technology of this disclosure may implement an independently selected sector density per logical track and a selected (e.g., constant) data rate, maximized up to the maximum channel capability enabled for the maximum radius tracks of the outer diameter, but across a substantial portion of the disk radii, while still also using only a single back-end channel encoder/decoder. Disk drive15implementing FLT technology of this disclosure may thus provide substantially higher data rates relative to an otherwise comparable disk drive without FLT technology, and without imposing additional power requirement or cost for additional channel encoders/decoders beyond a single back-end channel encoder/decoder, and a single-channel system-on-chip (SoC). 
FLT technology may particularly make use of one or more auxiliary actuators, e.g., milliactuators and/or microactuators, on the actuator arms, to enable compensatory fine control to coordinate fine positioning of multiple heads, in the face of positioning performance constraints such as thermal expansion and tilt, for closely coordinating multiple heads for a single logical track, in some fractional multiple band examples. Some examples using multiple integer primary tracks may require novel changes to servo control and firmware, as well as additional channels. FLT disk drives that are not combined with multiple integer band and that use a single primary track in various examples of this disclosure may enable higher data rates than in otherwise analogous disk drives without imposing any additional changes or costs associated with multiple integer band technology, in various examples. Various examples of FLT-implemented disk drives may thus offer a flexible set of options for performance advantages and trade-offs. While FLT may be considered an alternative in certain contexts to multiple integer band as a technology for increasing disk drive data rate, in some examples in accordance with this disclosure, a disk drive may also implement both FLT and multiple integer band technologies in combination. While FLT may be used as an alternative to multiple integer band to increase data rates, among other advantages, that is not to contradict the fact that a disk drive in some examples of this disclosure may also use multiple integer band and FLT in combination, which may enable data rates higher than using either of those particular implementations of multiple integer band or FLT alone, and may provide synergistic advantages. Using FLT may enable optimizing power and cost for multiple integer band in a disk drive that combines FLT and multiple integer band, in various examples. An FLT disk drive of this disclosure may particularly enable higher data rates for larger random read sequential data transfers. Overall data rates depend strongly on different factors depending on the sizes of sequential data transfers. Direct comparisons in overall data transfer times may be made with the same queue depth, e.g., a queue depth of 4. In a relatively small random read data transfer, such as for 4 kilobytes (kB), a large proportion of access time (or inverse of data rate) is due to seek and latency, and may be primarily improved with improvements to seek and latency capability such as multiple actuators, and only a small proportion of access time is occupied by the data transfer. In a relatively larger random read data transfer, such as for 2, 4, or 8 megabytes (MB), for example, the proportions are reversed (though may not be as lopsided between the factors): the data access time for the read data transfer is greater than for the seek and latency, such that a substantial majority of the overall data access time is spent with the head on track, and reading and transferring the data from the disk surface. An FLT disk drive of this disclosure may increase read data transfer speed, relative to an otherwise comparable disk drive without FLT technology, and thus may particularly increase overall data transfer speed for larger data transfers, in which overall data rate speed depends primarily on on-track read transfer rate. 
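The point about transfer size can be made concrete with rough numbers. The Python sketch below assumes a combined 5 ms seek-plus-latency overhead and a 250 MB/s on-track transfer rate purely for illustration; neither figure comes from the disclosure, but the resulting proportions show why on-track transfer rate dominates large random reads.

# Back-of-the-envelope: total access time = seek + latency + on-track transfer.
def access_time_ms(transfer_bytes, overhead_ms=5.0, rate_mb_per_s=250.0):
    transfer_ms = transfer_bytes / (rate_mb_per_s * 1e6) * 1e3
    return overhead_ms + transfer_ms, transfer_ms

for size in (4 * 1024, 4 * 1024 * 1024):            # 4 kB vs 4 MB random read
    total, xfer = access_time_ms(size)
    print(size, round(total, 2), f"{100 * xfer / total:.1f}% of time in transfer")

Under these assumed numbers, almost all of the 4 kB access time is seek and latency, while roughly three quarters of the 4 MB access time is spent on-track transferring data, which is where a higher sectors-per-logical-track rate helps.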
FLT format and data rate, as implemented by control circuitry22with disks16of this disclosure, are defined by channel capability & LBA layout, not by an integer multiple of physical sectors-per-track (SPT), as in conventional disk drives. In an FLT disk drive15of this disclosure (in examples that use a single primary physical track and not multiple primary tracks), control circuitry22writes and reads FLT logical tracks to and from disk surfaces17such that each FLT logical track corresponds with and is interleaved between one entire physical track, plus a fractional selection or proportion of sectors of a different physical donor track. A single donor track may be associated with multiple primary tracks, and may have donor sectors written to it to correspond as portions of the logical tracks for multiple primary physical tracks. At each track radius, between the inner and outer diameters of disk surfaces17, FLT control circuitry22may draw enough sectors from a donor track to top up the margin of data rate between the primary physical track at that radius and the channel bandwidth. Thus, FLT control circuitry22may interleave an increasingly large proportion of donor track sectors with a primary track at decreasing track radii (toward the inner diameter of the disk), as the sectors per primary physical track continuously decrease proportionally with declining radius. This trend may continue close to the inner diameter of disks16such that, at relatively low track radii, a single donor track may have most of its physical sectors assigned to a single primary track. As an example, at close to or at the inner diameter of disks16, a donor track may have the entirety of its sectors assigned to a single primary track in a single logical track, and there ceases to be a distinction between the primary and donor tracks, but rather, the two physical tracks are both assigned in their entirety to a single logical track, in examples which are limited to a single donor track. In other examples, toward the inner diameter, FLT control circuitry22may draw partial sectors from a third track as a donor track for two primary tracks in a single logical track. FIG.3depicts an FLT format logical sector layout400across a first primary physical track410and a first donor physical track420, in a simplified form, as may be assigned by control circuitry22and implemented by disk drive15, in accordance with aspects of the present disclosure. Donor physical track420may also be referred to as a secondary physical track420. As shown, primary physical track410and donor physical track420each has 32 physical sectors, in this example. Control circuitry22assigns a proportion of sectors from donor or secondary physical track420together with the entirety of primary physical track410in a single logical track. In this example, control circuitry22assigns 25% of sectors (8 of 32) from donor track420, or every fourth sector of donor track420, in this example, to the logical track centered on primary physical track410. The corresponding logical track is shown to have 40 logical sectors total (numbers 0-39 inFIG.3denotes the order of the sectors within the logical track). Since primary physical track410has 32 physical sectors as noted above, the logical track thus has 25% more sectors per track than primary physical track410. As illustrated, the logical track starts with sectors 0-3 in primary physical track410, then sector 4 from donor track420is interleaved into the sector order of the logical track. 
After sector 4, sectors 5-8 are from primary physical track410, followed by sector 9 which is interleaved from donor track420, and so on. As further explained below, primary physical track410may be on one disk surface (e.g., disk surface17A) accessed by one head (e.g., head18A) while donor track420may be on another disk surface (e.g., disk surface17B or17C) accessed by another head (e.g., head18B or18C). As such, access to both physical tracks410and420can be done concurrently to allow for real-time read or write of the logical track, including in examples with a conventional, single, fixed actuator arm assembly19. In some single actuator arm assembly examples, disk drive15may use milliactuators, microactuators, and/or other fine actuators to facilitate maintaining alignment between both physical tracks410and420on different disk surfaces17across disks16, including to compensate for any positioning variations between actuator arms42and heads18. FIG.4depicts another FLT format logical sector layout500across a second primary physical track510and the same donor physical track420as inFIG.3, in a simplified form, as may be assigned by control circuitry22and implemented by disk drive15, in accordance with aspects of the present disclosure. In this example, the numbering continues fromFIG.3, showing that logical track as encompassing the sectors from bothFIG.3andFIG.4. Control circuitry22assigns another proportion of sectors from donor physical track420together with the entirety of primary physical track510in another single logical track, the donor track420of which is thus shared with the logical track centered on physical track410ofFIG.3. In this example as well, control circuitry22assigns 25% of sectors from donor track420, or every fourth sector of donor track420, to the logical track centered on primary physical track510, and the corresponding logical track thus has 25% more sectors per track than primary physical track510. As illustrated, the logical track continues fromFIG.3with sectors 40-42 in primary physical track510, then sector 43 from donor track420is interleaved into the sector order of the logical track. After sector 43, sectors 44-47 are from primary physical track510, followed by sector 48 which is interleaved from donor track420, and so on. The lighter-shaded sector numbers in donor track420(4, 9, 14, 19, etc.) denotes the previously assigned sectors from the assignment scheme shown inFIG.3. Here, again, tracks510and420can be accessed concurrently per the disk surface—head assignment scheme discussed above. At this track radius, control circuitry22is assigning 25% of sectors of donor track420to be interleaved with a corresponding primary physical track. After the assignments ofFIGS.3and4, 50% of the sectors of donor track420have been assigned. Control circuitry22may also assign sectors of donor track420to two additional primary physical tracks, for four in total, and achieve full usage of donor track420. In this example, 25% of donor track420is assigned to each of four primary physical tracks, adding up to 100% usage of donor track420, with no remainder. At other disk radii, the sectors per physical track may not work out to assign proportions of a single donor track to proximate primary physical tracks with zero or negligible remainder, as in this example. In various examples, an FLT disk drive may allow a remainder of a donor track to go unused at a given time, though it may rotate usage of portions of the donor track as it overwrites logical tracks over time. 
In various examples, if control circuitry22detects that some donor tracks have unused remainders after usage of a single donor track with one or more primary physical tracks, control circuitry22may assign and bundle remainder portions of multiple donor tracks in a single logical track with a single primary physical track (or with multiple primary physical tracks, in many-to-many FLT implementations). Primary track410, primary track510, and donor track420may all be at approximately the same track radius, in various examples. Donor track420may be at nearly the same or a substantially identical track radius to primary tracks410and510and on the opposing side of the same disk as primary tracks410and510, in various examples. Donor track420, primary tracks410and510, and two other primary physical tracks associated with donor sectors of donor track420, may all be proximate to each other, at a same or substantially the same track radius on one or both sides of a single disk or of multiple disks in a disk stack in disk drive15, in various examples. Control circuitry22may implement techniques and processes to keep all primary physical tracks associated with a single donor track in logical tracks physically closely aligned with each other, and may control fine actuators such as milliactuators, microactuators, and/or other fine actuators to exercise fine control of the respective heads writing and reading those respective physical tracks on the respective disk surfaces, in various examples. Disk drive15may make use of one or more fine actuators per actuator arm and per head, for purposes of maintaining close physical alignment of primary and donor physical tracks logically assigned to one or more common logical tracks, and in close vertical alignment on either side of a disk16, in various examples. Control circuitry22and disk drive15may also alternate over time between heads, physical tracks, and disk surfaces for assigning to either primary physical track usage or donor physical track usage. For example, control circuitry22may at first assign a first disk surface17A on a first side of a disk16A and a first head18A that corresponds with that first side of disk16A for writing and reading primary physical track410, and may correspondingly assign the opposing disk surface17B on the opposing side of disk16A, and second head18B that corresponds with that opposing side of disk16A, for writing and reading donor physical track420. Then, at a later time, when either overwriting or refreshing the existing logical track, control circuitry22may write a primary physical track to disk surface17B on the opposing side of disk16A using head18B, and for an associated donor track, logically bundled with that same primary physical track, may write the donor track to first disk surface17A on the first side of disk16A using first head18A. Disk drive15may thus alternate back and forth over time between heads, tracks, and disk surfaces, for purposes that may illustratively include facilitating maintaining close physical alignment of the primary and donor physical tracks, and/or may include distributing “wear and tear” effects evenly over time, such as to include ameliorating long-term or medium-term or short-term physical effects of usage over time, such as thermal expansion or deformation of heads18A and18B. 
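The sector bookkeeping ofFIGS.3and4can be expressed compactly in code. The Python sketch below is illustrative only: it assumes 32-sector tracks, the one-donor-sector-after-every-four-primary-sectors interleave described forFIG.3, and a round-robin split of donor track420among four primary tracks; it is not the assignment logic actually implemented by control circuitry22.

# Illustrative sketch of the FIG. 3/FIG. 4 layouts: each logical track takes a
# whole 32-sector primary track plus 25% of a shared 32-sector donor track,
# with one donor sector interleaved after every four primary sectors.
def build_logical_track(primary_sectors, donor_sectors, primary_run=4):
    """Return the logical-sector order as (source, physical_sector) tuples."""
    layout, d = [], 0
    for i, p in enumerate(primary_sectors):
        layout.append(("primary", p))
        if (i + 1) % primary_run == 0 and d < len(donor_sectors):
            layout.append(("donor", donor_sectors[d]))   # interleave one donor sector
            d += 1
    return layout

def share_donor_track(donor_sector_count, primary_track_ids):
    """Round-robin split of a donor track's sectors among primary tracks."""
    shares = {pid: [] for pid in primary_track_ids}
    for s in range(donor_sector_count):
        shares[primary_track_ids[s % len(primary_track_ids)]].append(s)
    return shares

# Four primary tracks (410, 510, and two more) consume the donor track fully.
shares = share_donor_track(32, ["410", "510", "third", "fourth"])
track_410 = build_logical_track(list(range(32)), shares["410"])
print(len(track_410))            # 40 logical sectors, matching the 0-39 of FIG. 3
print(len(shares["fourth"]))     # 8 donor sectors (25%) for each primary track

With four primary tracks each drawing 25%, the donor track is fully used with no remainder, as described above for that track radius.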
FIG.5depicts another example FLT format logical sector layout600across a primary physical track610and a donor physical track620, in a simplified form, as may be assigned by control circuitry22and implemented by disk drive15, in accordance with aspects of the present disclosure. Whereas control circuitry22in the FLT implementation examples ofFIGS.3and4assigns a single donor sector at a time for a corresponding portion of sectors (four consecutive primary sectors at a time in this example) on the primary physical track, control circuitry22in the FLT implementation example ofFIG.5assigns a set of four consecutive donor sectors at a time for corresponding portions of varying sizes of primary track sectors on the primary physical track. In particular, in this example, control circuitry22assigns four consecutive sectors of donor track620as sectors 7 through 10 of the logical track, after assigning the first seven sectors of the primary physical track as the first seven sectors (sectors 0 through 6) of the logical track; then assigns another fifteen sectors of the logical track to the primary physical track (sectors 11 through 25), then assigns another span of four consecutive donor track sectors to the logical track (sectors 26 through 29 of the logical track), then resumes assigning logical track sectors to the primary physical track. This arrangement of assigning multiple consecutive logical track sectors to the donor track at a time may offer a different set of advantages, in the context of an overall constrained optimization among multiple overall performance criteria, than assigning a single logical track sector to the donor track at a time. Assigning multiple consecutive sectors to donor track620may reduce overall complexity and overhead, which may come with a tradeoff of a rotational offset between logical block addresses and channel buffering in transitioning between the donor track and the primary track, with transitions between logical blocks of the donor track and the primary track at a modestly greater spacing than in the examples ofFIGS.3and4, in which the donor track sectors remain adjacent to their consecutive logical track sectors on the primary physical track. Disk drive15may use additional techniques such as 2× channel buffering for short periods to help minimize the rotational offset between the logical block addresses and channel buffering, in various examples. FIG.6illustrates comparative sectors per track (physical or logical track as applicable), shown on the y-axis, from outer diameter to inner diameter (from left to right), shown across the x-axis, of disk surfaces17, across conventional, multiple integer band, and FLT disk drives, for otherwise comparable disk drives, as shown in graph300, with the understanding that on-track read data rates may be proportional to sectors per track, in accordance with aspects of the present disclosure. Track radius or disk position diameter may also be referred to in terms of stroke, with stroke of 0 to 100% corresponding with diameter from outer diameter to inner diameter, as shown in the labeling along the x-axis. Line segment301shows sectors per track for a conventional disk drive. In this example, sectors per track go from approximately 600 at the outer diameter to approximately 260 at the inner diameter. In this conventional disk drive, the sectors per track refers to sectors per physical track, as there is no distinction between logical tracks and physical tracks.
Line segment302shows sectors per track for a multiple integer band disk drive, and in particular with a 2× multiple integer band implementation, in which pairs of physical tracks are logically bundled together, such that each logical track comprises two physical tracks. In this example, sectors per track are simply double the sectors per track for the conventional disk drive as shown in line segment301, and go from approximately 1,200 at the outer diameter to approximately 520 at the inner diameter. Line segment311shows sectors per logical track for an example FLT disk drive of this disclosure, in a particular example referred to as 1×FLT, as explained below. As illustrated by line segment311, the sectors per logical track in examples of 1×FLT are constant across most of the track radii, at an identical level to that at the outer diameter in a conventional disk drive, as illustrated at the left edge of line segment301. Close to the inner diameter, at a stroke of approximately 85% to 90%, line segment311of 1×FLT intersects line segment302of 2× multiple integer band, which indicates a track radius at which the sectors per physical track is half that at the outer diameter, i.e., 300, such that an entire donor track becomes associated with a primary track in an FLT logical track, and the distinction between the donor and primary tracks disappears. In this example of 1×FLT, there is also no usage of a third track to become a donor track to the two primary tracks at smaller track radii than at the intersection with 2× multiple integer band, so in the innermost and lowest 10% or 15% of track radii, 1×FLT logical track assignments coincide with and are equivalent to 2× multiple integer band logical track assignments. Graph300also shows the sectors per logical track for another example FLT implementation referred to as 1.2×FLT, in line segment312. As with the sectors per logical track for 1×FLT as shown in line segment311, a 1.2×FLT disk drive, in various examples, has a constant number of sectors per logical track across a substantial proportion of track radii between the outer diameter and inner diameter of disks16. This example is referred to as 1.2×FLT because it assigns a number of sectors per logical track equal to 1.2 times the maximum number of sectors per physical track, at the outer diameter. In this example in which the number of sectors per physical track at the outer diameter is 600, the 1.2×FLT disk drive assigns 720 sectors per logical track. At the outer diameter, the 1.2×FLT disk drive assigns 20% of a donor track to the outermost, longest full primary physical track, such that that logical track carries 120% of the bandwidth of that outermost, longest full primary physical track. Across the majority of the disk diameter in which the 1.2×FLT disk drive implements the constant sectors per track at 720, the 1.2×FLT disk drive assigns an ever higher proportion of a donor track beyond 20%, in proportional compensation for the decreasing physical track radius, to each full primary physical track, such that each logical track carries 120% of the physical capacity of the outermost physical track.
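The constant-sectors-per-logical-track behavior of line segments 311 and 312 can be illustrated with a small Python sketch, assuming a simple linear sectors-per-track profile between the 600-sector outer diameter and 260-sector inner diameter values quoted for graph 300 (a real zoned format would be stepwise rather than linear):

# Hedged sketch: assumed linear profile and assumed endpoint sector counts.
OD_SECTORS, ID_SECTORS = 600, 260

def physical_sectors(stroke):
    """Sectors per physical track at a given stroke (0.0 = outer diameter, 1.0 = inner)."""
    return OD_SECTORS + (ID_SECTORS - OD_SECTORS) * stroke

def donor_fraction(stroke, multiple=1.0):
    """Fraction of a donor track needed to hold the logical track at
    multiple * OD_SECTORS sectors; capped at 1.0 (entire donor track in use)."""
    target = multiple * OD_SECTORS
    return min(max(target / physical_sectors(stroke) - 1.0, 0.0), 1.0)

def saturation_stroke(multiple=1.0):
    """Stroke at which the donor track is fully used and the format merges with
    2x multiple integer band (physical sectors fall to half the logical target)."""
    return (OD_SECTORS - multiple * OD_SECTORS / 2.0) / (OD_SECTORS - ID_SECTORS)

for stroke in (0.0, 0.5, 0.85):
    print(f"stroke {stroke:.0%}: 1xFLT donor {donor_fraction(stroke):.0%}, "
          f"1.2xFLT donor {donor_fraction(stroke, 1.2):.0%}")
print(f"1xFLT merges with 2x band at ~{saturation_stroke(1.0):.0%} stroke")
print(f"1.2xFLT merges with 2x band at ~{saturation_stroke(1.2):.0%} stroke")

Under these assumptions the printed saturation strokes land near the 85% to 90% figure quoted for 1×FLT, and at a lesser stroke for 1.2×FLT, consistent with where each line segment is described as meeting 2× multiple integer band.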
The rationale for this 1.2× implementation example is that the channel bandwidth or capacity may often be designed and implemented with an extra headroom margin of bandwidth, such as 20% extra bandwidth, beyond the nominal bandwidth or physical capacity of the outermost physical track, and thus of the nominal maximum data access bandwidth capacity of the disk drive, assuming a conventional (pre-FLT) track formatting. The value of 20% for extra capacity in this example, and the corresponding value of 1.2× for the FLT implementation, is an illustrative example, and other examples may include any feasible proportion or percentage of nominal extra channel margin. Various example FLT implementations in the family of examples that make use of nominal extra channel margin in pre-existing (or analogous later-developed) channel architecture and firmware may correspondingly implement logical tracks with any fractional multiple of the maximum physical outer track diameter capacity, and so may be implemented in examples of FLT technology with 1.1×, 1.3×, or 1.4× fractional multiples of outer diameter maximum physical track capacity, or any other feasible multiple within the ranges of 1× through 1.4× or any other feasible multiple higher than 1.4×, in various examples of FLT implementations, within the bounds of this disclosure. As graph300shows, line segment312for a 1.2×FLT implementation intersects with line segment302for 2× multiple integer band, at a higher disk diameter and lesser stroke than for the values at which line segment311for a 1×FLT implementation intersects with line segment302, as 1.2×FLT assigns an entire donor track to a primary track for the same logical track, saturating the donor track, and merges with 2× multiple integer band at a larger track radius than 1×FLT. In other examples, FLT control circuitry22may assign a third track as a donor track per logical track at inner track radii inward of where FLT uses two full primary physical tracks, in which case, the graph line segments analogous to line segments311,312for sectors per logical track per track radius would continue horizontally rightward of line segment302for 2× multiple integer band. Table 1 below shows data rate performance increases for FLT, for the particular examples of 1×FLT and 1.2×FLT, relative to conventional baseline without any distinction of logical tracks from physical tracks, and compared with for 2× multiple integer band (where multiple integer band achieves the data rate performance improvements while requiring additional channel architecture and firmware, which the FLT examples do not), for the examples of sequential reads, 2 MB random reads, and 2 MB super cache reads (the further examples of FLT super cache reads in accordance with aspects of this disclosure are explained below). As shown, the FLT examples enable substantial data rate performance improvements, across various data transfer scenarios, within the bounds of pre-existing single-channel SoC and firmware. 
TABLE 1

Configuration              Sequential    2 MB Random      2 MB Super Cache
1× FLT                     +29%          +12% to +18%     +20%
1.2× FLT                   +51%          +18% to +30%     +33%
2× multiple integer band   +100%         +30% to +52%     +60%

FIG.7depicts graph700of IOPS per queue depth for 1× and 1.2× example FLT disk drives (curves711and712, respectively) relative to conventional single-track reads and 2× multiple integer band (curves701and702, respectively), for constant 2 MB full volume random reads, in accordance with aspects of this disclosure.FIG.8depicts graph800of IOPS percentage gain per queue depth for 1× and 1.2× example FLT disk drives (curves811and812, respectively) relative to conventional single-track reads and 2× multiple integer band (the x-axis baseline and curve802, respectively), for constant 2 MB full volume random reads, in accordance with aspects of this disclosure. IOPS and IOPS percentage gain are depicted along the y-axis of graphs700and800, respectively, and queue depth is depicted along the x-axis in both of graphs700,800. As demonstrated, an FLT disk drive of this disclosure, in various examples, of which 1× and 1.2×FLT are illustrative, enables substantial IOPS performance gains over a conventional single-band baseline, with proportionally increasing IOPS performance gains for larger queue depth, which reduces the relative role of seek and latency relative to read transfer time in overall data rate. FIG.9depicts graph900of IOPS per read operation transfer length (on a logarithmic scale) for 1×, 1.5×, 2×, and 4× example FLT disk drives (curves911,913,915,917, respectively), in FLT examples in which disk drives incorporate super cache technology in combination with FLT technology of this disclosure, relative to conventional single-track reads (curve901), in accordance with aspects of this disclosure. In some of these examples, disk drive15may also incorporate multiple integer band technology, together with FLT and super cache.FIG.10depicts graph1000of IOPS percentage gain per read operation transfer length (on a logarithmic scale) for 1×, 1.5×, 2×, and 4× example FLT disk drives (curves1011,1013,1015,1017, respectively), in FLT examples in which disk drives incorporate super cache technology in combination with FLT technology, and potentially also with multiple integer band technology, relative to conventional single-track reads (the x-axis baseline), in accordance with aspects of this disclosure. In super cache examples, disk drive15extends data access buffering to an internal embedded flash memory or any type of solid-state non-volatile memory component (which may have high endurance and low retention), which may enable buffering and substantially increasing the internal queue depth in the embedded flash memory or any type of solid-state non-volatile memory component, beyond what might be possible in an otherwise analogous disk drive, and substantially reducing average seek and latency, such as by enabling storing a large number of write commands even with high transfer length. In other words, disk drive15may leverage an internal flash memory or any type of solid-state non-volatile memory component to convert or bundle numbers of relatively small data transfers ordered by host44into relatively long transfers with disk surfaces17for sequential operations that also use FLT technology of this disclosure, which may additionally be combined with multiple integer band technology.
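The queue-depth trend in graphs 700 and 800 can be illustrated with a rough first-order timing model (entirely assumed numbers, not measured drive data): per-command time is treated as seek-plus-latency overhead plus transfer time, deeper queues are assumed to shave average overhead, and FLT multiplies only the transfer rate, so its relative IOPS gain grows as transfer time comes to dominate.

# Hedged sketch: assumed overhead model, assumed base transfer rate, assumed FLT gain.
def iops(transfer_mb, mb_per_s, seek_latency_ms=8.0, queue_depth=1):
    """Crude IOPS estimate; deeper queues are assumed to reduce average seek+latency."""
    effective_overhead_ms = seek_latency_ms / (1.0 + 0.1 * (queue_depth - 1))
    transfer_ms = transfer_mb / mb_per_s * 1000.0
    return 1000.0 / (effective_overhead_ms + transfer_ms)

BASE_RATE = 250.0   # assumed conventional outer-diameter on-track rate, MB/s
for qd in (1, 4, 16, 64):
    base = iops(2.0, BASE_RATE, queue_depth=qd)
    flt = iops(2.0, BASE_RATE * 1.29, queue_depth=qd)   # ~+29% on-track rate, as for 1xFLT
    print(f"QD {qd:3d}: ~{(flt / base - 1) * 100:.0f}% IOPS gain from the faster transfer")

The printed gains grow monotonically with queue depth, mirroring the trend described for curves 811 and 812, though the specific percentages depend entirely on the assumed overhead model.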
Thus, in various examples, an FLT disk drive15may also use super cache synergistically with FLT formatting to gain additional data rate performance advantages, synergistically combining novel advantages in all sources of overall data transfer time, including both seek and latency time and on-track data transfer time, in accordance with aspects of this disclosure. In some examples, disk drive15may also make use of an internal flash memory or any type of solid-state non-volatile memory component as a donor for an FLT logical track format, to add extra headroom margin to an FLT logical track format, temporary or long-term, as an alternative or in combination with using a physical disk surface track as a donor track. In some of these examples, control circuitry22may reserve only a portion or up to a portion of an internal flash memory or any type of solid-state non-volatile memory component as virtual donor tracks, which disk drive15may use in any or all of the same ways it uses donor tracks as described and depicted herein. The term “donor track” may equivalently refer to a donor physical track on a disk surface or a virtual donor track in a flash memory or any type of solid-state non-volatile memory component, in accordance with aspects of this disclosure. Disk drive15may combine usage of donor physical tracks with physical virtual donor tracks in any combination, using either/or, or both in combination, at different times and/or at different disk surface radii or locations. As one illustrative example, disk drive15may implement a 1.4×FLT format in which, for all data assigned to a primary physical track, disk drive15also assigns 20% as much data to a disk surface donor track and 20% as much data to a donor track stored in a flash memory or other solid-state memory component within the hard drive. Thus, control circuitry may assign and use at least a portion of sectors of either a donor physical track of the physical tracks, or sectors of a non-volatile memory storage, such as a flash memory or other solid-state memory component within the hard drive, or any other type of non-volatile memory storage or component, as a donor track, for any purposes as described herein in terms of comprising, using, and performing data access operations with a donor track, including in lieu of any assignment and use of a donor physical track, in various examples. Thus, such sectors of the non-volatile memory storage may form a virtual donor track, and be used as a donor track for all purposes and in all functions of a donor track as described herein (with exceptions to the description as appropriate for the different natures of a donor physical track on a disk surface and a virtual donor track of sectors in a non-volatile memory storage). In some examples, a respective logical track may be assigned to comprise both the portion of sectors of the donor physical track and the sectors of the non-volatile memory storage, as a single, integral, flexible donor track, in which the control circuitry may flexibly use and control both the donor physical track and the virtual donor track of the sectors of the non-volatile memory storage as a donor track for the logical track. 
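A minimal sketch of the flexible donor idea, with hypothetical class and parameter names: a logical track's extra sectors may be drawn from a donor physical track on a disk surface, from a virtual donor track reserved in the internal solid-state memory, or from both, as in the 1.4× example above (20% disk donor plus 20% flash donor per primary track).

# Hedged sketch: hypothetical FlexibleDonor abstraction, not the drive's actual firmware.
class FlexibleDonor:
    def __init__(self, disk_donor_sectors, flash_donor_sectors):
        self.sources = [("disk_donor", list(disk_donor_sectors)),
                        ("flash_donor", list(flash_donor_sectors))]

    def allocate(self, n_sectors):
        """Take donor sectors for a logical track, draining the disk donor pool first."""
        out = []
        for name, pool in self.sources:
            while pool and len(out) < n_sectors:
                out.append((name, pool.pop(0)))
        return out

primary_len = 600                                  # assumed sectors on the primary physical track
donor = FlexibleDonor(range(120), range(120))      # 20% disk headroom + 20% flash headroom
extra = donor.allocate(int(0.4 * primary_len))     # 1.4x logical track needs 40% extra sectors
print(len(extra), "donor sectors:", extra[:2], "...", extra[-2:])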
In some examples, as noted above, disk drive15may implement a many-to-many FLT logical track format, in which disk drive15logically bundles multiple primary physical tracks with multiple donor tracks into single logical tracks, across either a portion or the entirety of track radii between inner and outer diameters of disks16, and using three or more heads for data access operations with three or more physical tracks (primary and donor) for a single logical track. In various examples, an FLT disk drive of this disclosure may assign different format technologies to different portions of disks16, such as different track radii of disks16. As an illustrative example, a disk drive of this disclosure may assign an outermost or an innermost 10%, 20%, 30%, or whatever other proportion at whatever other positional range of disks16, to be used with FLT formatting, or with FLT plus multiple integer band in combination, or with FLT plus super cache in combination, or with FLT plus multiple integer band and super cache in combination, and may assign other positional ranges to conventional single-band formatting and logical block addressing, or to multiple integer band and/or super cache without FLT. In various examples, a disk drive of this disclosure comprises a split actuator that comprises and uses two vertically stacked, independent VCMs or other primary actuators and actuator arm assemblies, as described above, with FLT formatting, such that the disk drive may assign, control, and perform data access operations with a primary physical track via a first VCM and first actuator arm assembly, and with a donor track via a second VCM and second actuator arm assembly. In these examples, the control circuitry may assign and write a primary track and a donor track of a respective single logical track without regard to vertical alignment of the primary track and donor track across disk surfaces. Rather, the control circuitry may assign and write a primary track and a donor track of a respective single logical track at independent radial positions on different disk surfaces, without regard to vertical alignment. Thus, the first primary actuator and the second primary actuator are stacked in vertical alignment with each other, and the primary physical track and the donor physical track of the respective logical track are assigned at independent radial positions on a first disk accessed by the first actuator arm assembly and on a second disk accessed by the second actuator arm assembly. This may grant the disk drive an extra degree of freedom and further flexibility in arranging layouts of primary and donor physical tracks for flexible logical tracks. In various examples, an FLT disk drive of this disclosure with multiple actuator arm assemblies may assign one or more selected portions of one or more disks in the disk drive, e.g., the outer 10% of the disks in one example, to be used primarily for long block random commands, with a 4× data rate, interleaving one logical track among four physical tracks instead of two in this special region, with, e.g., three or two primary physical tracks and one or two donor physical tracks, using both of the two actuator arm assemblies in tandem to interleave the individual logical tracks among the four physical tracks at, e.g., a 4× data rate. In the other portions of the disks, the disk drive may operate the two or more actuator arm assemblies independently on different, independent workloads, each operating at, e.g., a 2× data rate.
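A small sketch of the zoned assignment described above, with zone boundaries assumed for illustration: an outer region is dedicated to long block random commands interleaved across four physical tracks using both actuator arm assemblies in tandem, while the rest of the stroke range runs the two assemblies on independent workloads.

# Hedged sketch: hypothetical zone map, not the drive's actual mode table.
ZONE_MAP = [
    (0.00, 0.10, "FLT 4x: one logical track across four physical tracks, both actuators in tandem"),
    (0.10, 1.00, "FLT 2x: independent workloads, one actuator arm assembly each"),
]

def mode_for_stroke(stroke):
    for lo, hi, mode in ZONE_MAP:
        if lo <= stroke < hi or (hi == 1.00 and stroke == 1.00):
            return mode
    raise ValueError("stroke out of range")

print(mode_for_stroke(0.05))
print(mode_for_stroke(0.60))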
The disk drive may also modify the proportions of the disks designated for each data rate mode over time, in response to work demands. An FLT disk drive with one or multiple actuator arm assemblies may thus employ a flexible range of different combinations of data transfer technologies, in various examples. In various examples, an FLT disk drive of this disclosure may have multiple primary actuators, each with its own actuator arm assembly and set of heads at different angular positions around the stack of disks, and potentially each with its own sets of milliactuators and/or microactuators and/or other fine actuators on the actuator arms and/or proximate to the heads, and may enable any of various usages of the heads of the multiple actuator arm assemblies in various implementations of FLT formatting. In various illustrative examples, an FLT disk drive of this disclosure may assign a first actuator arm assembly to implement one or more donor tracks to support primary physical tracks implemented by the other actuator arm assembly, or by both actuator arm assemblies. An FLT disk drive of this disclosure may thus distribute data access operations per individual logical track across multiple actuator arm assemblies, and may interleave an individual logical track to disk surfaces by way of the multiple actuator arm assemblies. An FLT disk drive of this disclosure may implement a single logical track across multiple actuator arm assemblies either still in close vertical alignment on either side of an individual disk, or otherwise proximate on an individual disk, or independently across disks in the disk drive, in different examples. Any suitable control circuitry may be employed to implement the flow diagrams in the above examples, such as any suitable integrated circuit or circuits. For example, the control circuitry may be implemented within a read channel integrated circuit, or in a component separate from the read channel, such as a data storage controller, or certain operations described above may be performed by a read channel and others by a data storage controller. In some examples, the read channel and data storage controller may be implemented as separate integrated circuits, and in some examples, the read channel and data storage controller may be fabricated into a single integrated circuit or system on a chip (SoC). In some examples, the control circuitry may include a suitable preamp circuit implemented as a separate integrated circuit, integrated into the read channel or data storage controller circuit, or integrated into an SoC. In some examples, the control circuitry may comprise a microprocessor executing instructions, the instructions being operable to cause the microprocessor to perform one or more aspects of methods, processes, or techniques shown in the flow diagrams and described with reference thereto herein. Executable instructions of this disclosure may be stored in any computer-readable medium. In some examples, executable instructions of this disclosure may be stored on a non-volatile semiconductor memory device, component, or system external to a microprocessor, or integrated with a microprocessor in an SoC. In some examples, executable instructions of this disclosure may be stored on one or more disks and read into a volatile semiconductor memory when the disk drive is powered on. In some examples, the control circuitry may comprise logic circuitry, such as state machine circuitry.
In some examples, at least some of the flow diagram blocks may be implemented using analog circuitry (e.g., analog comparators, timers, etc.). In some examples, at least some of the flow diagram blocks may be implemented using digital circuitry or a combination of analog and digital circuitry. In various examples, one or more processing devices may comprise or constitute the control circuitry as described herein, and/or may perform one or more of the functions of control circuitry as described herein. In various examples, the control circuitry, or other one or more processing devices performing one or more of the functions of control circuitry as described herein, may be abstracted away from being physically proximate to the disks and disk surfaces. The control circuitry, and/or one or more device drivers thereof, and/or one or more processing devices of any other type performing one or more of the functions of control circuitry as described herein, may be part of or proximate to a rack of multiple data storage devices, or a unitary product comprising multiple data storage devices, or may be part of or proximate to one or more physical or virtual servers, or may be part of or proximate to one or more local area networks or one or more storage area networks, or may be part of or proximate to a data center, or may be hosted in one or more cloud services, in various examples. In various examples, a disk drive may include a magnetic disk drive, an optical disk drive, a hybrid disk drive, or other types of disk drive. Some examples may include electronic devices such as computing devices, data server devices, media content storage devices, or other devices, components, or systems that may comprise the storage media and/or control circuitry as described above. The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations fall within the scope of this disclosure. Certain method, event or process blocks may be omitted in some implementations. The methods and processes described herein are not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences. For example, described tasks or events may be performed in an order other than that specifically disclosed, or multiple may be combined in a single block or state. The example tasks or events may be performed in serial, in parallel, or in another manner. Tasks or events may be added to or removed from the disclosed examples. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed examples. While certain example embodiments are described herein, these embodiments are presented by way of example only, and do not limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description implies that any particular feature, characteristic, step, module, or block is necessary or indispensable. The novel methods and systems described herein may be embodied in a variety of other forms. Various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit and scope of the present disclosure. Method80and other methods of this disclosure may include other steps or variations in various other embodiments. 
Some or all of any of method80and other methods of this disclosure may be performed by or embodied in hardware, and/or performed or executed by a controller, a CPU, an FPGA, a SoC, a measurement and control multi-processor system on chip (MPSoC), which may include both a CPU and an FPGA, and other elements together in one integrated SoC, or other processing device or computing device processing executable instructions, in controlling other associated hardware, devices, systems, or products in executing, implementing, or embodying various subject matter of the method. Data storage systems, devices, and methods implemented with and embodying novel advantages of the present disclosure are thus shown and described herein, in various foundational aspects and in various selected illustrative applications, architectures, techniques, and methods for implementing and embodying novel advantages of the present disclosure. Persons skilled in the relevant fields of art will be well-equipped by this disclosure with an understanding and an informed reduction to practice of a wide panoply of further applications, architectures, techniques, and methods for novel advantages, techniques, methods, processes, devices, and systems encompassed by the present disclosure and by the claims set forth below. As used herein, the recitation of “at least one of A, B and C” is intended to mean “either A, B, C or any combination of A, B and C.” The descriptions of the disclosed examples are provided to enable any person skilled in the relevant fields of art to understand how to make or use the subject matter of the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art based on the present disclosure, and the generic principles defined herein may be applied to other examples without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The present disclosure and many of its attendant advantages will be understood by the foregoing description, and various changes may be made in the form, construction, and arrangement of the components without departing from the disclosed subject matter or without sacrificing all or any of its material advantages. The form described is merely explanatory, and the following claims encompass and include a wide range of embodiments, including a wide range of examples encompassing any such changes in the form, construction, and arrangement of the components as described herein. While the present disclosure has been described with reference to various examples, it will be understood that these examples are illustrative and that the scope of the disclosure is not limited to them. All subject matter described herein are presented in the form of illustrative, non-limiting examples, and not as exclusive implementations, whether or not they are explicitly called out as examples as described. Many variations, modifications, and additions are possible within the scope of the examples of the disclosure. More generally, examples in accordance with the present disclosure have been described in the context of particular implementations. 
Functionality may be separated or combined in blocks differently in various examples of the disclosure or described with different terminology, without departing from the spirit and scope of the present disclosure and the following claims. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.
61,009
11862212
MODE(S) FOR CARRYING OUT THE INVENTION An embodiment according to the present technology will now be described below with reference to the drawings. [Configuration of Magnetic Recording Medium] First, a basic configuration of a magnetic recording medium will be described.FIG.1is a schematic diagram showing a magnetic recording medium1according to an embodiment of the present technology as viewed from the side, andFIG.2is a schematic diagram showing the magnetic recording medium1as viewed from the side of a magnetic layer. As shown inFIG.1andFIG.2, the magnetic recording medium1has a tape shape that is long in the longitudinal direction (X-axis direction), short in the width direction (Y-axis direction), and thin in the thickness direction (Z-axis direction). Note that in the present specification (and the drawings), a coordinate system with reference to the magnetic recording medium1is represented by an XYZ coordinate system. The magnetic recording medium1is favorably configured to be capable of recording signals at the shortest recording wavelengths of 96 nm or less, more favorably 75 nm or less, still more favorably 60 nm or less, and particularly favorably 50 nm or less. The magnetic recording medium1is favorably used in a data recording device including a ring-type head as a recording head. Referring toFIG.1, the magnetic recording medium1includes a tape-shaped base material11that is long in the longitudinal direction (X-axis direction), a non-magnetic layer12provided on one main surface of the base material11, a magnetic layer13provided on the non-magnetic layer12, and a back layer14provided on the other main surface of the base material11. Note that the back layer14may be provided as necessary and may be omitted. As the magnetic layer13, a coating type magnetic medium using a perpendicular recording method is typically used. Note that the magnetic recording medium1including the magnetic layer13will be described below in detail. [Data Band and Servo Band] FIG.2is a schematic diagram of the magnetic recording medium1as viewed from above. Referring toFIG.2, the magnetic layer13includes a plurality of data bands d (data bands d0to d3) long in the longitudinal direction (X-axis direction) in which a data signal is written, and a plurality of servo bands s (servo bands s0to s4) long in the longitudinal direction in which a servo signal is written. The servo bands s are located at positions where the respective data bands d are sandwiched in the width direction (Y-axis direction). In the present technology, the ratio of the area of the servo bands s to the area of the entire surface of the magnetic layer13is typically set to 4.0% or less. Note that the width of the servo band s is typically set to 95 μm or less. The ratio of the area of the servo bands s to the area of the entire surface of the magnetic layer13can be measured by, for example, developing the magnetic recording medium1using a developer such as a ferricolloid developer and then observing the developed magnetic recording medium1under an optical microscope. Since the servo bands s are located at positions where the respective data bands d are sandwiched, the number of servo bands s is one more than the number of data bands d. In the example shown inFIG.2, the number of data bands d is four and the number of servo bands s is five (an arrangement commonly employed in existing systems).
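As a quick plausibility check of the 4.0% figure (a sketch only; the tape width is assumed here to be the 12.65 mm of a typical LTO-format tape, which is not stated in this passage), five servo bands of at most 95 μm occupy roughly 3.8% of the magnetic layer surface, since the bands run the full length of the tape:

# Hedged sketch: assumed tape width; the area ratio reduces to a width ratio because
# both servo bands and data bands extend over the full tape length.
TAPE_WIDTH_UM = 12650.0     # assumed tape width (LTO-like), in um
N_SERVO_BANDS = 5
SERVO_BAND_WIDTH_UM = 95.0

servo_area_ratio = N_SERVO_BANDS * SERVO_BAND_WIDTH_UM / TAPE_WIDTH_UM
print(f"servo band area ratio ~ {servo_area_ratio:.1%}")   # ~3.8%, within the 4.0% bound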
Note that the number of data bands d and the number of servo bands s can be changed as appropriate, and these numbers may be increased. In this case, the number of servo bands s is favorably five or more. When the number of servo bands s is five or more, it is possible to ensure stable recording/reproduction characteristics with less off-track by suppressing the effect of dimensional changes of the magnetic recording medium1in the width direction on the accuracy of servo signal reading, Further, the number of data bands d may be 8, 12, . . . , (i.e., 4n (n represents an integer greater than or equal to two)) and the number of servo bands s may be 9, 13, . . . (i.e., 4n+1 (n represents an integer greater than or equal to two)). In this case, it is possible to cope with the change of the number of data bands d and the number of servo bands s without changing the existing systems. The data band d includes a plurality of recording tracks5that is long in the longitudinal direction and aligned in the width direction. The data signals are recorded on the recording tracks5along the recording tracks5. Note that in the present technology, the one-bit length in the longitudinal direction in the data signal to be recorded on the data band d is typically 48 nm or less. The servo band s includes a servo signal recording pattern6of predetermined patterns on which a servo signal is recorded by a servo signal recording device (not shown). FIG.3is an enlarged view showing the recording tracks5in the data band d. As shown inFIG.3, the recording tracks5are each long in the longitudinal direction, are aligned in the width direction, and each have a predetermined recording track width Wd for each track in the width direction. This recording track width Wd is typically 2.0 μm or less. Note that such a recording track width Wd can be measured by, for example, developing the magnetic recording medium1using a developer such as a ferricolloid developer and then observing the developed magnetic recording medium1under an optical microscope. The number of recording tracks5included in one data band d is, for example, approximately 1,000 to 2,000. FIG.4is an enlarged view showing the servo signal recording pattern6in the servo band s. As shown inFIG.4, the servo signal recording pattern6includes a plurality of stripes7(azimuthal slope) inclined at a predetermined azimuth angle α with respect to the width direction (Y-axis direction). The azimuth angle is not particularly limited, is appropriately determined depending on the size and the like of the servo band s, and is, for example, 12°. Alternatively, the azimuth angle may be 15°, 18°, 21°, 24°, or the like. The plurality of stripes7is classified into a first stripe group8that is inclined clockwise with respect to the width direction (Y-axis direction) and a second stripe group9which is inclined counterclockwise with respect to the width direction. Note that the shape and the like of such a stripe7can be measured by, for example, developing the magnetic recording medium1using a developer such as a ferricolloid developer and then observing the developed magnetic recording medium1under an optical microscope. InFIG.4, a servo trace line T, which is a line traced by the servo read head on the servo signal recording pattern6, is indicated by a broken line. The servo trace line T is set along the longitudinal direction (X-axis direction) and is set at a predetermined interval Ps in the width direction. 
The number of the servo trace lines T per servo band s is, for example, approximately 30 to 200. The interval Ps between two adjacent servo trace lines T is the same as the value of the recording track width Wd, and is, for example, 2.0 μm or less, or 1.5 μm or less. Here, the interval Ps of the two adjacent servo trace lines T is a value that determines the recording track width Wd. That is, when the interval Ps between the servo trace lines T is narrowed, the recording track width Wd becomes smaller, and the number of recording tracks5included in one data band d increases. As a result, the recording capacity of data increases (the opposite is true in the case where the interval Ps increases). Therefore, in order to increase the recording capacity, while the recording track width Wd needs to be reduced, the interval Ps of the servo trace line T, is also narrowed. As a result, it is difficult to accurately trace adjacent servo trace lines. In this regard, in this embodiment, it is possible to cope with the narrowing of the interval Ps by increasing the reading accuracy of the servo signal recording pattern6as will be described below. [Data Recording Device] Next, a data recording device20for recording/reproducing data signals to/from the magnetic recording medium1will be described.FIG.5is a schematic diagram showing the data recording device20. Note that in the present specification (and the drawings), a coordinate system with reference to the data recording device20is represented by an X′Y′Z′ coordinate system. The data recording device20is configured to be capable of loading the cartridge21housing the magnetic recording medium1. Note that although a case where the data recording device20is capable of loading one cartridge21will be described here for ease of description, the data recording device20may be configured to be capable of loading a plurality of cartridges21. Further, the configuration of the cartridge21will be described below in detail. As shown inFIG.5, the data recording device20includes a spindle27, a reel22, a spindle driving device23, a reel driving device24, a plurality of guide rollers25, a head unit30, and a control device26. The spindle27is configured to be capable of loading the cartridge21. The cartridge21complies with the LTO (Linear Tape Open) standard and rotatably houses the wound magnetic recording medium1inside the case. The reel22is configured to be capable of fixing the leading end of the magnetic recording medium1pulled out from the cartridge21. The spindle driving device23causes, in response to a command from the control device26, the spindle27to rotate. The reel driving device24causes, in response to a command from the control device26, the reel22to rotate. When data signals are recorded/reproduced on/from the magnetic recording medium1, the spindle driving device23and the reel driving device24respectively cause the spindle27and the reel22to rotate, thereby causing the magnetic recording medium1to travel. The guide roller25is a roller for guiding the traveling of the magnetic recording medium1. The control device26includes, for example, a control unit, a storage unit, a communication unit, and the like. The control unit includes, for example, a CPU (Central Processing Unit) and the like, and integrally controls the respective units of the data recording device20in accordance with a program stored in the storage unit. 
The storage unit includes a non-volatile memory on which various types of data and various programs are to be recorded, and a volatile memory used as a work area of the control unit. The above-mentioned various programs may be read from a portable recording medium such as an optical disk and a semiconductor memory, or may be downloaded from a server device on a network. The communication unit is configured to be capable of communicating with other devices such as a PC (Personal Computer) and a server device. The head unit30is configured to be capable of recording, in response to a command from the control device26, a data signal to the magnetic recording medium1. Further, the head unit30is configured to be capable of reproducing data written to the magnetic recording medium1in response to a command from the control device26. FIG.6is a diagram of the head unit30as viewed from below. As shown inFIG.6, the head unit30includes a first head unit30aand a second head unit30b. The first head unit30aand the second head unit30bare configured symmetrically in the X′-axis direction (the traveling direction of the magnetic recording medium1). The first head unit30aand the second head unit30bare configured to be movable in the width direction (Y′-axis direction). The first head unit30ais a head used when the magnetic recording medium1travels in the forward direction (flow direction from the cartridge21side to the device20side). Meanwhile, the second head unit30bis a head used when the magnetic recording medium1travels in the opposite direction (flow direction from the device20side to the cartridge21side). Since the first head unit30aand the second head unit30bhave basically the same configuration, the first head unit30awill typically be described. The first head unit30aincludes a unit body31, two servo read heads32, and a plurality of the data write/read heads33. A servo read head32is configured to be capable of reproducing a servo signal by reading the magnetic flux generated from magnetic information recorded on the magnetic recording medium1(servo band s) by an MR device (MR: Magneto Resistive) or the like. That is, the servo read head32reads the servo signal recording pattern6recorded on the servo band s to reproduce the servo signal. The servo read head32is provided one each on both ends of the width direction (Y′-axis direction) in the unit body31. The interval between the two servo read heads32in the width direction (Y′-axis direction) is substantially the same as the distance between adjacent servo bands s in the magnetic recording medium1. The data write/read heads33are disposed along the width direction (Y′-axis direction) at equal intervals. Further, the data write/read head33is disposed at a position sandwiched between the two servo read heads32. The number of the data write/read heads33is, for example, approximately 20 to 40, but this number is not particularly limited. The data write/read head33includes a data write head34and a data read head35. The data write head34is configured to be capable of recording data signals on the magnetic recording medium1by a magnetic field generated from a magnetic gap.
Further, the data read head35is configured to be capable of reproducing a data signal by reading the magnetic field generated from the magnetic information recorded on the magnetic recording medium1(data band d) by an MR device (MR: Magneto Resistive) or the like, In the first head unit30a, the data write head34is disposed on the left side of the data read head35(upstream side when the magnetic recording medium1flows in the forward direction). Meanwhile, in the second head unit30b, the data write head34is disposed on the right side of the data read head35(upstream side when the magnetic recording medium1flows in the opposite direction). Note that the data read head35is capable of reproducing a data signal immediately after the data write head34writes the data signal to the magnetic recording medium1. FIG.7is a diagram showing the state when the first head unit30aperforms recording/reproduction of a data signal. Note that in the example shown inFIG.7, a state where the magnetic recording medium1is caused to travel in the forward direction (flow direction from the cartridge21side to the device20side) is shown. As shown inFIG.7, when the first head unit30arecords/reproduces a data signal, one of the two servo read heads32is located on one of the two adjacent servo bands s and reads the servo signal on this servo band s. Further, the other of the two servo read heads32of is located on the other of the two adjacent servo bands s and reads the servo signal on this servo band s. Further, at this time, the control device26determines, on the basis of the reproduced waveform of the servo signal recording pattern6, whether or not the servo read head32traces on the target servo trace line T (seeFIG.4) accurately. This principle will be described. As shown inFIG.4, the first stripe group8and the second stripe group9in the servo signal recording pattern6are inclined in opposite directions with respect to the width direction (Y-axis direction). For this reason, in the upper servo trace line T, the distances between the first stripe group8and the second stripe group9in the longitudinal direction (X-axis direction) are relatively small. Meanwhile, on the lower servo trace line T, the distances between the first stripe group8and the second stripe group9in the longitudinal direction (X-axis direction) are relatively wide. Therefore, by obtaining the difference between the time at which the reproduced waveform of the first stripe group8has been detected and the time at which the reproduced waveform of the second stripe group9has been detected, the current position of the servo read head32in the width direction (Y-axis direction) relative to the magnetic recording medium1can be known. Accordingly, the control device26is capable of determining, on the basis of the reproduced waveform of the servo signal, whether or not the servo read head32accurately traces on the target servo trace line T. Then, in the case where the servo read head32does not trace on the target servo trace line T accurately, the control device26causes the head unit30to move in the width direction (Y′-axis direction) to adjust the position of the head unit30. Referring toFIG.7again, the data write/read head33records data signals on the recording tracks5along the recording tracks5while the position of the data write/read head33in the width direction is adjusted (when shifted). 
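The position-detection principle described above can be sketched numerically as follows (assumed geometry and an assumed on-track reference gap; this is not the actual servo demodulation firmware): because the two stripe groups are inclined in opposite directions by the azimuth angle, the longitudinal gap between them changes by 2·tan α per unit of lateral offset, so the measured time difference between the two reproduced bursts maps linearly to a position error that the control device can correct by moving the head unit.

# Hedged sketch: assumed reference gap and sign convention.
import math

ALPHA_DEG = 12.0        # azimuth angle of the stripes
VELOCITY_UM_S = 5.0e6   # tape velocity of 5 m/s, expressed in um/s
GAP_REF_UM = 50.0       # assumed gap between the two stripe groups when exactly on the target trace line

def lateral_offset_um(measured_dt_s):
    """Estimated offset of the servo read head from the target servo trace line, in um."""
    measured_gap_um = measured_dt_s * VELOCITY_UM_S
    return (measured_gap_um - GAP_REF_UM) / (2.0 * math.tan(math.radians(ALPHA_DEG)))

on_track_dt = GAP_REF_UM / VELOCITY_UM_S      # time difference seen when tracing exactly on track
print(f"{lateral_offset_um(on_track_dt):+.2f} um offset when on track")
print(f"{lateral_offset_um(on_track_dt + 0.13e-6):+.2f} um offset for 0.13 us of extra delay")

Under these assumptions, a 0.13 μs change in the measured time difference corresponds to roughly one trace-line interval of lateral offset, which is consistent with the adjacent-track arithmetic worked through later in this section.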
Here, when the magnetic recording medium1is completely pulled out of the cartridge21, then, the magnetic recording medium1is caused to travel in the opposite direction (flow direction from the device20side to the cartridge21side). At this time, the second head unit30bis used as the head unit30. Further, at this time, as the servo trace line T, the servo trace line T adjacent to the previously used servo trace line T is used. In this case, the head unit30is caused to move in the width direction (Y′-axis direction) by an amount corresponding to the interval Ps of the servo trace line T (=an amount corresponding to the recording track width Wd). Further, in this case, the data signal is recorded on the recording track5adjacent to the recording track5on which the data signal has been previously recorded. In this way, data signals are recorded on the recording track5while the magnetic recording medium1is reciprocated many times with the traveling direction thereof being changed between the forward direction and the reverse direction. Here, for example, assumption is made that the number of servo trace lines T is 50 and the number of data write/read heads33included in the first head unit30a(or the second head unit30b) is 32. In this case, the number of recording tracks5included in one data band d is 50×32, i.e., 1,600. Thus, in order to record data signals in all of the recording tracks5, the magnetic recording medium1needs to be reciprocated 25 times. FIG.8is a diagram showing two stripes7in the servo signal recording pattern6. Referring toFIG.8, an arbitrary stripe7of the plurality of stripes7included in the first stripe group8of the servo signal recording pattern6is defined as the first stripe7a. Further, an arbitrary stripe7of the plurality of stripes7included in the second stripe group9of the servo signal recording pattern6is defined as the second stripe7b. An arbitrary servo trace line T of the plurality of servo trace lines T is defined as the first servo trace line T1. Further, the servo trace line T adjacent to the first servo trace line T1is defined as a second servo trace line T2. The intersection of the first stripe7aand the first servo trace line T1is defined as P1. Note that regarding this point P1, an arbitrary point on the first stripe7amay be used as the point P1. The intersection of the first stripe7aand the second servo trace line T2is defined as P2. Note that regarding this point P2, a point on the first stripe7alocated at a position apart from the P1by the interval Ps (i.e., by the amount corresponding to the recording track width Wd) in the width direction (Y-axis direction) may be used as the point P2. The distance between the points P1and P2in the longitudinal direction (X-axis) is defined as a distance D. The distance D corresponds to the deviation in the longitudinal direction from the adjacent track. The intersection between the second stripe7band the first servo trace line T1is defined as P3, and the intersection between the second stripe7band the second servo trace line T2is defined as P4. When the first servo trace line T1is traced, the difference between the time at which the reproduced waveform has been detected at the point P1and the time at which the reproduced waveform has been detected at the point P3needs to be determined. This difference is defined as a first period. 
Similarly, when the second servo trace line T2is traced, the difference between the time at which the reproduced waveform has been detected at the point P2and the time at which the reproduced waveform has been detected at the point P4needs to be determined. This difference is defined as the second period. Next, a difference between the first period and the second period will be considered. Here, assumption is made that the interval Ps between the servo trace lines T and the recording track width Wd are 1.56 μm and the azimuth angle α is 12 degrees. In this case, the distance D is 1.56×tan 12°, i.e., 0.33 μm. The difference between the distance between the points P1and P3and the distance between the points P2and P4is 0.66 μm, because the difference is twice the distance D. At this time, assuming that the traveling velocity of the magnetic recording medium1is 5 m/s (5,000,000 μm/s), 0.66 μm/5,000,000 μm/s≈0.13 μs is obtained. This is the difference between the first period and the second period. However, in the case where the reproduction output of the servo signal is insufficient, such a minute difference cannot be accurately determined. In particular, in the case where the recording track width Wd is reduced and the interval Ps between the servo trace lines T is reduced in order to increase the number of recording tracks5, the distance D is further narrowed and the difference between the first period and the second period is further reduced. Further, it is expected that the servo band width will become narrower as the capacity of magnetic tapes continues to increase. In this case, in order to cope with the increase in the capacity of the magnetic tape, it is necessary to increase the inclination angle of the azimuthal slope with respect to the tape width direction. As a result, since the azimuth loss with the servo read head increases, the SNR (signal-to-noise ratio) of a servo reproduction signal, which is the reproduction output of a servo signal, is inevitably lowered. Moreover, in the perpendicular magnetic recording method, there is a problem that the SNR of the servo reproduction signal is likely to decrease under the influence of the demagnetizing field in the perpendicular direction of the magnetic layer. [Servo Signal Recording Device] Next, a servo signal recording device will be described.FIG.9is a front view showing a typical servo signal recording device100, andFIG.10is a partial enlarged view showing a part thereof. Referring toFIG.9andFIG.10, the servo signal recording device100includes a feeding roller111, a pre-processing unit112, a servo write head113, a reproduction head unit114, and a take-up roller115in the order from the upstream side in the conveying direction of the magnetic recording medium1. Further, the servo signal recording device100includes a drive unit120that drives the servo write head113and a controller130that integrally controls the servo signal recording device100. The controller130includes a control unit that integrally controls the respective units of the servo signal recording device100, a recording unit that stores various programs/data needed for processing of the control unit, a display unit that displays data, an input unit for inputting data, and the like. The feeding roller111is capable of rotatably supporting a roll-shaped magnetic recording medium1prior to recording of a servo signal.
The feeding roller111is caused to rotate in accordance with the driving of a drive source such as a motor, and feeds the magnetic recording medium1toward the downstream side in accordance with the rotation. The take-up roller115is caused to rotate in synchronism with the feeding roller111in accordance with the driving of a drive source such as a motor, and winds up the magnetic recording medium1on which a servo signal has been recorded in accordance with the rotation. The feeding roller111and the take-up roller115are capable of causing the magnetic recording medium1to move in the conveying path at a constant velocity. The servo write head113is disposed on, for example, the upper side (the side of the magnetic layer13) of the magnetic recording medium1. Note that the servo write head113may be disposed on the lower side (the side of the base material11) of the magnetic recording medium1. The servo write head113generates a magnetic field at a predetermined timing in response to a pulse signal of a rectangular wave, and applies a magnetic field to a part of the magnetic layer13(after pre-processing) of the magnetic recording medium1. In this way, the servo write head113magnetizes a part of the magnetic layer13in a first direction to record a servo signal (hereinafter, referred to also as the servo signal recording pattern6) on the magnetic layer13(see black arrows inFIG.10for the magnetization direction). The servo write head113is capable of recording the servo signal recording pattern6for each of the five servo bands s0to s4when the magnetic layer13passes underneath the servo write head113. The first direction, which is the magnetization direction of the servo signal recording pattern6, contains components in the direction perpendicular to the upper surface of the magnetic layer13. That is, in this embodiment, since a perpendicularly oriented magnetic powder is contained in the magnetic layer13, the servo signal recording pattern6recorded on the magnetic layer13contains magnetization components in the perpendicular direction. The pre-processing unit112is disposed on, for example, the lower side (the side of the base material11) of the magnetic recording medium1, on the upstream side of the servo write head113. The pre-processing unit112may be disposed on the upper side (the side of the magnetic layer13) of the magnetic recording medium1. The pre-processing unit112includes a permanent magnet112athat is rotatable about the Y-axis direction (the width direction of the magnetic recording medium1) as a central axis of rotation. The shape of the permanent magnet112ais, for example, a cylindrical shape or a polygonal prism shape, but is not limited thereto. The permanent magnet112ademagnetizes the entire magnetic layer13by applying a DC magnetic field to the entire magnetic layer13prior to the servo signal recording pattern6being recorded by the servo write head113. Thus, the permanent magnet112ais capable of magnetizing the magnetic layer13in a second direction opposite to the magnetization direction of the servo signal recording pattern6in advance (see white arrows inFIG.10). By making the two magnetization directions in opposite directions in this way, the reproduced waveform of the servo signal obtained by reading the servo signal recording pattern6can be made symmetrical in the up-and-down direction (±). The reproduction head unit114is disposed on the upper side (the side of the magnetic layer13) of the magnetic recording medium1, on the downstream side of the servo write head113.
The reproduction head unit114reads the servo signal recording pattern6from the magnetic layer13of the magnetic recording medium1, which has been pre-processed by the pre-processing unit112, the servo signal recording pattern6having been recorded on the magnetic layer13by the servo write head113. Typically, the reproduction head unit114detects the magnetic flux generated from the surface of the servo band s when the magnetic layer13passes underneath the reproduction head unit114. The magnetic flux detected at this time becomes a reproduced waveform of the servo signal. Note that also in the servo read head32in the head unit30of the above-mentioned data recording device20, the servo signal recorded on the magnetic recording medium1is reproduced on the same principle. [Reproduction Output of Servo Signal] FIG.11is a diagram describing the reproduced waveform of the servo signal and the magnitude of the output thereof. Part (a) shows the first stripe group8in the servo signal recording pattern6recorded on the magnetic layer13, and Part (b) shows the magnitude of the magnetization of the individual stripe7constituting the first stripe group8. As shown in Part (b) ofFIG.11, the magnitude of the DC level of the residual magnetization of the magnetic layer13after the demagnetization processing by the permanent magnet112ais defined as +Mr. When the servo signal recording pattern6is recorded on the magnetic layer13by the servo write head113, the residual magnetization M of the region corresponding to the individual stripe7changes from a +Mr level to a −Mr level. As shown in Part (c) ofFIG.11, +Mr and −Mr respectively correspond to the magnetization levels in the positive and negative directions of a residual magnetization M when an external magnetic field H is zero in the M-H curve of the magnetic layer13(hysteresis). In the figure, Hc indicates the coercive force. The reproduction output of the servo signal is proportional to the absolute value of ΔMr corresponding to the difference between the levels of the residual magnetization of the magnetic layer13before and after the recording of the servo signal recording pattern6(difference between +Mr and −Mr). That is, as ΔMr increases, the reproduction output of a servo reproduction signal increases, resulting in a large SNR. The maximization of ΔMr is achieved by saturation-recording the servo signal recording pattern6. Further, in order to generate the head magnetic field necessary for saturation-recording the servo signal recording pattern6, a recording current capable of generating an external magnetic field (−Hs) by which the magnetization of the magnetic layer13reaches saturation is supplied from the drive unit120to the servo write head113as shown inFIG.12. The magnitude of the recording current is determined by the magnetic properties (residual magnetization, a squareness ratio, the degree of perpendicular orientation, and the like) of the magnetic layer13. Meanwhile, since the perpendicular magnetic recording method is affected by the demagnetizing field in the perpendicular direction of the magnetic layer13, the servo signal recording pattern6is not saturation-recorded in some cases even in the case where the head magnetic field for reaching the saturation recording is generated. For this reason, a method of checking whether or not the servo signal recording pattern6has been properly written is necessary. 
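Because the reproduction output is proportional to |ΔMr| as noted above, the relationship can be made explicit with a minimal sketch; the residual-magnetization values below are illustrative, not measured data.

```python
def delta_mr(mr_before: float, mr_after: float) -> float:
    """Magnitude of the change in residual magnetization caused by writing a stripe."""
    return abs(mr_after - mr_before)

# Saturation case: the DC-demagnetized level +Mr is fully reversed to -Mr.
mr = 1.0                                  # normalized residual magnetization level
full_swing = delta_mr(+mr, -mr)           # 2*Mr -> maximum reproduction output

# Non-saturated case (illustrative): the writing only reaches -0.6*Mr.
partial_swing = delta_mr(+mr, -0.6 * mr)  # 1.6*Mr -> reduced reproduction output

print(full_swing, partial_swing)          # 2.0 1.6 (the output scales with these values)
```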
Further, in order to cope with the increase in magnetic tape capacity, the azimuth angle α (see Part (a) ofFIG.11) of each stripe7recorded as a servo signal needs to be increased. As a result, since the azimuth loss with the reproduction head unit114becomes large, a distance Δx (see Part (b) ofFIG.11) of the reproduction head unit114passing through the individual stripe7become long. As a result, the output waveform of the reproduction signal is prolonged, and the output level tends to decrease in the course of the signal processing including averaging processing. From also such a viewpoint, it is increasingly necessary to cause the signal recording pattern6to be saturation-recorded. In this regard, in this embodiment, a technology capable of stably providing a magnetic recording medium that is capable of suppressing the degradation of a servo reproduction signal due to the increase in the capacity by defining a unique index in order to properly control the magnetization level of the servo signal recorded on the magnetic layer13has been established. That is, in the present technology, a servo signal is recorded so that an index (Q) represented by Sq×Fact.(p−p)/F0(p−p) is equal to or greater than a predetermined value, Sq being a squareness ratio of a magnetic layer in the perpendicular direction, F0(p−p) being a peak-to-peak value of the first magnetic force gradient strength observed by a magnetic force microscope when a servo signal is saturation-recorded on the magnetic layer, Fact.(p−p) being a peak-to-peak value of the second magnetic force gradient strength observed by a magnetic force microscope for a servo signal recorded on a servo band. The value of the index (Q) is 0.42 or more, favorably 0.45 or more, more favorably 0.5 or more, and still more favorably 0.6 or more. By setting the index (Q) to 0.42 or more, it is possible to increase the SNR of a servo reproduction signal, as will be described below. A squareness ratio (Sq) of the magnetic layer in the perpendicular direction represents the ratio of the residual magnetization to the saturation magnetization of the magnetic layer in the perpendicular direction. The squareness ratio S typically depends on the residual magnetization (Mrt) of the magnetic particles constituting the magnetic layer, the degree of perpendicular orientation, and the like. The squareness ratio (Sq) is favorably 0.5 or more, more favorably 0.6 or more, more favorably 0.65 or more, and still more favorably 0.7 or more. As a result, the value of the index (Q) can be improved. The peak-to-peak value (F0(p−p)) of the first magnetic force gradient strength is a peak-to-peak value of a magnetic force gradient strength observed by a magnetic force microscope when the servo signal is saturation-recorded on the servo band of the magnetic layer. The first magnetic force gradient strength corresponds to the ideal value at which the servo signal is recorded without being affected by demagnetization by the demagnetizing field during recording. Hereinafter, referring toFIG.13, the first magnetic force gradient strength will be described. FIG.13is an explanatory diagram of the first magnetic force gradient strength. Part (a) shows the recording magnetization level of one stripe7(see Part (a) ofFIG.11) constituting a part of the servo signal, Part (b) shows an MFM (magnetic force microscope) image thereof, and Part (c) shows the peak value of the magnetic force gradient observed in the MFM image. 
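Before turning to the measurement of the two peak-to-peak values, note that the index (Q) defined above reduces to a one-line computation and threshold test. The sketch below uses function names of our own choosing and, as an example, a squareness ratio of 0.7 with a peak-to-peak ratio of 0.7.

```python
def servo_index_q(sq: float, f_act_pp: float, f0_pp: float) -> float:
    """Index Q = Sq x Fact.(p-p) / F0(p-p)."""
    return sq * (f_act_pp / f0_pp)

def meets_requirement(q: float, threshold: float = 0.42) -> bool:
    """The description calls for Q of 0.42 or more (favorably 0.45, 0.5, or 0.6 or more)."""
    return q >= threshold

q = servo_index_q(sq=0.7, f_act_pp=0.7, f0_pp=1.0)   # peak-to-peak ratio of 0.7
print(round(q, 2), meets_requirement(q))             # 0.49 True
```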
The magnetic force microscope is a device for visualizing a magnetic domain structure by utilizing a magnetic interaction between a magnetic sample and a magnetic probe, and is used for analyzing a magnetization state of the magnetic sample. The magnetic probe is scanned in the direction perpendicular to the stripe7. In an ideal condition in which the servo signal is saturation-recorded, two boundary images of the magnetized region and the non-magnetized region corresponding to the inverting portion of the magnetization appears clearly in the MFM image as shown in Part (b) ofFIG.13. As a result, as shown in Part (c) ofFIG.13, the two peak values of the magnetic force gradient at each of the boundary portions are maximized. The magnitude of the magnetic force gradient between these two peaks is defined as a peak-to-peak value (F0(p−p)) of the first magnetic force gradient strength. Meanwhile, the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength is a peak-to-peak value of the magnetic force gradient strength observed by a magnetic force microscope for the servo signal actually recorded on the servo band by using the servo signal recording device100or the like. The second magnetic force gradient strength is typically a magnetic force gradient strength observed by a magnetic force microscope for a servo signal that is not saturation-recorded, and often becomes lower than the first magnetic force gradient strength by being affected by demagnetization by demagnetizing field during recording of the servo signal. FIG.14andFIG.15are each an explanatory diagram of the second magnetic force gradient strength (Fact.(p−p). Part (a) shows the recording magnetization level of one stripe7(see Part (a) ofFIG.11) constituting a part of the servo signal, Part (b) shows the MFM (magnetic force microscope) image thereof, and Part (c) shows the peak value of the magnetic force gradient observed in the MFM image. The servo signal written to the magnetic layer is usually affected by demagnetization due to the demagnetizing field of the magnetic layer during recording, as indicated by a reference symbol D1in Part (a) ofFIG.14, and the residual magnetization (−Mr) of the servo signal does not reach the level of the residual magnetization at the time of saturation-recording. Further, in the MFM image, two boundary images corresponding to the inverting portion of the magnetization are blurred as shown in Part (b) ofFIG.14. As a result, as shown in Part (c) ofFIG.14, the peak value of the magnetic force gradient at each boundary portion is reduced. The magnitude (peak-to-peak value) of the magnetic force gradient between these two peaks is defined as a peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength. Note that Parts (a) to (c) ofFIG.14each represent a state where the residual magnetization level (demagnetization level) of the magnetic layer prior to the servo signal recording has reached the saturation level. In contrast, Parts (a) to (c) ofFIG.15each represent a state where the residual magnetization level (demagnetization level) of the magnetic layer prior to the servo signal recording has not reached the saturation level. 
In this case, as shown in Part (a) ofFIG.15, the recording magnetization of the servo signal is affected not only by demagnetization due to the demagnetization field during servo-recording as indicated by the reference symbol D1, but also by demagnetization due to the demagnetization field at the time of demagnetization as indicated by a reference symbol D2. In this case, as shown in Parts (b) and (c) ofFIG.15, when another image appears around the two boundary images corresponding to the inverting portion of the magnetization, the waveform of the magnetic force gradient collapses in some cases in the MFM image. In such a case, the magnitude (peak-to-peak value) of the magnetic force gradient between the highest peaks in the waveform of the magnetic force gradient is defined as a peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength. The ratio (Fact.(p−p)/F0(p−p)) of Fact.(p−p) to F0(p−p) is favorably 0.6 or more, more favorably 0.7 or more, and still more favorably 0.8 or more. [Servo Signal Recording Device for Saturation-Recording] FIG.16is a partial schematic diagram showing a configuration of a servo signal recording device200according to an embodiment of the present technology. This servo signal recording device200is a novel device suitable for saturation-recording a servo signal. As shown inFIG.16, the servo signal recording device200includes a servo write head210and an auxiliary magnetic pole220. Since other configurations are similar to those of the servo signal recording device100described with reference toFIG.9, description thereof will be omitted. The servo write head210includes a magnetic core213and a coil214wound on the magnetic core213. The magnetic core213includes a gap portion213G for servo signal recording. The magnetic core213is formed of a magnetic material having soft magnetic properties. The coil214magnetizes the magnetic core213by being applied with a recording current supplied from the drive unit120(seeFIG.9andFIG.10). FIG.17is an enlarged view of a main part of an A portion inFIG.16.FIG.18is a schematic plan view showing the region of a part of the servo band s in the magnetic layer13of the magnetic recording medium1. The soft magnetic material forming the magnetic core213is not particularly limited, and an Fe (iron)-Ni (nickel)-based metal magnetic material such as permalloy or a Co (cobalt)-based metal magnetic material is typically used. Alternatively, as shown inFIG.17, the body of the magnetic core213may be formed of permalloy, and the vicinity of a gap portion213F may be formed of a CoFe-based high permeability material. Examples of the CoFe-based material include a Co1-xFex(0.6≤x≤0.8) based material. As shown inFIG.18, the gap portion213G is formed by forming a groove of a “/” shape and a groove of a “Y” shape in the magnetic core213at predetermined intervals in the traveling direction of the magnetic recording medium1, and magnetizes the magnetic layer13of the magnetic recording medium1traveling directly below the servo write head210into the respective shapes by the leakage magnetic field (head magnetic field) from the gap portion213G. The current applied to the coil214is typically a pulse current. By controlling the supplying timing thereof, the servo signal recording pattern6including a series of stripe groups8and9shown inFIG.2orFIG.4is formed. Note that the azimuth angle α is adjusted by the inclination of each of the grooves constituting the gap portion213G. 
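Both peak-to-peak values entering the index (Q), F0(p−p) and Fact.(p−p), are read from an MFM line profile scanned perpendicular to a stripe, as the difference between the two extreme peaks at the magnetization-reversal boundaries. The following sketch assumes the profile has already been exported from the microscope as a one-dimensional array and uses a synthetic profile for illustration; it does not depend on any particular instrument software.

```python
import numpy as np

def peak_to_peak(profile: np.ndarray) -> float:
    """Peak-to-peak magnetic force gradient of a line profile scanned
    perpendicular to a servo stripe (difference of the two extreme peaks)."""
    return float(np.max(profile) - np.min(profile))

# Synthetic profile: two opposite-sign peaks at the magnetization-reversal boundaries.
x = np.linspace(-1.0, 1.0, 501)
profile = np.exp(-((x + 0.3) / 0.05) ** 2) - np.exp(-((x - 0.3) / 0.05) ** 2)

f_pp = peak_to_peak(profile)   # ~2.0 for this synthetic example
print(round(f_pp, 2))
```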
The auxiliary magnetic pole220includes a pair of metal pieces220adisposed to face the servo write head210with the magnetic layer13of the magnetic recording medium1interposed therebetween. As shown inFIG.18, each of the metal pieces220ais disposed to be inclined around the Z-axis so as to face the gap portion213G of the magnetic core213in the Z-axis direction. The auxiliary magnetic pole220is typically disposed on the back surface (the support11or the back layer14) of the magnetic recording medium1in a non-contact manner, but the shorter the facing distance to the gap portion213G, the more favorable. Each of the metal pieces220aconstituting the auxiliary magnetic pole220is formed of a high permeability material, and for example, the above-mentioned CoFe-based material is used. The auxiliary magnetic pole220may be provided with a base portion (illustration omitted) for commonly supporting the respective metal pieces220ain order to improve the handling property. In the servo signal recording device200configured as described above, the servo signal is recorded on the magnetic layer13while causing the magnetic recording medium1to travel between the servo write head210and the auxiliary magnetic pole220. At this time, the auxiliary magnetic pole220forms a magnetic path through which the leakage magnetic field (magnetic flux) from the gap portion213G passes. As a result, the leakage magnetic field from the gap portion213G is induced to penetrate through the magnetic recording medium1in the thickness direction, so that the magnetic layer13can be easily magnetized in the perpendicular direction. Therefore, in accordance with the servo signal recording device200, the servo signal can be recorded in a saturation-recorded state or a state close thereto for the following reason. In the case of performing saturation-recording on the perpendicularly oriented film, it is necessary to apply a recording magnetic field exceeding Hs=Hc+4πMs due to the influence of the demagnetizing field (4πMs). For example, in the case where a coercive force Hc is 3,000 Oe and a saturation magnetization Ms is 300 emu/cm3(value of general perpendicularly oriented barium ferrite), Hs=Hc+4πMs=6,768 Oe, and a recording magnetic field twice or more of Hc is necessary. Further, in order to perform saturation-recording, it is generally said that a magnetic field in the gap of the recording head three times or more of Hs is necessary. Therefore, in the case where the material of the recording head is Ni45Fe55 commonly used in the current magnetic tape drive, the magnetic field in the gap is approximately 16,000 Oe, and saturation-recording of the medium having Hs=6,768 Oe is difficult. However, in the case of providing the auxiliary magnetic pole220, it is considered that since the surface magnetization induced in the magnetic film surface is suppressed and the effect of cancelling the demagnetizing field of 4πMs is obtained, Hs=Hc holds and saturation-recording becomes possible. EXAMPLE Next, various Examples and various Comparative Examples in the present technology will be described. Example 1 A magnetic recording medium including a magnetic layer that contains barium ferrite as a magnetic powder and has a thickness of 80 nm with the residual magnetization (Mrt) of 0.55 memu/cm2and the squareness ratio (Sq) of 0.7 (70%) in the perpendicular direction was prepared. 
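The field budget quoted just before the Examples can be checked with a few lines of arithmetic. The sketch below uses the values given for general perpendicularly oriented barium ferrite (Hc=3,000 Oe, Ms=300 emu/cm3) together with the stated rule of thumb that the gap field should be roughly three times Hs; it is a back-of-the-envelope aid rather than a head-design tool.

```python
import math

def saturation_field_oe(hc_oe: float, ms_emu_per_cm3: float) -> float:
    """Hs = Hc + 4*pi*Ms (CGS units), the field needed to overcome the demagnetizing field."""
    return hc_oe + 4.0 * math.pi * ms_emu_per_cm3

hc, ms = 3000.0, 300.0
hs = saturation_field_oe(hc, ms)      # ~6,770 Oe (the text quotes 6,768 Oe)

required_gap_field = 3.0 * hs         # rule of thumb cited above
ni45fe55_gap_field = 16000.0          # approximate gap field of a Ni45Fe55 head, from the text

print(round(hs), round(required_gap_field), ni45fe55_gap_field >= required_gap_field)
# ~6770  ~20310  False -> saturation-recording is difficult without the auxiliary magnetic pole
```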
A servo signal including a servo signal recording pattern with an azimuth angle of 12° was recorded on the magnetic layer by using a first servo signal recording device (seeFIG.16) including an auxiliary magnetic pole (CoFe-based one, the same applies hereinafter) while causing the prepared magnetic recording medium to travel at 5 m/s. The servo write head was formed of Permalloy (Ni45Fe55), and a step signal with a recording current of 100% was used as the recording signal. The recording current of 100% refers to the recording current value at which the reproduction signal voltage becomes the maximum when the reproduction signal voltage is monitored while changing the recording current. The servo signal recorded in the above-mentioned manner can be regarded as being saturation-recorded on the magnetic layer due to the action of the above-mentioned auxiliary magnetic pole. In this regard, in this Example, the peak-to-peak value of the magnetic force gradient strength obtained from the MFM image of the magnetic layer on which the servo signal was recorded was used as the peak-to-peak value (F0(p−p)) of the first magnetic force gradient strength obtained when the servo signal was saturation-recorded. Next, using a second servo signal recording device that does not include the auxiliary magnetic pole, a servo signal with an azimuth angle of 12° was recorded by applying a step signal with a recording current of 100% to the servo write head while causing the above-mentioned magnetic recording medium to travel in the tape longitudinal direction at 5 m/s. The second servo signal recording device has the same configuration as that of the above-mentioned first servo signal recording device except that it does not include the auxiliary magnetic pole. Then, the MFM image of the magnetic layer on which a servo signal had been recorded under the above-mentioned condition was acquired, and the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength, which was a magnetic force gradient strength of the servo signal, was measured from the obtained MFM-image. Note that F0(p−p) and Fact.(p−p) were measured using the magnetic force microscope “NanoScope III A D3100” manufactured by Bruker. The measurement conditions are shown below.Measuring mode: Phase ModeScan speed: 1.0 HzNumber of data points: 512×512 Further, the probe MFMR manufactured by NanoWorld was used. Subsequently, when Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) to F0(p−p), was calculated from the measured value of Fact.(p−p), the value was 0.7 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.49. Next, the servo signal of the magnetic recording medium used for the measurement of Fact.(p−p) was reproduced and the SNR was measured. For the measurement, the signal of the reproduction head unit provided in the servo signal recording device was used. The measured value was a relative value when the SNR of a servo reproduction signal of the magnetic tape in the commercially available LTO7 format was 0 dB. As a result of the measurement, the SNR was 2.0 dB. Example 2 A servo signal was recorded under the same condition as that in Example 1 except that the second servo signal recording device that does not include the auxiliary magnetic pole was used for the magnetic layer of the magnetic recording medium formed of the same material as that in Example 1 and the recording current was set to 90%. 
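The recording current of 100% is defined operationally in the passage above: it is the current at which the monitored reproduction signal voltage becomes the maximum while the recording current is changed. A minimal sketch of such a calibration sweep is given below; write_and_read_back is a hypothetical measurement callback, not an interface of the servo signal recording device.

```python
def find_100_percent_current(candidate_currents_ma, write_and_read_back):
    """Return the recording current (mA) giving the maximum reproduction signal voltage.

    write_and_read_back(current_ma) is assumed to record a test servo pattern at the
    given current and return the measured reproduction signal voltage.
    """
    best_current, best_voltage = None, float("-inf")
    for current in candidate_currents_ma:
        voltage = write_and_read_back(current)
        if voltage > best_voltage:
            best_current, best_voltage = current, voltage
    return best_current

def simulated(current_ma):
    # Illustrative stand-in: the reproduction voltage saturates above ~40 mA.
    return min(current_ma, 40.0)

print(find_100_percent_current(range(10, 61, 5), simulated))   # 40 (first current reaching the maximum)
```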
An MFM image of the recorded servo signal was acquired, and the peak-to-peak value of the magnetic force gradient strength of the servo signal was measured from the MFM image and used as the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength of the servo signal in the magnetic recording medium. When Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) of the servo signal to F0(p−p) measured in Example 1, was calculated, the value was 0.65, and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.455. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was 1.0 dB. Example 3 A servo signal was recorded under the same condition as that in Example 1 except that the second servo signal recording device that does not include the auxiliary magnetic pole was used for the magnetic layer of the magnetic recording medium formed of the same material as that in Example 1 and the recording current was set to 80%. An MFM image of the recorded servo signal was acquired, and the peak-to-peak value of the magnetic force gradient strength of the servo signal was measured from the MFM image and used as the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength of the servo signal. When Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) of the servo signal to F0(p−p) measured in Example 1, was calculated, the value was 0.6, and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.42. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was 0.0 dB. Example 4 A magnetic recording medium including a magnetic layer that contains barium ferrite as a magnetic powder and has a thickness of 80 nm with the residual magnetization (Mrt) of 0.45 memu/cm2and the squareness ratio (Sq) of 0.6 (60%) in the perpendicular direction was prepared. A servo signal including a servo signal recording pattern with an azimuth angle of 12° was recorded on the magnetic layer by using the first servo signal recording device including the auxiliary magnetic pole while causing the prepared magnetic recording medium at 5 m/s. The servo write head was formed of permalloy, and a step signal with a recording current of 100% was used as the recording signal. The servo signal recorded in the above-mentioned manner can be regarded as being saturation-recorded on the magnetic layer due to the action of the above-mentioned auxiliary magnetic pole. In this regard, in this Example, the peak-to-peak value of the magnetic force gradient strength obtained from the MFM image of the magnetic layer on which the servo signal was recorded was used as the peak-to-peak value (F0(p−p)) of the first magnetic force gradient strength obtained when the servo signal was saturation-recorded. Next, using the second servo signal recording device that does not include the auxiliary magnetic pole, a servo signal with an azimuth angle of 12° was recorded by applying a step signal with a recording current of 100% to the servo write head while causing the above-mentioned magnetic recording medium to travel in the tape longitudinal direction at 5 m/s. 
Then, the MFM image of the magnetic layer on which the servo signal was recorded under the above-mentioned condition was acquired, and the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength, which was a magnetic force gradient strength of the servo signal, was measured from the obtained MFM image. Subsequently, when Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) to F0(p−p), was calculated from the measured value of Fact.(p−p), the value was 0.7 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.42. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was 0.0 dB. Example 5 A magnetic recording medium including a magnetic layer that contains barium ferrite as a magnetic powder and has a thickness of 80 nm with the residual magnetization (Mrt) of 0.39 memu/cm2and the squareness ratio (Sq) of 0.5 (50%) in the perpendicular direction was prepared. A servo signal including a servo signal recording pattern with an azimuth angle of 12° was recorded on the magnetic layer by using the first servo signal recording device including the auxiliary magnetic pole while causing the prepared magnetic recording medium at 5 m/s. The servo write head was formed of permalloy, and a step signal with a recording current of 100% was used as the recording signal. The servo signal recorded in the above-mentioned manner can be regarded as being saturation-recorded on the magnetic layer due to the action of the above-mentioned auxiliary magnetic pole. In this regard, in this Example, the peak-to-peak value of the magnetic force gradient strength obtained from the MFM image of the magnetic layer on which the servo signal was recorded was used as the peak-to-peak value (F0(p−p)) of the first magnetic force gradient strength obtained when the servo signal was saturation-recorded. Next, using the first servo signal recording device including the auxiliary magnetic pole, a servo signal with an azimuth angle of 12° was recorded by applying a step signal with a recording current of 90% to the servo write head while causing the above-mentioned magnetic recording medium to travel in the tape longitudinal direction at 5 m/s. Then, the MFM image of the magnetic layer on which the servo signal was recorded under the above-mentioned condition was acquired, and the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength, which was a magnetic force gradient strength of the servo signal, was measured from the obtained MFM image. Subsequently, when Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) to F0(p−p), was calculated from the measured value of Fact.(p−p), the value was 0.9 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.45. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was 0.8 dB. Comparative Example 1 A servo signal was recorded under the same condition as that in Example 1 except that the second servo signal recording device that does not include the auxiliary magnetic pole was used for the magnetic layer of the magnetic recording medium formed of the same material as that in Example 1 and the recording current was set to 70%. 
An MFM image of the recorded servo signal was acquired, and the peak-to-peak value of the magnetic force gradient strength of the servo signal was measured from the MFM image and used as the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength of the servo signal in the magnetic recording medium. When Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) of the servo signal to F0(p−p) measured in Example 1, was calculated, the value was 0.5 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.35. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was −2.0 dB. Comparative Example 2 A servo signal was recorded under the same condition as that in Example 5 except that the first servo signal recording device including the auxiliary magnetic pole was used for the magnetic layer of the magnetic recording medium formed of the same material as that in Example 5 and the recording current was set to 80%. An MFM image of the recorded servo signal was acquired, and the peak-to-peak value of the magnetic force gradient strength of the servo signal was measured from the MFM image and used as the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength of the servo signal in the magnetic recording medium. When Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) of the servo signal to F0(p−p) measured in Example 5, was calculated, the value was 0.8 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.4. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was −0.5 dB. Comparative Example 3 A servo signal was recorded under the same condition as that in Example 5 except that the second servo signal recording device that does not include the auxiliary magnetic pole was used for the magnetic layer of the magnetic recording medium formed of the same material as that in Example 5 and the recording current was set to 100%. An MFM image of the recorded servo signal was acquired, and the peak-to-peak value of the magnetic force gradient strength of the servo signal was measured from the MFM image and used as the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength of the servo signal in the magnetic recording medium. When Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) of the servo signal to F0(p−p) measured in Example 5, was calculated, the value was 0.7 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.35. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was −2.0 dB. Comparative Example 4 A magnetic recording medium including a magnetic layer that contains barium ferrite as a magnetic powder and has a thickness of 80 nm with the residual magnetization (Mrt) of 0.35 memu/cm2and the squareness ratio (Sq) of 0.45 (45%) in the perpendicular direction was prepared. A servo signal including a servo signal recording pattern with an azimuth angle of 12° was recorded on the magnetic layer by using the first servo signal recording device including the auxiliary magnetic pole while causing the prepared magnetic recording medium at 5 m/s. 
The servo write head was formed of permalloy, and a step signal with a recording current of 100% was used as the recording signal. The servo signal recorded in the above-mentioned manner can be regarded as being saturation-recorded on the magnetic layer due to the action of the above-mentioned auxiliary magnetic pole. In this regard, in this Comparative Example, the peak-to-peak value of the magnetic force gradient strength obtained from the MFM image of the magnetic layer on which the servo signal was recorded was used as the peak-to-peak value (F0(p−p)) of the first magnetic force gradient strength obtained when the servo signal was saturation-recorded. Next, using the second servo signal recording device that does not include the auxiliary magnetic pole, a servo signal with an azimuth angle of 12° was recorded by applying a step signal with a recording current of 100% to the servo write head while causing the above-mentioned magnetic recording medium to travel in the tape longitudinal direction at 5 m/s. Then, the MFM image of the magnetic layer on which the servo signal was recorded under the above-mentioned condition was acquired, and the peak-to-peak value (Fact.(p−p)) of the second magnetic force gradient strength, which was a magnetic force gradient strength of the servo signal, was measured from the obtained MFM image. Subsequently, when Fact.(p−p)/F0(p−p), which was the ratio of Fact.(p−p) to F0(p−p), was calculated from the measured value of Fact.(p−p), the value was 0.7 and the value of the index (Q), which was the product of the ratio and the squareness ratio (Sq) of the magnetic layer, was 0.315. Further, when the servo signal was reproduced under the same condition as that in Example 1 to measure the SNR, the measured value was −2.5 dB.

Conditions and results of Examples 1 to 5 and Comparative Examples 1 to 4 are summarized in Table 1.

TABLE 1
                       Residual        Squareness   Recording   Soft magnetic
                       magnetization   ratio        current     auxiliary       Fact.(p-p)/   Sq ×
                       Mrt (memu/cm2)  Sq ⊥         (%)         magnetic pole   F0(p-p)       Fact.(p-p)/F0(p-p)   SNR (dB)
Example 1              0.55            0.7          100         Not included    0.7           0.49                  2.0
Example 2              0.55            0.7          90          Not included    0.65          0.455                 1.0
Example 3              0.55            0.7          80          Not included    0.6           0.42                  0.0
Example 4              0.45            0.6          100         Not included    0.7           0.42                  0.0
Example 5              0.39            0.5          90          Included        0.9           0.45                  0.8
Comparative Example 1  0.55            0.7          70          Not included    0.5           0.35                 −2.0
Comparative Example 2  0.39            0.5          80          Included        0.8           0.4                  −0.5
Comparative Example 3  0.39            0.5          100         Not included    0.7           0.35                 −2.0
Comparative Example 4  0.35            0.45         100         Not included    0.7           0.315                −2.5

As shown in Table 1, the SNRs for the servo reproduction signals in Examples 1 to 5 in which the value of the index Q, which was the product of the squareness ratio (Sq) in the perpendicular direction of the magnetic layer and the ratio of the magnetic force gradient strength of the servo signal (Fact.(p−p)/F0(p−p)), was 0.42 or more, were all 0 dB or higher, and comparable or better results were obtained as compared with the SNR of the servo reproduction signal of the magnetic recording medium employed in LTO7.FIG.19shows the relationship between the SNR and the index Q. In particular, in Examples 1, 2, and 5 in which the value of the index Q is 0.45 or more (rounded to the first decimal place), since SNRs of 0.8 dB or more are obtained, it is expected that favorable SNRs can be ensured even when the azimuth angle of the servo signal increases as the capacity of the magnetic recording medium increases. 
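Table 1 can be re-derived programmatically: the index Q column is simply the product of the squareness ratio and the measured ratio, and in this data set the Q≥0.42 criterion coincides with an SNR of 0 dB or higher. The sketch below uses only the tabulated values.

```python
# (name, Sq, Fact(p-p)/F0(p-p), SNR in dB) taken from Table 1
rows = [
    ("Example 1",             0.70, 0.70,  2.0),
    ("Example 2",             0.70, 0.65,  1.0),
    ("Example 3",             0.70, 0.60,  0.0),
    ("Example 4",             0.60, 0.70,  0.0),
    ("Example 5",             0.50, 0.90,  0.8),
    ("Comparative Example 1", 0.70, 0.50, -2.0),
    ("Comparative Example 2", 0.50, 0.80, -0.5),
    ("Comparative Example 3", 0.50, 0.70, -2.0),
    ("Comparative Example 4", 0.45, 0.70, -2.5),
]

for name, sq, ratio, snr_db in rows:
    q = round(sq * ratio, 3)   # index Q = Sq * Fact(p-p)/F0(p-p)
    # In this data set, Q >= 0.42 tracks SNR >= 0 dB (tiny tolerance guards float rounding).
    assert (q >= 0.42 - 1e-9) == (snr_db >= 0.0), name
    print(f"{name}: Q = {q}, SNR = {snr_db} dB")
```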
Further, by adopting the first servo signal recording device including the auxiliary magnetic pole for recording a servo signal, the index Q can be made higher than when the second servo signal recording device that does not include the auxiliary magnetic pole is employed. This is presumably because by a further increase in the magnetic susceptibility of the magnetic layer in the perpendicular direction due to the inductive action of the magnetic flux by the auxiliary magnetic pole, the saturation magnetization of the servo signal or a condition close to this was realized, leading to an increased index Q. As described above, by referring to the index Q, which is the product of the squareness ratio (Sq) in the perpendicular direction of the magnetic layer and the ratio of the magnetic force gradient strength of the servo signal (Fact.(p−p)/F0(p−p)), it is possible to estimate the magnetized state of the servo signal of the magnetic recording medium and the SNR of the reproduction signal. As a result, it is possible to easily manage the magnetic recording medium and provide a magnetic recording medium capable of realizing high SNRs of a servo reproduction signal. Further, it is possible to suppress the degradation of the SNRs of a servo reproduction signal due to the increase in the capacity of the magnetic recording medium. <Details of Magnetic Recording Medium> Subsequently, details of the magnetic recording medium1will be described. [Base Material] The base material11is a non-magnetic support that supports the non-magnetic layer12and the magnetic layer13. The base material11has a long film-like shape. The upper limit value of the average thickness of the base material11is 4.0 μm, favorably 4.2 μm, more favorably 3.8 μm, and still more favorably 3.4 μm. In the case where the upper limit value of the average thickness of the base material11is 4.2 μm or less, it is possible to increase the recording capacity in one cartridge21(seeFIG.5) as compared with the typical magnetic recording medium. The average thickness of the base material11is determined as follows. First, the magnetic recording medium1having a ½ inch width is prepared and cut into a length of 250 mm to prepare a sample. Subsequently, the layers (i.e. the non-magnetic layer12, the magnetic layer13, and the back layer14) other than the base material11of the sample are removed with a solvent such as MEK (methylethylketone) and dilute hydrochloric acid. Next, using a laser hologage manufactured by Mitutoyo as a measurement device, the thickness of the sample (base material11) is measured at five or more points, and the measured values are simply averaged (arithmetically averaged) to calculate the average thickness of the base material11. Note that the measurement positions are randomly selected from the sample. The base material11contains, for example, at least one selected from the group consisting of polyesters, polyolefins, cellulose derivatives, vinyl resins, and different polymer resins. In the case where the base material11contains two or more of the above-mentioned materials, the two or more materials may be mixed, copolymerized, or stacked. The polyesters include, for example, at least one of PET (polyethylene terephthalate), PEN (polyethylene naphthalate), PBT (polybutylene terephthalate), PBN (polybutylene naphthalate), PCT (polycyclohexylene dimethylene terephthalate), PEB (polyethylene-p-oxybenzoate), or polyethylene bisphenoxycarboxylate. 
The polyolefins include, for example, at least one of PE (polyethylene) or PP (polypropylene). The cellulose derivatives include, for example, at least one of cellulose diacetate, cellulose triacetate, CAB (cellulose acetate butyrate), and CAP (cellulose acetate propionate). The vinyl resins include, for example, at least one of PVC (polyvinyl chloride) or PVDC (polyvinylidene chloride). The different polymer resins include, for example, at least one PA (polyamide, nylon), aromatic PA (aromatic polyamide, aramid), PI (polyimide), aromatic PI (aromatic polyimide), PAI (polyamideimide), aromatic PAI (aromatic polyamideimide), PBO (polybenzoxazole, e.g., Zylon (registered trademark)), polyether, PEK (polyetherketone), PEEK (polyetheretherketone), polyetherester, PES (polyethersulfone), PEI (polyetherimide), PSF (polysulfone), PPS (polyphenylene sulfide), PC (polycarbonate), PAR (polyarylate), and PU (polyurethane). [Magnetic Layer] The magnetic layer13is a recording layer for recording data signals. The magnetic layer13contains a magnetic powder, a binder, conductive particles, and the like. The magnetic layer13may further contain additives such as a lubricant, an abrasive, and a rust inhibitor, as necessary. The magnetic layer13has a surface in which a large number of holes are provided. The lubricant is stored in the large number of holes. It is favorable that the large number of holes extend in the direction perpendicular to the surface of magnetic layer. The thickness of the magnetic layer13is typically 35 nm or more and 90 nm or less. By setting the thickness of the magnetic layer13to 35 nm or more and 90 nm or less as described above, it is possible to improve the electromagnetic conversion characteristics. Further, from the viewpoint of full width at half maximum of the isolated waveform in the reproduced waveform of the servo signal, the thickness of the magnetic layer13is favorably 90 nm or less, more favorably 80 nm or less, more favorably 60 nm or less, more favorably 50 nm or less, and still more favorably 40 nm or less. When the thickness of the magnetic layer13is set to 90 nm or less, the peak of the reproduced waveform of the servo signal can be sharpened by narrowing the full width at half maximum of the isolated waveform in the reproduced waveform of the servo signal (to 195 nm or less). Since this improves the accuracy of reading the servo signal, the number of recording tracks be increased to improve the recording density of data. The thickness of the magnetic layer13can be obtained, for example, in the following manner. First, the magnetic recording medium1is thinly processed perpendicular to the main surface thereof to prepare a sample piece, and the cross section of the test piece is observed by a transmission electron microscope (TEM) under the following conditions.Device: TEM (H9000NAR manufactured by Hitachi, Ltd.)Acceleration voltage: 300 kVMagnification: 100,000 times Next, after measuring the thickness of the magnetic layer13at least 10 points in the longitudinal direction of the magnetic recording medium10using the obtained TEM image, the measured values are simply averaged (arithmetically averaged) to obtain the thickness of the magnetic layer13. Note that the measurement positions are randomly selected from the sample piece. (Magnetic Powder) The magnetic powder contains a powder of nanoparticles containing ε-iron oxide (hereinafter, referred to as “ε-iron oxide particles”). 
The ε-iron oxide particles are capable of achieving a high coercive force even if the ε-iron oxide particles are fine particles. It is favorable that the ε-iron oxide contained in the ε-iron oxide particles is preferentially crystallographically oriented in the thickness direction (perpendicular direction) of the magnetic recording medium1. The ε-iron oxide particles have a spherical shape or substantially spherical shape, or a cubic shape or substantially cubic shape. Since the ε-iron oxide particles have the above-mentioned shapes, the area of contact between the particles in the thickness direction of the magnetic recording medium1can be reduced, and the aggregation of the particles can be suppressed when ε-iron oxide particles are used as the magnetic particles, as compared with the case where hexagonal plate-shaped barium ferrite particles are used as the magnetic particles. Therefore, it is possible to increase the dispersibility of the magnetic powder and achieve a more favorable SNR (Signal-to-Noise Ratio). The ε-iron oxide particles have a core-shell structure. Specifically, the ε-iron oxide particles include a core portion, and a shell portion that has a two-layer structure and is provided around the core portion. The shell portion having a two-layer structure includes a first shell portion provided on the core portion, and a second shell portion provided on the first shell portion. The core portion contains ε-iron oxide. The ε-iron oxide contained in the core portion favorably has ε-Fe2O3crystal as the main phase, and more favorably has a single phase of ε-Fe2O3. The first shell portion covers at least a part of the periphery of the core portion. Specifically, the first shell portion may partially cover the periphery of the core portion, or may cover the entire periphery of the core portion. From the viewpoint of making the exchange coupling between the core portion and the first shell portion sufficient and improving the magnetic properties, the first shell portion favorably covers the entire surface of the core portion. The first shell portion is a so-called soft magnetic layer, and contains, for example, a soft magnetic material such as α-Fe, a Ni—Fe alloy, or a Fe—Si—Al alloy. α-Fe may be obtained by reducing the ε-iron oxide contained in the core portion. The second shell portion is an oxide coating film as an oxidation prevention layer. The second shell portion contains α-iron oxide, aluminum oxide, or silicon oxide. The α-iron oxide includes, for example, at least one iron oxide selected from the group consisting of Fe3O4, Fe2O3, and FeO. In the case where the first shell portion contains α-Fe (soft magnetic material), the α-iron oxide may be one obtained by oxidizing α-Fe contained in the first shell portion. Since the ε-iron oxide particles include the first shell portion as described above, the coercive force Hc of the ε-iron oxide particles (core-shell particles) as a whole can be adjusted to a coercive force Hc suitable for recording while keeping the coercive force Hc of the core portion alone at a large value in order to ensure high thermal stability. Further, since the ε-iron oxide particles include the second shell portion as described above, it is possible to suppress the generation of rust or the like on the surfaces of the particles due to exposure to air during and before the process of producing the magnetic recording medium, thereby making it possible to suppress the deterioration of the characteristics of the ε-iron oxide particles. 
Therefore, it is possible to suppress the deterioration of the characteristics of the magnetic recording medium1. The average particle size (average maximum particle size) of the magnetic powder is favorably 22 nm or less, more favorably 8 nm or more and 22 nm or less, and still more favorably 12 nm or more and 22 nm or less. The average aspect ratio of the magnetic powder is favorably 1 or more and 2.5 or less, more favorably 1 or more and 2.1 or less, and still more favorably 1 or more and 1.8 or less. When the average aspect ratio of the magnetic powder is within the range of 1 or more and 2.5 or less, aggregation of the magnetic powder can be suppressed, and the resistivity applied to the magnetic powder can be suppressed when the magnetic powder is perpendicularly oriented in the process of forming the magnetic layer13. Therefore, the perpendicular orientation of the magnetic powder can be improved. The average volume (particle volume) Vave of the magnetic powder is favorably 2,300 nm3or less, more favorably 2,200 nm3or less, more favorably 2,100 nm3or less, more favorably 1,950 nm3or less, more favorably 1,600 nm3or less, and still more favorably 1,300 nm3or less. When the average volume Vave of the magnetic powder is 2,300 nm3or less, the peak of the reproduced waveform of the servo signal can be sharpened by narrowing the full width at half maximum of the isolated waveform in the reproduced waveform of the servo signal (to 195 nm or less). This improves the accuracy of reading the servo signal, so that the recording density of data can be improved by increasing then number of recording tracks (as will be described in detail later). Note that the smaller the average volume Vave of the magnetic powder, the better. Thus, the lower limit value of the volume is not particularly limited. However, for example, the lower limit value is 1000 nm3or more. The average particle size, the average aspect ratio, and the average volume Vave of the above-mentioned magnetic powder are obtained as follows (e.g., in the case where the magnetic powder has a shape such as a spherical shape as in the ε-iron oxide particles). First, the magnetic recording medium1to be measured is processed by the FIB (Focused Ion Beam) method or the like to prepare a slice, and the cross-section of the slice is observed by TEM. Next, 50 magnetic powders are randomly selected from the obtained TEM photograph, and a major axis length DL and a minor axis length DS of each of the magnetic powder are measured. Here, the major axis length DL means the largest one (so-called maximum Feret diameter) of the distances between two parallel lines drawn from all angles so as to be in contact with the contour of the magnetic powder. Meanwhile, the minor axis length DS means the largest one of the lengths of the magnetic powder in a direction perpendicular to the major axis of the magnetic powder. Subsequently, the measured major axis lengths DL of the 50 magnetic powders are simply averaged (arithmetically averaged) to obtain an average major axis length DLave. Then, the average major axis length DLave obtained in this manner is used as the average particle size of the magnetic powder. Further, the measured minor axis lengths DS of the 50 magnetic powders are simply averaged (arithmetically averaged) to obtain an average minor axis length DSave. Next, an average aspect ratio (DLave/DSave) of the magnetic powder is obtained on the basis of the average major axis length DLave and the average minor axis length DSave. 
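The major axis length DL defined above is the maximum Feret diameter of the particle outline, and DS is the largest extent measured perpendicular to it. One possible way to compute these quantities from digitized contour points (for example, traced from the TEM photograph) is sketched below; it illustrates the geometry and is not the procedure mandated by this description.

```python
import numpy as np

def major_minor_lengths(contour: np.ndarray) -> tuple[float, float]:
    """Return (DL, DS) for a particle contour given as an (N, 2) array of points.

    DL is the maximum Feret diameter (largest point-to-point distance);
    DS is the largest extent measured perpendicular to the DL direction.
    """
    diffs = contour[:, None, :] - contour[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    dl = float(dists[i, j])
    axis = (contour[j] - contour[i]) / dl
    normal = np.array([-axis[1], axis[0]])
    proj = contour @ normal           # positions along the perpendicular direction
    ds = float(proj.max() - proj.min())
    return dl, ds

# Illustrative contour: an ellipse with semi-axes of 11 nm and 8 nm.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ellipse = np.stack([11 * np.cos(t), 8 * np.sin(t)], axis=1)
dl, ds = major_minor_lengths(ellipse)
print(round(dl, 1), round(ds, 1), round(dl / ds, 2))   # ~22.0 ~16.0 ~1.38 (aspect ratio)
```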
Next, an average volume (particle volume) Vave of the magnetic powder is obtained from the following formula by using the average major axis length DLave. Vave=π/6×DLave3 In this description, the case where the ε-iron oxide particles include a shell portion having a two-layer structure has been described. However, the ε-iron oxide particles may include a shell portion having a single-layer structure. In this case, the shell portion has a configuration similar to that of the first shell portion. However, from the viewpoint of suppressing the characteristic deterioration of the ε-iron oxide particles, it is favorable that the ε-iron oxide particles include a shell portion having a two-layer structure as described above. In the above description, the case where the ε-iron oxide particles have a core-shell structure has been described. However, the ε-iron oxide particles may contain an additive instead of the core-shell structure, or may contain an additive while having a core-shell structure. In this case, some Fe of the ε-iron oxide particles are substituted by the additives. Also by causing the ε-iron oxide particles to contain an additive, the coercive force He of the ε-iron oxide particles as a whole can be adjusted to a coercive force He suitable for recording, and thus, the ease of recording can be improved. The additive is a metal element other than iron, favorably, a trivalent metal element, more favorably at least one of Al, Ga, or In, and still more favorably at least one of Al or Ga. Specifically, the ε-iron oxide containing the additive is ε-Fe2-xMxO3crystal (However, M represents a metal element other than iron, favorably a trivalent metal element, more favorably at least one of Al, Ga or In, and still more favorably at least one of Al or Ga. x satisfies the following formula represented by: 0<x<1, for example). The magnetic powder may contain a powder of nanoparticles (hereinafter, referred to as “hexagonal ferrite particles”.) containing hexagonal ferrite. The hexagonal ferrite particles have, for example, a hexagonal plate shape or a substantially hexagonal plate shape. The hexagonal ferrite favorably contains at least one of Ba, Sr, Pb, or Ca, more favorably at least one of Ba or Sr. The hexagonal ferrite may specifically be, for example, barium ferrite or strontium ferrite. Barium ferrite may further contain at least one of Sr, Pb, or Ca, in addition to Ba. Strontium ferrite may further contain at least one of Ba, Pb, or Ca, in addition to Sr. More specifically, the hexagonal ferrite has an average composition represented by the following general formula represented by: MFe12O19. However, M represents, for example, at least one metal selected from the group consisting of Ba, Sr, Pb, and Ca, favorably at least one metal selected from the group consisting of Ba and Sr. M may represent a combination of Ba and one or more metals selected from the group consisting of Sr, Pb, and Ca. Further, M may represent a combination of Sr and one or more metals selected from the group consisting of Ba, Pb, and Ca. In the above-mentioned general formula, some Fe may be substituted by other meatal elements. In the case where the magnetic powder contains a powder of hexagonal ferrite particles, the average particle size of the magnetic powder is favorably 50 nm or less, more favorably 10 nm or more and 40 nm or less, and still more favorably 15 nm or more and 30 nm or less. 
In the case where the magnetic powder contains a powder of hexagonal ferrite particles, the average aspect ratio of the magnetic powder and the average volume Vave of the magnetic powder are as described above. Note that the average particle size, the average aspect ratio, and the average volume Vave of the magnetic powder are obtained as follows (e.g., in the case where the magnetic powder has a plate-like shape as in hexagonal ferrite). First, the magnetic recording medium1to be measured is processed by the FIB method or the like to produce a slice, and the cross-section of the slice is observed by TEM. Next, 50 magnetic powders oriented at an angle of 75 degrees or more with respect to the horizontal direction are randomly selected from the obtained TEM photograph, and a maximum plate thickness DA of each magnetic powder is measured. Subsequently, the measured maximum plate thicknesses DA of the 50 magnetic powders are simply averaged (arithmetically averaged) to obtain an average maximum plate thickness DAave. Next, the surface of the magnetic layer13of the magnetic recording medium1is observed by TEM. Next, 50 magnetic powders are randomly selected from the obtained TEM photograph, and a maximum plate diameter DB of each magnetic powder is measured. Here, the maximum plate diameter DB means the largest one (so-called maximum Feret diameter) of the distances between two parallel lines drawn from all angles so as to be in contact with the contour of the magnetic powder. Subsequently, the measured maximum plate diameters DB of the 50 magnetic powders are simply averaged (arithmetically averaged) to obtain an average maximum plate diameter DBave. Then, the average maximum plate diameter DBave obtained in this manner is used as the average particle size of the magnetic powder. Next, an average aspect ratio (DBave/DAave) of the magnetic powder is obtained on the basis of the average maximum plate thickness DAave and the average maximum plate diameter DBave. Next, using the average maximum plate thickness DAave and the average maximum plate diameter DBave, an average volume (particle volume) Vave of the magnetic powder is obtained from the following formula. Vave=(3√3/8)×DAave×DBave×DBave (Math. 1) The magnetic powder may contain a powder of nanoparticles (hereinafter, referred to as “cobalt ferrite particles”) containing Co-containing spinel ferrite. The cobalt ferrite particles favorably have uniaxial anisotropy. The cobalt ferrite particles have, for example, a cubic shape or a substantially cubic shape. The Co-containing spinel ferrite may further contain at least one of Ni, Mn, Al, Cu, or Zn, in addition to Co. The Co-containing spinel ferrite has, for example, the average composition represented by the following formula (1). CoxMyFe2Oz(1) (However, in the formula (1), M represents, for example, at least one metal selected from the group consisting of Ni, Mn, Al, Cu, and Zn. x represents a value within the range of 0.4≤x≤1.0. y is a value within the range of 0≤y≤0.3. However, x and y satisfy the relationship of (x+y)≤1.0. z represents a value within the range of 3≤z≤4. Some Fe may be substituted by other metal elements.) In the case where the magnetic powder contains a powder of cobalt ferrite particles, the average particle size of the magnetic powder is favorably 25 nm or less, more favorably 23 nm or less. 
In the case where the magnetic powder contains a powder of cobalt ferrite particles, the average aspect ratio of the magnetic powder is determined by the method described above, and the average volume Vave of the magnetic powder is determined by the method shown below. Note that in the case where the magnetic powder has a cubic shape as in cobalt ferrite particles, the average volume (particle volume) Vave of the magnetic powder can be obtained as follows. First, the surface of the magnetic layer13of the magnetic recording medium1is observed by TEM. Next, 50 magnetic powders are randomly selected from the obtained TEM photograph, and a side length DC of each of the magnetic powders is measured. Subsequently, the measured side lengths DC of the 50 magnetic powders are simply averaged (arithmetically averaged) to obtain an average side length DCave. Next, using the average side length DCave, the average volume (particle volume) Vave of the magnetic powder is obtained from the following formula. Vave=DCave3 (Binder) As the binder, a resin having a structure in which a crosslinking reaction is imparted to a polyurethane resin, a vinyl chloride resin, or the like is favorable. However, the binder is not limited thereto. Other resins may be appropriately blended depending on the physical properties and the like required for the magnetic recording medium1. The resin to be blended is not particularly limited as long as it is a resin commonly used in the coating-type magnetic recording medium1. Examples of the resin include polyvinyl chloride, polyvinyl acetate, a vinyl chloride-vinyl acetate copolymer, a vinyl chloride-vinylidene chloride copolymer, a vinyl chloride-acrylonitrile copolymer, an acrylic ester-acrylonitrile copolymer, an acrylic ester-vinyl chloride-vinylidene chloride copolymer, a vinyl chloride-acrylonitrile copolymer, an acrylic ester-acrylonitrile copolymer, an acrylic ester-vinylidene chloride copolymer, a methacrylic acid ester-vinylidene chloride copolymer, a methacrylic acid ester-vinyl chloride copolymer, a methacrylic acid ester-ethylene copolymer, polyvinyl fluoride, a vinylidene chloride-acrylonitrile copolymer, an acrylonitrile-butadiene copolymer, a polyamide resin, polyvinyl butyral, cellulose derivatives (cellulose acetate butyrate, cellulose diacetate, cellulose triacetate, cellulose propionate, nitrocellulose), a styrene butadiene copolymer, a polyester resin, an amino resin, and synthetic rubber. Further, examples of the thermosetting resin or the reactive resin include a phenol resin, an epoxy resin, a urea resin, a melamine resin, an alkyd resin, a silicone resin, a polyamine resin, and a urea formaldehyde resin. Further, a polar functional group such as —SO3M, —OSO3M, —COOM, and P═O(OM)2may be introduced into the above-mentioned binders for the purpose of improving dispersibility of the magnetic powder. Here, M in the formula represents a hydrogen atom, or an alkali metal such as lithium, potassium, and sodium. Further, examples of the polar functional groups include those of the side chain type having the terminal group of —NR1R2 or —NR1R2R3+X−and those of the main chain type having >NR1R2+X−. Here, R1, R2, and R3 in the formula each represent a hydrogen atom or a hydrocarbon group, and X represents a halogen element ion such as fluorine, chlorine, bromine, and iodine, or an inorganic or organic ion. Further, examples of the polar functional groups include also —OH, —SH, —CN, and an epoxy group. 
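For convenience, the three average-particle-volume formulas given above for sphere-like, hexagonal plate-shaped, and cubic particles are collected in the following sketch; the input values are illustrative.

```python
import math

def volume_sphere_like(dl_ave_nm: float) -> float:
    """Vave = (pi/6) * DLave^3, for spherical or substantially spherical particles."""
    return math.pi / 6.0 * dl_ave_nm ** 3

def volume_hexagonal_plate(da_ave_nm: float, db_ave_nm: float) -> float:
    """Vave = (3*sqrt(3)/8) * DAave * DBave^2 (Math. 1), for hexagonal plate particles."""
    return 3.0 * math.sqrt(3.0) / 8.0 * da_ave_nm * db_ave_nm ** 2

def volume_cube(dc_ave_nm: float) -> float:
    """Vave = DCave^3, for cubic cobalt ferrite particles."""
    return dc_ave_nm ** 3

print(round(volume_sphere_like(16.0)))           # ~2145 nm^3, within the 2,300 nm^3 upper target above
print(round(volume_hexagonal_plate(8.0, 20.0)))  # ~2078 nm^3
print(round(volume_cube(13.0)))                  # 2197 nm^3
```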
(Lubricant) It is favorable that the lubricant contains a compound represented by the following general formula (1) and a compound represented by the following general formula (2). In the case where the lubricant contains these compounds, it is possible to particularly reduce the dynamic friction coefficient of the surface of the magnetic layer13. Therefore, it is possible to further improve the traveling property of the magnetic recording medium1. CH3(CH2)nCOOH  (1) (However, in the general formula (1), n represents an integer selected from the range of 14 or more and 22 or less.) CH3(CH2)pCOO(CH2)qCH3(2) (However, in the general formula (2), p represents an integer selected from the range of 14 or more and 22 or less, and q represents an integer selected from the range of 2 or more and 5 or less.) (Additive) The magnetic layer13may further contain, as non-magnetic reinforcing particles, aluminum oxide (α, β, or γ alumina), chromium oxide, silicon oxide, diamond, garnet, emery, boron nitride, titanium carbide, silicon carbide, titanium carbide, titanium oxide (rutile-type or anatase-type titanium oxide), or the like. [Non-Magnetic Layer12] The non-magnetic layer12contains a non-magnetic powder and a binder. The non-magnetic layer12may contain, as necessary, an additive such as conductive particles, a lubricant, a curing agent, and a rust inhibitor. The thickness of the non-magnetic layer12is favorably 0.6 μm or more and 2.0 μm or less, more favorably 0.6 μm or more and 1.4 μm or less, more favorably 0.8 μm or more and 1.4 μm or less, and more favorably 0.6 μm or more and 1.0 μm or less. The thickness of the non-magnetic layer12can be obtained by a method similar to the method of obtaining the thickness of the magnetic layer13(e.g., TEM). Note that the magnification of the TEM image is appropriately adjusted in accordance with the thickness of the non-magnetic layer12. (Non-Magnetic Powder) The non-magnetic powder includes, for example, at least one of an inorganic particle powder or an organic particle powder. Further, the non-magnetic powder may contain a carbon material such as carbon black. Note that one type of non-magnetic powder may be used alone, or two or more types of non-magnetic powders may be used in combination. The inorganic particles include, for example, a metal, a metal oxide, a metal carbonate, a metal sulfate, a metal nitride, a metal carbide, or a metal sulfide. Examples of the shape of the non-magnetic powder include, but not limited to, various shapes such as a needle shape, a spherical shape, a cubic shape, and a plate shape. (Binder) The binder is similar to that in the magnetic layer13described above. [Back Layer14] The back layer14contains a non-magnetic powder and a binder. The back layer14may contain, as necessary, an additive such as a lubricant, a curing agent, and an antistatic agent. As the non-magnetic powder and the binder, materials similar to those used in the above-mentioned non-magnetic layer12are used. (Non-Magnetic Powder) The average particle size of the non-magnetic powder is favorably 10 nm or more and 150 nm or less, more favorably 15 nm or more and 110 nm or less. The average particle size of the magnetic powder is obtained in a way similar to that for the average particle size D of the above-mentioned magnetic powder. The non-magnetic powder may include a non-magnetic powder having two or more particle size distributions. 
The upper limit value of the average thickness of the back layer14is favorably 0.6 μm or less, more favorably 0.5 μm or less, and still more favorably 0.4 μm or less. When the upper limit value of the average thickness of the back layer14is 0.6 μm or less, since the thickness of the non-magnetic layer12and the base material11can be kept thick even in the case where the average thickness of the magnetic recording medium1is 5.6 μm, it is possible to maintain the traveling stability of the magnetic recording medium1in a recording/reproduction device. The lower limit value of the average thickness of the back layer14is not particularly limited, but is, for example, 0.2 μm or more. The average thickness of the back layer14is obtained as follows. First, the magnetic recording medium1having a ½ inch width is prepared and cut into a length of 250 mm to prepare a sample. Next, using a laser hologage manufactured by Mitutoyo as a measurement device, the thickness of the sample is measured at five or more points, and the measured values are simply averaged (arithmetically averaged) to calculate an average value tT[μm] of the magnetic recording medium1. Note that the measurement positions are randomly selected from the sample. Subsequently, the back layer14of the sample is removed with a solvent such as MEK (methyl ethyl ketone) and dilute hydrochloric acid. After that, the thickness of the sample is measured at five or more points using the above-mentioned laser hologage, and the measured values are simply averaged (arithmetically averaged) to calculate an average value tB[μm] of the magnetic recording medium1from which the back layer14has been removed. Note that the measurement positions are randomly selected from the sample. After that, an average thickness tb[μm] of the back layer14is obtained from the following formula. tb[μm]=tT[μm]−tB[μm] The back layer14has a surface in which a large number of protrusions are provided. The large number of protrusions are for forming a large number of holes in the surface of the magnetic layer13in the state where the magnetic recording medium1is wound in a roll shape. The large number of protrusions include, for example, a large number of non-magnetic particles protruding from the surface of the back layer14. In this description, the case where a large number of protrusions provided in the surface of the back layer14are transferred to the surface of the magnetic layer13to form a large number of holes in the surface of the magnetic layer13has been described. However, the method of forming a large number of holes is not limited thereto. For example, a large number of holes may be formed in the surface of the magnetic layer13by adjusting the type of solvent contained in the coating material for forming a magnetic layer and the drying condition of the coating material for forming a magnetic layer. [Average Thickness of Magnetic Recording Medium] The upper limit value of the average thickness (average total thickness) of the magnetic recording medium1is favorably 5.6 μm or less, more favorably 5.4 μm or less, more favorably 5.2 μm or less, more favorably 5.0 μm or less, more favorably 4.8 μm or less, more favorably 4.6 μm or less, and still more favorably 4.4 μm or less. When the average thickness of the magnetic recording medium1is 5.6 μm or less, the recording capacity in the cartridge21can be made higher than a typical magnetic recording medium.
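Before moving on, the back layer thickness calculation given above (tb = tT − tB) can be mirrored in a few lines of Python; the laser hologage readings in the example are invented for illustration.

def average_back_layer_thickness(total_readings_um, stripped_readings_um):
    # tT: average thickness of the whole medium (five or more randomly selected points)
    t_total = sum(total_readings_um) / len(total_readings_um)
    # tB: average thickness after the back layer has been removed
    t_stripped = sum(stripped_readings_um) / len(stripped_readings_um)
    # tb = tT - tB
    return t_total - t_stripped

# Hypothetical readings (um) at five randomly selected positions
print(average_back_layer_thickness([5.20, 5.21, 5.19, 5.20, 5.22],
                                   [4.81, 4.80, 4.82, 4.81, 4.80]))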
The lower limit value of the average thickness of the magnetic recording medium1is not particularly limited, but is, for example, 3.5 μm or more. The average thickness of the magnetic recording medium1is obtained by the procedure described in the above-mentioned method of obtaining the average thickness of the back layer14. (Coercive Force Hc) The upper limit value of the coercive force Hc in the longitudinal direction of the magnetic recording medium1is, for example, 2,500 Oe or less, favorably 2,000 Oe or less, more favorably 1,900 Oe or less, and still more favorably 1,800 Oe or less. The lower limit value of the coercive force Hc measured in the longitudinal direction of the magnetic recording medium1is favorably 1,000 Oe or more. When the coercive force Hc is 1,000 Oe or more, demagnetization due to leakage flux from the recording head can be suppressed. The above-mentioned coercive force Hc is obtained as follows. First, three magnetic recording mediums1are stacked on top of each other with double-sided tapes, and then punched out by a φ6.39 mm punch to create a measurement sample. Then, the M-H loop of the measurement sample (the entire magnetic recording medium1) corresponding to the longitudinal direction of the magnetic recording medium1(the traveling direction of the magnetic recording medium1) is measured using a vibrating sample magnetometer (VSM). Next, acetone, ethanol, or the like is used to wipe off the coating film (the non-magnetic layer12, the magnetic layer13, the back layer14, and the like), leaving only the base material11. Then, the obtained three base materials11are stacked on top of each other with double-sided tapes, and then punched out by a φ6.39 mm punch to obtain a sample for background correction (hereinafter, referred to simply as a sample for correction). Then, the VSM is used to measure the M-H loop of the sample for correction (the base material11) corresponding to the longitudinal direction of the base material11(the traveling direction of the magnetic recording medium1). In the measurement of the M-H loop of the measurement sample (entire magnetic recording medium1) and the M-H loop of the sample for correction (the base material11), a high sensitivity vibrating sample magnetometer “VSM-P7-15 type” manufactured by TOEI INDUSTRIAL CO., LTD. is used. The measurement conditions are as follows. Measurement mode: full loop, maximum magnetic field: 15 kOe, magnetic field step: 40 bit, Time constant of Locking amp: 0.3 sec, Waiting time: 1 sec, MH averaging number: 20. After two M-H loops are obtained, the M-H loop of the sample for correction (the base material11) is subtracted from the M-H loop of the measurement sample (entire magnetic recording medium1) to perform background correction, and the M-H loop after the background correction is obtained. The measurement/analysis program attached to the “VSM-P7-15 type” is used to calculate the background correction. The coercive force Hc is obtained from the obtained M-H loop after the background correction. Note that for this calculation, the measurement/analysis program attached to the “VSM-P7-15 type” is used. Note that the above-mentioned measurement of the M-H loop is performed at 25° C. Further, “demagnetizing field correction” when measuring the M-H loop in the longitudinal direction of the magnetic recording medium1is not performed. (Degree of Orientation (Squareness Ratio)) The degree of perpendicular orientation is obtained as follows.
First, three magnetic recording mediums1are stacked on top of each other with double-sided tapes, and then punched out by a φ6.39 mm punch to create a measurement sample. Then, the VSM is used to measure the M-H loop of the measurement sample (the entire magnetic recording medium1) corresponding to the perpendicular direction (the thickness direction) of the magnetic recording medium1. Next, acetone, ethanol, or the like is used to wipe off the coating film (the non-magnetic layer12, the magnetic layer13, the back layer14, and the like), leaving only the base material11. Then, the obtained three base materials11are stacked on top of each other with double-sided tapes, and then punched out by a φ6.39 mm punch to obtain a sample for background correction (hereinafter, referred to simply as a sample for correction). Then, the VSM is used to measure the M-H loop of the sample for correction (the base material11) corresponding to the perpendicular direction of the base material11(the perpendicular direction of the magnetic recording medium1). In the measurement of the M-H loop of the measurement sample (entire magnetic recording medium1) and the M-H loop of the sample for correction (the base material11), a high sensitivity vibrating sample magnetometer “VSM-P7-15 type” manufactured by TOEI INDUSTRIAL CO., LTD. is used. The measurement conditions are as follows. Measurement mode: full loop, maximum magnetic field: 15 kOe, magnetic field step: 40 bit, Time constant of Locking amp: 0.3 sec, Waiting time: 1 sec, MH averaging number: 20. After two M-H loops are obtained, the M-H loop of the sample for correction (the base material11) is subtracted from the M-H loop of the measurement sample (entire magnetic recording medium1) to perform background correction, and the M-H loop after the background correction is obtained. The measurement/analysis program attached to the “VSM-P7-15 type” is used to calculate the background correction. The saturation magnetization Ms (emu) and residual magnetization Mr (emu) of the obtained M-H loop after the background correction are substituted into the following formula to calculate the degree of perpendicular orientation (%). Note that the above-mentioned measurement of the M-H loop is performed at 25° C. Further, “demagnetizing field correction” when measuring the M-H loop in the perpendicular direction of the magnetic recording medium1is not performed. Note that for this calculation, the measurement/analysis program attached to the “VSM-P7-15 type” is used. Degree of perpendicular orientation (%)=(Mr/Ms)×100 The degree of orientation (degree of longitudinal orientation) in the longitudinal direction (traveling direction) of the magnetic recording medium1is favorably 35% or less, more favorably 30% or less, and still more favorably 25% or less. When the degree of longitudinal orientation is 35% or less, the perpendicular orientation of the magnetic powder becomes sufficiently high, so that a more excellent SNR can be obtained. The degree of longitudinal orientation is determined in a manner similar to that for the degree of perpendicular orientation except that the M-H loop is measured in the longitudinal direction (traveling direction) of the magnetic recording medium1and the base material11. 
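The two VSM procedures above share the same background-correction step. The Python sketch below subtracts a hypothetical base-material loop from a hypothetical measurement loop, estimates the coercive force Hc from the zero crossings of the corrected M-H loop, and computes the degree of orientation as (Mr/Ms) × 100. The toy waveform, the simple zero-crossing estimate, and the names are illustrative assumptions only; the embodiment itself relies on the measurement/analysis program of the VSM.

import numpy as np

def background_correct(m_sample, m_correction):
    # Subtract the base-material (sample-for-correction) loop from the measurement loop
    return np.asarray(m_sample) - np.asarray(m_correction)

def coercive_force_oe(h_oe, m_corrected):
    # Hc estimated as the average |H| at which the corrected magnetization crosses zero
    h = np.asarray(h_oe)
    m = np.asarray(m_corrected)
    crossings = []
    for i in range(len(h) - 1):
        if m[i] == 0.0 or m[i] * m[i + 1] < 0.0:
            if m[i + 1] != m[i]:
                h0 = h[i] - m[i] * (h[i + 1] - h[i]) / (m[i + 1] - m[i])  # linear interpolation
            else:
                h0 = h[i]
            crossings.append(abs(h0))
    return sum(crossings) / len(crossings) if crossings else None

def degree_of_orientation_percent(h_oe, m_corrected):
    # Degree of orientation (%) = (Mr / Ms) x 100, with Ms the saturation value and Mr the value nearest H = 0
    h = np.asarray(h_oe)
    m = np.asarray(m_corrected)
    ms = np.max(np.abs(m))
    mr = abs(m[np.argmin(np.abs(h))])
    return (mr / ms) * 100.0

# Hypothetical single-branch toy loop over +/-15 kOe (a real M-H loop has two branches)
h = np.linspace(-15000.0, 15000.0, 1201)
m_meas = np.tanh((h + 1800.0) / 2500.0) + 0.02 * h / 15000.0
m_bg = 0.02 * h / 15000.0                        # base-material background
m_corr = background_correct(m_meas, m_bg)
print(coercive_force_oe(h, m_corr))              # about 1800 (Oe) for this toy branch
print(degree_of_orientation_percent(h, m_corr))  # (Mr/Ms) x 100 for the same branch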
(Dynamic Friction Coefficient) A ratio (μB/μA) of a dynamic friction coefficient μB between the surface of the magnetic layer13and the magnetic head when the tension applied to the magnetic recording medium1is 0.4 N to a dynamic friction coefficient μA between the surface of the magnetic layer13and the magnetic head when the tension applied to the magnetic recording medium1is 1.2 N is favorably 1.0 or more and 2.0 or less. In the case where the ratio (μB/μA) is 1.0 or more and 2.0 or less, the change in friction coefficient due to the tension fluctuation during traveling can be reduced, and thus, it is possible to stabilize the traveling of the tape. Further, a ratio (μ1000/μ5) of a value μ1000 at the 1000-th traveling to a value μ5 at the fifth traveling of the dynamic friction coefficient between the surface of the magnetic layer13and the magnetic head when the tension applied to the magnetic recording medium1is 0.6 N is favorably 1.0 or more and 2.0 or less, more favorably 1.0 or more and 1.5 or less. In the case where the ratio (μ1000/μ5) is 1.0 or more and 2.0 or less, the change in friction coefficient due to a large number of times of traveling can be reduced, and thus, the traveling of the tape can be stabilized. <Method of Producing Magnetic Recording Medium> Next, a method of producing the magnetic recording medium1will be described. First, a coating material for forming a non-magnetic layer is prepared by kneading and dispersing a non-magnetic powder, a binder, a lubricant, and the like in a solvent. Next, a coating material for forming a magnetic layer is prepared by kneading and dispersing a magnetic powder, a binder, a lubricant, and the like in a solvent. Next, a coating material for forming a back layer is prepared by kneading and dispersing a binder, a non-magnetic powder, and the like in a solvent. For preparing the coating material for forming a magnetic layer, the coating material for forming a non-magnetic layer, and the coating material for forming a back layer, for example, the following solvents, dispersing devices, and kneading devices can be used. Examples of the solvent used for preparing the above-mentioned coating material include a ketone solvent such as acetone, methyl ethyl ketone, methyl isobutyl ketone, and cyclohexanone, an alcohol solvent such as methanol, ethanol, and propanol, an ester solvent such as methyl acetate, ethyl acetate, butyl acetate, propyl acetate, ethyl lactate, and ethylene glycol acetate, an ether solvent such as diethylene glycol dimethyl ether, 2-ethoxyethanol, tetrahydrofuran, and dioxane, an aromatic hydrocarbon solvent such as benzene, toluene, and xylene, and a halogenated hydrocarbon solvent such as methylene chloride, ethylene chloride, carbon tetrachloride, chloroform, and chlorobenzene. These may be used alone or may be appropriately mixed and used. As the above-mentioned kneading apparatus used for the preparation of the coating materials, for example, a kneading apparatus such as a continuous twin-screw kneader, a continuous twin-screw kneader capable of diluting in multiple stages, a kneader, a pressure kneader, and a roll kneader can be used. However, the present technology is not particularly limited to these apparatuses.
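Returning to the dynamic-friction-coefficient ranges given at the start of this passage, a small Python check of the two ratio windows (μB/μA and μ1000/μ5, each favorably 1.0 to 2.0) can be written as follows; the measured values in the example are hypothetical.

def friction_ratio_windows(mu_a, mu_b, mu_5, mu_1000):
    # muB measured at 0.4 N tension, muA at 1.2 N; mu_5 and mu_1000 at the 5th and 1000th traveling (0.6 N)
    tension_ratio = mu_b / mu_a
    travel_ratio = mu_1000 / mu_5
    return {
        "muB/muA": tension_ratio,
        "muB/muA in 1.0-2.0": 1.0 <= tension_ratio <= 2.0,
        "mu1000/mu5": travel_ratio,
        "mu1000/mu5 in 1.0-2.0": 1.0 <= travel_ratio <= 2.0,
    }

# Hypothetical measured values
print(friction_ratio_windows(mu_a=0.20, mu_b=0.28, mu_5=0.21, mu_1000=0.27))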
Further, as the above-mentioned dispersion apparatus used for the preparation of the coating materials, for example, a dispersion apparatus such as a roll mill, a ball mill, a horizontal sand mill, a perpendicular sand mill, a spike mill, a pin mill, a tower mill, a pearl mill (e.g., “DCP mill” manufactured by Eirich Co., Ltd.), a homogenizer, and an ultrasonic disperser can be used. However, the present technology is not particularly limited to these apparatuses. Next, the non-magnetic layer12is formed by applying a coating material for forming a non-magnetic layer onto one main surface of the base material11and drying the coating material. Subsequently, a coating material for forming a magnetic layer is applied onto the non-magnetic layer12and dried to form the magnetic layer13on the non-magnetic layer12. Note that it is favorable to orient, during drying, the magnetic field of the magnetic powder in the thickness direction of the base material11by means of, for example, a solenoid coil. Further, during drying, after orienting the magnetic field of the magnetic powder in the traveling direction (longitudinal direction) of the base material11by means of, for example, a solenoid coil, the magnetic field may be oriented in the thickness direction of the base material11. After forming the magnetic layer13, the back layer14is formed by applying a coating material for forming a back layer onto the other main surface of the base material11and drying the coating material. As a result, the magnetic recording medium1is obtained. After that, calendaring treatment is performed on the obtained magnetic recording medium1to smooth the surface of the magnetic layer13. Next, the magnetic recording medium1on which calendaring treatment has been performed is wound into rolls, and then, heat treatment is performed on the magnetic recording medium1in this condition to transfer a large number of protrusions14A on the surface of the back layer14to the surface of the magnetic layer13. As a result, a large number of holes13A are formed on the surface of the magnetic layer13. The temperature of the heat treatment is favorably 55° C. or higher and 75° C. or less. In the case where the temperature of the heat treatment is 55° C. or higher, favorable transferability can be achieved. Meanwhile, in the case where the temperature of the heat treatment exceeds 75° C., the amount of pores becomes too large, and the lubricant on the surface becomes excessive. Here, the temperature of the heat treatment is the temperature of the atmosphere in which the magnetic recording medium1is held. The time of the heat treatment is favorably 15 hours or more and 40 hours or less. In the case where the time of the heat treatment is 15 hours or more, favorable transferability can be obtained. Meanwhile, in the case where the time of the heat treatment is 40 hours or less, a decrease in productivity can be suppressed. Finally, the magnetic recording medium1is cut into a predetermined width (e.g., ½ inch width). In this way, the target magnetic recording medium1is obtained. [Process of Preparing Coating Material for Forming a Magnetic Layer] Next, the process of preparing a coating material for forming a magnetic layer will be described. First, a first composition of the following formulation was kneaded with an extruder. Next, the kneaded first composition and a second composition of the following formulation were added to a stirring tank including a dispersion device to perform preliminary mixing.
Subsequently, sand mill mixing was further performed, and filter treatment was performed to prepare a coating material for forming a magnetic layer.
(First Composition)
Powder of barium ferrite (BaFe12O19) particles (hexagonal plate-shaped, aspect ratio 2.8, particle volume 1,950 nm3): 100 parts by mass
Vinyl chloride resin (cyclohexanone solution 30 mass %): 51.3 parts by mass (solution included)
(the degree of polymerization 300, Mn=10,000, containing OSO3K=0.07 mmol/g and secondary OH=0.3 mmol/g as polar groups.)
Aluminum oxide powder: 5 parts by mass
(α-Al2O3, average particle size 0.2 μm)
Carbon black: 2 parts by mass
(Manufactured by Tokai Carbon Co., Ltd., trade name: Seast TA)
(Second Composition)
Vinyl chloride resin: 1.1 parts by mass
(Resin solution: resin content 30% by mass, cyclohexanone 70% by mass)
N-butyl stearate: 2 parts by mass
Methylethylketone: 121.3 parts by mass
Toluene: 121.3 parts by mass
Cyclohexanone: 60.7 parts by mass
Finally, four parts by mass of polyisocyanate (trade name: Coronate L, manufactured by Nippon Polyurethane Co., Ltd.) and two parts by mass of myristic acid were added, as curing agents, to the coating material for forming a magnetic layer prepared as described above. [Process of Preparing Coating Material for Forming Non-Magnetic Layer] Next, the process of preparing a coating material for forming a non-magnetic layer will be described. First, a third composition of the following formulation was kneaded with an extruder. Next, the kneaded third composition and a fourth composition of the following formulation were added to a stirring tank including a dispersion device to perform preliminary mixing. Subsequently, sand mill mixing was further performed, and filter treatment was performed to prepare a coating material for forming a non-magnetic layer.
(Third Composition)
Acicular iron oxide powder: 100 parts by mass
(α-Fe2O3, average major axis length 0.15 μm)
Vinyl chloride resin: 55.6 parts by mass
(Resin solution: resin content 30% by mass, cyclohexanone 70% by mass)
Carbon black: 10 parts by mass
(Average particle size 20 nm)
(Fourth Composition)
Polyurethane resin UR8200 (manufactured by Toyobo CO., LTD.): 18.5 parts by mass
N-butyl stearate: 2 parts by mass
Methylethylketone: 108.2 parts by mass
Toluene: 108.2 parts by mass
Cyclohexanone: 18.5 parts by mass
Finally, four parts by mass of polyisocyanate (trade name: Coronate L, manufactured by Nippon Polyurethane Co., Ltd.) and two parts by mass of myristic acid were added, as curing agents, to the coating material for forming a non-magnetic layer prepared as described above. [Process of Preparing Coating Material for Forming Back Layer] Next, the process of preparing the coating material for forming a back layer will be described.
A coating material for forming a back layer was prepared by mixing the following raw materials in a stirring tank including a dispersion device and performing filter treatment thereon.
Powder of carbon black particles (average particle size 20 nm): 90 parts by mass
Powder of carbon black particles (average particle size 270 nm): 10 parts by mass
Polyester polyurethane: 100 parts by mass
(manufactured by Nippon Polyurethane Co., Ltd., product name: N-2304)
Methyl ethyl ketone: 500 parts by mass
Toluene: 400 parts by mass
Cyclohexanone: 100 parts by mass
Note that the type and the blending amount of the inorganic particles may be changed as follows.
Powder of carbon black particles (average particle size 20 nm): 80 parts by mass
Powder of carbon black particles (average particle size 270 nm): 20 parts by mass
Further, the type and the blending amount of the inorganic particles may be changed as follows.
Powder of carbon black particles (average particle size 20 nm): 100 parts by mass
[Application Process] The coating material for forming a magnetic layer and coating material for forming a non-magnetic layer prepared as described above were used to form a non-magnetic layer with an average thickness of 1.0 to 1.1 μm and a magnetic layer with an average thickness of 40 to 100 nm on one main surface of an elongated polyethylene naphthalate film (hereinafter, referred to as “PEN film”) that is a non-magnetic support (e.g., average thickness 4.0 μm) as follows. First, the coating material for forming a non-magnetic layer was applied onto one main surface of the PEN film and dried to form a non-magnetic layer. Next, the coating material for forming a magnetic layer was applied onto the non-magnetic layer and dried to form a magnetic layer. Note that when the coating material for forming a magnetic layer was dried, the magnetic field of the magnetic powder was oriented in the thickness direction of the film by means of a solenoid coil. Note that the degree of orientation in the thickness direction (perpendicular direction) and the degree of orientation in the longitudinal direction of the magnetic recording medium were set to predetermined values by adjusting the magnitude of the magnetic field from the solenoid coil (2 to 3 times the holding force of the magnetic powder), adjusting the solid content of the coating material for forming a magnetic layer, or adjusting the conditions for the magnetic powder to orient in a magnetic field by the adjustment of the drying conditions (drying temperature and drying time) of the coating material for forming a magnetic layer. Subsequently, a back layer was formed by applying the coating material for forming a back layer onto the other main surface of the PEN film and drying the coating material. In this way, a magnetic recording medium was obtained. Note that in order to increase the degree of orientation, the dispersed condition of the coating material for forming a magnetic layer needs to be improved. In addition, in order to increase the degree of perpendicular orientation, it is also useful to magnetize the magnetic powder in advance before the magnetic recording medium enters the orientation device. [Calendar Process, Transfer Process] Subsequently, calendar treatment was performed to smooth the surface of the magnetic layer. Next, after winding the obtained magnetic recording medium in a roll, heat treatment of 60° C. for 10 hours was performed twice on the magnetic recording medium in this condition.
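As a small worked example based on the back layer formulation listed above, the solid content of that coating material can be estimated by treating the listed parts by mass as relative masses (a simplification that ignores any solvent carried in by the resin):

def solid_content_percent(solids_parts, solvents_parts):
    # Solid fraction = solids / (solids + solvents), expressed in percent
    total_solids = sum(solids_parts)
    total = total_solids + sum(solvents_parts)
    return 100.0 * total_solids / total

# Parts by mass taken from the back layer formulation above
solids = [90.0, 10.0, 100.0]      # carbon black (20 nm), carbon black (270 nm), polyester polyurethane
solvents = [500.0, 400.0, 100.0]  # methyl ethyl ketone, toluene, cyclohexanone
print(solid_content_percent(solids, solvents))  # about 16.7 (%)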
As a result, a large number of protrusions on the surface of the back layer were transferred to the surface of the magnetic layer, and a large number of holes were formed on the surface of the magnetic layer. [Cutting Process] The magnetic recording medium obtained as described above was cut into a ½ inch (12.65 mm) width. As a result, a target elongated magnetic recording medium was obtained. <Details of Cartridge> Next, details of the cartridge21will be described. [Configuration Example 1 of Cartridge] FIG.20is an exploded perspective view showing an example of a configuration of the cartridge21. The cartridge21includes, inside a cartridge case312including a lower shell312A and an upper shell312B, a reel313on which a tape-shaped magnetic recording medium (hereinafter, referred to as “magnetic tape”) MT is wound, a reel lock314and a reel spring315for locking rotation of the reel313, a spider316for releasing the locked state of the reel313, a slide door317for opening and closing a tape outlet312C provided on the cartridge case312across the lower shell312A and the upper shell312B, a door spring318for urging the slide door317to the closed position of the tape outlet312C, a write protect319for preventing erroneous erasure, and a cartridge memory311. The reel313has a substantially disk shape having an opening at the center thereof, and includes a reel hub313A and a flange313B formed of hard materials such as plastics. A leader pin320is provided at one end of the magnetic tape MT. The magnetic tape MT corresponds to the magnetic recording medium1according to this embodiment described above. The cartridge21may be a magnetic tape cartridge conforming to the LTO (Linear Tape-Open) standard, or may be a magnetic tape cartridge conforming to a standard different from the LTO standard. The cartridge memory311is provided in the vicinity of one corner of the cartridge21. With the cartridge21loaded into the data recording device20(FIG.5), the cartridge memory311faces the reader/writer of the data recording device20. The cartridge memory311communicates with the data recording device20, specifically with the reader/writer thereof by using a wireless communication standard conforming to the LTO standard. FIG.21is a block diagram showing an example of a configuration of the cartridge memory311. The cartridge memory311includes an antenna coil (communication unit)431that communicates with a reader/writer using a specified communication standard, a rectifier/power supply circuit432for generating power from radio waves received by the antenna coil431using an induced electromotive force and rectifying the generated power to generate a power source, a clock circuit433that generates a clock using the induced electromotive force similarly from the radio waves received by the antenna coil431, a detection/modulator circuit434for detecting radio waves received by the antenna coil431and modulating signals transmitted by the antenna coil431, a controller (control unit)435that includes a logic circuit and the like for discriminating and processing a command and data from a digital signal extracted from the detection/modulator circuit434, and a memory (storage unit)436that stores information. Further, the cartridge memory311includes a capacitor437connected in parallel to the antenna coil431, and the antenna coil431and the capacitor437constitute a resonant circuit. The memory436stores information and the like relating to the cartridge21. The memory436is a non-volatile memory (NVM). 
The storage capacity of the memory436is favorably approximately 32 KB or more. The memory436has a first storage region436A and a second storage region436B. The first storage region436A corresponds to, for example, the storage region of a cartridge memory of the LTO standard before LTO8 (hereinafter, referred to as “existing cartridge memory”), and is a region for storing information conforming to the LTO standard before LTO8. Examples of information conforming to the LTO standard before LTO8 include manufacturing information (e.g., a unique number of the cartridge21) and usage history (e.g., the number of times of tape drawing (Thread Count)). The second storage region436B corresponds to an extended storage region for the storage region of the existing cartridge memory. The second storage region436B is a region for storing additional information. Here, the additional information means, for example, information relating to the cartridge21, which is not specified in the LTO standard before LTO8. Examples of the additional information include, but not limited to, tension adjustment information, management ledger data, Index information, and thumbnail information of a moving image stored in the magnetic tape MT. The tension adjustment information is information for adjusting the tension applied to the magnetic tape MT in the longitudinal direction. The tension-adjustment information includes a distance between adjacent servo bands (a distance between servo patterns recorded on adjacent servo bands) at the time of recording data on the magnetic tape MT. The distance between the adjacent servo bands is an example of width-related information relating to the width of the magnetic tape MT. In the following description, information stored in the first storage region436A is referred to as “first information” and information stored in the second storage region436B is referred to as “second information” in some cases. The memory436may include a plurality of banks. In this case, a part of the plurality of banks may constitute the first storage region436A, and the remaining banks may constitute the second storage region436B. The antenna coil431induces an induced voltage by electromagnetic induction. The controller435communicates with the data recording device20in accordance with a specified communication standard via the antenna coil431. Specifically, for example, mutual authentication, transmitting and receiving commands, exchanging data, and the like are performed. The controller435stores information received from the data recording device20via the antenna coil431in the memory436. For example, the tension adjustment information received from the data recording device20via the antenna coil431is stored in the second storage region436B of the memory436. The controller435reads information from the memory436and transmits the read information to the data recording device20via the antenna coil431in response to a request from the data recording device20. For example, the tension adjustment information is read from the second storage region436B of the memory436in response to a request from the data recording device20, and transmitted to the data recording device20via the antenna coil431. [Configuration Example 2 of Cartridge] FIG.22is an exploded perspective view showing an example of a configuration of a cartridge521of the two-reel type. 
The cartridge521includes an upper half502formed of synthetic resin, a transparent window member523fitted and fixed to a window portion502aopened in the upper surface of the upper half502, a reel holder522that is fixed to the inside of the upper half502to prevent reels506and507from floating, a lower half505corresponding to the upper half502, the reels506and507housed in a space formed by combining the upper half502and the lower half505, a magnetic tape MT1wound on the reels506and507, a front lid509that closes the front-side opening formed by combining the upper half502and the lower half505, and a back lid509A that protects the magnetic tape MT1exposed to the front-side opening. The reel506includes a lower flange506bincluding a cylindrical hub portion506aon which the magnetic tape MT1is wound at the center, an upper flange506chaving substantially the same size as that of the lower flange506b, and a reel plate511sandwiched between the hub portion506aand the upper flange506c. The reel507has a configuration similar to that of the reel506. Mounting holes523afor assembling the reel holder522that is a reel holding means for preventing the reels506and507from floating are provided at positions corresponding to the reels506and507of the window member523. The magnetic tape MT1is configured similarly to the magnetic recording medium1in this embodiment described above. It should be noted that the present technology may take the following configurations.(1) A tape-shaped magnetic recording medium, including:a magnetic layer including a servo band, a servo signal being recorded on the servo band, in whichan index expressed by Sq×Fact.(p−p)/F0(p−p) is 0.42 or more, Sq being a squareness ratio of the magnetic layer in a perpendicular direction, F0(p−p) being a peak-to-peak value of a first magnetic force gradient strength observed by a magnetic force microscope when a servo signal is saturation-recorded on the magnetic layer, Fact.(p−p) being a peak-to-peak value of a second magnetic force gradient strength for the servo signal recorded on the servo band observed by the magnetic force microscope.(2) The magnetic recording medium according to (1) above, in whichthe index is 0.45 or more.(3) The magnetic recording medium according to (1) above, in whichthe squareness ratio (Sq) of the magnetic layer in the perpendicular direction is 0.5 or more.(4) The magnetic recording medium according to (3) above, in whichthe squareness ratio (Sq) of the magnetic layer in the perpendicular direction is 0.6 or more.(5) The magnetic recording medium according to (1) above, in whicha ratio (Fact.(p−p)/F0(p−p)) of Fact.(p−p) to F0(p−p) is 0.6 or more.(6) The magnetic recording medium according to (5) above, in whichthe ratio (Fact.(p−p)/F0(p−p)) of Fact.(p−p) to F0(p−p) is 0.7 or more.(7) The magnetic recording medium according to (1) above, in whicha residual magnetization (Mrt) of the magnetic layer is 0.39 or more.(8) The magnetic recording medium according to (7) above, in whichthe residual magnetization (Mrt) of the magnetic layer is 0.45 or more.(9) The magnetic recording medium according to any one of (1) to (8) above, in whichthe servo signal is a servo signal recording pattern including a plurality of stripes inclined at a predetermined azimuth angle with respect to a tape width direction.(10) The magnetic recording medium according to any one of (1) to (9), in whichthe magnetic layer contains a magnetic powder of hexagonal ferrite, ε-iron oxide, or cobalt ferrite.(11) The magnetic recording medium according to any one 
of (1) to (10) above, further including a non-magnetic layer provided between one main surface of a base material that supports the magnetic layer and the magnetic layer.(12) The magnetic recording medium according to any one of (1) to (11) above, further including a back layer provided on the other main surface of the base material.(13) The magnetic recording medium according to any one of (1) to (12) above, in which an average thickness of the magnetic recording medium is 5.6 μm or less.(14) The magnetic recording medium according to any one of (1) to (13) above, in which an average thickness of the magnetic recording medium is 5.4 μm or less.(15) The magnetic recording medium according to any one of (1) to (14) above, in which an average thickness of the magnetic recording medium is 5.2 μm or less.(16) The magnetic recording medium according to any one of (1) to (15) above, in which an average thickness of the magnetic recording medium is 5.0 μm or less.(17) The magnetic recording medium according to any one of (1) to (16) above, in which an average thickness of the non-magnetic layer is 0.6 μm or more and 2.0 μm or less.(18)(19) A cartridge, including: a tape-shaped magnetic recording medium including a magnetic layer including a servo band, a servo signal being recorded on the servo band, in which an index expressed by Sq×Fact.(p−p)/F0(p−p) is 0.42 or more, Sq being a squareness ratio of the magnetic layer in a perpendicular direction, F0(p−p) being a peak-to-peak value of a first magnetic force gradient strength observed by a magnetic force microscope when a servo signal is saturation-recorded on the magnetic layer, Fact.(p−p) being a peak-to-peak value of a second magnetic force gradient strength for the servo signal recorded on the servo band observed by the magnetic force microscope.
REFERENCE SIGNS LIST
1 magnetic recording medium
5 recording track
6 servo signal recording pattern
7 stripe
11 base material
12 non-magnetic layer
13 magnetic layer
14 back layer
200 servo signal recording device
210 servo write head
220 auxiliary magnetic pole
d data band
s servo band
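As a numerical aid to configuration (1) above, the index can be computed directly from the squareness ratio and the two peak-to-peak magnetic force gradient strengths; the values used in this Python sketch are hypothetical, not taken from the examples.

def servo_signal_index(sq, f0_pp, fact_pp):
    # Index = Sq x Fact.(p-p) / F0(p-p); configuration (1) calls for 0.42 or more
    return sq * (fact_pp / f0_pp)

# Hypothetical squareness ratio and MFM peak-to-peak readings
index = servo_signal_index(sq=0.65, f0_pp=1.00, fact_pp=0.72)
print(index, index >= 0.42)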
124,941
11862213
DETAILED DESCRIPTION Embodiments will be described hereinafter with reference to the accompanying drawings. In general, according to one embodiment, a magnetic disk device includes a disk that has a track including a first servo sector and a second servo sector that is different from the first servo sector, a head that writes data to the disk and reads data from the disk, and a controller that records first signal strength record data related to a signal strength at which first target servo data that is a target of the first servo sector is read, and standardizes first signal strength data related to a signal strength at which the first target servo data is read when the first target servo data is read. Embodiments will be described below with reference to the drawings. The drawings are merely examples, and do not limit the scope of the invention. First Embodiment FIG.1is a block diagram illustrating the configuration of a magnetic disk device1according to the first embodiment. The magnetic disk device1includes a head disk assembly (HDA) described later, a driver IC20, a head amplifier integrated circuit (hereinafter, head amplifier IC or preamplifier)30, a volatile memory70, a nonvolatile memory80, a buffer memory (buffer)90, and a system controller130that is a one-chip integrated circuit. The magnetic disk device1is connected to a host system (hereinafter, simply referred to as host)100. Note that the magnetic disk device1may be a two-dimensional magnetic recording (TDMR) magnetic disk device or the like having a plurality of read heads15R in a head15. The HDA includes a magnetic disk (hereinafter, referred to as disk)10, a spindle motor (hereinafter, referred to as SPM)12, an arm13on which the head15is mounted, and a voice coil motor (hereinafter, referred to as VCM)14. The disk10is attached to the SPM12and rotates by drive of the SPM12. The arm13and the VCM14constitute an actuator. By drive of the VCM14, the actuator controls movement of the head15mounted on the arm13to a predetermined position of the disk10. Two or more of the disks10and the heads15may be provided. Two or more actuators may also be provided. In the disk10, a user data region10aavailable from a user and a system area10bin which information necessary for system management is written are allocated to a region in which the data can be written. Note that as a region different from the user data region10aand the system area10b, a media cache (or sometimes referred to as media cache region) that temporarily holds data (or a command) transferred from the host or the like before being written to a predetermined region of the user data region10amay be allocated to the disk10. Hereinafter, a direction from the inner circumference toward the outer circumference of the disk10or a direction from the outer circumference toward the inner circumference of the disk10is referred to as a radial direction. In the radial direction, a direction from the inner circumference toward the outer circumference is referred to as an outer direction (or outside), and a direction from the outer circumference toward the inner circumference, that is, a direction opposite to the outer direction is referred to as an inner direction (or inside). A direction orthogonal to the radial direction of the disk10is referred to as a circumferential direction. That is, the circumferential direction corresponds to a direction along the circumference of the disk10. 
A predetermined position of the disk10in the radial direction is sometimes referred to as radial position, and a predetermined position of the disk10in the circumferential direction is sometimes referred to as circumferential position. The radial position and the circumferential position are sometimes collectively referred to simply as a position. The disk10is divided into a plurality of regions (hereinafter, referred to as a zone or a zone region) for each predetermined range in the radial direction. The zone includes a plurality of tracks. The track includes a plurality of sectors. Note that the “track” is used in various meanings such as a region among a plurality of regions into which the disk10is divided for each predetermined range in the radial direction, data written in a region among a plurality of regions into which the disk10is divided for each predetermined range in the radial direction, a region extending in the circumferential direction at a predetermined radial position of the disk10, data written in a region extending in the circumferential direction at a predetermined radial position of the disk10, a region for a circle of a predetermined radial position of the disk10, data for a circle written in a region for a circle of a predetermined radial position of the disk10, a path of the head15positioned and written at a predetermined radial position of the disk10, data written by the head15positioned at a predetermined radial position of the disk10, and data written in a predetermined track of the disk10. The “sector” is used in various meanings such as a region among a plurality of regions into which a predetermined track of the disk10is divided in the circumferential direction, data written in a region among a plurality of regions into which a predetermined track of the disk10is divided in the circumferential direction, a region of a predetermined circumferential position at a predetermined radial position of the disk10, data written in a region of a predetermined circumferential position at a predetermined radial position of the disk10, and data written in a predetermined sector of the disk10. The “radial width of the track” is sometimes referred to as “track width”. The center position of the track width is sometimes referred to as track center. The track center is sometimes simply referred to as track. The head15includes a write head15W, the read head15R, and a heater (heat generation element)15H mounted on a slider as a main body. The write head15W writes data on the disk10. For example, the write head15W writes a predetermined track or a predetermined sector onto the disk10. Hereinafter, “to write data” is sometimes referred to as “data write”, “write processing”, or the like. The read head15R reads data recorded on the disk10. For example, the read head15R reads a predetermined track or a predetermined sector of the disk10. Hereinafter, “to read data” is sometimes referred to as “data read”, “read processing”, or the like. Note that the “write head15W” is sometimes simply referred to as the “head15”, and the “read head15R” is sometimes simply referred to as the “head15”. The “write head15W and read head15R” are sometimes collectively referred to as the “head15”. The “center part of the head15” is sometimes referred to as the “head15”, the “center part of the write head15W” is sometimes referred to as the “write head15W”, and the “center part of the read head15R” is sometimes referred to as the “read head15R”. 
The “center part of the write head15W” is sometimes referred to as the “head15”, and the “center part of the read head15R” is sometimes referred to as the “head15”. To “position center part of the head15at a predetermined position” is sometimes expressed as to “position the head15at a predetermined position”, to “arrange the head15at a predetermined position”, to “locate the head15in a predetermined position”, or the like. To “position the center part of the head15at a target position of a predetermined region (hereinafter, sometimes referred to as region target position), for example, to position the center part of the head15at a radial center of the predetermined region” is sometimes expressed as to “position the head15at a predetermined region”, to “arrange the head15at a predetermined region”, to “locate the head15at a predetermined region”, to “position at a predetermined region”, to “arrange at a predetermined region”, to “locate at a predetermined region”, or the like. To “position the center part of the head15at a target position of a predetermined region (hereinafter, sometimes referred to as track target position), for example, to position the center part of the head15at a track center” is sometimes expressed as to “position the head15at a predetermined track”, to “arrange the head15at a predetermined track”, to “locate the head15at a predetermined track”, to “position at a track”, to “arrange at a track”, to “locate at a track”, or the like. The heater15H generates heat by being supplied with power. The heater15H may be separately provided in the vicinity of the write head15W and in the vicinity of the read head15R. In a case where the magnetic disk device1is a TDMR type magnetic disk device, the head15may include one write head15W, a plurality of read heads15R, and at least one heater15H. FIG.2is a schematic view illustrating an example of the configuration of the disk10according to the present embodiment. As illustrated inFIG.2, in the circumferential direction, a direction in which the disk10rotates is referred to as a rotation direction. In the example illustrated inFIG.2, the rotation direction is indicated in the anticlockwise direction, but may be in the opposite direction (clockwise direction). The disk10has a plurality of servo regions SV and a plurality of data regions DA. For example, the plurality of servo regions SV may extend radially in the radial direction of the disk10and may be discretely arranged at predetermined intervals in the circumferential direction. For example, the plurality of servo regions SV may extend spirally from the inner circumference to the outer circumference or from the outer circumference to the inner circumference, and may be discretely arranged at predetermined intervals in the circumferential direction. For example, the plurality of servo regions SV may be arranged in an island shape in the radial direction and may be discretely arranged at predetermined intervals varying in the circumferential direction. Hereinafter, “one servo region SV in a predetermined track” is sometimes referred to as “servo sector”. That is, the servo region SV has at least one servo sector. Note that the “servo region SV” is sometimes referred to as “servo sector SV”. The servo sector includes servo data. Hereinafter, the “arrangement and the like of several servo data constituting a servo sector” is sometimes referred to as “servo pattern”. Note that the “servo data written in a servo sector” is sometimes referred to as “servo sector”. 
Each of the plurality of data regions DA is arranged between the plurality of servo regions SV. For example, the data region DA corresponds to a region between two consecutive servo regions SV in the circumferential direction. Note that the “one data region DA in a predetermined track” is sometimes referred to as “data sector region”. That is, the data region DA has at least one data sector region. Note that the “data region DA” is sometimes referred to as “data sector region DA”. The data sector region includes at least one sector. The “data sector region” is sometimes referred to as “sector”. Note that “data written in a data sector region” is sometimes referred to as “data sector region”. The head15rotates about a rotation axis by drive of the VCM14with respect to the disk10and moves to a predetermined position from the inner direction toward the outer direction, or moves from the outer direction toward the inner direction. FIG.3is a schematic view illustrating an example of the configuration of a servo sector SS and a data sector region DSR according to the present embodiment.FIG.3illustrates the predetermined servo sector SS and the data sector region DSR written in a predetermined track TRn of the disk10. As illustrated inFIG.3, in the circumferential direction, a direction toward a tip of a front arrow is referred to as a front (or front direction), and a direction toward a tip of a rear arrow is referred to as a rear (or rear direction). For example, in the circumferential direction, a direction to read/write (read/write direction) corresponds to a direction from the front direction toward the rear direction. The read/write direction may correspond to a direction from the rear direction toward the front direction. The read/write direction corresponds to a direction opposite to the rotation direction illustrated inFIG.2, for example. The servo sector SS includes servo data, for example, a preamble, a sync mark, a gray code, a position error signal (PES), a repeatable run-out (RRO), and the like. Note that the servo sector SS may include servo data other than the preamble, the sync mark, the gray code, the PES, and the RRO. In the servo sector SS, the preamble, the sync mark, the gray code, the PES, and the RRO are continuously arranged in this order from the front to the rear in the circumferential direction. The preamble includes preamble information for synchronization with a reproduction signal of a servo pattern including the sync mark and the gray code. The sync mark includes sync mark information indicating the start of the servo pattern. The gray code includes an address (cylinder address) of a predetermined track and an address of a servo sector of the predetermined track. The PES corresponds to data corresponding to a tracking position error signal. The RRO is data related to eccentricity of the disk10. For example, the RRO corresponds to a path that is a target (hereinafter, sometimes referred to as target path) of the head15arranged concentrically with the disk10caused by blurring (repeatable run-out: RRO) synchronized with rotation of the disk10when servo data is written into the disk, for example, data corresponding to an error caused by track distortion with respect to a track center. 
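The circumferential ordering of the servo fields described above (preamble, sync mark, gray code, PES, RRO) can be captured in a simple data structure, shown here as a Python sketch; the field types and the example values are placeholders assumed for illustration, not values defined by the embodiment.

from dataclasses import dataclass

@dataclass
class ServoSectorSS:
    # Fields in circumferential order, from front to rear
    preamble: bytes    # synchronization pattern for the servo reproduction signal
    sync_mark: bytes   # indicates the start of the servo pattern
    gray_code: int     # cylinder address and servo sector address
    pes: float         # tracking position error signal
    rro: float         # repeatable run-out correction data

    def fields_in_order(self):
        return [self.preamble, self.sync_mark, self.gray_code, self.pes, self.rro]

# Hypothetical instance
ss = ServoSectorSS(preamble=b"\xaa" * 8, sync_mark=b"\x5c", gray_code=0x03F2, pes=0.01, rro=-0.004)
print(ss.fields_in_order())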
In the example illustrated inFIG.3, the data sector region DSR includes a region or data (hereinafter, sometimes referred to as signal strength target region or signal strength target servo data) that is a target of the predetermined servo sector SS when the servo sector SS is read in a region (hereinafter, sometimes referred to as signal strength record region) adjacent to the servo sector SS, for example, data (hereinafter, sometimes referred to as signal strength record data) SIS related to the signal strength of a reproduction signal (hereinafter, sometimes referred to as target servo reproduction signal) corresponding to a part or entirety of the servo sector SS. Note that the signal strength record region may be included in the servo sector SS or may be included in a region other than the data sector region DSR and the servo sector SS. The “signal strength record data SIS” is sometimes referred to as “signal strength record region SIS”. The “signal strength record region SIS” is sometimes referred to as “signal strength record data SIS”. Hereinafter, the “signal strength target region”, the “signal strength target servo data”, and the “part or entirety of the servo sector SS” are sometimes simply referred to as the “servo sector SS”. The signal strength record region is adjacent to the servo sector SS in the read/write direction. In other words, the signal strength record region is adjacent immediately after the servo sector SS. For example, the signal strength record region is adjacent immediately after the RRO of the servo sector SS. Note that the signal strength record region needs not be adjacent to the servo sector SS in the read/write direction. The signal strength record region needs not be adjacent to the servo sector SS. For example, the signal strength record region needs not be adjacent to the RRO of the servo sector SS. The term “adjacent” includes not only meanings such as “continuous” and “arranged side by side in contact with each other” in a predetermined direction but also meanings such as “separated to such an extent as to be regarded as substantially continuous”. The signal strength record data SIS is adjacent to the servo sector SS in the read/write direction. In other words, the signal strength record data SIS is adjacent immediately after the servo sector SS. For example, the signal strength record data SIS is adjacent immediately after the RRO of the servo sector SS. Note that the signal strength record data SIS needs not be adjacent to the servo sector SS in the read/write direction. The signal strength record data SIS needs not be adjacent to the servo sector SS. For example, the signal strength record data SIS needs not be adjacent to the RRO of the servo sector SS. For example, the signal strength target region, the signal strength target servo data, and the signal strength record data SIS are servo data that can always obtain a same read signal (or reproduction signal) for which rewrite processing of rewriting data of a predetermined region to this region or the like is not executed. Terms such as “same”, “identical”, “match”, and “equivalent” include not only the meaning of exactly the same but also the meaning of being different to an extent that can be regarded as being substantially the same. The signal strength record data SIS is data related to the signal strength when the signal strength target region (or the signal strength target servo data) is read. 
The signal strength record data SIS is a value obtained by performing Fourier transform on the target servo reproduction signal, for example. The signal strength record data SIS is a value obtained by performing Fourier transform on and dividing each of, for example, the target servo reproduction signal and an ideal signal or a demodulated signal. The signal strength record data SIS is, for example, a ½ subharmonic after the Fourier transform of the preamble that is a 2T pattern, and is a fundamental frequency or an n-th harmonic obtained by performing the Fourier transform on and dividing the reproduction signal (target servo reproduction signal) of sync mark/gray code/RRO and the ideal signal or the demodulated signal. The signal strength record data SIS is an amplitude of the target servo reproduction signal, for example (hereinafter, sometimes referred to as target servo reproduction signal amplitude). Note that the plurality of servo sectors SS may include a normal servo sector (hereinafter, normal servo sector) and a short servo sector. The normal servo sector corresponds to, for example, the servo sector SS illustrated inFIG.3. For example, the short servo sector has less servo data to be read than that of the normal servo sector, has a smaller number of servo data than that of the normal servo sector, and has a length smaller than the circumferential length of the normal servo sector. When the plurality of servo sectors SS include the normal servo sector and the short servo sector, the signal strength record region may be arranged immediately after the read/write direction of the normal servo sector, and needs not be arranged between the short servo sector and a next servo sector in the read/write direction of this short servo sector. In other words, when the plurality of servo sectors SV include the normal servo sector and the short servo sector, the signal strength record region is adjacent in the read/write direction of the normal servo sector and is not adjacent in the read/write direction of the short servo sector. When the plurality of servo sectors SS include the normal servo sector and the short servo sector, the signal strength record data SIS may be written immediately after the read/write direction of the normal servo sector, and needs not be written between the short servo sector and a next servo sector in the read/write direction of this short servo sector. In other words, when the plurality of servo sectors SS include the normal servo sector and the short servo sector, the signal strength record data SIS is adjacent in the read/write direction of the normal servo sector and is not adjacent in the read/write direction of the short servo sector. FIG.4is a schematic view illustrating an example of the disk10and the head15before expansion. InFIG.4, a rotation direction B of the disk10matches the direction of an air flow C.FIG.4illustrates a direction Z corresponding to a thickness or a height direction. Hereinafter, a direction from the head15toward the disk10in the direction Z is sometimes referred to as downward direction (or simply down), and a direction from the disk10toward the head15in the direction Z is sometimes referred to an upward direction (or simply up). The head15includes a slider150. The slider150is formed of, for example, a sintered body (AlTiC) of alumina and titanium carbide. The slider150has a disk opposing surface (air bearing surface (ABS))15S opposing a surface10S of the disk10, and a trailing end151positioned on an outflow side of the air flow C. 
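A rough numerical sketch of the signal-strength idea described above: the preamble is treated as a 2T pattern, its reproduction waveform is Fourier-transformed, the magnitude at the pattern frequency is taken as a signal strength, and a current reading is then standardized (normalized) by the recorded signal strength record data SIS. The sampling layout, noise level, and function names below are assumptions for illustration only, not the embodiment's demodulation.

import numpy as np

def preamble_signal_strength(samples, samples_per_2t_period):
    # Magnitude of the FFT bin at the 2T preamble frequency, used here as a proxy for reproduction signal strength
    spectrum = np.abs(np.fft.rfft(np.asarray(samples, dtype=float)))
    bin_index = int(round(len(samples) / samples_per_2t_period))
    return spectrum[bin_index]

def standardized_strength(current_samples, recorded_sis, samples_per_2t_period):
    # Standardize the current reading by the recorded signal strength record data SIS
    return preamble_signal_strength(current_samples, samples_per_2t_period) / recorded_sis

# Hypothetical preamble waveform: 64 periods of a 2T-like tone, 8 samples per period, with mild noise
rng = np.random.default_rng(0)
n, spp = 64 * 8, 8
t = np.arange(n)
reference = np.sin(2 * np.pi * t / spp)
sis = preamble_signal_strength(reference, spp)                  # value recorded as SIS
weaker_read = 0.8 * reference + 0.05 * rng.standard_normal(n)   # later read with reduced amplitude
print(standardized_strength(weaker_read, sis, spp))             # roughly 0.8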
The slider150includes the write head15W, the read head15R, and the heater15H. The write head15W and the read head15R are partially exposed to the disk opposing surface15S. The write head15W is magnetized when a recording magnetic field is excited by supplying a current (write current or recording current) of a predetermined magnitude. By changing the magnetization direction of a recording bit of a magnetic recording layer of the disk10by a magnetic flux flowing through a magnetized part, the write head15W records, on the disk10, a magnetization pattern corresponding to the recording current. As illustrated inFIG.4, when the heater15H does not generate heat, the region (hereinafter, sometimes referred to as record/reproduction part) WRP surrounding the write head15W and the read head15R does not protrude toward the disk10. Hereinafter, the interval in the direction Z between the disk10and the head15, for example, the lowermost part (hereinafter, sometimes referred to as flying lowermost point) of the head15(the surrounding of the write head15W and the read head15R), is sometimes referred to as “flying height”.

FIG.5is a schematic view illustrating an example of the disk10and the head15after expansion. As illustrated inFIG.5, when the heater15H generates heat, the record/reproduction part WRP expands (thermally expands) by the heat of the heater15H and protrudes toward the disk10. In this case, the vertex of the thermally expanded record/reproduction part WRP becomes the flying lowermost point of the head15.

The driver IC20controls drive of the SPM12and the VCM14according to control of the system controller130(an MPU60described later in detail).

The head amplifier IC (preamplifier)30includes a read amplifier and a write driver. The read amplifier amplifies the read signal read from the disk10and outputs it to the system controller130(more specifically, to a read/write (R/W) channel40described later in detail). The write driver outputs, to the head15, a write current corresponding to a signal output from the R/W channel40.

The volatile memory70is a semiconductor memory in which stored data is lost when power supply is cut off. The volatile memory70stores data and the like necessary for processing in each section of the magnetic disk device1. The volatile memory70is, for example, a dynamic random access memory (DRAM) or a synchronous dynamic random access memory (SDRAM).

The nonvolatile memory80is a semiconductor memory that retains stored data even when power supply is cut off. The nonvolatile memory80is, for example, a flash read only memory (FROM) of a NOR type or a NAND type.

The buffer memory90is a semiconductor memory that temporarily records data and the like transmitted and received between the magnetic disk device1and the host100. The buffer memory90may be configured integrally with the volatile memory70. The buffer memory90is, for example, a DRAM, a static random access memory (SRAM), an SDRAM, a ferroelectric random access memory (FeRAM), a magnetoresistive random access memory (MRAM), or the like.

The system controller (controller)130is achieved by using, for example, a large-scale integrated circuit (LSI) called a system-on-a-chip (SoC) in which a plurality of elements are integrated on a single chip. The system controller130includes the read/write (R/W) channel40, a hard disk controller (HDC)50, and a microprocessor (MPU)60.
The system controller130is electrically connected to, for example, the driver IC20, the head amplifier IC30, the volatile memory70, the nonvolatile memory80, the buffer memory90, the host100, and the like. In response to an instruction from an MPU60described later, the R/W channel40executes signal processing of data (hereinafter, sometimes referred to as read data) transferred from the disk10to the host100and data (hereinafter, sometimes referred to as write data) transferred from the host100. The R/W channel40has a circuit or a function for modulating write data. The R/W channel40has a circuit or a function of measuring and demodulating the signal quality of read data. The R/W channel40is electrically connected to, for example, the head amplifier IC30, the HDC50, the MPU60, and the like. The HDC50controls data transfer. For example, the HDC50controls data transfer between the host100and the disk10in response to an instruction from the MPU60described later. The HDC50is electrically connected to, for example, the R/W channel40, the MPU60, the volatile memory70, the nonvolatile memory80, the buffer memory90, and the like. The MPU60is a main controller that controls each section of the magnetic disk device1. The MPU60controls the VCM14via the driver IC20to execute servo control for positioning the head15. The MPU60controls the SPM12via the driver IC20to rotate the disk10. The MPU60controls a write operation of data to the disk10and selects a storage destination of data transferred from the host100, for example, write data. The MPU60controls a read operation of data from the disk10and controls processing of data transferred from the disk10to the host100, for example, read data. The MPU60manages a region in which data is recorded. The MPU60is connected to each section of the magnetic disk device1. The MPU60is electrically connected to, for example, the driver IC20, the R/W channel40, the HDC50, and the like. The MPU60includes a read/write control section610, a flying height control section620, and a high fly write (HFW) detection section630. The MPU60executes, on firmware, processing of each section, for example, the read/write control section610, the flying height control section620, the HFW detection section630, and the like. The MPU60may include, as a circuit, each section, for example, the read/write control section610, the flying height control section620, the HFW detection section630, and the like. The read/write control section610, the flying height control section620, the HFW detection section630, and the like may be included in the R/W channel40or the HDC50. The read/write control section610controls read processing of reading data from the disk10and write processing of writing data to the disk10according to a command or the like from the host100. The read/write control section610controls the VCM14via the driver IC20, positions the head15at a predetermined position on the disk10, and executes read processing or write processing. Hereinafter, the term “access” is sometimes used in the sense including recording or writing data into a predetermined region (write processing), reading out or reading data from a predetermined region (read processing), and moving the head15or the like to a predetermined region. The flying height control section620controls the flying height of the head15. 
The flying height control section620controls the flying height of the head15(for example, the record/reproduction part WRP) by controlling the current applied (or voltage applied) from the head amplifier IC30to the heater15H. The flying height control section620controls the flying height of the head15to a predetermined flying height (hereinafter, sometimes referred to as normal flying height) at which write processing or read processing of data can be normally performed.

The HFW detection section630detects high fly write (HFW). The HFW is an event in which the head15comes into contact with contamination occurring on the disk10and lifts to a flying height (hereinafter, sometimes referred to as high flying height or abnormal flying height) higher than the normal flying height, so that magnetization by the write head15W becomes insufficient in a predetermined region of the disk10for overwriting this region, data cannot be normally written into this region, and a read error is caused when this region is read. The HFW detection section630writes or records each signal strength record data SIS corresponding to each servo sector SS as an RRO component for each servo sector SS. The HFW detection section630continuously writes the signal strength record data SIS corresponding to the predetermined servo sector SS immediately after in the read/write direction of the predetermined servo sector SS or a servo sector (hereinafter, sometimes referred to as another servo sector) SS different from the predetermined servo sector SS. In other words, the HFW detection section630writes the signal strength record data SIS corresponding to the predetermined servo sector SS into the signal strength record region immediately after in the read/write direction of the predetermined servo sector SS or the other servo sector SS. The HFW detection section630may continuously write the signal strength record data SIS corresponding to the predetermined servo sector SS immediately after in the read/write direction of this servo sector SS, or may continuously write the same immediately after in the read/write direction of a servo sector SS other than this servo sector SS. The HFW detection section630may record the signal strength record data SIS corresponding to the predetermined servo sector SS into a region other than the signal strength record region immediately after in the read/write direction of this servo sector SS, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90, or may record the same into a region other than the signal strength record region immediately after in the read/write direction of the servo sector SS other than this servo sector SS, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. For example, in a predetermined track, the HFW detection section630continuously writes, immediately after in the read/write direction of the currently targeted servo sector (hereinafter, sometimes referred to as current servo sector) SS, the signal strength record data (hereinafter, sometimes referred to as next signal strength record data) SIS corresponding to the servo sector (hereinafter, sometimes referred to as next servo sector) SS to be targeted next, which is arranged second after the current servo sector SS in the read/write direction.
For example, the HFW detection section630may continuously write the signal strength record data (hereinafter, sometimes referred to as current signal strength record data) SIS corresponding to the current servo sector SS immediately after in the read/write direction of the current servo sector SS. Note that HFW detection section630may continuously write the current signal strength record data SIS corresponding to the current servo sector SS immediately after in the read/write direction of this current servo sector SS, or may continuously write the same immediately after in the read/write direction of the servo sector SS other than this current servo sector SS. The HFW detection section630may write the current signal strength record data SIS corresponding to the current servo sector SS into a region other than the signal strength record region immediately after in the read/write direction of this current servo sector SS, or may write the same into a region other than the signal strength record region immediately after in the read/write direction of the servo sector SS other than this current servo sector SS. The HFW detection section630detects HFW by monitoring a frequency component of the target servo reproduction signal or a ratio of this frequency component during the write processing. In a case of reading a predetermined region where data is written by the head15having the high flying height, the amplitude of the reproduction signal when reading this region decreases, and therefore, the frequency component of this reproduction signal decreases or the ratio between the fundamental frequency and the third harmonic of this reproduction signal changes. When reading the predetermined servo sector SS during the write processing, the HFW detection section630standardizes the signal strength (hereinafter, sometimes referred to as target servo reproduction signal strength) of the target servo reproduction signal of this servo sector SS that has just been read. When reading the predetermined servo sector SS during the write processing, the HFW detection section630standardizes the target servo reproduction signal strength corresponding to this servo sector SS that has just been read based on the signal strength record data SIS corresponding to this servo sector SS that has been written by reading in advance the target region of this servo sector SS. For example, during the write processing, the HFW detection section630standardizes this target servo reproduction signal strength by dividing or subtracting the signal strength record data SIS corresponding to this servo sector SS from the target servo reproduction signal strength corresponding to the predetermined servo sector SS. In other words, during the write processing, the HFW detection section630divides or subtracts the signal strength record data SIS corresponding to this servo sector SS from the target servo reproduction signal strength corresponding to the predetermined servo sector SS to calculate the standardized target servo reproduction signal strength (hereinafter, sometimes referred to as standardized reproduction signal strength) corresponding to this servo sector SS. The signal strength record data SIS corresponding to the predetermined servo sector SS and the target servo reproduction signal strength corresponding to this servo sector SS are signal strengths when the same data in the same region of this servo sector are read at different timings, for example. 
Note that the signal strength record data SIS corresponding to the predetermined servo sector SS and the target servo reproduction signal strength corresponding to this servo sector SS may be the signal strengths in a case where the same data or different data in the same region or different regions of this servo sector SS are read. The target servo reproduction signal strength is a value obtained by performing Fourier transform on the target servo reproduction signal similarly to the signal strength record data SIS, for example. Similarly to the signal strength record data SIS, for example, the target servo reproduction signal strength is a value obtained by performing Fourier transform on and dividing each of the target servo reproduction signal and an ideal signal or a demodulated signal. Similarly to the signal strength record data SIS, for example, the target servo reproduction signal strength is a ½ subharmonic after the Fourier transform of the preamble that is a 2T pattern, and is a fundamental frequency or an n-th harmonic obtained by performing the Fourier transform on and dividing the reproduction signal (target servo reproduction signal) of sync mark/gray code/RRO and the ideal signal or the demodulated signal. Similarly to the signal strength record data SIS, for example, the target servo reproduction signal strength is an amplitude of the target servo reproduction signal (hereinafter, sometimes referred to as target servo reproduction signal amplitude). For example, when reading the current servo sector SS during the write processing, the HFW detection section630standardizes the signal strength (hereinafter, sometimes referred to as current target servo reproduction signal strength) of the target servo reproduction signal (hereinafter, sometimes referred to as current target servo reproduction signal) corresponding to the current servo sector that has just been read. When reading the current servo sector SS during the write processing, the HFW detection section630standardizes the current target servo reproduction signal strength corresponding to the current servo sector SS based on the current signal strength record data SIS. For example, during the write processing, the HFW detection section630standardizes the current target servo reproduction signal strength by dividing or subtracting the current signal strength record data SIS from the current target servo reproduction signal strength corresponding to the current servo sector SS. In other words, during the write processing, the HFW detection section630divides or subtracts the current signal strength record data SIS from the current target servo reproduction signal strength corresponding to the current servo sector SS to calculate the standardized current target servo reproduction signal strength (hereinafter, sometimes referred to as current standardized reproduction signal strength). The HFW detection section630determines whether the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than a threshold (hereinafter, sometimes referred to as HFW threshold) or equal to or greater than the HFW threshold (or equal to or less than the HFW threshold or larger than the HFW threshold). 
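As a non-limiting illustration only, the Fourier-based signal strength values described above, for both the signal strength record data SIS and the target servo reproduction signal strength, might be computed from sampled waveforms roughly as in the following Python sketch. The sample rate, the frequency, the waveforms, and the function names are assumptions made for this sketch and are not taken from the embodiment.

import numpy as np

def single_bin_strength(samples, freq, fs):
    # Magnitude of one DFT bin of the sampled signal at frequency freq (Hz); fs is the sample rate.
    n = np.arange(len(samples))
    return abs(np.sum(samples * np.exp(-2j * np.pi * freq * n / fs))) * 2.0 / len(samples)

def relative_strength(reproduced, ideal, freq, fs):
    # Strength of the reproduced signal divided by the strength of an ideal or demodulated reference.
    return single_bin_strength(reproduced, freq, fs) / single_bin_strength(ideal, freq, fs)

fs = 1.0e9                                   # illustrative sample rate
f0 = 50.0e6                                  # fundamental of a synthetic preamble-like tone
rng = np.random.default_rng(0)
t = np.arange(2000) / fs
ideal = np.sin(2 * np.pi * f0 * t)           # stand-in for the expected (ideal) waveform
reproduced = 0.8 * ideal + 0.02 * rng.standard_normal(t.size)   # attenuated, noisy readback
print(single_bin_strength(reproduced, f0, fs))       # absolute strength, approximately 0.8
print(relative_strength(reproduced, ideal, f0, fs))  # ratio to the ideal signal, approximately 0.8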
For example, the HFW detection section630determines whether the current standardized reproduction signal strength corresponding to the current servo sector SS is smaller than the HFW threshold or equal to or greater than the HFW threshold (or equal to or less than the HFW threshold or larger than the HFW threshold). If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold (or equal to or less than the HFW threshold), the HFW detection section630determines that HFW occurs in a predetermined region of the disk10. If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is equal to or greater than the HFW threshold (or larger than the HFW threshold), the HFW detection section630determines that HFW does not occur in a predetermined region of the disk10. For example, if determining that the current standardized reproduction signal strength corresponding to the current servo sector is smaller than the HFW threshold (or equal to or less than the HFW threshold), the HFW detection section630determines that HFW occurs in a predetermined region of the disk10. If determining that the current standardized reproduction signal strength corresponding to the current servo sector is equal to or greater than the HFW threshold (or larger than the HFW threshold), the HFW detection section630determines that HFW does not occur in a predetermined region of the disk10. Note that if determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is greater than the HFW threshold (or equal to or greater than the HFW threshold), the HFW detection section630may determine that HFW occurs in a predetermined region of the disk10. If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is equal to or less than the HFW threshold (or smaller than the HFW threshold), the HFW detection section630may determine that HFW occurs in a predetermined region of the disk10. If determining that HFW occurs in a predetermined region of the disk10, the HFW detection section630stops the write operation in the predetermined region of the disk10. For example, if determining that HFW occurs in a predetermined region of the disk10, the HFW detection section630stops the write operation in the predetermined region of the disk10and executes rewrite processing on the predetermined region of the disk10. For example, if determining that HFW occurs in the predetermined region of the disk10based on the standardized reproduction signal strength corresponding to the predetermined servo sector SS, the HFW detection section630stops the write operation in the predetermined region of the disk10and executes rewrite processing on the predetermined region of the disk10. For example, if determining that HFW occurs in the predetermined region of the disk10based on the signal strength record data corresponding to the predetermined servo sector SS, the HFW detection section630stops the write operation in the predetermined region of the disk10and executes rewrite processing on the data sector region DSR immediately before this servo sector SS. 
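As a non-limiting numerical illustration of the standardization and threshold comparison described above (all numbers are invented for this sketch), the raw strengths of two servo sectors can differ even for normal writes, while dividing each strength by its recorded SIS, or subtracting the SIS, yields values to which one HFW threshold can be applied:

def standardize(strength, sis, use_division=True):
    # Divide the just-read strength by the recorded SIS, or subtract the SIS from it.
    return strength / sis if use_division else strength - sis

sectors = {  # recorded SIS and strengths read back during write processing (illustrative values)
    "SS(k-1)": {"sis": 0.95, "normal": 0.94, "high_fly": 0.60},
    "SS(k)":   {"sis": 0.70, "normal": 0.69, "high_fly": 0.45},
}
for name, s in sectors.items():
    print(name,
          round(standardize(s["normal"], s["sis"]), 2),    # close to 1.0 for a normal write
          round(standardize(s["high_fly"], s["sis"]), 2))  # clearly below 1.0 under HFW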
For example, if determining that HFW occurs in the predetermined region of the disk10, the HFW detection section630stops the write operation in the predetermined region of the disk10, and executes processing (hereinafter, sometimes referred to as saving processing) of recording or storing data in the predetermined region of the disk10in another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. For example, if determining that HFW occurs in the predetermined region of the disk10based on the standardized reproduction signal strength corresponding to the predetermined servo sector SS, the HFW detection section630stops the write operation in the predetermined region of the disk10, and executes the saving processing of the data in the predetermined region of the disk10to another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. For example, when HFW detection section630determines that HFW occurs in the predetermined region of the disk10based on the signal strength record data corresponding to the predetermined servo sector SS, the write operation is stopped in the predetermined region of the disk10, and the data of the data sector region DSR immediately before the servo sector SS is saved in another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. FIG.6is a schematic view illustrating an example of arrangement of the signal strength record data SIS according to the present embodiment.FIG.6illustrates a track TRm. The track TRm includes servo sectors SS (k−1), SS (k), and SS (k+1), and signal strength record data SIS (k), SIS (k+1), and SIS (k+2). InFIG.6, the servo sectors SS (k−1), SS (k), and SS (k+1) are arranged at intervals in the read/write direction in the described order. In other words, the servo sector SS (k) is arranged at intervals in the read/write direction of the servo sector SS (k−1). The servo sector SS (k+1) is arranged at intervals in the read/write direction of the servo sector SS (k). InFIG.6, the signal strength record data SIS (k), SIS (k+1), and SIS (k+2) are arranged at intervals in the read/write direction in the described order. In other words, the signal strength record data SIS (k+1) is arranged at intervals in the read/write direction of the signal strength record data SIS (k). The signal strength record data SIS (k+2) is arranged at intervals in the read/write direction of the signal strength record data SIS (k+1). The signal strength record data SIS (k) is arranged between the servo sectors SS (k−1) and SS (k), and is adjacent in the read/write direction of the servo sector SS (k−1). The signal strength record data SIS (k) corresponds to the servo sector SS (k). The signal strength record data SIS (k+1) is arranged between the servo sectors SS (k) and SS (k+1), and is adjacent in the read/write direction of the servo sector SS (k). The signal strength record data SIS (k+1) corresponds to the servo sector SS (k+1). The signal strength record data SIS (k+2) is adjacent in the read/write direction of the servo sector SS (k+1). The signal strength record data SIS (k+2) corresponds to the servo sector SS (k+2) next to the servo sector SS (k+1) not illustrated. 
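One possible reading of the arrangement ofFIG.6, combined with the statement above that the data sector region DSR corresponding to a servo sector SS lies immediately before that servo sector, is sketched below in Python; the field labels and indices are assumptions for illustration only.

def track_layout(first_k, count):
    # Regions of one track in the read/write direction: SS (k), then SIS (k+1) at the head of
    # the following data region, then the rest of that data region DSR (k+1), then SS (k+1), ...
    layout = []
    for k in range(first_k, first_k + count):
        layout.append(("SS", k))        # servo sector SS (k)
        layout.append(("SIS", k + 1))   # record data for the next servo sector SS (k+1)
        layout.append(("DSR", k + 1))   # data region that is checked when SS (k+1) is read
    return layout

print(track_layout(first_k=3, count=2))
# [('SS', 3), ('SIS', 4), ('DSR', 4), ('SS', 4), ('SIS', 5), ('DSR', 5)]

Under this reading, the signal strength record data SIS (k) and the data sector region DSR (k) both sit between the servo sectors SS (k−1) and SS (k) and are both associated with the servo sector SS (k).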
In the example illustrated inFIG.6, in the track TRm, the MPU60writes the signal strength record data SIS (k) adjacent in the read/write direction of the servo sector SS (k−1), writes the signal strength record data SIS (k+1) adjacent in the read/write direction of the servo sector SS (k), and writes the signal strength record data SIS (k+2) adjacent in the read/write direction of the servo sector SS (k+1). In other words, in the track TRm, the MPU60writes the signal strength record data SIS (k) immediately after the servo sector SS (k−1), writes the signal strength record data SIS (k+1) immediately after the servo sector SS (k), and writes the signal strength record data SIS (k+2) immediately after the servo sector SS (k+1). FIG.7is a schematic view illustrating an example of the HFW detection method according to the present embodiment. The track TRm illustrated inFIG.7corresponds to the track TRm illustrated inFIG.6.FIG.7illustrates an HFW threshold HTH. InFIG.7, the signal strength record data SIS (k−1) corresponds to the signal strength record data corresponding to the servo sector SS (k−1). In the example illustrated inFIG.7, the MPU60reads the signal strength record data SIS (k−1) corresponding to the servo sector SS (k−1) during the write processing of the track TRm. The MPU60reads the target servo reproduction signal strength corresponding to the servo sector SS (k−1). The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k−1) to the standardized reproduction signal strength based on the signal strength record data SIS (k−1) corresponding to the servo sector SS (k−1). The MPU60determines whether the standardized reproduction signal strength corresponding to the servo sector SS (k−1) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold. In the example shown inFIG.7, if determining that the standardized reproduction signal strength corresponding to the servo sector SS (k−1) is smaller than the HFW threshold HTH, the MPU60stops the write operation in a data sector region DSR (k−1) corresponding to the servo sector SS (k−1) of the disk10, and executes rewrite processing on the data sector region DSR (k−1) corresponding to the servo sector SS (k−1). In the example illustrated inFIG.7, during the write processing, the MPU60reads the target servo reproduction signal strength of the servo sector SS (k−1) and reads the signal strength record data SIS (k). The MPU60reads the target servo reproduction signal strength of the servo sector SS (k) during the write processing. The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k) to the standardized reproduction signal strength based on the signal strength record data SIS (k) corresponding to the servo sector SS (k). The MPU60determines whether the standardized reproduction signal strength corresponding to the servo sector SS (k) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold. In the example shown inFIG.7, if determining that the standardized reproduction signal strength corresponding to the servo sector SS (k) is smaller than the HFW threshold HTH, the MPU60stops the write operation in a data sector region DSR (k) corresponding to the servo sector SS (k) of the disk10, and executes rewrite processing on the data sector region DSR (k) corresponding to the servo sector SS (k). 
In the example illustrated inFIG.7, during the write processing, the MPU60reads the target servo reproduction signal strength of the servo sector SS (k) and reads the signal strength record data SIS (k+1). The MPU60reads the target servo reproduction signal strength of the servo sector SS (k+1) during the write processing. The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k+1) to the standardized reproduction signal strength based on the signal strength record data SIS (k+1) corresponding to the servo sector SS (k+1). The MPU60determines whether the standardized reproduction signal strength corresponding to the servo sector SS (k+1) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold. In the example shown inFIG.7, if determining that the standardized reproduction signal strength corresponding to the servo sector SS (k+1) is smaller than the HFW threshold HTH, the MPU60stops the write operation in a data sector region DSR (k+1) corresponding to the servo sector SS (k+1) of the disk10, and executes rewrite processing on the data sector region DSR (k+1) corresponding to the servo sector SS (k+1). As illustrated inFIG.7, the MPU60reads and demodulates the signal strength record data SIS that is adjacent immediately after, in the read/write direction, the servo sector (hereinafter, sometimes referred to as preceding servo sector) SS that was targeted before the current servo sector SS and is arranged second before it, and standardizes the current target servo reproduction signal strength of the current servo sector SS with this data, and thus it is possible to minimize delay in write fault determination.

FIG.8is a schematic view illustrating an example of a change in the target servo reproduction signal strength of the target servo reproduction signal of each servo sector SS with respect to each servo sector SS when each servo sector SS written by the head15having the normal flying height and the high flying height is read. InFIG.8, the horizontal axis represents the servo sector SS, and the vertical axis represents the target servo reproduction signal strength. In the vertical axis ofFIG.8, the target servo reproduction signal strength increases toward the tip side of the large arrow, and decreases toward the tip side of the small arrow.FIG.8illustrates a change (hereinafter, sometimes referred to as change in target servo reproduction signal strength corresponding to the normal flying height) USL of the target servo reproduction signal strength of each servo sector SS with respect to each servo sector SS in a case where each servo sector SS written by the head15having the normal flying height is read, and a change (hereinafter, sometimes referred to as change in target servo reproduction signal strength corresponding to the high flying height) HSL of the target servo reproduction signal strength of each servo sector SS with respect to each servo sector SS in a case where each servo sector SS written by the head15having the high flying height is read.FIG.8illustrates a threshold (hereinafter, sometimes referred to as reproduction signal strength threshold) STH of the target servo reproduction signal strength. As illustrated inFIG.8, for example, the waveform of the change USL in the target servo reproduction signal strength corresponding to the normal flying height and the waveform of the change HSL in the target servo reproduction signal strength corresponding to the high flying height are similar to each other.
In other words, the waveform of the target servo reproduction signal strength corresponding to the normal flying height and the waveform of the target servo reproduction signal strength corresponding to the high flying height are similar to each other. In the example illustrated inFIG.8, the change USL in the target servo reproduction signal strength corresponding to the normal flying height and the change HSL in the target servo reproduction signal strength corresponding to the high flying height have both a part that is larger and a part that is smaller than the reproduction signal strength threshold STH. Therefore, as illustrated inFIG.8, it is difficult to determine HFW based on one threshold and the target servo reproduction signal strength. FIG.9is a schematic view illustrating an example of a change in each standardized reproduction signal strength corresponding to each servo sector SS with respect to each servo sector SS in a case of reading each servo sector SS written by the head15with the normal flying height and the high flying height. InFIG.9, the horizontal axis represents the servo sector SS, and the vertical axis represents the standardized reproduction signal strength. In the vertical axis ofFIG.9, the standardized reproduction signal strength increases toward the tip side of the large arrow, and decreases toward the tip side of the small arrow.FIG.9illustrates a change (hereinafter, sometimes referred to as change in standardized reproduction signal strength corresponding to the normal flying height) NUSL in the standardized reproduction signal strength corresponding to each servo sector SS with respect to each servo sector SS in a case where each servo sector SS written by the head15having the normal flying height is read, and a change (hereinafter, sometimes referred to as change in standardized reproduction signal strength corresponding to the high flying height) NHSL in the standardized reproduction signal strength corresponding to each servo sector SS with respect to each servo sector SS in a case where each servo sector SS written by the head15having the high flying height is read.FIG.9illustrates the HFW threshold HTH. In the example illustrated inFIG.9, the change NUSL in the standardized reproduction signal strength corresponding to the normal flying height is larger than the HFW threshold HTH. The change NHSL in the standardized reproduction signal strength corresponding to the high flying height is smaller than the HFW threshold HTH. Therefore, it is possible to determine HFW based on the standardized reproduction signal strength. The MPU60determines whether the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold or equal to or greater than the HFW threshold. If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold, the MPU60determines that HFW occurs. If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is equal to or greater than the HFW threshold, the MPU60determines that HFW does not occur. FIG.10is a schematic view illustrating an example of a change in the bit error rate (BER) with respect to the bit per inch (BPI). InFIG.10, the horizontal axis represents the bit per inch (BPI), and the vertical axis represents the bit error rate (BER). 
In the horizontal axis ofFIG.10, the BPI increases toward the tip side of the arrow and decreases toward the side opposite to the tip side of the arrow. The horizontal axis inFIG.10indicates BPI BP1and BP2. The BPI BP2is larger than the BPI BP1. In the vertical axis ofFIG.10, the BER increases toward the tip side of the arrow and decreases toward the side opposite to the tip side of the arrow. InFIG.10, the vertical axis represents BER BE1, BE2, and BEs. The BER BE2is larger than the BER BE1. The BER BEs is larger than the BER BE2. The BER BEs corresponds, for example, to the BER specified for the magnetic disk device1so as not to generate an unrecoverable error, that is, an error of being incapable of reading.FIG.10illustrates a change (hereinafter, sometimes referred to as change in BER corresponding to the normal flying height) BRLU in the BER with respect to the BPI in a case of reading a predetermined region where data is written by the head15having the normal flying height, and a change (hereinafter, sometimes referred to as change in BER corresponding to the high flying height) BRLH in the BER with respect to the BPI in a case of reading a predetermined region where data is written by the head15having the high flying height. In the example illustrated inFIG.10, in a case of not applying the HFW detection method according to the present embodiment, in consideration of a case where the HFW occurs, it is necessary to set the BPI to BPI BP1so that the BER becomes BER BE1with a certain margin with respect to BER BEs. In the example illustrated inFIG.10, in a case of applying the HFW detection method according to the present embodiment, it is less necessary to consider a case where the HFW occurs, and therefore, it becomes possible to set the BPI to BPI BP2such that the BER becomes BE2, for example. That is, it is possible to improve the BPI by applying the HFW detection method according to the present embodiment.

FIG.11is a schematic view illustrating an example of a change in area density capability (ADC) with respect to the BPI. InFIG.11, the horizontal axis represents the BPI, and the vertical axis represents the area density capability (ADC). The ADC corresponds to a product (BPI×TPI) of the BPI and a track per inch (TPI). In the horizontal axis ofFIG.11, the BPI increases toward the tip side of the arrow, and decreases toward the side opposite to the tip side of the arrow. The horizontal axis inFIG.11indicates BPI BP1and BP2. In the vertical axis ofFIG.11, the ADC increases toward the tip side of the arrow, and decreases toward the side opposite to the tip side of the arrow. InFIG.11, the vertical axis represents ADC AD1and AD2. The ADC AD2is larger than the ADC AD1.FIG.11illustrates a change (hereinafter, sometimes referred to as change in ADC) ADL of the ADC with respect to the BPI in the magnetic disk device1. In the example illustrated inFIG.11, in a case where the BPI is BP1, the ADC becomes AD1. There is a predetermined interval between the value at which the ADC is maximized and the ADC AD1. That is, loss occurs in the ADC in the magnetic disk device1. When the BPI is set to BP2, the ADC becomes AD2. When the ADC becomes AD2, the loss of the ADC in the magnetic disk device1is reduced.

FIG.12is a flowchart illustrating an example of the HFW detection method according to the present embodiment.
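Ahead of the step-by-step description below, the flow ofFIG.12(steps B1201 to B1205) might be sketched as follows; the helper callables and the numerical values are placeholders assumed for illustration, and only the structure of the decision is intended to mirror the flowchart.

from typing import Callable

def hfw_check(read_sis: Callable[[], float],
              read_servo_strength: Callable[[], float],
              hfw_threshold: float,
              use_division: bool = True) -> bool:
    # Returns True if HFW is judged to have occurred for one servo sector.
    sis = read_sis()                   # B1201: read the recorded signal strength record data SIS
    strength = read_servo_strength()   # B1202: read the target servo reproduction signal strength
    standardized = strength / sis if use_division else strength - sis   # B1203: standardize
    if standardized >= hfw_threshold:  # B1204, NO branch: HFW has not occurred
        return False
    return True                        # B1205: HFW occurred; the caller stops the write and rewrites or saves

print(hfw_check(lambda: 0.90, lambda: 0.88, hfw_threshold=0.8))  # False: write continues
print(hfw_check(lambda: 0.90, lambda: 0.50, hfw_threshold=0.8))  # True: stop writing this region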
During the write processing, the MPU60reads the signal strength record data SIS corresponding to the predetermined servo sector SS (B1201), and reads the target servo reproduction signal strength of this servo sector SS (B1202). Based on this signal strength record data SIS, the MPU60standardizes this target servo reproduction signal strength to the standardized reproduction signal strength (B1203). For example, the MPU60calculates the standardized reproduction signal strength by dividing the target servo reproduction signal strength corresponding to the predetermined servo sector SS by this signal strength record data SIS, or by subtracting this signal strength record data SIS from this target servo reproduction signal strength. The MPU60determines whether the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold or equal to or greater than the HFW threshold (B1204). If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is equal to or greater than the HFW threshold (NO in B1204), the MPU60determines that HFW does not occur in the predetermined region, and ends the processing. If determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold (YES in B1204), the MPU60determines that HFW occurs in the predetermined region, stops the write processing in this region (B1205), and ends the processing. For example, if determining that the standardized reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold, the MPU60determines that HFW occurs in the predetermined region, stops the write processing in the predetermined region, executes the rewrite processing on this predetermined region or executes saving processing on this predetermined region, and ends the processing.

According to the present embodiment, during the write processing, the magnetic disk device1reads the signal strength record data SIS corresponding to the predetermined servo sector SS, and reads the target servo reproduction signal strength of this servo sector SS. Based on this signal strength record data SIS, the magnetic disk device1standardizes this target servo reproduction signal strength to the standardized reproduction signal strength. The magnetic disk device1determines whether this standardized reproduction signal strength is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold. If determining that this standardized reproduction signal strength is smaller than the HFW threshold HTH, the MPU60determines that HFW occurs in the predetermined region, stops the write processing in the predetermined region, and executes the rewrite processing on the predetermined region or executes the saving processing on the predetermined region. Therefore, the magnetic disk device1can improve the BPI. The magnetic disk device1can improve reliability.

Next, a magnetic disk device according to another embodiment and modifications will be described. In the other embodiment and modifications, parts identical to those of the first embodiment described above are given the identical reference numerals, and the detailed description thereof will be omitted.

(Modification 1)

A magnetic disk device1according to Modification 1 is different in the HFW detection method from the magnetic disk device1according to the above-described embodiment.
For example, during the write processing, the MPU60averages the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS and the target servo reproduction signal strength (hereinafter, sometimes referred to as other target servo reproduction signal strength) corresponding to the other servo sector SS different from this servo sector SS to calculate the target servo reproduction signal strength (hereinafter, sometimes referred to as averaged servo reproduction signal strength) corresponding to the predetermined servo sector SS. During the write processing, the MPU60averages the predetermined signal strength record data SIS corresponding to the predetermined servo sector SS and the signal strength record data (hereinafter, sometimes referred to as other signal strength record data) SIS corresponding to the other servo sector SS to calculate the signal strength record data (hereinafter, sometimes referred to as averaged signal strength record data) SIS corresponding to the predetermined servo sector SS. During the write processing, based on the averaged signal strength record data SIS corresponding to the predetermined servo sector SS and the other servo sector SS, the MPU60standardizes the averaged servo reproduction signal strength corresponding to this predetermined servo sector SS and the other servo sector SS. For example, during the write processing, the MPU60standardizes this averaged servo reproduction signal strength by dividing or subtracting the averaged signal strength record data SIS corresponding to this predetermined servo sector SS and the other servo sector SS from the averaged servo reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sector SS. In other words, during the write processing, the MPU60divides or subtracts the averaged signal strength record data SIS corresponding to this predetermined servo sector SS and the other servo sector SS from the averaged servo reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sector SS to calculate the standardized reproduction signal strength (hereinafter, sometimes referred to as averaged standardized reproduction signal strength) corresponding to this predetermined servo sector SS and this other servo sector SS. In a case where the magnetic disk device1is a TDMR type magnetic disk device, during the write processing, the MPU60may calculate the averaged servo reproduction signal strength corresponding to the predetermined servo sector SS by averaging a plurality of target servo reproduction signal strengths corresponding to the predetermined servo sector SS read by the plurality of read heads15R mounted on one head15. In this case, during the write processing, the MPU60may calculate the averaged signal strength record data SIS corresponding to the predetermined servo sector SS by averaging the plurality of signal strength record data SIS corresponding to the predetermined servo sector SS read by the plurality of read heads15R mounted on one head15. 
For example, during the write processing, the MPU60averages the current target servo reproduction signal strength corresponding to the current servo sector SS and the target servo reproduction signal strength (hereinafter, sometimes referred to as preceding target servo reproduction signal strength) corresponding to the preceding servo sector SS arranged second before the current servo sector SS to calculate the target servo reproduction signal strength (hereinafter, sometimes referred to as current averaged servo reproduction signal strength) corresponding to the current servo sector SS and the preceding servo sector SS. During the write processing, the MPU60averages the current signal strength record data SIS corresponding to the current servo sector SS and the signal strength record data (hereinafter, sometimes referred to as preceding signal strength record data) SIS corresponding to the preceding servo sector SS to calculate the signal strength record data (hereinafter, sometimes referred to as current averaged signal strength record data) SIS corresponding to the current servo sector SS and the preceding servo sector SS. During the write processing, based on the current averaged signal strength record data SIS corresponding to the current servo sector SS and the preceding servo sector SS, the MPU60standardizes the current averaged servo reproduction signal strength corresponding to this current servo sector SS and this preceding servo sector SS. For example, during the write processing, the MPU60standardizes this current averaged servo reproduction signal strength by dividing or subtracting the current averaged signal strength record data SIS from the current averaged servo reproduction signal strength. In other words, during the write processing, the MPU60calculates the current standardized reproduction signal strength corresponding to the current servo sector SS and the preceding servo sector SS by dividing or subtracting the current averaged signal strength record data SIS from the current averaged servo reproduction signal strength. For example, if determining that HFW occurs in the predetermined region of the disk10based on the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sectors SS, the MPU60stops the write operation in the predetermined region of the disk10, and executes the rewrite processing from the data sector region (hereinafter, sometimes referred to as other data sector region) DSR corresponding to the other servo sector SS of the disk10to the data sector region DSR corresponding to the predetermined servo sector SS. For example, if determining that HFW occurs in another data sector region corresponding to the other servo sector SS of the disk10and a predetermined data sector region corresponding to the predetermined servo sector SS based on the averaged signal strength record data corresponding to the predetermined servo sector SS and the other servo sector SS, the MPU60stops the write operation in the other data sector region corresponding to the other servo sector SS of the disk10and the predetermined data sector region corresponding to the predetermined servo sector SS, and executes the rewrite processing from this other data sector region to this predetermined data sector region. 
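As a non-limiting sketch of the averaging of Modification 1 described above (average first, then standardize), with invented numbers and assumed helper names:

def averaged_standardized_strength(strengths, sis_values, use_division=True):
    # Average the target servo reproduction signal strengths, average the recorded SIS values,
    # and standardize the averaged strength by the averaged SIS.
    avg_strength = sum(strengths) / len(strengths)
    avg_sis = sum(sis_values) / len(sis_values)
    return avg_strength / avg_sis if use_division else avg_strength - avg_sis

# Current and preceding servo sectors, illustrative values only.
value = averaged_standardized_strength(strengths=[0.62, 0.58], sis_values=[0.95, 0.90])
print(value < 0.8)   # True: HFW is judged to occur over the corresponding data sector regions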
For example, if determining that HFW occurs in the predetermined region of the disk10based on the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sectors SS, the MPU60stops the write operation in the predetermined region of the disk10, and executes the saving processing of the data in the region from the data sector region (hereinafter, sometimes referred to as other data sector region) DSR corresponding to the other servo sector SS of the disk10to the data sector region DSR corresponding to the predetermined servo sector SS to another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. For example, if determining that HFW occurs in another data sector region corresponding to the other servo sector SS of the disk10and a predetermined data sector region corresponding to the predetermined servo sector SS based on the averaged signal strength record data corresponding to the predetermined servo sector SS and the other servo sector SS, the MPU60stops the write operation in the other data sector region corresponding to the other servo sector SS of the disk10and the predetermined data sector region corresponding to the predetermined servo sector SS, and executes the saving processing on data in the region from this other data sector region to this predetermined data sector region to another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. FIG.13is a flowchart illustrating an example of the HFW detection method according to Modification 1. During the write processing, the MPU60reads the other signal strength record data SIS corresponding to the other servo sector SS different from the predetermined servo sector (B1301), and reads the other target servo reproduction signal strength corresponding to the other servo sector SS (B1302). The MPU60reads the predetermined signal strength record data SIS corresponding to the predetermined servo sector SS (B1303), and reads the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS (B1304). The MPU60averages the predetermined signal strength record data SIS and the other signal strength record data SIS to calculate the averaged signal strength record data SIS (B1305). The MPU60averages the predetermined target servo reproduction signal strength and the other target servo reproduction signal strength to calculate the averaged servo reproduction signal strength (B1306). Based on this averaged signal strength record data SIS, the MPU60standardizes this averaged servo reproduction signal strength to the averaged standardized reproduction signal strength (B1307). The MPU60determines whether the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sector SS is smaller than the HFW threshold or equal to or greater than the HFW threshold (B1308). If determining that the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sectors SS is equal to or greater than the HFW threshold (NO in B1308), the MPU60determines that HFW does not occur in the predetermined region, and ends the processing. 
If determining that the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sector SS is smaller than the HFW threshold (YES in B1308), the MPU60determines that HFW occurs in the predetermined region, stops the write processing in this region (B1309), and ends the processing.

According to Modification 1, during the write processing, the magnetic disk device1averages the predetermined target servo reproduction signal strength and the other target servo reproduction signal strength to calculate the averaged servo reproduction signal strength. The magnetic disk device1also averages the predetermined signal strength record data SIS and the other signal strength record data SIS to calculate the averaged signal strength record data SIS. Based on this averaged signal strength record data SIS, the magnetic disk device1standardizes this averaged servo reproduction signal strength to the averaged standardized reproduction signal strength. The magnetic disk device1determines whether the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sector SS is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold HTH. If determining that this averaged standardized reproduction signal strength is smaller than the HFW threshold HTH, the magnetic disk device1determines that HFW occurs in the predetermined data sector region DSR corresponding to the predetermined servo sector SS and the other data sector region DSR corresponding to the other servo sector SS, stops the write processing in the predetermined data sector region DSR corresponding to the servo sector SS and the other data sector region DSR corresponding to the other servo sector SS, and executes the rewrite processing on the predetermined data sector region DSR corresponding to the servo sector SS and the other data sector region DSR corresponding to the other servo sector SS, or executes the saving processing on the predetermined data sector region DSR corresponding to the servo sector SS and the other data sector region DSR corresponding to the other servo sector SS. Therefore, the magnetic disk device1can improve the BPI. The magnetic disk device1can improve reliability.

(Modification 2)

A magnetic disk device1according to Modification 2 is different in the HFW detection method from the magnetic disk device1according to the above-described embodiment. For example, during the write processing, the MPU60standardizes the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS to a predetermined signal strength (hereinafter, sometimes referred to as target standardized signal strength) based on the predetermined signal strength record data SIS corresponding to the predetermined servo sector SS. During the write processing, based on the other signal strength record data SIS corresponding to the other servo sector SS, the MPU60standardizes the other target servo reproduction signal strength corresponding to the other servo sector SS to a predetermined signal strength (hereinafter, sometimes referred to as other standardized signal strength). The MPU60averages the predetermined target standardized signal strength corresponding to the predetermined servo sector SS and the other standardized signal strength corresponding to the other servo sector SS to calculate the averaged standardized signal strength corresponding to the predetermined servo sector SS.
If determining that the averaged standardized signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold (or equal to or less than the HFW threshold), the HFW detection section630determines that HFW occurs in the predetermined region of the disk10. If determining that the averaged standardized signal strength corresponding to the predetermined servo sector SS is equal to or greater than the HFW threshold (or larger than the HFW threshold), the HFW detection section630determines that HFW does not occur in the predetermined region of the disk10.

FIG.14is a schematic view illustrating an example of the HFW detection method according to Modification 2. The track TRm illustrated inFIG.14corresponds to the track TRm illustrated inFIG.6. In the example illustrated inFIG.14, during the write processing, the MPU60reads the signal strength record data SIS (k−1) corresponding to the servo sector SS (k−1). The MPU60reads the target servo reproduction signal strength of the servo sector SS (k−1) during the write processing. The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k−1) to the standardized reproduction signal strength based on the signal strength record data SIS (k−1) corresponding to the servo sector SS (k−1). The MPU60reads the signal strength record data SIS (k) corresponding to the servo sector SS (k) during the write processing. The MPU60reads the target servo reproduction signal strength of the servo sector SS (k) during the write processing. The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k) to the standardized reproduction signal strength based on the signal strength record data SIS (k) corresponding to the servo sector SS (k). The MPU60averages the standardized reproduction signal strength corresponding to the servo sector SS (k−1) and the standardized reproduction signal strength corresponding to the servo sector SS (k) to calculate the averaged standardized signal strength corresponding to the predetermined servo sector SS. The MPU60determines whether the averaged standardized reproduction signal strength corresponding to the servo sectors SS (k−1) and SS (k) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold. In the example illustrated inFIG.14, if determining that the averaged standardized reproduction signal strength corresponding to the servo sectors SS (k−1) and SS (k) is smaller than the HFW threshold HTH, the MPU60stops the write operation in the data sector region DSR (k−1) corresponding to the servo sector SS (k−1) of the disk10and the data sector region DSR (k) corresponding to the servo sector SS (k), and executes the rewrite processing from this data sector region DSR (k−1) to this data sector region DSR (k). In the example illustrated inFIG.14, during the write processing, the MPU60reads the signal strength record data SIS (k) corresponding to the servo sector SS (k). The MPU60reads the target servo reproduction signal strength of the servo sector SS (k) during the write processing. The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k) to the standardized reproduction signal strength based on the signal strength record data SIS (k) corresponding to the servo sector SS (k). The MPU60reads the signal strength record data SIS (k+1) corresponding to the servo sector SS (k+1) during the write processing.
The MPU60reads the target servo reproduction signal strength of the servo sector SS (k+1) during the write processing. The MPU60standardizes the target servo reproduction signal strength corresponding to the servo sector SS (k+1) to the standardized reproduction signal strength based on the signal strength record data SIS (k+1) corresponding to the servo sector SS (k+1). The MPU60averages the standardized reproduction signal strength corresponding to the servo sector SS (k) and the standardized reproduction signal strength corresponding to the servo sector SS (k+1) to calculate the averaged standardized signal strength corresponding to the predetermined servo sector SS. The MPU60determines whether the averaged standardized reproduction signal strength corresponding to the servo sectors SS (k) and SS (k+1) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold. In the example illustrated inFIG.14, if determining that the averaged standardized reproduction signal strength corresponding to the servo sectors SS (k) and SS (k+1) is smaller than the HFW threshold HTH, the MPU60stops the write operation in the data sector region DSR (k) corresponding to the servo sector SS (k) of the disk10and the data sector region DSR (k+1) corresponding to the servo sector SS (k+1), and executes the rewrite processing from this data sector region DSR (k) to this data sector region DSR (k+1). FIG.15is a flowchart illustrating an example of the HFW detection method according to Modification 2. During the write processing, the MPU60reads the other signal strength record data SIS corresponding to the other servo sector SS different from the predetermined servo sector (B1501), and reads the other target servo reproduction signal strength corresponding to the other servo sector SS (B1502). The MPU60reads the predetermined signal strength record data SIS corresponding to the predetermined servo sector SS (B1503), and reads the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS (B1504). The MPU60standardizes the other target servo reproduction signal strength to the other standardized signal strength based on the other signal strength record data SIS (B1505). The MPU60standardizes the target servo reproduction signal strength to the target standardized signal strength based on the predetermined signal strength record data SIS (B1506). The MPU60averages the other standardized signal strengths and the target standardized signal strength to calculate an averaged standardized signal strength corresponding to a predetermined servo sector SS (B1507). The MPU60determines whether the averaged standardized signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold or equal to or greater than the HFW threshold (B1508). If determining that the averaged standardized signal strength corresponding to the predetermined servo sector SS is equal to or greater than the HFW threshold (NO in B1508), the MPU60determines that HFW does not occur in the predetermined region, and ends the processing. If determining that the averaged standardized signal strength corresponding to the predetermined servo sector SS is smaller than the HFW threshold (YES in B1508), the MPU60determines that HFW occurs in the predetermined region, stops the write processing in this region (B1509), and ends the processing. 
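The steps of FIG.15 map onto a short routine: standardize each target servo reproduction signal strength with its own signal strength record data SIS, average the two standardized values, and compare the result with the HFW threshold HTH. The following is a minimal hypothetical sketch of B1501 to B1509 only, not code from the disclosure; the function name, argument names, and numeric values are illustrative assumptions, and division is assumed for the standardization step (the disclosure also allows subtraction).

def hfw_check_mod2(strength_other, sis_other, strength_pred, sis_pred, hfw_threshold):
    """Hypothetical sketch of the Modification 2 flow of FIG. 15 (B1501-B1509)."""
    # B1501-B1504: the record data SIS and the target servo reproduction signal
    # strengths of the other and the predetermined servo sectors are read during
    # the write processing (passed in here as arguments).
    standardized_other = strength_other / sis_other              # B1505
    standardized_pred = strength_pred / sis_pred                 # B1506
    averaged = (standardized_other + standardized_pred) / 2.0    # B1507
    return averaged < hfw_threshold                              # B1508: smaller than HTH -> HFW

if hfw_check_mod2(0.55, 1.00, 0.58, 0.97, hfw_threshold=0.80):
    print("HFW detected: stop write in the corresponding data sector regions (B1509)")
else:
    print("No HFW: continue the write processing")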
According to Modification 2, during the write processing, the magnetic disk device1averages the other standardized signal strength and the target standardized signal strength to calculate the averaged standardized signal strength corresponding to the predetermined servo sector SS and the other servo sector SS. The magnetic disk device1determines whether the averaged standardized signal strength corresponding to the predetermined servo sector SS and the other servo sectors SS is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold HTH. If determining that this averaged standardized signal strength is smaller than the HFW threshold HTH, the magnetic disk device1determines that HFW occurs in the predetermined data sector region DSR corresponding to the predetermined servo sector SS and the other data sector region DSR corresponding to the other servo sector SS, stops the write processing in the predetermined data sector region DSR corresponding to the servo sector SS and the other data sector region DSR corresponding to the other servo sector SS, and executes the rewrite processing on the predetermined data sector region DSR corresponding to the servo sector SS and the other data sector region DSR corresponding to the other servo sector SS, or executes the saving processing on the predetermined data sector region DSR corresponding to the servo sector SS and the other data sector region DSR corresponding to the other servo sector SS. Therefore, the magnetic disk device1can improve the BPI. The magnetic disk device1can improve reliability. (Modification 3) A magnetic disk device1according to Modification 3 is different in the HFW detection method from the magnetic disk device1according to the above-described embodiment. In a predetermined track, the MPU60continuously writes, immediately after in the read/write direction of the predetermined servo sector SS or the other servo sector SS, the signal strength record data (hereinafter, sometimes referred to as averaged signal strength record data) ASIS corresponding to the predetermined servo sector SS calculated by averaging the predetermined signal strength record data SIS corresponding to the predetermined servo sector SS and the other signal strength record data SIS corresponding to the other servo sector SS. The MPU60may continuously write the averaged signal strength record data ASIS corresponding to the predetermined servo sector SS and the other servo sectors SS immediately after in the read/write direction of this servo sector SS, or may continuously write the same immediately after in the read/write direction of a servo sector other than this servo sector SS. The MPU60may write the averaged signal strength record data ASIS corresponding to the predetermined servo sector SS and the other servo sectors SS into a region other than the signal strength record region immediately after in the read/write direction of this servo sector SS, or may write the same into a region other than the signal strength record region immediately after in the read/write direction of the servo sectors SS other than this servo sector SS. Thus, by averaging the plurality of, for example two signal strength record data SIS corresponding to each of the plurality of, for example two servo sectors, the servo region that is the target of Fourier transform can be regarded as several times, for example, twice. 
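In other words, Modification 3 trades runtime work for pre-computed data: for each servo sector, the averaged signal strength record data ASIS is formed from the SIS of that sector and the SIS of the adjacent sector and is written on the track in advance, as FIG.16 (described later) illustrates. The snippet below is a hypothetical sketch of that pre-computation only; the dictionary layout, the function name, and the numeric values are assumptions made for illustration.

def compute_averaged_record_data(sis):
    """Hypothetical sketch for Modification 3: ASIS(k) is the average of the signal
    strength record data of two consecutive servo sectors, SIS(k-1) and SIS(k).

    `sis` maps a servo sector index k to its signal strength record data SIS(k).
    Returns a mapping from k to ASIS(k) for every k whose predecessor exists.
    """
    return {
        k: (sis[k - 1] + sis[k]) / 2.0
        for k in sorted(sis)
        if k - 1 in sis
    }

# Example with three consecutive servo sectors SS(0), SS(1), SS(2).
sis = {0: 1.00, 1: 0.98, 2: 1.02}
print(compute_averaged_record_data(sis))   # ASIS values near 0.99 and 1.0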
For example, in a predetermined track, the MPU60continuously writes, immediately after in the read/write direction of the current servo sector SS, the averaged signal strength record data (hereinafter, sometimes referred to as next averaged signal strength record data) SIS corresponding to the current servo sector SS and the next servo sector SS calculated by averaging the current signal strength record data SIS corresponding to the current servo sector SS and the next signal strength record data SIS corresponding to the next servo sector SS. For example, the MPU60may continuously write, immediately after in the read/write direction of the current servo sector SS, the averaged signal strength record data (hereinafter, sometimes referred to as current averaged signal strength record data) SIS corresponding to the preceding servo sector SS and the current servo sector SS calculated by averaging the preceding signal strength record data SIS corresponding to the preceding servo sector SS and the current signal strength record data SIS corresponding to the current servo sector SS. The MPU60may continuously write the current averaged signal strength record data SIS corresponding to the preceding servo sector SS and the current servo sector SS immediately after in the read/write direction of this current servo sector SS, or may continuously write the same immediately after in the read/write direction of the servo sector SS other than this current servo sector SS. The MPU60may write the current averaged signal strength record data SIS corresponding to the preceding servo sector SS and the current servo sector SS into a region other than the signal strength record region immediately after in the read/write direction of this current servo sector SS, or may write the same into a region other than the signal strength record region immediately after in the read/write direction of the servo sector SS other than this current servo sector SS. During the write processing of the predetermined track, the MPU60standardizes the sum of the target servo reproduction signal strength corresponding to this servo sector SS that has just been read and the other target servo reproduction signal strength corresponding to the other servo sector SS based on the averaged signal strength record data SIS corresponding to this servo sector SS that has been written by reading in advance the target region of this servo sector SS and the other servo sector SS. For example, during the write processing of the predetermined track, the MPU60standardizes the sum of the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS and the other target servo reproduction signal strength corresponding to the other servo sector SS by dividing or subtracting the averaged signal strength record data SIS corresponding to this predetermined servo sector SS and this other servo sector SS from the sum of the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS and the other target servo reproduction signal strength corresponding to the other servo sector SS. 
In other words, the MPU60calculates the averaged standardized reproduction signal strength corresponding to this predetermined servo sector SS and this other servo sector SS by dividing or subtracting the averaged signal strength record data SIS corresponding to this predetermined servo sector SS and this other servo sector SS from the sum of the predetermined target servo reproduction signal strength corresponding to the predetermined servo sector SS and the other target servo reproduction signal strength corresponding to the other servo sector SS. For example, when reading the preceding servo sector SS and the current servo sector SS during the write processing of the predetermined track, the MPU60standardizes the sum of the preceding target servo reproduction signal strength corresponding to the preceding servo sector SS and the current target servo reproduction signal strength corresponding to the current servo sector SS based on the current averaged signal strength record data SIS. For example, during the write processing of the predetermined track, the MPU60standardizes the sum of the preceding target servo reproduction signal strength corresponding to the preceding servo sector SS and the current target servo reproduction signal strength corresponding to the current servo sector SS by dividing or subtracting the current averaged signal strength record data SIS from the sum of the preceding target servo reproduction signal strength corresponding to the preceding servo sector SS and the current target servo reproduction signal strength corresponding to the current servo sector SS. In other words, during the write processing of the predetermined track, the MPU60calculates the averaged standardized reproduction signal strength (hereinafter, sometimes referred to as current averaged standardized reproduction signal strength) corresponding to the current servo sector SS by dividing or subtracting the current averaged signal strength record data SIS from the sum of the preceding target servo reproduction signal strength corresponding to the preceding servo sector SS and the current target servo reproduction signal strength corresponding to the current servo sector SS. In a case where the magnetic disk device1is a TDMR type magnetic disk device, the MPU60may calculate the averaged signal strength record data SIS corresponding to the predetermined servo sector SS by averaging a plurality of signal strength record data SIS corresponding to the predetermined servo sector SS read by the plurality of read heads15R mounted on one head15. Based on this averaged signal strength record data SIS, the MPU60may standardize the target servo reproduction signal strength corresponding to this servo sector SS that has just been read. For example, if determining that HFW occurs in the predetermined region of the disk10based on the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sector SS, the MPU60stops the write operation in the predetermined region of the disk10, and executes the rewrite processing from the other data sector region DSR corresponding to the other servo sector SS of the disk10to the data sector region DSR corresponding to the predetermined servo sector SS. 
For example, if determining that HFW occurs in another data sector region corresponding to the other servo sector SS of the disk10and a predetermined data sector region corresponding to the predetermined servo sector SS based on the averaged signal strength record data corresponding to the predetermined servo sector SS and the other servo sector SS, the MPU60stops the write operation in the other data sector region corresponding to the other servo sector SS of the disk10and the predetermined data sector region corresponding to the predetermined servo sector SS, and executes the rewrite processing from this other data sector region to this predetermined data sector region. For example, if determining that HFW occurs in the predetermined region of the disk10based on the averaged standardized reproduction signal strength corresponding to the predetermined servo sector SS and the other servo sectors SS, the MPU60stops the write operation in the predetermined region of the disk10, and executes the saving processing of the data in the region from the data sector region (hereinafter, sometimes referred to as other data sector region) DSR corresponding to the other servo sector SS of the disk10to the data sector region DSR corresponding to the predetermined servo sector SS to another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. For example, if determining that HFW occurs in another data sector region corresponding to the other servo sector SS of the disk10and a predetermined data sector region corresponding to the predetermined servo sector SS based on the averaged signal strength record data corresponding to the predetermined servo sector SS and the other servo sector SS, the MPU60stops the write operation in the other data sector region corresponding to the other servo sector SS of the disk10and the predetermined data sector region corresponding to the predetermined servo sector SS, and executes the saving processing on data in the region from this other data sector region to this predetermined data sector region to another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. FIG.16is a schematic view illustrating an example of arrangement of the averaged signal strength record data SIS according to Modification 3.FIG.16illustrates the track TRm. The track TRm includes the servo sectors SS (k−1), SS (k), and SS (k+1), and averaged signal strength record data ASIS (k), ASIS (k+1), and ASIS (k+2). InFIG.16, the averaged signal strength record data ASIS (k), ASIS (k+1), and ASIS (k+2) are arranged at intervals in the read/write direction in the described order. In other words, the averaged signal strength record data ASIS (k+1) is arranged at intervals in the read/write direction of the averaged signal strength record data ASIS (k). The averaged signal strength record data ASIS (k+2) is arranged at intervals in the read/write direction of the averaged signal strength record data ASIS (k+1). The averaged signal strength record data ASIS (k) is arranged between the servo sectors SS (k−1) and SS (k), and is adjacent in the read/write direction of the servo sector SS (k−1). The averaged signal strength record data ASIS (k) corresponds to the signal strength record data in which the signal strength record data SIS (k−1) and SIS (k) are averaged. The averaged signal strength record data ASIS (k) corresponds to the servo sector SS (k). 
The averaged signal strength record data ASIS (k+1) is arranged between the servo sectors SS (k) and SS (k+1), and is adjacent in the read/write direction of the servo sector SS (k). The averaged signal strength record data ASIS (k+1) corresponds to the signal strength record data in which the signal strength record data corresponding to the servo sectors SS (k) and SS (k+1) are averaged. The averaged signal strength record data ASIS (k+1) corresponds to the servo sector SS (k+1). The averaged signal strength record data ASIS (k+2) is adjacent in the read/write direction of the servo sector SS (k+1). The averaged signal strength record data ASIS (k+2) corresponds to the signal strength record data in which the signal strength record data corresponding to the servo sectors SS (k+1) and SS (k+2) are averaged. The averaged signal strength record data ASIS (k+2) corresponds to the servo sector SS (k+2), not illustrated, next to the servo sector SS (k+1). In the example illustrated inFIG.16, in the track TRm, the MPU60writes the signal strength record data ASIS (k) adjacent in the read/write direction of the servo sector SS (k−1), writes the signal strength record data ASIS (k+1) adjacent in the read/write direction of the servo sector SS (k), and writes the signal strength record data ASIS (k+2) adjacent in the read/write direction of the servo sector SS (k+1). In other words, in the track TRm, the MPU60writes the signal strength record data ASIS (k) immediately after the servo sector SS (k−1), writes the signal strength record data ASIS (k+1) immediately after the servo sector SS (k), and writes the signal strength record data ASIS (k+2) immediately after the servo sector SS (k+1). FIG.17is a schematic view illustrating an example of the HFW detection method according to Modification 3. The track TRm illustrated inFIG.17corresponds to the track TRm illustrated inFIG.16. In the example illustrated inFIG.17, the MPU60reads the target servo reproduction signal strength corresponding to the servo sector SS (k−1) during the write processing of the track TRm. The MPU60reads the averaged signal strength record data ASIS (k) corresponding to the servo sectors SS (k−1) and SS (k). The MPU60reads the target servo reproduction signal strength corresponding to the servo sector SS (k). Based on the averaged signal strength record data ASIS (k) corresponding to the servo sectors SS (k−1) and SS (k), the MPU60standardizes, to the averaged standardized reproduction signal strength, the sum of the target servo reproduction signal strength corresponding to the servo sector SS (k−1) and the target servo reproduction signal strength corresponding to the servo sector SS (k). The MPU60determines whether the standardized reproduction signal strength corresponding to the servo sectors SS (k−1) and SS (k) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold HTH. In the example illustrated inFIG.17, if determining that the standardized reproduction signal strength corresponding to the servo sectors SS (k−1) and SS (k) is smaller than the HFW threshold HTH, the MPU60stops the write operation in the data sector region DSR (k−1) corresponding to the servo sector SS (k−1) and the data sector region DSR (k) corresponding to the servo sector SS (k), and executes the rewrite processing from this data sector DSR (k−1) to this data sector region DSR (k). 
In the example illustrated inFIG.17, the MPU60reads the target servo reproduction signal strength corresponding to the servo sector SS (k) during the write processing of the track TRm. The MPU60reads the averaged signal strength record data ASIS (k+1) corresponding to the servo sectors SS (k) and SS (k+1). The MPU60reads the target servo reproduction signal strength corresponding to the servo sector SS (k+1). Based on the averaged signal strength record data ASIS (k+1) corresponding to the servo sectors SS (k) and SS (k+1), the MPU60standardizes, to the averaged standardized reproduction signal strength, the sum of the target servo reproduction signal strength corresponding to the servo sector SS (k) and the target servo reproduction signal strength corresponding to the servo sector SS (k+1). The MPU60determines whether the standardized reproduction signal strength corresponding to the servo sectors SS (k) and SS (k+1) is smaller than the HFW threshold HTH or equal to or greater than the HFW threshold HTH. In the example illustrated inFIG.17, if determining that the standardized reproduction signal strength corresponding to the servo sectors SS (k) and SS (k+1) is smaller than the HFW threshold HTH, the MPU60stops the write operation in the data sector region DSR (k) corresponding to the servo sector SS (k) and the data sector region DSR (k+1) corresponding to the servo sector SS (k+1), and executes the rewrite processing from this data sector DSR (k) to this data sector region DSR (k+1). FIG.18is a flowchart illustrating an example of the HFW detection method according to Modification 3. During the write processing, the MPU60reads the other target servo reproduction signal strength corresponding to the other servo sector SS (B1801), and reads the predetermined averaged signal strength record data ASIS corresponding to the predetermined servo sector SS (B1802). The MPU60reads the target servo reproduction signal strength of this predetermined servo sector SS (B1803). The MPU60standardizes the sum of this other target servo reproduction signal strength and this predetermined target servo reproduction signal strength to the averaged standardized reproduction signal strength based on this predetermined averaged signal strength record data ASIS (B1804). The MPU60determines whether the averaged standardized reproduction signal strength corresponding to the other servo sectors SS and the predetermined servo sector SS is smaller than the HFW threshold or equal to or greater than the HFW threshold (B1805). If determining that the averaged standardized reproduction signal strength corresponding to the other servo sector SS and the predetermined servo sector SS is equal to or greater than the HFW threshold (NO in B1805), the MPU60determines that HFW does not occur in the other servo sector SS and the predetermined servo sector SS, and ends the processing. If determining that the averaged standardized reproduction signal strength corresponding to the other servo sector SS and the predetermined servo sector SS is smaller than the HFW threshold (YES in B1805), the MPU60determines that HFW occurs in the other servo sector SS and the predetermined servo sector SS, stops the write processing in the other servo sector SS and the predetermined servo sector SS (B1806), and ends the processing. 
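The FIG.18 flow differs from Modification 2 in that the two target servo reproduction signal strengths are summed first and then standardized in one step using the pre-written ASIS. The routine below is a hypothetical sketch of B1801 to B1806; the function name and numbers are illustrative assumptions, division by ASIS is assumed (the disclosure also allows subtraction), and because the sum of two strengths is divided by one ASIS value, the illustrative threshold is set on the scale of the standardized sum.

def hfw_check_mod3(strength_other, strength_pred, asis, hfw_threshold):
    """Hypothetical sketch of the Modification 3 flow of FIG. 18 (B1801-B1806).

    strength_other / strength_pred: target servo reproduction signal strengths of
    the other and the predetermined servo sectors, read during the write processing.
    asis: the pre-written averaged signal strength record data ASIS for this pair.
    """
    averaged_standardized = (strength_other + strength_pred) / asis   # B1804
    return averaged_standardized < hfw_threshold                      # B1805: smaller -> HFW

# A healthy readback keeps the standardized sum near 2; a weakened readback falls below the threshold.
print(hfw_check_mod3(0.98, 1.01, asis=1.00, hfw_threshold=1.60))   # False -> no HFW
print(hfw_check_mod3(0.60, 0.62, asis=1.00, hfw_threshold=1.60))   # True  -> HFW, stop write (B1806)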
According to Modification 3, during the write processing, the magnetic disk device1reads the other target servo reproduction signal strength corresponding to the other servo sector SS, reads the predetermined averaged signal strength record data ASIS corresponding to the predetermined servo sector SS, and reads the target servo reproduction signal strength of this predetermined servo sector SS. The magnetic disk device1standardizes the sum of this other target servo reproduction signal strength and this predetermined target servo reproduction signal strength to the averaged standardized reproduction signal strength based on this predetermined averaged signal strength record data ASIS. The magnetic disk device1determines whether this averaged standardized reproduction signal strength is smaller than the HFW threshold or equal to or greater than the HFW threshold. If determining that this averaged standardized reproduction signal strength is smaller than the HFW threshold, the MPU60determines that HFW occurs in the other data sector region corresponding to the other servo sector SS and the predetermined data sector region corresponding to the predetermined servo sector SS, stops the write processing in the other data sector region corresponding to the other servo sector SS and the predetermined data sector region corresponding to the predetermined servo sector SS, and executes the rewrite processing from the other data sector region corresponding to the other servo sector SS to the predetermined data sector region corresponding to the predetermined servo sector SS, or executes the saving processing to the other data sector region corresponding to the other servo sector SS and the predetermined data sector region corresponding to the predetermined servo sector SS. Therefore, the magnetic disk device1can improve the BPI. The magnetic disk device1can improve reliability. Second Embodiment A magnetic disk device1according to the second embodiment is different in HFW detection method from the magnetic disk devices1of the first embodiment, Modification 1, Modification 2, and Modification 3 described above. FIG.19is a schematic view illustrating an example of the configuration of a servo sector SS according to the second embodiment.FIG.19illustrates a predetermined servo sector SS written in a predetermined track TR of a disk10. In the example illustrated inFIG.19, the data sector region DSR does not include the signal strength record data SIS corresponding to the predetermined servo sector SS in the signal strength record region adjacent to the predetermined servo sector SS. That is, the data sector region DSR does not include the signal strength record region. The MPU60has thresholds (hereinafter, sometimes referred to as reproduction signal strength threshold) of a plurality of target servo reproduction signal strengths corresponding to the plurality of respective servo sectors. 
For example, the reproduction signal strength threshold corresponds to an intermediate value between the target servo reproduction signal strength (hereinafter, sometimes referred to as target servo reproduction signal strength corresponding to the normal flying height) of this servo sector SS when reading the servo sector SS written by the head15having the normal flying height and the target servo reproduction signal strength (hereinafter, sometimes referred to as target servo reproduction signal strength corresponding to the high flying height) of this servo sector SS when reading the servo sector SS written by the head15having the high flying height. The reproduction signal strength threshold may correspond to an average value of a plurality of intermediate values between a plurality of target servo reproduction signal strengths corresponding to the normal flying heights corresponding to the plurality of respective servo sectors and a plurality of target servo reproduction signal strengths corresponding to the high flying heights corresponding to the plurality of respective servo sectors. The MPU60determines whether the target servo reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the reproduction signal strength threshold corresponding to this servo sector SS or is equal to or greater than the reproduction signal strength threshold (or whether to be equal to or less than the reproduction signal strength threshold or larger than the reproduction signal strength threshold). For example, the MPU60determines whether the current target servo reproduction signal strength corresponding to the current servo sector SS is smaller than the reproduction signal strength threshold (hereinafter, sometimes referred to as current reproduction signal strength threshold) or equal to or greater than the current reproduction signal strength threshold. If determining that the target servo reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the reproduction signal strength threshold (or equal to or less than the reproduction signal strength threshold), the MPU60determines that HFW occurs in the predetermined region of the disk10. If determining that the target servo reproduction signal strength corresponding to the predetermined servo sector SS is equal to or greater than the reproduction signal strength threshold (or larger than the reproduction signal strength threshold), the MPU60determines that HFW does not occur in the predetermined region of the disk10. For example, if determining that the current target servo reproduction signal strength corresponding to the current servo sector is smaller than the current reproduction signal strength threshold (or equal to or less than the current reproduction signal strength threshold), the MPU60determines that HFW occurs in the predetermined region of the disk10. If determining that the current target servo reproduction signal strength corresponding to the current servo sector is equal to or greater than the current reproduction signal strength threshold (or larger than the current reproduction signal strength threshold), the MPU60determines that HFW does not occur in the predetermined region of the disk10. If determining that HFW occurs in a predetermined region of the disk10, the HFW detection section630stops the write operation in the predetermined region of the disk10. 
For example, if determining that HFW occurs in a predetermined region of the disk10, the HFW detection section630stops the write operation in the predetermined region of the disk10and executes rewrite processing on the predetermined region of the disk10. For example, if determining that HFW occurs in the predetermined region of the disk10, the HFW detection section630stops the write operation in the predetermined region of the disk10, and executes processing (hereinafter, sometimes referred to as saving processing) of recording or storing data in the predetermined region of the disk10in another alternative region, for example, the disk10, the volatile memory70, the nonvolatile memory80, or the buffer memory90. FIG.20is a schematic view illustrating an example of a change in each reproduction signal strength threshold with respect to each servo sector SS according to the second embodiment. InFIG.20, the horizontal axis represents the servo sector SS, and the vertical axis represents the target servo reproduction signal strength. In the vertical axis ofFIG.20, the target servo reproduction signal strength increases toward the tip side of the large arrow, and decreases toward the tip side of the small arrow.FIG.20illustrates a change (hereinafter, sometimes referred to as change in the reproduction signal strength threshold) MTH of each reproduction signal strength threshold with respect to each servo sector corresponding to an intermediate value between the change USL in the target servo reproduction signal strength corresponding to the normal flying height and the change HSL in the target servo reproduction signal strength corresponding to the high flying height. As indicated by the change MTH of the reproduction signal strength threshold inFIG.20, each reproduction signal strength threshold corresponding to each servo sector SS corresponds to an intermediate value between the target servo reproduction signal strength corresponding to each high flying height in each servo sector and the target servo reproduction signal strength corresponding to each normal flying height in each servo sector SS. For example, the MPU60has the change MTH in the reproduction signal strength threshold. FIG.21is a flowchart illustrating an example of the HFW detection method according to the second embodiment. The MPU60reads the target servo reproduction signal strength of the predetermined servo sector SS during the write processing (B2101). The MPU60determines whether the target servo reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the reproduction signal strength threshold corresponding to this servo sector SS or is equal to or greater than the reproduction signal strength threshold (B2102). If determining that the target servo reproduction signal strength corresponding to the predetermined servo sector SS is equal to or greater than the reproduction signal strength threshold (NO in B2102), the MPU60determines that HFW does not occur in the predetermined region, and ends the processing. If determining that the target servo reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the reproduction signal strength threshold (YES in B2102), the MPU60determines that HFW occurs in the predetermined region, stops the write processing in this region (B2103), and ends the processing. 
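The second embodiment therefore needs only a table of per-sector thresholds and one comparison per servo sector during the write processing: each reproduction signal strength threshold is the intermediate value between the target servo reproduction signal strength corresponding to the normal flying height and the one corresponding to the high flying height, and the FIG.21 check (B2101 to B2103) is a single comparison against it. The snippet below is a hypothetical sketch of both steps; all function names and numeric values are illustrative assumptions, not values from the disclosure.

def reproduction_strength_thresholds(normal_strengths, high_strengths):
    """Per-servo-sector thresholds: the intermediate value between the reproduction
    signal strength at the normal flying height and at the high flying height."""
    return [(n + h) / 2.0 for n, h in zip(normal_strengths, high_strengths)]

def hfw_check_second_embodiment(strength, threshold):
    """FIG. 21 sketch: read the target servo reproduction signal strength (B2101)
    and compare it with the threshold of that servo sector (B2102); a value smaller
    than the threshold means HFW, so the write processing is stopped (B2103)."""
    return strength < threshold

# Illustrative reference readings for three servo sectors.
thresholds = reproduction_strength_thresholds(
    normal_strengths=[1.00, 0.98, 1.02],   # servo sectors written at the normal flying height
    high_strengths=[0.70, 0.66, 0.72],     # servo sectors written at the high flying height
)
print([round(t, 2) for t in thresholds])                  # [0.85, 0.82, 0.87]
print(hfw_check_second_embodiment(0.75, thresholds[0]))   # True  -> HFW in this region
print(hfw_check_second_embodiment(0.95, thresholds[1]))   # False -> no HFW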
According to the second embodiment, the magnetic disk device1reads the target servo reproduction signal strength of the predetermined servo sector SS during the write processing. The magnetic disk device1determines whether the target servo reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the reproduction signal strength threshold corresponding to this servo sector SS or is equal to or greater than the reproduction signal strength threshold. If determining that the target servo reproduction signal strength corresponding to the predetermined servo sector SS is smaller than the reproduction signal strength threshold, the magnetic disk device1determines that HFW occurs in the predetermined region, stops the write processing in this region, and ends the processing. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. An example of a magnetic disk device obtained from the configuration disclosed in the present description will be additionally described below. (1) A magnetic disk device including:a disk that has a track including a first servo sector and a second servo sector that is different from the first servo sector;a head that writes data to the disk and reads data from the disk; anda controller that records first signal strength record data related to a signal strength at which first target servo data that is a target of the first servo sector is read, and standardizes first signal strength data related to a signal strength at which the first target servo data is read when the first target servo data is read. (2) The magnetic disk device according to (1), in which the controller standardizes the first signal strength data to first standardized data based on the first signal strength record data. (3) The magnetic disk device according to (1), in which the controller calculates first standardized data by subtracting or dividing the first signal strength record data from the first signal strength data. (4) The magnetic disk device according to (2) or (3), in which the controller determines whether the first standardized data is smaller than a first threshold or equal to or greater than the first threshold. (5) The magnetic disk device according to (4), in which the controller stops write processing in a case of determining that the first standardized data is smaller than the first threshold. (6) The magnetic disk device according to (4), in which the controller stops write processing in a case of determining that the first standardized data is equal to or greater than the first threshold. (7) The magnetic disk device according to (5) or (6), in which the controller executes rewrite processing or writes to another alternative region in a case of stopping write processing. 
(8) The magnetic disk device according to any one of (1) to (7), in which the controller writes the first signal strength record data adjacent to the first servo sector between the first servo sector and the second servo sector arranged next to the first servo sector. (9) The magnetic disk device according to (7) further includinga volatile memory and a nonvolatile memory, in which the alternative region has the disk, the volatile memory, or the nonvolatile memory. (10) The magnetic disk device according to any one of (1) to (9), in which the first signal strength record data and the first signal strength data are values obtained by performing Fourier transform of a reproduction signal when the first target servo data is read. (11) The magnetic disk device according to any one of (1) to (9), in which the first signal strength record data and the first signal strength data are values obtained by performing Fourier transform on and dividing a reproduction signal and an ideal signal or a demodulated signal when the first target servo data is read. (12) The magnetic disk device according to any one of (1) to (9), in which the first signal strength record data and the first signal strength data are amplitude of a reproduction signal when the first target servo data is read. (13) The magnetic disk device according to (1), in which the controller records second signal strength record data related to a signal strength at which second target servo data that is a target of the second servo sector is read, calculates first averaged signal strength record data in which the first signal strength record data and the second signal strength record data are averaged when the first target servo data is read, calculates first averaged signal strength data in which the first signal strength data and the second signal strength data related to a signal strength at which the second target servo data is read are averaged, and standardizes the first averaged signal strength data to first averaged standardized data based on the first averaged signal strength record data. (14) The magnetic disk device according to (13), in which the controller executes rewrite processing on a first data region corresponding to the first servo sector and a second data region corresponding to the second servo sector in a case of stopping write processing based on the first averaged standardized data. (15) The magnetic disk device according to (13) or (14), in which the controller determines whether the first averaged standardized data is smaller than a first threshold or equal to or greater than the first threshold. (16) The magnetic disk device according to (1), in which the controller records second signal strength record data related to a signal strength at which second target servo data that is a target of the second servo sector is read, standardizes the first signal strength data to first standardized data based on the first signal strength record data when the first target servo data is read, standardizes, to second standardized data, second signal strength data related to a signal strength at which the second target servo data is read based on the second signal strength record data when the second target servo data is read, and calculates first averaged standardized data at which the first standardized data and the second standardized data are averaged. 
(17) The magnetic disk device according to (16), in which the controller executes rewrite processing on a first data region corresponding to the first servo sector and a second data region corresponding to the second servo sector in a case of stopping write processing based on the first averaged standardized data. (18) The magnetic disk device according to (1), in whichthe head has a first read head and a second read head that read data from the disk, andwhen the first target servo data is read by the first read head and the second read head, the controller calculates first averaged signal strength record data in which the first signal strength record data in which the first target servo data is read by the first read head and second signal strength record data related to a signal strength at which the first target servo data is read by the second read head are averaged, calculates first averaged signal strength data in which the first signal strength data read from the first target servo data by the first read head and second signal strength data related to a signal strength read from the first target servo data by the second read head are averaged, and standardizes the first averaged signal strength data to first averaged standardized data based on the first averaged signal strength record data. (19) A magnetic disk device including:a disk that has a track including a first servo sector and a second servo sector that is different from the first servo sector;a head that writes data to the disk and reads data from the disk; anda controller that determines whether first signal strength data related to a signal strength at which first target servo data that is a target of the first servo sector is read is smaller than a first threshold corresponding to the first signal strength data or equal to or greater than the first threshold when the first target servo data is read. (20) The magnetic disk device according to (19), in which the controller stops write processing in a case of determining that the first signal strength data is smaller than the first threshold. (21) The magnetic disk device according to (19) or (20), in which the first threshold corresponds to an intermediate value between the first signal strength data when the head reads the first servo sector with a first flying height and the first signal strength data when the head reads the first servo sector with a second flying height higher than the first flying height. (22) The magnetic disk device according to (19), in which the controller calculates the first threshold by averaging the first signal strength data and second signal strength data related to a signal strength at which second target servo data that is a target of the second servo sector is read when the second target servo data is read. (23) A magnetic disk device including:a disk that has a track including a first servo sector and a second servo sector that is different from the first servo sector;a head that writes data to the disk and reads data from the disk; anda controller that records first averaged signal strength record data in which a signal strength at which first target servo data that is a target of the first servo sector is read and a signal strength at which second target servo data that is a target of the second servo sector is read are averaged, and standardizes, to first standardized data, first signal strength data related to a signal strength at which the first target servo data is read when the first target servo data is read. 
(24) The magnetic disk device according to (23), in which the controller standardizes the first signal strength data to the first standardized data based on the first averaged signal strength record data. (25) The magnetic disk device according to (23) or (24), in which the controller executes rewrite processing on a first data region corresponding to the first servo sector and a second data region corresponding to the second servo sector in a case of stopping write processing based on the first averaged standardized data.
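Items (2) and (3) above name the two standardization options: divide the signal strength data by the signal strength record data, or subtract the record data from it. The comparison below is a hypothetical sketch of the two variants; the function names, the numeric values, and the example thresholds are illustrative assumptions (a subtraction-based standardization naturally pairs with a threshold near zero, a division-based one with a threshold near one).

def standardize_by_division(signal_strength_data, signal_strength_record_data):
    """Standardized data per the division option of item (3): a healthy readback
    gives a value near 1, a weakened readback a noticeably smaller value."""
    return signal_strength_data / signal_strength_record_data

def standardize_by_subtraction(signal_strength_data, signal_strength_record_data):
    """Standardized data per the subtraction option of item (3): a healthy readback
    gives a value near 0, a weakened readback a clearly negative value."""
    return signal_strength_data - signal_strength_record_data

# Illustrative values: recorded strength 1.00, current readback 0.62.
print(f"{standardize_by_division(0.62, 1.00):.2f}")      # 0.62 -> compare with a threshold near 1
print(f"{standardize_by_subtraction(0.62, 1.00):.2f}")   # -0.38 -> compare with a threshold near 0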
DESCRIPTION OF THE PREFERRED EMBODIMENTS Magnetic Tape An aspect of the invention relates to a magnetic tape including a non-magnetic support, and a magnetic layer containing a ferromagnetic powder. An arithmetic average roughness Ra measured at an edge portion of a surface of the magnetic layer is referred to as an edge portion Ra, and an arithmetic average roughness Ra measured at a central portion of the surface of the magnetic layer is referred to as a central portion Ra. In the magnetic tape described above, the edge portion Ra is 1.50 nm or less, the central portion Ra is 0.30 to 1.30 nm, and a Ra ratio (central portion Ra/edge portion Ra) is 0.75 to 0.95. In the invention and the specification, the “surface of the magnetic layer” is identical to a surface of the magnetic tape on the magnetic layer side. Description of Head Tilt Angle Hereinafter, prior to the description of the magnetic tape, a configuration of the magnetic head, a head tilt angle, and the like will be described. In addition, a reason why it is considered that the phenomenon occurring during the recording or during the reproducing described above can be suppressed by tilting an axial direction of the module of the magnetic head with respect to the width direction of the magnetic tape while the magnetic tape is running will also be described later. The magnetic head may include one or more modules including an element array including a plurality of magnetic head elements between a pair of servo signal reading elements, and can include two or more modules or three or more modules. The total number of such modules can be, for example, 5 or less, 4 or less, or 3 or less, or the magnetic head may include the number of modules exceeding the total number exemplified here. Examples of arrangement of the plurality of modules can include “recording module-reproducing module” (total number of modules: 2), “recording module-reproducing module-recording module” (total number of modules: 3), and the like. However, the invention is not limited to the examples shown here. Each module can include an element array including a plurality of magnetic head elements between a pair of servo signal reading elements, that is, arrangement of elements. The module including a recording element as the magnetic head element is a recording module for recording data on the magnetic tape. The module including a reproducing element as the magnetic head element is a reproducing module for reproducing data recorded on the magnetic tape. In the magnetic head, the plurality of modules are arranged, for example, in a recording and reproducing head unit so that an axis of the element array of each module is oriented in parallel. The “parallel” does not mean only parallel in the strict sense, but also includes a range of errors normally allowed in the technical field of the invention. For example, the range of errors means a range of less than ±10° from an exact parallel direction. In each element array, the pair of servo signal reading elements and the plurality of magnetic head elements (that is, recording elements or reproducing elements) are usually arranged to be in a straight line spaced apart from each other. Here, the expression that “arranged in a straight line” means that each magnetic head element is arranged on a straight line connecting a central portion of one servo signal reading element and a central portion of the other servo signal reading element. 
The “axis of the element array” in the present invention and the present specification means the straight line connecting the central portion of one servo signal reading element and the central portion of the other servo signal reading element. Next, the configuration of the module and the like will be further described with reference to the drawings. However, the embodiment shown in the drawings is an example and the invention is not limited thereto. FIG.1is a schematic view showing an example of a module of a magnetic head. The module shown inFIG.1includes a plurality of magnetic head elements between a pair of servo signal reading elements (servo signal reading elements1and2). The magnetic head element is also referred to as a “channel”. “Ch” in the drawing is an abbreviation for a channel. The module shown inFIG.1includes a total of 32 magnetic head elements of Ch0to Ch31. InFIG.1, “L” is a distance between the pair of servo signal reading elements, that is, a distance between one servo signal reading element and the other servo signal reading element. In the module shown inFIG.1, the “L” is a distance between the servo signal reading element1and the servo signal reading element2. Specifically, the “L” is a distance between a central portion of the servo signal reading element1and a central portion of the servo signal reading element2. Such a distance can be measured by, for example, an optical microscope or the like. FIG.2is an explanatory diagram of a relative positional relationship between the module and the magnetic tape during running of the magnetic tape in the magnetic tape device. InFIG.2, a dotted line A indicates a width direction of the magnetic tape. A dotted line B indicates an axis of the element array. An angle θ can be the head tilt angle during the running of the magnetic tape, and is an angle formed by the dotted line A and the dotted line B. During the running of the magnetic tape, in a case where the angle θ is 0°, a distance in a width direction of the magnetic tape between one servo signal reading element and the other servo signal reading element of the element array (hereinafter, also referred to as an “effective distance between servo signal reading elements”) is “L”. On the other hand, in a case where the angle θ exceeds 0°, the effective distance between the servo signal reading elements is “L cos θ” and the L cos θ is smaller than the L. That is, “L cos θ<L”. As described above, during the recording or the reproducing, in a case where the magnetic head for recording or reproducing data records or reproduces data while being deviated from a target track position due to width deformation of the magnetic tape, phenomenons such as overwriting on recorded data, reproducing failure, and the like may occur. For example, in a case where a width of the magnetic tape contracts or extends, a phenomenon may occur in which the magnetic head element that should record or reproduce at a target track position records or reproduces at a different track position. 
In addition, in a case where the width of the magnetic tape extends, the effective distance between the servo signal reading elements may become shorter than a spacing of two adjacent servo bands with a data band interposed therebetween (also referred to as a "servo band spacing" or "spacing of servo bands", specifically, a distance between the two servo bands in the width direction of the magnetic tape), and a phenomenon in which the data is not recorded or reproduced at a part close to an edge of the magnetic tape can occur. With respect to this, in a case where the element array is tilted at the angle θ exceeding 0°, the effective distance between the servo signal reading elements becomes "L cos θ" as described above. The larger the value of θ, the smaller the value of L cos θ, and the smaller the value of θ, the larger the value of L cos θ. Accordingly, in a case where the value of θ is changed according to a degree of dimension change (that is, contraction or expansion) in the width direction of the magnetic tape, the effective distance between the servo signal reading elements can be brought closer to or matched with the spacing of the servo bands. Therefore, during the recording or the reproducing, it is possible to prevent the occurrence of phenomenons such as overwriting on recorded data, reproducing failure, and the like caused in a case where the magnetic head for recording or reproducing data records or reproduces data while being deviated from a target track position due to width deformation of the magnetic tape, or it is possible to reduce a frequency of occurrence thereof. FIG.3is an explanatory diagram of a change in angle θ during the running of the magnetic tape. The angle θ at the start of running, θinitial, can be set to, for example, 0° or more or more than 0°. InFIG.3, a central diagram shows a state of the module at the start of running. InFIG.3, a right diagram shows a state of the module in a case where the angle θ is set to an angle θc which is a larger angle than the θinitial. The effective distance between the servo signal reading elements L cos θc is a value smaller than L cos θinitial at the start of running of the magnetic tape. In a case where the width of the magnetic tape is contracted during the running of the magnetic tape, it is preferable to perform such angle adjustment. On the other hand, inFIG.3, a left diagram shows a state of the module in a case where the angle θ is set to an angle θe which is a smaller angle than the θinitial. The effective distance between the servo signal reading elements L cos θe is a value larger than L cos θinitial at the start of running of the magnetic tape. In a case where the width of the magnetic tape is expanded during the running of the magnetic tape, it is preferable to perform such angle adjustment. As described above, the change of the head tilt angle during the running of the magnetic tape can contribute to prevention of the occurrence of phenomenons such as overwriting on recorded data, reproducing failure, and the like caused in a case where the magnetic head for recording or reproducing data records or reproduces data while being deviated from a target track position due to width deformation of the magnetic tape, or to reduction of a frequency of occurrence thereof. Meanwhile, the recording of data on the magnetic tape and the reproducing of the recorded data are performed by bringing the surface of the magnetic layer of the magnetic tape into contact with the magnetic head and sliding. 
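The geometric relationship underlying this adjustment is compact: the effective distance between the servo signal reading elements is L cos θ, so, for a measured servo band spacing S, the tilt angle that matches them is θ = arccos(S/L). The calculation below is a hypothetical worked example; the numerical values of L and S are illustrative only and are not taken from the specification.

import math

# Illustrative values only (not from the specification): the distance L between the
# pair of servo signal reading elements and the currently measured servo band spacing S.
L = 2900.0   # distance between the servo signal reading elements, in micrometers
S = 2896.5   # servo band spacing after the tape width has contracted, in micrometers

def effective_distance(length, theta_deg):
    # Effective distance between the servo signal reading elements at a given tilt angle.
    return length * math.cos(math.radians(theta_deg))

# Tilt angle that makes the effective distance match the servo band spacing.
theta_deg = math.degrees(math.acos(S / L))

print(f"effective distance at 0 deg: {effective_distance(L, 0.0):.1f} um")
print(f"required head tilt angle: {theta_deg:.2f} deg")
print(f"effective distance at that angle: {effective_distance(L, theta_deg):.1f} um")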
The inventors considered that, during such sliding, in a case where the head tilt angle changes, a contact state between the magnetic head and the surface of the magnetic layer can change and this can be a reason of a decrease in running stability. Specifically, the inventors surmised that, in a case where the contact state between the surface of the magnetic layer of the magnetic tape and the magnetic head (for example, a contact state between a portion near an edge of the module of the magnetic head and the edge portion of the surface of the magnetic layer) changes greatly depending on the difference in the head tilt angle, the running stability decreases and such decrease in running stability can become more remarkable in a high temperature and low humidity environment. However, the present invention is not limited to the inference of the inventors described in the present specification. Based on the surmise described above, the present inventors conducted intensive studies. As a result, the inventors newly found that, regarding surface properties of the surface of the magnetic layer of the magnetic tape, by making a surface roughness of the edge portion rougher than a surface roughness of the central portion, specifically, by setting each of the edge portion Ra, the central portion Ra, and the Ra ratio (central portion Ra/edge portion Ra) to be within the ranges described above, it is possible to improve the running stability in a case of performing the recording and/or reproducing of data at different head tilt angles in the high temperature and low humidity environment. In the following, the running stability in a case of performing the recording and/or reproducing of data by changing the head tilt angle during the running of the magnetic tape in the high temperature and low humidity environment is also simply referred to as "running stability". In addition, the high temperature and low humidity environment can be, for example, an environment having a temperature of approximately 30° C. to 50° C. A humidity of the environment can be, for example, approximately 0% to 30% as a relative humidity. The temperatures and the humidity described for the environment in the specification are an atmosphere temperature and a relative humidity of such an environment. Edge Portion Ra, Central Portion Ra, Ra Ratio (Central Portion Ra/Edge Portion Ra) In the present invention and the present specification, the arithmetic average roughness Ra is measured by a noncontact optical surface roughness meter. The measurement conditions and data processing conditions are as follows. As the noncontact optical surface roughness meter, for example, Bruker's noncontact optical surface roughness meter Contour can be used, and in examples which will be described later, this noncontact optical surface roughness meter was used.
Measurement Conditions
Measurement environment: Temperature of 23° C. and relative humidity of 50%
Measurement mode: Phase Shift Interferometry (PSI)
Objective lens: 10×
Intermediate lens: 1.0×
Visual field for measurement: 355 μm×474 μm
Data Processing Conditions
Distortion/tilt correction: Cylinder and Tilt (Zero Level: Zero Mean)
Filter: Gaussian
Band Pass: Order=0
Type=Regular
High Pass Filter=1.11 μm
Low Pass Filter=50 μm
In the present invention and the present specification, the edge portion Ra which is the arithmetic average roughness Ra measured at the edge portion of the surface of the magnetic layer is a value obtained by the following method. 
Both ends of the magnetic tape in the width direction are called edges. At an arbitrary position on the surface of the magnetic layer of the magnetic tape to be measured, one edge is included in the visual field for measurement, and measurement is performed with the noncontact optical surface roughness meter under the measurement conditions described above. After data processing is performed on the obtained measurement results under the data processing conditions described above, a range of 200 μm in width×200 μm in length (that is, distance of the magnetic tape in the longitudinal direction) is designated at an arbitrary position in the visual field for measurement, in a region having a width of 200 μm from a “position of an inner side of 50 μm from the edge” to a “position of an inner side of 200 μm further from the position of the inner side of 50 μm from the edge”, and the Ra of the designated range is obtained. An analysis unit provided in the noncontact optical surface roughness meter can calculate and output the Ra. Then, the other edge is included in the visual field for measurement and the Ra is obtained by the method described above. For each of the one edge side and the other edge side, the measurement is performed three times in total by shifting the measurement position by 1 mm or more. By the measurement described above, a total of 6 Ras are obtained. The arithmetic average of the Ras obtained accordingly is defined as the edge portion Ra. In the present invention and the present specification, the central portion Ra which is the arithmetic average roughness Ra measured at the central portion of the surface of the magnetic layer is a value obtained by the following method. On the surface of the magnetic layer, for one edge randomly selected from both edges of the magnetic tape, a range of 200 μm in width×200 μm in length is designated at an arbitrary position in the visual field for measurement, in a region having a width of 6 mm from a “position of an inner side of 3 mm from the edge” to a “position of an inner side of 6 mm further from the position of the inner side of 3 mm from the edge”, and the Ra of the designated range is obtained. For the edge side selected above, the measurement is performed six times in total by shifting the measurement position by 1 mm or more. By the measurement described above, a total of 6 Ras are obtained. The arithmetic average of the Ras obtained accordingly is defined as the central portion Ra. The Ra ratio (central portion Ra/edge portion Ra) is calculated from the edge portion Ra and the central portion Ra obtained by the method described above. In the magnetic tape described above, the Ra ratio (central portion Ra/edge portion Ra) is 0.75 or more, preferably 0.77 or more, and more preferably 0.80 or more, from a viewpoint of improving the running stability in a case of performing the recording and/or reproducing of data at different head tilt angles in the high temperature and low humidity environment. In addition, from the viewpoint described above, the Ra ratio (central portion Ra/edge portion Ra) is 0.95 or less, preferably 0.93 or less, and more preferably 0.90 or less. In the magnetic tape described above, the edge portion Ra is 1.50 nm or less, preferably 1.30 nm or less, more preferably 1.00 nm or less, even more preferably 0.95 nm or less, and still preferably 0.90 nm or less, from a viewpoint of improving the running stability. 
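As a simple numerical illustration of how the edge portion Ra, the central portion Ra, and the Ra ratio described above are obtained from the individual measurements, a short sketch follows. The twelve Ra readings are hypothetical placeholders standing in for values output by the noncontact optical surface roughness meter.

```python
# Sketch of the edge portion Ra / central portion Ra / Ra ratio calculation
# described above. The twelve Ra readings below are hypothetical placeholders.
edge_ra_readings_nm = [0.84, 0.88, 0.86, 0.90, 0.87, 0.85]      # 3 per edge side, both edge sides
central_ra_readings_nm = [0.70, 0.72, 0.71, 0.69, 0.73, 0.70]   # 6 on one randomly selected edge side

edge_portion_ra = sum(edge_ra_readings_nm) / len(edge_ra_readings_nm)
central_portion_ra = sum(central_ra_readings_nm) / len(central_ra_readings_nm)
ra_ratio = central_portion_ra / edge_portion_ra

# Ranges stated above for the magnetic tape described in this specification.
assert edge_portion_ra <= 1.50
assert 0.75 <= ra_ratio <= 0.95
print(f"edge Ra = {edge_portion_ra:.2f} nm, "
      f"central Ra = {central_portion_ra:.2f} nm, "
      f"Ra ratio = {ra_ratio:.2f}")
```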
The edge portion Ra can be, for example, 0.10 nm or more, 0.20 nm or more, or 0.30 nm or more, or can be less than the value exemplified here. In the magnetic tape, the central portion Ra is 1.30 nm or less, preferably 1.20 nm or less, more preferably 1.10 nm or less, even more preferably 1.00 nm or less, still preferably 0.90 nm or less, and still more preferably 0.80 nm or less, from a viewpoint of improving the running stability. In addition, from the viewpoint described above, the central portion Ra is 0.30 nm or more, preferably 0.40 nm or more, and more preferably 0.50 nm or more. A control method of the Ra ratio, the edge portion Ra, and the central portion Ra will be described later. Standard Deviation of Curvature Next, a standard deviation of a curvature will be described. The curvature of the magnetic tape in the longitudinal direction of the present invention and the present specification is a value obtained by the following method in an environment of an atmosphere temperature of 23° C. and a relative humidity of 50%. The magnetic tape is normally accommodated and circulated in a magnetic tape cartridge. As the magnetic tape to be measured, a magnetic tape taken out from an unused magnetic tape cartridge that is not attached to the magnetic tape device is used. FIG.4is an explanatory diagram of the curvature of the magnetic tape in the longitudinal direction. A tape sample having a length of 100 m in the longitudinal direction is cut out from a randomly selected portion of the magnetic tape to be measured. One end of this tape sample is defined as a position of 0 m, and a position spaced apart from this one end toward the other end by D m (D meters) in the longitudinal direction is defined as a position of D m. Accordingly, a position spaced apart by 10 m in the longitudinal direction is defined as a position of 10 m, a position spaced apart by 20 m is defined as a position of 20 m, and in this manner, a position of 30 m, a position of 40 m, a position of 50 m, a position of 60 m, a position of 70 m, a position of 80 m, a position of 90 m, and a position of 100 m are defined at intervals of 10 m sequentially. A tape sample having a length of 1 m from the 0 m position to the position of 1 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 0 m. A tape sample having a length of 1 m from the 10 m position to the position of 11 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 10 m. A tape sample having a length of 1 m from the 20 m position to the position of 21 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 20 m. A tape sample having a length of 1 m from the 30 m position to the position of 31 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 30 m. A tape sample having a length of 1 m from the 40 m position to the position of 41 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 40 m. A tape sample having a length of 1 m from the 50 m position to the position of 51 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 50 m. A tape sample having a length of 1 m from the 60 m position to the position of 61 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 60 m. 
A tape sample having a length of 1 m from the 70 m position to the position of 71 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 70 m. A tape sample having a length of 1 m from the 80 m position to the position of 81 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 80 m. A tape sample having a length of 1 m from the 90 m position to the position of 91 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 90 m. A tape sample having a length of 1 m from the 99 m position to the position of 100 m is cut out. This tape sample is used as a tape sample for measuring the curvature at the position of 100 m.

The tape sample of each position is hung for 24 hours±4 hours in a tension-free state by gripping an upper end portion with a gripping member (clip or the like) and setting the longitudinal direction as the vertical direction. Then, within 1 hour, the following measurement is performed. As shown in FIG. 4, the tape sample is placed on a flat surface in a tension-free state. The tape sample may be placed on the flat surface with the surface on the magnetic layer side facing upward, or may be placed on the flat surface with the other surface facing upward. In FIG. 4, S indicates a tape sample and W indicates the width direction of the tape sample. Using an optical microscope, a distance L1 (unit: mm) that is the shortest distance between a virtual line 54 connecting both terminal portions 52 and 53 of the tape sample S and a maximum curved portion 55 in the longitudinal direction of the tape sample S is measured. FIG. 4 shows an example in which the tape sample is curved upward on the paper surface. Even in a case where the tape sample is curved downward, the distance L1 (mm) is measured in the same manner. The distance L1 is displayed as a positive value regardless of which side is curved. In a case where no curve in the longitudinal direction is confirmed, L1 is set to 0 (zero) mm. By doing so, the standard deviation of the curvature L1 measured for a total of 11 positions from the position of 0 m to the position of 100 m (that is, the positive square root of the variance) is the standard deviation of the curvature of the magnetic tape to be measured in the longitudinal direction (unit: mm/m).

In the magnetic tape, the standard deviation of the curvature obtained by the method described above can be, for example, 7 mm/m or less or 6 mm/m or less, and from a viewpoint of further improving the running stability, it is preferably 5 mm/m or less, more preferably 4 mm/m or less, and even more preferably 3 mm/m or less. The standard deviation of the curvature of the magnetic tape can be, for example, 0 mm/m or more, more than 0 mm/m, 1 mm/m or more, or 2 mm/m or more. It is preferable that the value of the standard deviation of the curvature is small, from a viewpoint of further improving the running stability. The standard deviation of the curvature can be controlled by adjusting the manufacturing conditions of the manufacturing step of the magnetic tape. This point will be described later in detail.
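A brief sketch of the calculation just defined follows, assuming the eleven curvature values L1 (one per measurement position) are already available; the values below are hypothetical. The specification describes the standard deviation as the positive square root of the variance, which is taken here as the population standard deviation.

```python
# Sketch of the "standard deviation of the curvature" defined above: the
# population standard deviation of the curvature L1 measured at the 11
# positions (0 m, 10 m, ..., 100 m). The L1 values below (mm per 1 m sample)
# are hypothetical placeholders.
import statistics

l1_mm = [0.0, 1.5, 2.0, 1.0, 2.5, 1.5, 0.5, 2.0, 1.0, 1.5, 2.0]  # 11 positions
assert len(l1_mm) == 11

curvature_std_mm_per_m = statistics.pstdev(l1_mm)  # positive square root of the variance
print(f"standard deviation of curvature: {curvature_std_mm_per_m:.2f} mm/m")

# The text above treats, for example, 7 mm/m or less as an acceptable level,
# with smaller values preferable for running stability.
print("7 mm/m or less:", curvature_std_mm_per_m <= 7.0)
```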
Hereinafter, the magnetic tape will be described more specifically.

Magnetic Layer

Ferromagnetic Powder

As the ferromagnetic powder contained in the magnetic layer, a well-known ferromagnetic powder used in the magnetic layer of various magnetic recording media can be used as one kind or in combination of two or more kinds. It is preferable to use a ferromagnetic powder having a small average particle size as the ferromagnetic powder, from a viewpoint of improvement of a recording density. From this viewpoint, an average particle size of the ferromagnetic powder is preferably equal to or smaller than 50 nm, more preferably equal to or smaller than 45 nm, even more preferably equal to or smaller than 40 nm, further preferably equal to or smaller than 35 nm, further more preferably equal to or smaller than 30 nm, further even more preferably equal to or smaller than 25 nm, and still preferably equal to or smaller than 20 nm. Meanwhile, from a viewpoint of stability of magnetization, the average particle size of the ferromagnetic powder is preferably equal to or greater than 5 nm, more preferably equal to or greater than 8 nm, even more preferably equal to or greater than 10 nm, still preferably equal to or greater than 15 nm, and still more preferably equal to or greater than 20 nm.

Hexagonal Ferrite Powder

As a preferred specific example of the ferromagnetic powder, a hexagonal ferrite powder can be used. For details of the hexagonal ferrite powder, descriptions disclosed in paragraphs 0012 to 0030 of JP2011-225417A, paragraphs 0134 to 0136 of JP2011-216149A, paragraphs 0013 to 0030 of JP2012-204726A, and paragraphs 0029 to 0084 of JP2015-127985A can be referred to, for example. In the invention and the specification, the "hexagonal ferrite powder" is a ferromagnetic powder in which a hexagonal ferrite type crystal structure is detected as a main phase by X-ray diffraction analysis. The main phase is a structure to which a diffraction peak at the highest intensity in an X-ray diffraction spectrum obtained by the X-ray diffraction analysis belongs. For example, in a case where the diffraction peak at the highest intensity in the X-ray diffraction spectrum obtained by the X-ray diffraction analysis belongs to a hexagonal ferrite type crystal structure, it is determined that the hexagonal ferrite type crystal structure is detected as a main phase. In a case where only a single structure is detected by the X-ray diffraction analysis, this detected structure is set as a main phase. The hexagonal ferrite type crystal structure includes at least an iron atom, a divalent metal atom, and an oxygen atom as constituting atoms. A divalent metal atom is a metal atom which can become a divalent cation as an ion, and examples thereof include an alkali earth metal atom such as a strontium atom, a barium atom, or a calcium atom, and a lead atom. In the invention and the specification, the hexagonal strontium ferrite powder is a powder in which the main divalent metal atom included in this powder is a strontium atom, and the hexagonal barium ferrite powder is a powder in which the main divalent metal atom included in this powder is a barium atom. The main divalent metal atom is a divalent metal atom occupying the greatest content among the divalent metal atoms included in the powder based on atom %. However, the divalent metal atom described above does not include a rare earth atom. The "rare earth atom" of the invention and the specification is selected from the group consisting of a scandium atom (Sc), an yttrium atom (Y), and a lanthanoid atom.
The lanthanoid atom is selected from the group consisting of a lanthanum atom (La), a cerium atom (Ce), a praseodymium atom (Pr), a neodymium atom (Nd), a promethium atom (Pm), a samarium atom (Sm), a europium atom (Eu), a gadolinium atom (Gd), a terbium atom (Tb), a dysprosium atom (Dy), a holmium atom (Ho), an erbium atom (Er), a thulium atom (Tm), an ytterbium atom (Yb), and a lutetium atom (Lu).

Hereinafter, the hexagonal strontium ferrite powder which is one aspect of the hexagonal ferrite powder will be described more specifically. An activation volume of the hexagonal strontium ferrite powder is preferably in a range of 800 to 1,600 nm³. The atomized hexagonal strontium ferrite powder showing the activation volume in the range described above is suitable for manufacturing a magnetic tape exhibiting excellent electromagnetic conversion characteristics. The activation volume of the hexagonal strontium ferrite powder is preferably equal to or greater than 800 nm³, and can also be, for example, equal to or greater than 850 nm³. In addition, from a viewpoint of further improving the electromagnetic conversion characteristics, the activation volume of the hexagonal strontium ferrite powder is more preferably equal to or smaller than 1,500 nm³, even more preferably equal to or smaller than 1,400 nm³, still preferably equal to or smaller than 1,300 nm³, still more preferably equal to or smaller than 1,200 nm³, and still even more preferably equal to or smaller than 1,100 nm³. The same applies to the activation volume of the hexagonal barium ferrite powder.

The "activation volume" is a unit of magnetization reversal and an index showing a magnetic magnitude of the particles. Regarding the activation volume and the anisotropy constant Ku, which will be described later, disclosed in the invention and the specification, magnetic field sweep rates of a coercivity Hc measurement part at time points of 3 minutes and 30 minutes are measured by using an oscillation sample type magnetic-flux meter (measurement temperature: 23° C.±1° C.), and the activation volume and the anisotropy constant Ku are values acquired from the following relational expression of Hc and the activation volume V. A unit of the anisotropy constant Ku is 1 erg/cc=1.0×10⁻¹ J/m³.

Hc = (2Ku/Ms){1 − [(kT/KuV)ln(At/0.693)]^(1/2)}

[In the expression, Ku: anisotropy constant (unit: J/m³), Ms: saturation magnetization (unit: kA/m), k: Boltzmann's constant, T: absolute temperature (unit: K), V: activation volume (unit: cm³), A: spin precession frequency (unit: s⁻¹), and t: magnetic field reversal time (unit: s)]

The anisotropy constant Ku can be used as an index of reduction of thermal fluctuation, that is, improvement of thermal stability. The hexagonal strontium ferrite powder can preferably have Ku equal to or greater than 1.8×10⁵ J/m³, and more preferably have Ku equal to or greater than 2.0×10⁵ J/m³. In addition, Ku of the hexagonal strontium ferrite powder can be, for example, equal to or smaller than 2.5×10⁵ J/m³. However, a high Ku is preferable because it means high thermal stability, and thus, Ku is not limited to the exemplified value. The hexagonal strontium ferrite powder may or may not include the rare earth atom. In a case where the hexagonal strontium ferrite powder includes the rare earth atom, a content (bulk content) of the rare earth atom is preferably 0.5 to 5.0 atom % with respect to 100 atom % of the iron atom.
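Returning to the relational expression of Hc, Ku, and the activation volume V quoted above, the following sketch simply evaluates that expression in CGS units (consistent with the erg/cc and cm³ units mentioned) for the two measurement times of 3 minutes and 30 minutes. All numerical inputs, including the spin precession frequency A, are assumptions chosen only to illustrate the shape of the relationship and are not values from this specification.

```python
# Illustrative evaluation of the Hc-Ku-V relational expression quoted above,
# in CGS units (Ku in erg/cc, Ms in emu/cc, V in cm^3, Hc in Oe). All values
# below are assumptions for illustration only.
import math

K_BOLTZMANN_ERG_PER_K = 1.380649e-16  # erg/K
T_K = 296.0                           # roughly the 23 deg C measurement temperature

def hc_oe(ku_erg_per_cc, ms_emu_per_cc, v_cm3, a_per_s, t_s):
    """Hc = (2Ku/Ms) * {1 - [(kT/(Ku*V)) * ln(A*t/0.693)]^(1/2)}"""
    thermal_term = (K_BOLTZMANN_ERG_PER_K * T_K) / (ku_erg_per_cc * v_cm3)
    return (2.0 * ku_erg_per_cc / ms_emu_per_cc) * (
        1.0 - math.sqrt(thermal_term * math.log(a_per_s * t_s / 0.693))
    )

ku = 2.0e6             # erg/cc (equivalent to 2.0 x 10^5 J/m^3)
ms = 350.0             # emu/cc (hypothetical)
v = 900.0 * 1.0e-21    # cm^3  (900 nm^3; 1 nm^3 = 1e-21 cm^3)
a = 1.0e9              # s^-1  (assumed spin precession frequency)

for t in (180.0, 1800.0):  # the 3-minute and 30-minute measurement times
    print(f"t = {t:6.0f} s -> Hc ~ {hc_oe(ku, ms, v, a, t):.0f} Oe")
```

As expected from the expression, the longer measurement time gives the smaller Hc; fitting the pair (Ku, V) to the two measured Hc values is how the activation volume and the anisotropy constant are obtained in practice.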
In one embodiment, the hexagonal strontium ferrite powder including the rare earth atom can have a rare earth atom surface layer portion uneven distribution. The “rare earth atom surface layer portion uneven distribution” of the invention and the specification means that a content of rare earth atom with respect to 100 atom % of iron atom in a solution obtained by partially dissolving the hexagonal strontium ferrite powder with acid (hereinafter, referred to as a “rare earth atom surface layer portion content” or simply a “surface layer portion content” regarding the rare earth atom) and a content of rare earth atom with respect to 100 atom % of iron atom in a solution obtained by totally dissolving the hexagonal strontium ferrite powder with acid (hereinafter, referred to as a “rare earth atom bulk content” or simply a “bulk content” regarding the rare earth atom) satisfy a ratio of rare earth atom surface layer portion content/rare earth atom bulk content >1.0. The content of rare earth atom of the hexagonal strontium ferrite powder which will be described later is identical to the rare earth atom bulk content. With respect to this, the partial dissolving using acid is to dissolve the surface layer portion of particles configuring the hexagonal strontium ferrite powder, and accordingly, the content of rare earth atom in the solution obtained by the partial dissolving is the content of rare earth atom in the surface layer portion of the particles configuring the hexagonal strontium ferrite powder. The rare earth atom surface layer portion content satisfying a ratio of “rare earth atom surface layer portion content/rare earth atom bulk content >1.0” means that the rare earth atoms are unevenly distributed in the surface layer portion (that is, a larger amount of the rare earth atoms is present, compared to that inside), among the particles configuring the hexagonal strontium ferrite powder. The surface layer portion of the invention and the specification means a part of the region of the particles configuring the hexagonal strontium ferrite powder towards the inside from the surface. In a case where the hexagonal strontium ferrite powder includes the rare earth atom, a content (bulk content) of the rare earth atom is preferably in a range of 0.5 to 5.0 atom % with respect to 100 atom % of the iron atom. It is thought that the rare earth atom having the bulk content in the range described above and uneven distribution of the rare earth atom in the surface layer portion of the particles configuring the hexagonal strontium ferrite powder contribute to the prevention of a decrease in reproducing output during the repeated reproducing. It is surmised that this is because the rare earth atom having the bulk content in the range described above included in the hexagonal strontium ferrite powder and the uneven distribution of the rare earth atom in the surface layer portion of the particles configuring the hexagonal strontium ferrite powder can increase the anisotropy constant Ku. As the value of the anisotropy constant Ku is high, occurrence of a phenomenon called thermal fluctuation (that is, improvement of thermal stability) can be prevented. By preventing the occurrence of the thermal fluctuation, a decrease in reproducing output during the repeated reproducing can be prevented. 
It is surmised that the uneven distribution of the rare earth atom in the surface layer portion of the particles of the hexagonal strontium ferrite powder contributes to stabilization of a spin at an iron (Fe) site in a crystal lattice of the surface layer portion, thereby increasing the anisotropy constant Ku. In addition, it is surmised that the use of the hexagonal strontium ferrite powder having the rare earth atom surface layer portion uneven distribution as the ferromagnetic powder of the magnetic layer also contributes to the prevention of chipping of the surface of the magnetic layer due to the sliding with the magnetic head. That is, it is surmised that, the hexagonal strontium ferrite powder having the rare earth atom surface layer portion uneven distribution can also contribute to the improvement of running durability of the magnetic tape. It is surmised that this is because the uneven distribution of the rare earth atom on the surface of the particles configuring the hexagonal strontium ferrite powder contributes to improvement of an interaction between the surface of the particles and an organic substance (for example, binding agent and/or additive) included in the magnetic layer, thereby improving hardness of the magnetic layer. From a viewpoint of preventing reduction of the reproduction output in the repeated reproduction and/or a viewpoint of further improving running durability, the content of rare earth atom (bulk content) is more preferably in a range of 0.5 to 4.5 atom %, even more preferably in a range of 1.0 to 4.5 atom %, and still preferably in a range of 1.5 to 4.5 atom %. The bulk content is a content obtained by totally dissolving the hexagonal strontium ferrite powder. In the invention and the specification, the content of the atom is a bulk content obtained by totally dissolving the hexagonal strontium ferrite powder, unless otherwise noted. The hexagonal strontium ferrite powder including the rare earth atom may include only one kind of rare earth atom or may include two or more kinds of rare earth atom, as the rare earth atom. In a case where two or more kinds of rare earth atoms are included, the bulk content is obtained from the total of the two or more kinds of rare earth atoms. The same also applies to the other components of the invention and the specification. That is, for a given component, only one kind may be used or two or more kinds may be used, unless otherwise noted. In a case where two or more kinds are used, the content is a content of the total of the two or more kinds. In a case where the hexagonal strontium ferrite powder includes the rare earth atom, the rare earth atom included therein may be any one or more kinds of the rare earth atom. Examples of the rare earth atom preferable from a viewpoint of preventing reduction of the reproduction output during the repeated reproduction include a neodymium atom, a samarium atom, an yttrium atom, and a dysprosium atom, a neodymium atom, a samarium atom, an yttrium atom are more preferable, and a neodymium atom is even more preferable. In the hexagonal strontium ferrite powder having the rare earth atom surface layer portion uneven distribution, a degree of uneven distribution of the rare earth atom is not limited, as long as the rare earth atom is unevenly distributed in the surface layer portion of the particles configuring the hexagonal strontium ferrite powder. 
For example, regarding the hexagonal strontium ferrite powder having the rare earth atom surface layer portion uneven distribution, a ratio of the surface layer portion content of the rare earth atom obtained by partial dissolving performed under the dissolving conditions which will be described later and the bulk content of the rare earth atom obtained by total dissolving performed under the dissolving conditions which will be described later, “surface layer portion content/bulk content” is greater than 1.0 and can be equal to or greater than 1.5. The “surface layer portion content/bulk content” greater than 1.0 means that the rare earth atoms are unevenly distributed in the surface layer portion (that is, a larger amount of the rare earth atoms is present, compared to that inside), in the particles configuring the hexagonal strontium ferrite powder. A ratio of the surface layer portion content of the rare earth atom obtained by partial dissolving performed under the dissolving conditions which will be described later and the bulk content of the rare earth atom obtained by total dissolving performed under the dissolving conditions which will be described later, “surface layer portion content/bulk content” can be, for example, equal to or smaller than 10.0, equal to or smaller than 9.0, equal to or smaller than 8.0, equal to or smaller than 7.0, equal to or smaller than 6.0, equal to or smaller than 5.0, or equal to or smaller than 4.0. However, in the hexagonal strontium ferrite powder having the rare earth atom surface layer portion uneven distribution, the “surface layer portion content/bulk content” is not limited to the exemplified upper limit or the lower limit, as long as the rare earth atom is unevenly distributed in the surface layer portion of the particles configuring the hexagonal strontium ferrite powder. The partial dissolving and the total dissolving of the hexagonal strontium ferrite powder will be described below. Regarding the hexagonal strontium ferrite powder present as the powder, sample powder for the partial dissolving and the total dissolving are collected from powder of the same batch. Meanwhile, regarding the hexagonal strontium ferrite powder included in a magnetic layer of a magnetic tape, a part of the hexagonal strontium ferrite powder extracted from the magnetic layer is subjected to the partial dissolving and the other part is subjected to the total dissolving. The extraction of the hexagonal strontium ferrite powder from the magnetic layer can be performed by, for example, a method disclosed in a paragraph 0032 of JP2015-91747A. The partial dissolving means dissolving performed so that the hexagonal strontium ferrite powder remaining in the solution can be visually confirmed in a case of the completion of the dissolving. For example, by performing the partial dissolving, a region of the particles configuring the hexagonal strontium ferrite powder which is 10% to 20% by mass with respect to 100% by mass of a total of the particles can be dissolved. On the other hand, the total dissolving means dissolving performed until the hexagonal strontium ferrite powder remaining in the solution is not visually confirmed in a case of the completion of the dissolving. The partial dissolving and the measurement of the surface layer portion content are, for example, performed by the following method. 
However, dissolving conditions such as the amount of sample powder and the like described below are merely examples, and dissolving conditions capable of performing the partial dissolving and the total dissolving can be randomly used. A vessel (for example, beaker) containing 12 mg of sample powder and 10 mL of hydrochloric acid having a concentration of 1 mol/L is held on a hot plate at a set temperature of 70° C. for 1 hour. The obtained solution is filtered with a membrane filter having a hole diameter of 0.1 μm. The element analysis of the filtrate obtained as described above is performed by an inductively coupled plasma (ICP) analysis device. By doing so, the surface layer portion content of the rare earth atom with respect to 100 atom % of the iron atom can be obtained. In a case where a plurality of kinds of rare earth atoms are detected from the element analysis, a total content of the entirety of the rare earth atoms is the surface layer portion content. The same applies to the measurement of the bulk content. Meanwhile, the total dissolving and the measurement of the bulk content are, for example, performed by the following method. A vessel (for example, beaker) containing 12 mg of sample powder and 10 mL of hydrochloric acid having a concentration of 4 mol/L is held on a hot plate at a set temperature of 80° C. for 3 hours. After that, the process is performed in the same manner as in the partial dissolving and the measurement of the surface layer portion content, and the bulk content with respect to 100 atom % of the iron atom can be obtained. From a viewpoint of increasing reproducing output in a case of reproducing data recorded on a magnetic tape, it is desirable that the mass magnetization σs of ferromagnetic powder contained in the magnetic tape is high. In regards to this point, in hexagonal strontium ferrite powder which includes the rare earth atom but does not have the rare earth atom surface layer portion uneven distribution, σs tends to significantly decrease, compared to that in hexagonal strontium ferrite powder not including the rare earth atom. With respect to this, it is thought that, hexagonal strontium ferrite powder having the rare earth atom surface layer portion uneven distribution is also preferable for preventing such a significant decrease in σs. In one embodiment, σs of the hexagonal strontium ferrite powder can be equal to or greater than 45 A×m2/kg and can also be equal to or greater than 47 A×m2/kg. On the other hand, from a viewpoint of noise reduction, σs is preferably equal to or smaller than 80 A×m2/kg and more preferably equal to or smaller than 60 A×m2/kg. σs can be measured by using a well-known measurement device capable of measuring magnetic properties such as an oscillation sample type magnetic-flux meter. In the invention and the specification, the mass magnetization σs is a value measured at a magnetic field strength of 15 kOe, unless otherwise noted. 1 [kOe]=(106/4π) [A/m] Regarding the content (bulk content) of the constituting atom in the hexagonal strontium ferrite powder, a content of the strontium atom can be, for example, in a range of 2.0 to 15.0 atom % with respect to 100 atom % of the iron atom. In one embodiment, in the hexagonal strontium ferrite powder, the divalent metal atom included in this powder can be only a strontium atom. In another embodiment, the hexagonal strontium ferrite powder can also include one or more kinds of other divalent metal atoms, in addition to the strontium atom. 
For example, the hexagonal strontium ferrite powder can include a barium atom and/or a calcium atom. In a case where the other divalent metal atom other than the strontium atom is included, a content of a barium atom and a content of a calcium atom in the hexagonal strontium ferrite powder respectively can be, for example, in a range of 0.05 to 5.0 atom % with respect to 100 atom % of the iron atom. As the crystal structure of the hexagonal ferrite, a magnetoplumbite type (also referred to as an “M type”), a W type, a Y type, and a Z type are known. The hexagonal strontium ferrite powder may have any crystal structure. The crystal structure can be confirmed by X-ray diffraction analysis. In the hexagonal strontium ferrite powder, a single crystal structure or two or more kinds of crystal structure can be detected by the X-ray diffraction analysis. For example, In one embodiment, in the hexagonal strontium ferrite powder, only the M type crystal structure can be detected by the X-ray diffraction analysis. For example, the M type hexagonal ferrite is represented by a compositional formula of AFe12O19. Here, A represents a divalent metal atom, in a case where the hexagonal strontium ferrite powder has the M type, A is only a strontium atom (Sr), or in a case where a plurality of divalent metal atoms are included as A, the strontium atom (Sr) occupies the hexagonal strontium ferrite powder with the greatest content based on atom % as described above. A content of the divalent metal atom in the hexagonal strontium ferrite powder is generally determined according to the type of the crystal structure of the hexagonal ferrite and is not particularly limited. The same applies to a content of an iron atom and a content of an oxygen atom. The hexagonal strontium ferrite powder at least includes an iron atom, a strontium atom, and an oxygen atom, and can also include a rare earth atom. In addition, the hexagonal strontium ferrite powder may or may not include atoms other than these atoms. As an example, the hexagonal strontium ferrite powder may include an aluminum atom (Al). A content of the aluminum atom can be, for example, 0.5 to 10.0 atom % with respect to 100 atom % of the iron atom. From a viewpoint of preventing the reduction of the reproduction output during the repeated reproduction, the hexagonal strontium ferrite powder includes the iron atom, the strontium atom, the oxygen atom, and the rare earth atom, and a content of the atoms other than these atoms is preferably equal to or smaller than 10.0 atom %, more preferably in a range of 0 to 5.0 atom %, and may be 0 atom % with respect to 100 atom % of the iron atom. That is, In one embodiment, the hexagonal strontium ferrite powder may not include atoms other than the iron atom, the strontium atom, the oxygen atom, and the rare earth atom. The content shown with atom % described above is obtained by converting a value of the content (unit: % by mass) of each atom obtained by totally dissolving the hexagonal strontium ferrite powder into a value shown as atom % by using the atomic weight of each atom. In addition, in the invention and the specification, a given atom which is “not included” means that the content thereof obtained by performing total dissolving and measurement by using an ICP analysis device is 0% by mass. A detection limit of the ICP analysis device is generally equal to or smaller than 0.01 ppm (parts per million) based on mass. 
The expression “not included” is used as a meaning including that a given atom is included with the amount smaller than the detection limit of the ICP analysis device. In one embodiment, the hexagonal strontium ferrite powder does not include a bismuth atom (Bi). Metal Powder As a preferred specific example of the ferromagnetic powder, a ferromagnetic metal powder can also be used. For details of the ferromagnetic metal powder, descriptions disclosed in paragraphs 0137 to 0141 of JP2011-216149A and paragraphs 0009 to 0023 of JP2005-251351A can be referred to, for example. ε-Iron Oxide Powder As a preferred specific example of the ferromagnetic powder, an ε-iron oxide powder can also be used. In the invention and the specification, the “ε-iron oxide powder” is a ferromagnetic powder in which an ε-iron oxide type crystal structure is detected as a main phase by X-ray diffraction analysis. For example, in a case where the diffraction peak at the highest intensity in the X-ray diffraction spectrum obtained by the X-ray diffraction analysis belongs to an ε-iron oxide type crystal structure, it is determined that the ε-iron oxide type crystal structure is detected as a main phase. As a manufacturing method of the ε-iron oxide powder, a manufacturing method from a goethite, a reverse micelle method, and the like are known. All of the manufacturing methods are well known. For the method of manufacturing the ε-iron oxide powder in which a part of Fe is substituted with substitutional atoms such as Ga, Co, Ti, Al, or Rh, a description disclosed in J. Jpn. Soc. Powder Metallurgy Vol. 61 Supplement, No. 51, pp. S280-S284, J. Mater. Chem. C, 2013, 1, pp. 5200-5206 can be referred, for example. However, the manufacturing method of the ε-iron oxide powder capable of being used as the ferromagnetic powder in the magnetic layer of the magnetic tape is not limited to the method described here. An activation volume of the ε-iron oxide powder is preferably in a range of 300 to 1,500 nm3. The atomized ε-iron oxide powder showing the activation volume in the range described above is suitable for manufacturing a magnetic tape exhibiting excellent electromagnetic conversion characteristics. The activation volume of the ε-iron oxide powder is preferably equal to or greater than 300 nm3, and can also be, for example, equal to or greater than 500 nm3. In addition, from a viewpoint of further improving the electromagnetic conversion characteristics, the activation volume of the ε-iron oxide powder is more preferably equal to or smaller than 1,400 nm3, even more preferably equal to or smaller than 1,300 nm3, still preferably equal to or smaller than 1,200 nm3, and still more preferably equal to or smaller than 1,100 nm3. The anisotropy constant Ku can be used as an index of reduction of thermal fluctuation, that is, improvement of thermal stability. The ε-iron oxide powder can preferably have Ku equal to or greater than 3.0×104J/m3, and more preferably have Ku equal to or greater than 8.0×104J/m3. In addition, Ku of the ε-iron oxide powder can be, for example, equal to or smaller than 3.0×105J/m3. However, the high Ku is preferable, because it means high thermal stability, and thus, Ku is not limited to the exemplified value. From a viewpoint of increasing reproducing output in a case of reproducing data recorded on a magnetic tape, it is desirable that the mass magnetization σs of ferromagnetic powder contained in the magnetic tape is high. 
In regard to this point, in one embodiment, σs of the ε-iron oxide powder can be equal to or greater than 8 A×m2/kg and can also be equal to or greater than 12 A×m2/kg. On the other hand, from a viewpoint of noise reduction, σs of the ε-iron oxide powder is preferably equal to or smaller than 40 A×m2/kg and more preferably equal to or smaller than 35 A×m2/kg. In the invention and the specification, average particle sizes of various powder such as the ferromagnetic powder and the like are values measured by the following method with a transmission electron microscope, unless otherwise noted. The powder is imaged at an imaging magnification ratio of 100,000 with a transmission electron microscope, the image is printed on photographic printing paper or displayed on a display so that the total magnification ratio of 500,000 to obtain an image of particles configuring the powder. A target particle is selected from the obtained image of particles, an outline of the particle is traced with a digitizer, and a size of the particle (primary particle) is measured. The primary particle is an independent particle which is not aggregated. The measurement described above is performed regarding 500 particles arbitrarily extracted. An arithmetic average of the particle size of 500 particles obtained as described above is the average particle size of the powder. As the transmission electron microscope, a transmission electron microscope H-9000 manufactured by Hitachi, Ltd. can be used, for example. In addition, the measurement of the particle size can be performed by a well-known image analysis software, for example, image analysis software KS-400 manufactured by Carl Zeiss. The average particle size shown in examples which will be described later is a value measured by using transmission electron microscope H-9000 manufactured by Hitachi, Ltd. as the transmission electron microscope, and image analysis software KS-400 manufactured by Carl Zeiss as the image analysis software, unless otherwise noted. In the invention and the specification, the powder means an aggregate of a plurality of particles. For example, the ferromagnetic powder means an aggregate of a plurality of ferromagnetic particles. The aggregate of the plurality of particles not only includes an embodiment in which particles configuring the aggregate are directly in contact with each other, but also includes an embodiment in which a binding agent or an additive which will be described later is interposed between the particles. A term, particles may be used for representing the powder. As a method for collecting a sample powder from the magnetic tape in order to measure the particle size, a method disclosed in a paragraph of 0015 of JP2011-048878A can be used, for example. 
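As a minimal illustration of the averaging described above, the sketch below takes 500 particle sizes and forms their arithmetic average as the average particle size; the randomly generated sizes are hypothetical stand-ins for values traced from the transmission electron microscope image with a digitizer.

```python
# Minimal sketch of the average particle size calculation described above:
# the arithmetic average of the sizes of 500 arbitrarily extracted particles.
# The randomly generated sizes below are hypothetical placeholders.
import random

random.seed(0)
particle_sizes_nm = [random.gauss(20.0, 2.5) for _ in range(500)]  # hypothetical traced sizes

average_particle_size_nm = sum(particle_sizes_nm) / len(particle_sizes_nm)
print(f"average particle size: {average_particle_size_nm:.1f} nm")
```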
In the invention and the specification, unless otherwise noted,(1) in a case where the shape of the particle observed in the particle image described above is a needle shape, a fusiform shape, or a columnar shape (here, a height is greater than a maximum long diameter of a bottom surface), the size (particle size) of the particles configuring the powder is shown as a length of a major axis configuring the particle, that is, a major axis length,(2) in a case where the shape of the particle is a planar shape or a columnar shape (here, a thickness or a height is smaller than a maximum long diameter of a plate surface or a bottom surface), the particle size is shown as a maximum long diameter of the plate surface or the bottom surface, and(3) in a case where the shape of the particle is a sphere shape, a polyhedron shape, or an unspecified shape, and the major axis configuring the particles cannot be specified from the shape, the particle size is shown as an equivalent circle diameter. The equivalent circle diameter is a value obtained by a circle projection method. In addition, regarding an average acicular ratio of the powder, a length of a minor axis, that is, a minor axis length of the particles is measured in the measurement described above, a value of (major axis length/minor axis length) of each particle is obtained, and an arithmetic average of the values obtained regarding 500 particles is calculated. Here, unless otherwise noted, in a case of (1), the minor axis length as the definition of the particle size is a length of a minor axis configuring the particle, in a case of (2), the minor axis length is a thickness or a height, and in a case of (3), the major axis and the minor axis are not distinguished, thus, the value of (major axis length/minor axis length) is assumed as 1, for convenience. In addition, unless otherwise noted, in a case where the shape of the particle is specified, for example, in a case of definition of the particle size (1), the average particle size is an average major axis length, and in a case of the definition (2), the average particle size is an average plate diameter. In a case of the definition (3), the average particle size is an average diameter (also referred to as an average particle diameter). The content (filling percentage) of the ferromagnetic powder of the magnetic layer is preferably in a range of 50% to 90% by mass and more preferably in a range of 60% to 90% by mass with respect to a total mass of the magnetic layer. A high filling percentage of the ferromagnetic powder in the magnetic layer is preferable from a viewpoint of improvement of recording density. Binding Agent The magnetic tape may be a coating type magnetic tape, and can include a binding agent in the magnetic layer. The binding agent is one or more kinds of resin. As the binding agent, various resins normally used as a binding agent of a coating type magnetic recording medium can be used. As the binding agent, a resin selected from a polyurethane resin, a polyester resin, a polyamide resin, a vinyl chloride resin, an acrylic resin obtained by copolymerizing styrene, acrylonitrile, or methyl methacrylate, a cellulose resin such as nitrocellulose, an epoxy resin, a phenoxy resin, and a polyvinylalkylal resin such as polyvinyl acetal or polyvinyl butyral can be used alone or a plurality of resins can be mixed with each other to be used. Among these, a polyurethane resin, an acrylic resin, a cellulose resin, and a vinyl chloride resin are preferable. 
These resins may be a homopolymer or a copolymer. These resins can be used as the binding agent even in the non-magnetic layer and/or a back coating layer which will be described later. For the binding agent described above, descriptions disclosed in paragraphs 0028 to 0031 of JP2010-24113A can be referred to. In addition, the binding agent may be a radiation curable resin such as an electron beam curable resin. For the radiation curable resin, paragraphs 0044 and 0045 of JP2011-048878A can be referred to. An average molecular weight of the resin used as the binding agent can be, for example, 10,000 to 200,000 as a weight-average molecular weight. The weight-average molecular weight of the invention and the specification is a value obtained by performing polystyrene conversion of a value measured by gel permeation chromatography (GPC) under the following measurement conditions. The weight-average molecular weight of the binding agent shown in examples which will be described later is a value obtained by performing polystyrene conversion of a value measured under the following measurement conditions. The amount of the binding agent used can be, for example, 1.0 to 30.0 parts by mass with respect to 100.0 parts by mass of the ferromagnetic powder. GPC device: HLC-8120 (manufactured by Tosoh Corporation) Column: TSK gel Multipore HXL-M (manufactured by Tosoh Corporation, 7.8 mmID (inner diameter)×30.0 cm) Eluent: Tetrahydrofuran (THF) Curing Agent A curing agent can also be used together with the binding agent. As the curing agent, In one embodiment, a thermosetting compound which is a compound in which a curing reaction (crosslinking reaction) proceeds due to heating can be used, and in another embodiment, a photocurable compound in which a curing reaction (crosslinking reaction) proceeds due to light irradiation can be used. At least a part of the curing agent is included in the magnetic layer in a state of being reacted (crosslinked) with other components such as the binding agent, by proceeding the curing reaction in the manufacturing step of the magnetic tape. The preferred curing agent is a thermosetting compound, and polyisocyanate is suitable. For the details of polyisocyanate, descriptions disclosed in paragraphs 0124 and 0125 of JP2011-216149A can be referred to. The amount of the curing agent can be, for example, 0 to 80.0 parts by mass with respect to 100.0 parts by mass of the binding agent in the magnetic layer forming composition, and is preferably 50.0 to 80.0 parts by mass, from a viewpoint of improvement of hardness of each layer such as the magnetic layer. Additives The magnetic layer may include one or more kinds of additives, as necessary. As the additives, the curing agent described above is used as an example. In addition, examples of the additive included in the magnetic layer include a non-magnetic powder (for example, inorganic powder, carbon black, or the like), a lubricant, a dispersing agent, a dispersing assistant, a fungicide, an antistatic agent, and an antioxidant. For the lubricant, a description disclosed in paragraphs 0030 to 0033, 0035, and 0036 of JP2016-126817A can be referred to. The lubricant may be included in the non-magnetic layer which will be described later. For the lubricant which can be included in the non-magnetic layer, a description disclosed in paragraphs 0030, 0031, 0034 to 0036 of JP2016-126817A can be referred to. For the dispersing agent, a description disclosed in paragraphs 0061 and 0071 of JP2012-133837A can be referred to. 
The dispersing agent may be added to a non-magnetic layer forming composition. For the dispersing agent which can be added to the non-magnetic layer forming composition, a description disclosed in paragraph 0061 of JP2012-133837A can be referred to. As the non-magnetic powder which may be contained in the magnetic layer, non-magnetic powder which can function as an abrasive, non-magnetic powder (for example, non-magnetic colloid particles) which can function as a projection formation agent which forms projections suitably protruded from the surface of the magnetic layer, and the like can be used. An average particle size of colloidal silica (silica colloid particles) shown in the examples which will be described later is a value obtained by a method disclosed in a measurement method of an average particle diameter in a paragraph 0015 of JP2011-048878A. As the additives, a commercially available product can be suitably selected according to the desired properties or manufactured by a well-known method, and can be used with any amount. As an example of the additive which can be used for improving dispersibility of the abrasive in the magnetic layer including the abrasive, a dispersing agent disclosed in paragraphs 0012 to 0022 of JP2013-131285A can be used. The magnetic layer described above can be provided on the surface of the non-magnetic support directly or indirectly through the non-magnetic layer. Non-Magnetic Layer Next, the non-magnetic layer will be described. The magnetic tape may include a magnetic layer directly on the non-magnetic support or may include a non-magnetic layer containing the non-magnetic powder between the non-magnetic support and the magnetic layer. The non-magnetic powder used for the non-magnetic layer may be a powder of an inorganic substance (inorganic powder) or a powder of an organic substance (organic powder). In addition, carbon black and the like can be used. Examples of the inorganic substance include metal, metal oxide, metal carbonate, metal sulfate, metal nitride, metal carbide, and metal sulfide. The non-magnetic powder can be purchased as a commercially available product or can be manufactured by a well-known method. For details thereof, descriptions disclosed in paragraphs 0146 to 0150 of JP2011-216149A can be referred to. For carbon black capable of being used in the non-magnetic layer, a description disclosed in paragraphs 0040 and 0041 of JP2010-24113A can be referred to. The content (filling percentage) of the non-magnetic powder of the non-magnetic layer is preferably in a range of 50% to 90% by mass and more preferably in a range of 60% to 90% by mass with respect to a total mass of the non-magnetic layer. In one embodiment, the non-magnetic layer can contain a Fe-based inorganic oxide powder as the non-magnetic powder. In the invention and the specification, the “Fe-based inorganic oxide powder” refers to an inorganic oxide powder containing iron as a constituent element. Specific examples of the Fe-based inorganic oxide powder can include an α-iron oxide powder and a goethite powder. In the invention and the specification, the “α-iron oxide powder” is a non-magnetic powder in which an α-iron oxide type crystal structure is detected as a main phase by X-ray diffraction analysis. The α-iron oxide powder is also generally called hematite or the like. 
According to the studies of the present inventors regarding the non-magnetic layer, it is found that the non-magnetic layer containing the Fe-based inorganic oxide powder having an average particle volume of 2.0×10⁻⁶ μm³ or less tends to have high hardness. The present inventors consider that this point is preferable for stably performing a burnishing process which will be described later. The present inventors surmise that, in a case where the burnishing process is stably performed, the edge portion Ra and the central portion Ra can be easily controlled, and as a result, the Ra ratio can be easily controlled. From this point, the average particle volume of the Fe-based inorganic oxide powder contained in the non-magnetic layer is preferably 2.0×10⁻⁶ μm³ or less, more preferably 1.5×10⁻⁶ μm³ or less, and even more preferably 1.0×10⁻⁶ μm³ or less. The average particle volume can be, for example, 1.0×10⁻⁹ μm³ or more or 1.0×10⁻⁸ μm³ or more, or can be less than the value exemplified here.

In the invention and the specification, the average particle volume is a value obtained by the following method. In order to observe the Fe-based inorganic oxide powder contained in the non-magnetic layer of the magnetic tape, first, as a sample pretreatment, flaking is performed by a microtome method. The flaking is performed so that a flaky sample capable of observing a cross section of the magnetic tape in the thickness direction is obtained along the longitudinal direction of the magnetic tape. In the examples which will be described later, Leica EM UC6 manufactured by Leica was used as a microtome in order to obtain the average particle volume of the Fe-based inorganic oxide powder. For the obtained flaky sample, a cross section observation is performed so as to include a range from the non-magnetic support to the magnetic layer, using a transmission electron microscope (TEM) at an acceleration voltage of 300 kV and a magnification of 200,000 times, and a cross-sectional TEM image is obtained. As the transmission electron microscope, for example, JEM-2100Plus manufactured by JEOL Ltd. can be used. For the examples which will be described later, JEM-2100Plus manufactured by JEOL Ltd. was used as a transmission electron microscope, in order to obtain the average particle volume of the Fe-based inorganic oxide powder.

In the obtained cross-sectional TEM image, 50 particles of Fe-based inorganic oxide powder are specified from the particles contained in the non-magnetic layer by using a micro electron beam diffraction method. Electron beam diffraction by the micro electron beam diffraction method is performed using a transmission electron microscope at an acceleration voltage of 200 kV and a camera length of 50 cm. For the examples which will be described later, JEM-2100Plus manufactured by JEOL Ltd. was used as the transmission electron microscope for the electron beam diffraction by the micro electron beam diffraction method. Then, using the 50 particles of the Fe-based inorganic oxide powder specified as described above, the average particle volume is obtained as follows. First, a major axis length (hereinafter referred to as "DL") and a minor axis length (hereinafter referred to as "DS") of each particle are measured. The major axis length DL means a maximum distance among distances between two parallel lines drawn from all angles so as to be in contact with a contour of the particle (a so-called maximum Feret's diameter).
In a case where a direction of the major axis length defined as described above is called a major axis direction, the minor axis length DS means a maximum length among lengths of the particle in a direction orthogonal to the major axis direction of the particle. Next, an average major axis length DLave is obtained as an arithmetic average of the major axis lengths DL of the 50 measured particles. ave is an abbreviation for average. In addition, an average minor axis length DSave is obtained as an arithmetic average of the minor axis lengths DS of the 50 particles. From the average major axis length DLave and the average minor axis length DSave, an average volume Vave of the particles is obtained by the following equation.

Vave = π/6 × DSave² × DLave
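The sketch below applies the equation above to 50 hypothetical (DL, DS) pairs, first averaging the major and minor axis lengths and then forming Vave; the lengths and the check against the 2.0×10⁻⁶ μm³ guideline are for illustration only.

```python
# Sketch of the average particle volume calculation described above:
# Vave = pi/6 x DSave^2 x DLave, from 50 (DL, DS) pairs measured on the
# cross-sectional TEM image. The lengths below (micrometers) are hypothetical.
import math

dl_um = [0.020] * 25 + [0.016] * 25   # major axis lengths DL of 50 particles (hypothetical)
ds_um = [0.010] * 25 + [0.008] * 25   # minor axis lengths DS of 50 particles (hypothetical)
assert len(dl_um) == len(ds_um) == 50

dl_ave = sum(dl_um) / 50
ds_ave = sum(ds_um) / 50
v_ave_um3 = math.pi / 6.0 * ds_ave ** 2 * dl_ave

print(f"DLave = {dl_ave:.3f} um, DSave = {ds_ave:.3f} um, Vave = {v_ave_um3:.2e} um^3")
# Guideline from the text above: 2.0e-6 um^3 or less is preferable.
print("within preferable range:", v_ave_um3 <= 2.0e-6)
```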
Here, the substantially non-magnetic layer is a layer having a residual magnetic flux density equal to or smaller than 10 mT, a layer having coercivity equal to or smaller than 7.96 kA/m (100 Oe), or a layer having a residual magnetic flux density equal to or smaller than 10 mT and coercivity equal to or smaller than 7.96 kA/m (100 Oe). It is preferable that the non-magnetic layer does not have a residual magnetic flux density and coercivity. Non-Magnetic Support Next, the non-magnetic support will be described. As the non-magnetic support (hereinafter, also simply referred to as a “support”), well-known components such as polyethylene terephthalate, polyethylene naphthalate, polyamide, polyamide imide, aromatic polyamide subjected to biaxial stretching are used. Among these, polyethylene terephthalate, polyethylene naphthalate, and polyamide are preferable. Corona discharge, plasma treatment, easy-bonding treatment, or heat treatment may be performed with respect to these supports in advance. Back Coating Layer The tape may or may not include a back coating layer including a non-magnetic powder on a surface side of the non-magnetic support opposite to the surface side provided with the magnetic layer. The back coating layer preferably includes any one or both of carbon black and inorganic powder. The back coating layer can include a binding agent and can also include additives. For the details of the non-magnetic powder, the binding agent included in the back coating layer and various additives, a well-known technology regarding the back coating layer can be applied, and a well-known technology regarding the magnetic layer and/or the non-magnetic layer can also be applied. For example, for the back coating layer, descriptions disclosed in paragraphs 0018 to 0020 of JP2006-331625A and page 4, line 65, to page 5, line 38, of U.S. Pat. No. 7,029,774B can be referred to. Various Thicknesses Regarding a thickness (total thickness) of the magnetic tape, it has been required to increase recording capacity (increase in capacity) of the magnetic tape along with the enormous increase in amount of information in recent years. As a unit for increasing the capacity, a thickness of the magnetic tape is reduced and a length of the magnetic tape accommodated in one reel of the magnetic tape cartridge is increased. From this point, the thickness (total thickness) of the magnetic tape is preferably 5.6 μm or less, more preferably 5.5 μm or less, even more preferably 5.4 μm or less, still preferably 5.3 μm or less, and still more preferably 5.2 μm or less. In addition, from a viewpoint of ease of handling, the thickness of the magnetic tape is preferably 3.0 μm or more and more preferably 3.5 μm or more. The thickness (total thickness) of the magnetic tape can be measured by the following method. Ten tape samples (for example, length of 5 to 10 cm) are cut out from a random portion of the magnetic tape, these tape samples are overlapped, and the thickness is measured. A value which is one tenth of the measured thickness (thickness per one tape sample) is set as the tape thickness. The thickness measurement can be performed using a well-known measurement device capable of performing the thickness measurement at 0.1 μm order. A thickness of the non-magnetic support is preferably 3.0 to 5.0 μm. 
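As a minimal arithmetic sketch of the total thickness measurement described above, the thickness per tape sample is one tenth of the value measured for the ten overlapped samples; the stack value below is a placeholder, not a measured result.

    # Ten tape samples are overlapped and measured together at 0.1 um order;
    # the tape thickness is the measured value divided by ten.
    measured_stack_thickness_um = 52.0  # placeholder measurement for 10 overlapped samples
    tape_thickness_um = measured_stack_thickness_um / 10.0
    print(f"tape thickness = {tape_thickness_um:.1f} um")  # 5.2 um, within the preferable 5.6 um or less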
A thickness of the magnetic layer can be optimized according to a saturation magnetization amount of a magnetic head used, a head gap length, a recording signal band, and the like, is normally 0.01 μm to 0.15 μm, and is preferably 0.02 μm to 0.12 μm and more preferably 0.03 μm to 0.1 μm, from a viewpoint of realization of high-density recording. The magnetic layer may be at least single layer, the magnetic layer may be separated into two or more layers having different magnetic properties, and a configuration of a well-known multilayered magnetic layer can be applied. A thickness of the magnetic layer in a case where the magnetic layer is separated into two or more layers is the total thickness of the layers. A thickness of the non-magnetic layer is, for example, 0.1 to 1.5 μm and is preferably 0.1 to 1.0 μm. A thickness of the back coating layer is preferably equal to or smaller than 0.9 μm and more preferably 0.1 to 0.7 μm. Various thicknesses such as the thickness of the magnetic layer and the like can be obtained by the following method. A cross section of the magnetic tape in the thickness direction is exposed with an ion beam and the cross section observation of the exposed cross section is performed using a scanning electron microscope or a transmission electron microscope. Various thicknesses can be obtained as the arithmetic average of the thicknesses obtained at two random portions in the cross section observation. Alternatively, various thicknesses can be obtained as a designed thickness calculated under the manufacturing conditions. Manufacturing Method Preparation of Each Layer Forming Composition A step of preparing a composition for forming the magnetic layer, the non-magnetic layer or the back coating layer can generally include at least a kneading step, a dispersing step, and a mixing step provided before and after these steps, in a case where necessary. Each step may be divided into two or more stages. The component used in the preparation of each layer forming composition may be added at an initial stage or in a middle stage of each step. As the solvent, one kind or two or more kinds of various kinds of solvents usually used for producing a coating type magnetic recording medium can be used. For the solvent, a description disclosed in a paragraph 0153 of JP2011-216149A can be referred to, for example. In addition, each component may be separately added in two or more steps. For example, a binding agent may be separately added in a kneading step, a dispersing step, and a mixing step for adjusting viscosity after the dispersion. In order to manufacture the magnetic tape, a well-known manufacturing technology can be used in various steps. In the kneading step, an open kneader, a continuous kneader, a pressure kneader, or a kneader having a strong kneading force such as an extruder is preferably used. For details of the kneading processes, descriptions disclosed in JP1989-106338A (JP-H01-106338A) and JP1989-79274A (JP-H01-79274A) can be referred to. As a disperser, a well-known dispersion device can be used. The filtering may be performed by a well-known method in any stage for preparing each layer forming composition. The filtering can be performed by using a filter, for example. As the filter used in the filtering, a filter having a hole diameter of 0.01 to 3 μm (for example, filter made of glass fiber or filter made of polypropylene) can be used, for example. 
Coating Step The magnetic layer can be formed, for example, by directly applying the magnetic layer forming composition onto the non-magnetic support or performing multilayer coating of the magnetic layer forming composition with the non-magnetic layer forming composition in order or at the same time. In a case of performing an alignment process, the alignment process is performed with respect to the coating layer in an alignment zone while the coating layer of the magnetic layer forming composition is wet. For the alignment process, various technologies disclosed in a paragraph 0052 of JP2010-24113A can be applied. For example, a homeotropic alignment process can be performed by a well-known method such as a method using a different pole facing magnet. In the alignment zone, a drying speed of the coating layer can be controlled by a temperature and an air flow of the dry air and/or a transporting rate in the alignment zone. In addition, the coating layer may be preliminarily dried before transporting to the alignment zone. The back coating layer can be formed by applying a back coating layer forming composition onto a side of the non-magnetic support opposite to the side provided with the magnetic layer (or to be provided with the magnetic layer). For details of the coating for forming each layer, a description disclosed in a paragraph 0066 of JP2010-231843A can be referred to. Other Steps After performing the coating step described above, a calender process can usually be performed in order to improve surface smoothness of the magnetic tape. For calender conditions, a calender pressure is, for example, 200 to 500 kN/m and preferably 250 to 350 kN/m, a calender temperature is, for example, 70° C. to 120° C. and preferably 80° C. to 100° C., and a calender speed is, for example, 50 to 300 m/min and preferably 80 to 200 m/min. In addition, as a roll having a hard surface is used as a calender roll, or as the number of stages is increased, the surface of the magnetic layer tends to be smoother. For various other steps for manufacturing a magnetic tape, a description disclosed in paragraphs 0067 to 0070 of JP2010-231843A can be referred to. Through various steps, a long magnetic tape raw material can be obtained. The obtained magnetic tape raw material is, for example, cut (slit) by a well-known cutter to have a width of a magnetic tape to be accommodated in the magnetic tape cartridge. The width can be determined according to the standard and is normally ½ inch (1 inch = 2.54 cm). Burnishing Process The burnishing process is a process of rubbing a surface of a process target with a member (for example, an abrasive tape or a grinding tool such as a blade for grinding or a wheel for grinding). The burnishing process can be preferably performed by performing one or both of rubbing (polishing) of a surface of a coating layer which is a process target with an abrasive tape, and rubbing (grinding) of a surface of a coating layer which is a process target with a grinding tool. As the abrasive tape, a commercially available product may be used or an abrasive tape produced by a well-known method may be used. In addition, as the grinding tool, a well-known blade for grinding such as a fixed type blade, a diamond wheel, or a rotary blade, or a wheel for grinding can be used. Further, a wiping process of wiping the surface of the coating layer rubbed with the abrasive tape and/or the grinding tool with a wiping material may be performed.
For details of the preferable abrasive tape, grinding tool, burnishing process, and wiping process, paragraphs 0034 to 0048, FIG. 1, and examples of JP1994-052544A (JP-H06-052544A) can be referred to. The roughness of the surface to be treated can be controlled by the burnishing process, and as the burnishing process conditions are intensified, the surface to be processed tends to become smoother. Examples of the burnishing process conditions include a tension applied in the longitudinal direction of the magnetic tape during the burnishing process (hereinafter, referred to as a "burnishing process tension"). The larger the value of the burnishing process tension, the smoother the surface to be processed tends to be. In order to control each of the edge portion Ra and the central portion Ra of the magnetic layer surface and to control the Ra ratio (central portion Ra/edge portion Ra) accordingly, it is preferable to perform the burnishing process on a central region of the surface of the magnetic layer and the burnishing process on a region in the vicinity of each edge (hereinafter, referred to as the "region in the vicinity of the edge") under different process conditions, in a case of performing the burnishing process on the surface of the magnetic layer of the magnetic tape after the slitting. In the magnetic tape slit to a width of ½ inch, for example, the central region can be a region between the position of the inner side of 3 mm from the one edge and the position of the inner side of 3 mm from the other edge in the width direction of the magnetic tape, and a region other than such a central region can be referred to as the region in the vicinity of the edge. The central region and the region in the vicinity of the edge described with respect to the examples which will be described later are the regions described above. For example, by making a value of a burnishing process tension during the burnishing process on the central region larger than a value of a burnishing process tension during the burnishing process on the region in the vicinity of the edge, the central region can be made smoother than the region in the vicinity of the edge, and as a result, the value of the edge portion Ra can be made larger than the value of the central portion Ra. In both the burnishing process on the central region and the burnishing process on the region in the vicinity of the edge, the burnishing process tension can be in a range of, for example, 50 gf to 250 gf, and a difference therebetween (the burnishing process tension in the central region minus the burnishing process tension in the region in the vicinity of the edge) can be, for example, 3 gf to 30 gf. However, the range described above is an example and does not limit the present invention. In terms of unit, "gf" represents gram weight and 1 N (newton) is approximately 102 gf.
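The region definitions and tension settings described above can be restated in a short sketch. It assumes a ½ inch (12.7 mm) slit width and hypothetical tension values chosen inside the exemplified ranges; it is an illustration, not process conditions of the examples.

    TAPE_WIDTH_MM = 12.7    # 1/2 inch slit width (1 inch = 2.54 cm)
    EDGE_MARGIN_MM = 3.0    # the central region starts 3 mm inward from each edge

    def burnishing_region(position_mm: float) -> str:
        """Classify a width-direction position (0 mm = one edge) of the slit tape."""
        if EDGE_MARGIN_MM <= position_mm <= TAPE_WIDTH_MM - EDGE_MARGIN_MM:
            return "central region"
        return "region in the vicinity of the edge"

    # Hypothetical burnishing process tensions in gram weight (gf);
    # a larger tension tends to give a smoother processed surface.
    tension_central_gf = 180.0
    tension_edge_gf = 165.0
    difference_gf = tension_central_gf - tension_edge_gf

    assert 50.0 <= tension_edge_gf <= tension_central_gf <= 250.0  # exemplified tension range
    assert 3.0 <= difference_gf <= 30.0                            # exemplified difference range
    print(burnishing_region(1.5), "/", burnishing_region(6.35), "/", f"difference = {difference_gf} gf")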
As shown in European Computer Manufacturers Association (ECMA)-319 (June 2001), a timing-based servo system is used in a magnetic tape based on a linear tape-open (LTO) standard (generally referred to as an "LTO tape"). In this timing-based servo system, the servo pattern is configured by continuously disposing a plurality of pairs of magnetic stripes (also referred to as "servo stripes") not parallel to each other in a longitudinal direction of the magnetic tape. In the invention and the specification, the "timing-based servo pattern" refers to a servo pattern that enables head tracking in a servo system of a timing-based servo system. As described above, a reason why the servo pattern is configured with pairs of magnetic stripes not parallel to each other is that a servo signal reading element passing on the servo pattern recognizes a passage position thereof. Specifically, one pair of the magnetic stripes is formed so that a gap thereof is continuously changed along the width direction of the magnetic tape, and a relative position of the servo pattern and the servo signal reading element can be recognized by the reading of the gap by the servo signal reading element. The information of this relative position can realize the tracking of a data track. Accordingly, a plurality of servo tracks are generally set on the servo pattern along the width direction of the magnetic tape. The servo band is configured of servo patterns continuous in the longitudinal direction of the magnetic tape. A plurality of servo bands are normally provided on the magnetic tape. For example, the number thereof is 5 in the LTO tape. A region interposed between two adjacent servo bands is a data band. The data band is configured of a plurality of data tracks and each data track corresponds to each servo track. In one embodiment, as shown in JP2004-318983A, information showing the servo band number (also referred to as "servo band identification (ID)" or "Unique Data Band Identification Method (UDIM) information") is embedded in each servo band. This servo band ID is recorded by shifting a specific servo stripe among the plurality of pairs of servo stripes in the servo band so that the position thereof is relatively deviated in the longitudinal direction of the magnetic tape. Specifically, the position of the shifted specific servo stripe among the plurality of pairs of servo stripes is changed for each servo band. Accordingly, the recorded servo band ID becomes unique for each servo band, and therefore, the servo band can be uniquely specified by only reading one servo band by the servo signal reading element. As a method of uniquely specifying the servo band, a staggered method as shown in ECMA-319 (June 2001) is also used. In this staggered method, a plurality of groups of one pair of magnetic stripes (servo stripes) not parallel to each other which are continuously disposed in the longitudinal direction of the magnetic tape are recorded so as to be shifted in the longitudinal direction of the magnetic tape for each servo band. A combination of this shift between the adjacent servo bands is set to be unique in the entire magnetic tape, and accordingly, the servo band can also be uniquely specified by reading of the servo pattern by two servo signal reading elements.
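How a gap that changes continuously along the width direction lets the servo signal reading element recognize its passage position can be illustrated with a simplified sketch. The stripe azimuth angle, the nominal gap, the reference period, and the ratio-based estimate below are generic assumptions for illustration only and are not taken from this description or from the LTO specification values.

    import math

    AZIMUTH_DEG = 6.0           # assumed tilt of each servo stripe relative to the width direction
    GAP_AT_REFERENCE_UM = 38.0  # assumed gap between paired non-parallel stripes at a reference track
    REFERENCE_PERIOD_UM = 76.0  # assumed distance between like-oriented stripes (independent of width position)

    def relative_position_um(t_pair_s: float, t_period_s: float) -> float:
        """Estimate the width-direction offset of the reading element from two time intervals.

        t_pair_s: time between crossing the two non-parallel stripes of a pair
        t_period_s: time between crossing two like-oriented stripes
        Taking the ratio removes the dependence on tape speed."""
        gap_um = (t_pair_s / t_period_s) * REFERENCE_PERIOD_UM
        return (gap_um - GAP_AT_REFERENCE_UM) / (2.0 * math.tan(math.radians(AZIMUTH_DEG)))

    # Example with placeholder time intervals of 8.0 us and 15.2 us.
    print(f"offset = {relative_position_um(8.0e-6, 15.2e-6):+.1f} um")  # about +9.5 um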
In addition, as shown in ECMA-319 (June 2001), information showing the position in the longitudinal direction of the magnetic tape (also referred to as "Longitudinal Position (LPOS) information") is normally embedded in each servo band. This LPOS information is recorded so that the position of one pair of servo stripes is shifted in the longitudinal direction of the magnetic tape, in the same manner as the UDIM information. However, unlike the UDIM information, the same signal is recorded on each servo band in this LPOS information. Other information different from the UDIM information and the LPOS information can be embedded in the servo band. In this case, the embedded information may be different for each servo band like the UDIM information, or may be common to all of the servo bands like the LPOS information. In addition, as a method of embedding the information in the servo band, a method other than the methods described above can be used. For example, a predetermined code may be recorded by thinning out a predetermined pair among the group of pairs of the servo stripes. A servo pattern forming head is also referred to as a servo write head. The servo write head generally includes pairs of gaps corresponding to the pairs of magnetic stripes by the number of servo bands. In general, a core and a coil are respectively connected to each of the pairs of gaps, and a magnetic field generated in the core can generate a leakage magnetic field in the pairs of gaps, by supplying a current pulse to the coil. In a case of forming the servo pattern, by inputting a current pulse while causing the magnetic tape to run on the servo write head, the magnetic pattern corresponding to the pairs of gaps is transferred to the magnetic tape, and the servo pattern can be formed. A width of each gap can be suitably set in accordance with a density of the servo pattern to be formed. The width of each gap can be set as, for example, equal to or smaller than 1 μm, 1 to 10 μm, or equal to or greater than 10 μm. Before forming the servo pattern on the magnetic tape, a demagnetization (erasing) process is generally performed on the magnetic tape. This erasing process can be performed by applying a uniform magnetic field to the magnetic tape by using a DC magnet and an AC magnet. The erasing process includes direct current (DC) erasing and alternating current (AC) erasing. The AC erasing is performed by slowly decreasing an intensity of the magnetic field while reversing a direction of the magnetic field applied to the magnetic tape. Meanwhile, the DC erasing is performed by applying the magnetic field in one direction to the magnetic tape. The DC erasing further includes two methods. A first method is horizontal DC erasing of applying the magnetic field in one direction along a longitudinal direction of the magnetic tape. A second method is vertical DC erasing of applying the magnetic field in one direction along a thickness direction of the magnetic tape. The erasing process may be performed with respect to all of the magnetic tape or may be performed for each servo band of the magnetic tape. A direction of the magnetic field of the servo pattern to be formed is determined in accordance with the direction of erasing. For example, in a case where the horizontal DC erasing is performed to the magnetic tape, the formation of the servo pattern is performed so that the direction of the magnetic field and the direction of erasing are opposite to each other.
Accordingly, the output of the servo signal obtained by the reading of the servo pattern can be increased. As disclosed in JP2012-53940A, in a case where the magnetic pattern is transferred to the magnetic tape subjected to the vertical DC erasing by using the gap, the servo signal obtained by the reading of the formed servo pattern has a unipolar pulse shape. Meanwhile, in a case where the magnetic pattern is transferred to the magnetic tape subjected to the horizontal DC erasing by using the gap, the servo signal obtained by the reading of the formed servo pattern has a bipolar pulse shape. Heat Treatment In one embodiment, the magnetic tape can be a magnetic tape manufactured through the following heat treatment. In another aspect, the magnetic tape can be manufactured without the following heat treatment. The heat treatment can be performed in a state where the magnetic tape slit and cut to have a width determined according to the standard is wound around a core member. In one embodiment, the heat treatment is performed in a state where the magnetic tape is wound around the core member for heat treatment (hereinafter, referred to as a “core for heat treatment”), the magnetic tape after the heat treatment is wound around a cartridge reel of the magnetic tape cartridge, and a magnetic tape cartridge in which the magnetic tape is wound around the cartridge reel can be manufactured. The core for heat treatment can be formed of metal, a resin, or paper. The material of the core for heat treatment is preferably a material having high stiffness, from a viewpoint of preventing the occurrence of a winding defect such as spoking or the like. From this viewpoint, the core for heat treatment is preferably formed of metal or a resin. In addition, as an index for stiffness, a bending elastic modulus of the material for the core for heat treatment is preferably equal to or greater than 0.2 GPa (gigapascal) and more preferably equal to or greater than 0.3 GPa. Meanwhile, since the material having high stiffness is normally expensive, the use of the core for heat treatment of the material having stiffness exceeding the stiffness capable of preventing the occurrence of the winding defect causes the cost increase. By considering the viewpoint described above, the bending elastic modulus of the material for the core for heat treatment is preferably equal to or smaller than 250 GPa. The bending elastic modulus is a value measured based on international organization for standardization (ISO) 178 and the bending elastic modulus of various materials is well known. In addition, the core for heat treatment can be a solid or hollow core member. In a case of a hollow shape, a wall thickness is preferably equal to or greater than 2 mm, from a viewpoint of maintaining the stiffness. In addition, the core for heat treatment may include or may not include a flange. The magnetic tape having a length equal to or greater than a length to be finally accommodated in the magnetic tape cartridge (hereinafter, referred to as a “final product length”) is prepared as the magnetic tape wound around the core for heat treatment, and it is preferable to perform the heat treatment by placing the magnetic tape in the heat treatment environment, in a state where the magnetic tape is wound around the core for heat treatment. 
The magnetic tape length wound around the core for heat treatment is equal to or greater than the final product length, and is preferably the "final product length+α", from a viewpoint of ease of winding around the core for heat treatment. This α is preferably equal to or greater than 5 m, from a viewpoint of ease of the winding. The tension in a case of winding around the core for heat treatment is preferably equal to or greater than 0.1 N (newton). In addition, from a viewpoint of preventing the occurrence of excessive deformation during the manufacturing, the tension in a case of winding around the core for heat treatment is preferably equal to or smaller than 1.5 N and more preferably equal to or smaller than 1.0 N. An outer diameter of the core for heat treatment is preferably equal to or greater than 20 mm and more preferably equal to or greater than 40 mm, from viewpoints of ease of the winding and preventing coiling (curl in the longitudinal direction). The outer diameter of the core for heat treatment is preferably equal to or smaller than 100 mm and more preferably equal to or smaller than 90 mm. A width of the core for heat treatment may be equal to or greater than the width of the magnetic tape wound around this core. In addition, after the heat treatment, in a case of detaching the magnetic tape from the core for heat treatment, it is preferable that the magnetic tape and the core for heat treatment are sufficiently cooled and then the magnetic tape is detached from the core for heat treatment, in order to prevent the occurrence of unintended tape deformation during the detaching operation. It is preferable that the detached magnetic tape is temporarily wound around another core (referred to as a "core for temporary winding"), and the magnetic tape is then wound around a cartridge reel (generally, an outer diameter of approximately 40 to 50 mm) of the magnetic tape cartridge from the core for temporary winding. Accordingly, a relationship between the inside and the outside with respect to the core for heat treatment of the magnetic tape in a case of the heat treatment can be maintained and the magnetic tape can be wound around the cartridge reel of the magnetic tape cartridge. Regarding the details of the core for temporary winding and the tension in a case of winding the magnetic tape around the core, the description described above regarding the core for heat treatment can be referred to. In an embodiment in which the magnetic tape having a length of the "final product length+α" is subjected to the heat treatment, the length corresponding to "+α" may be cut in any stage. For example, in one embodiment, the magnetic tape having the final product length may be wound around the cartridge reel of the magnetic tape cartridge from the core for temporary winding and the remaining length corresponding to the "+α" may be cut. From a viewpoint of decreasing the amount of the portion to be cut out and removed, the α is preferably equal to or smaller than 20 m. The specific embodiment of the heat treatment performed in a state of being wound around the core member as described above is described below. An atmosphere temperature for performing the heat treatment (hereinafter, referred to as a "heat treatment temperature") is preferably equal to or higher than 40° C. and more preferably equal to or higher than 50° C. On the other hand, from a viewpoint of preventing the excessive deformation, the heat treatment temperature is preferably equal to or lower than 75° C.
and more preferably equal to or lower than 70° C. A weight absolute humidity of the atmosphere for performing the heat treatment is preferably equal to or greater than 0.1 g/kg Dry air and more preferably equal to or greater than 1 g/kg Dry air. The atmosphere in which the weight absolute humidity is in the range described above is preferable, because it can be prepared without using a special device for decreasing moisture. On the other hand, the weight absolute humidity is preferably equal to or smaller than 70 g/kg Dry air and more preferably equal to or smaller than 66 g/kg Dry air, from a viewpoint of preventing a deterioration in workability by dew condensation. The heat treatment time is preferably equal to or longer than 0.3 hours and more preferably equal to or longer than 0.5 hours. In addition, the heat treatment time is preferably equal to or shorter than 48 hours, from a viewpoint of production efficiency. Regarding the control of the standard deviation of the curvature described above, as any value of the heat treatment temperature, heat treatment time, bending elastic modulus of a core for the heat treatment, and tension at the time of winding around the core for the heat treatment is large, the value of the curvature tends to further decrease. <Vertical Squareness Ratio> In one embodiment, the vertical squareness ratio of the magnetic tape can be, for example, 0.55 or more, and is preferably 0.60 or more. It is preferable that the vertical squareness ratio of the magnetic tape is 0.60 or more, from a viewpoint of improving the electromagnetic conversion characteristics. In principle, an upper limit of the squareness ratio is 1.00 or less. The vertical squareness ratio of the magnetic tape can be 1.00 or less, 0.95 or less, 0.90 or less, 0.85 or less, or 0.80 or less. It is preferable that the value of the vertical squareness ratio of the magnetic tape is large from a viewpoint of improving the electromagnetic conversion characteristics. The vertical squareness ratio of the magnetic tape can be controlled by a well-known method such as performing a homeotropic alignment process. In the invention and the specification, the “vertical squareness ratio” is squareness ratio measured in the vertical direction of the magnetic tape. The “vertical direction” described with respect to the squareness ratio is a direction orthogonal to the surface of the magnetic layer, and can also be referred to as a thickness direction. In the invention and the specification, the vertical squareness ratio is obtained by the following method. A sample piece having a size that can be introduced into an oscillation sample type magnetic-flux meter is cut out from the magnetic tape to be measured. Regarding the sample piece, using the oscillation sample type magnetic-flux meter, a magnetic field is applied to a vertical direction of a sample piece (direction orthogonal to the surface of the magnetic layer) with a maximum applied magnetic field of 3979 kA/m, a measurement temperature of 296 K, and a magnetic field sweep speed of 8.3 kA/m/sec, and a magnetization strength of the sample piece with respect to the applied magnetic field is measured. The measured value of the magnetization strength is obtained as a value after diamagnetic field correction and a value obtained by subtracting magnetization of a sample probe of the oscillation sample type magnetic-flux meter as background noise. 
In a case where the magnetization strength at the maximum applied magnetic field is Ms and the magnetization strength at zero applied magnetic field is Mr, the squareness ratio SQ is a value calculated as SQ = Mr/Ms. The measurement temperature refers to the temperature of the sample piece, and by setting the ambient temperature around the sample piece to the measurement temperature, the temperature of the sample piece can be set to the measurement temperature through temperature equilibrium. Magnetic Tape Cartridge According to another aspect of the invention, there is provided a magnetic tape cartridge comprising the magnetic tape described above. The details of the magnetic tape included in the magnetic tape cartridge are as described above. In the magnetic tape cartridge, the magnetic tape is generally accommodated in a cartridge main body in a state of being wound around a reel. The reel is rotatably provided in the cartridge main body. As the magnetic tape cartridge, a single reel type magnetic tape cartridge including one reel in a cartridge main body and a twin reel type magnetic tape cartridge including two reels in a cartridge main body are widely used. In a case where the single reel type magnetic tape cartridge is mounted in the magnetic tape device in order to record and/or reproduce data on the magnetic tape, the magnetic tape is drawn from the magnetic tape cartridge and wound around the reel on the magnetic tape device side. A magnetic head is disposed on a magnetic tape transportation path from the magnetic tape cartridge to a winding reel. Feeding and winding of the magnetic tape are performed between a reel (supply reel) on the magnetic tape cartridge side and a reel (winding reel) on the magnetic tape device side. In the meantime, for example, the magnetic head comes into contact with and slides on the surface of the magnetic layer of the magnetic tape, and accordingly, the recording and/or reproducing of data is performed. With respect to this, in the twin reel type magnetic tape cartridge, both reels of the supply reel and the winding reel are provided in the magnetic tape cartridge. In one embodiment, the magnetic tape cartridge can include a cartridge memory. The cartridge memory can be, for example, a non-volatile memory, and in one embodiment, head tilt angle adjustment information is recorded in advance or head tilt angle adjustment information is recorded in the cartridge memory. The head tilt angle adjustment information is information for adjusting the head tilt angle during the running of the magnetic tape in the magnetic tape device. For example, as the head tilt angle adjustment information, a value of the servo band spacing at each position in the longitudinal direction of the magnetic tape at the time of data recording can be recorded. For example, in a case where the data recorded on the magnetic tape is reproduced, the value of the servo band spacing is measured at the time of the reproducing, and the head tilt angle can be changed by the control device of the magnetic tape device so that an absolute value of a difference between the measured servo band spacing and the servo band spacing at the time of recording at the same longitudinal position recorded in the cartridge memory is close to 0. The head tilt angle can be, for example, the angle θ described above. The magnetic tape and the magnetic tape cartridge can be suitably used in the magnetic tape device (that is, magnetic recording and reproducing system) for performing recording and/or reproducing data at different head tilt angles.
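A minimal sketch of the adjustment described above is given below, assuming a hypothetical function name and a simple proportional update; the description only requires that the head tilt angle be changed so that the absolute value of the difference between the measured servo band spacing and the recorded spacing approaches 0, and does not specify a particular control law or gain.

    def adjust_head_tilt(theta_deg: float,
                         measured_spacing_um: float,
                         recorded_spacing_um: float,
                         gain_deg_per_um: float = 0.05) -> float:
        """Return an updated head tilt angle that drives |measured - recorded| spacing toward 0.

        The proportional form, the sign convention, and the gain are illustrative assumptions."""
        error_um = measured_spacing_um - recorded_spacing_um
        return theta_deg - gain_deg_per_um * error_um

    # Placeholder values: spacing recorded in the cartridge memory at the time of data recording
    # versus spacing measured at the same longitudinal position during reproducing.
    theta = 15.000
    theta = adjust_head_tilt(theta, measured_spacing_um=2858.8, recorded_spacing_um=2859.0)
    print(f"updated head tilt angle = {theta:.3f} deg")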
In such a magnetic tape device, in one embodiment, it is possible to perform the recording and/or reproducing of data by changing the head tilt angle during running of a magnetic tape. For example, the head tilt angle can be changed according to dimensional information of the magnetic tape in the width direction obtained while the magnetic tape is running. In addition, for example, in a usage aspect, a head tilt angle during the recording and/or reproducing at a certain time and a head tilt angle during the recording and/or reproducing at the next time and subsequent times are changed, and then the head tilt angle may be fixed without changing during the running of the magnetic tape for the recording and/or reproducing of each time. In any usage aspect, a magnetic tape having high running stability in a case of performing the recording and/or reproducing of data at different head tilt angles is preferable. Magnetic Tape Device According to still another aspect of the invention, there is provided a magnetic tape device comprising the magnetic tape. In the magnetic tape device, the recording of data on the magnetic tape and/or the reproducing of data recorded on the magnetic tape can be performed by bringing the surface of the magnetic layer of the magnetic tape into contact with the magnetic head and sliding. For example, the magnetic tape device can attachably and detachably include the magnetic tape cartridge according to one embodiment of the invention. The magnetic tape cartridge can be attached to a magnetic tape device provided with a magnetic head and used for performing the recording and/or reproducing of data. In the invention and the specification, the "magnetic tape device" means a device capable of performing at least one of the recording of data on the magnetic tape or the reproducing of data recorded on the magnetic tape. Such a device is generally called a drive. Magnetic Head The magnetic tape device can include a magnetic head. The configuration of the magnetic head and the angle θ, which is the head tilt angle, are as described above with reference to FIGS. 1 to 3. In a case where the magnetic head includes a reproducing element, as the reproducing element, a magnetoresistive (MR) element capable of reading information recorded on the magnetic tape with excellent sensitivity is preferable. As the MR element, various well-known MR elements (for example, a Giant Magnetoresistive (GMR) element, or a Tunnel Magnetoresistive (TMR) element) can be used. Hereinafter, the magnetic head which records data and/or reproduces the recorded data is also referred to as a "recording and reproducing head". The element for recording data (recording element) and the element for reproducing data (reproducing element) are collectively referred to as a "magnetic head element". By reproducing data using the reproducing element having a narrow reproducing element width as the reproducing element, the data recorded at high density can be reproduced with high sensitivity. From this viewpoint, the reproducing element width of the reproducing element is preferably 0.8 μm or less. The reproducing element width of the reproducing element can be, for example, 0.3 μm or more. However, it is also preferable to fall below this value from the above viewpoint. Here, the "reproducing element width" refers to a physical dimension of the reproducing element width. Such physical dimensions can be measured with an optical microscope, a scanning electron microscope, or the like.
In a case of recording data and/or reproducing recorded data, first, tracking using a servo signal can be performed. That is, as the servo signal reading element follows a predetermined servo track, the magnetic head element can be controlled to pass on the target data track. The movement of the data track is performed by changing the servo track to be read by the servo signal reading element in the tape width direction. In addition, the recording and reproducing head can perform the recording and/or reproducing with respect to other data bands. In this case, the servo signal reading element is moved to a predetermined servo band by using the UDIM information described above, and the tracking with respect to the servo band may be started. FIG. 5 shows an example of disposition of data bands and servo bands. In FIG. 5, a plurality of servo bands 1 are disposed to be interposed between guide bands 3 in a magnetic layer of a magnetic tape MT. A plurality of regions 2, each of which is interposed between two servo bands, are data bands. The servo pattern is a magnetized region and is formed by magnetizing a specific region of the magnetic layer by a servo write head. The region magnetized by the servo write head (position where a servo pattern is formed) is determined by standards. For example, in an LTO Ultrium format tape which conforms to the standard, a plurality of servo patterns tilted in a tape width direction as shown in FIG. 6 are formed on a servo band, in a case of manufacturing a magnetic tape. Specifically, in FIG. 6, a servo frame SF on the servo band 1 is configured with a servo sub-frame 1 (SSF1) and a servo sub-frame 2 (SSF2). The servo sub-frame 1 is configured with an A burst (in FIG. 6, reference numeral A) and a B burst (in FIG. 6, reference numeral B). The A burst is configured with servo patterns A1 to A5 and the B burst is configured with servo patterns B1 to B5. Meanwhile, the servo sub-frame 2 is configured with a C burst (in FIG. 6, reference numeral C) and a D burst (in FIG. 6, reference numeral D). The C burst is configured with servo patterns C1 to C4 and the D burst is configured with servo patterns D1 to D4. Such 18 servo patterns are disposed in the sub-frames in the arrangement of 5, 5, 4, 4, as the sets of 5 servo patterns and 4 servo patterns, and are used for recognizing the servo frames. FIG. 6 shows one servo frame for explanation. However, in practice, in the magnetic layer of the magnetic tape in which the head tracking in the timing-based servo system is performed, a plurality of servo frames are disposed in each servo band in a running direction. In FIG. 6, an arrow shows a magnetic tape running direction. For example, an LTO Ultrium format tape generally includes 5,000 or more servo frames per tape length of 1 m, in each servo band of the magnetic layer. In the magnetic tape device, the head tilt angle can be changed while the magnetic tape is running in the magnetic tape device. The head tilt angle is, for example, an angle θ formed by the axis of the element array with respect to the width direction of the magnetic tape. The angle θ is as described above. For example, by providing an angle adjustment unit for adjusting the angle of the module of the magnetic head in the recording and reproducing head unit of the magnetic head, the angle θ can be variably adjusted during the running of the magnetic tape. Such an angle adjustment unit can include, for example, a rotation mechanism for rotating the module. For the angle adjustment unit, a well-known technology can be applied.
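As a small data-structure sketch of the servo frame layout described above, the following only restates the stated burst counts (A: 5, B: 5, C: 4, D: 4) and the figure of 5,000 or more servo frames per meter per servo band; the derived totals are simple arithmetic on those numbers.

    # Servo frame layout of the timing-based servo pattern described above.
    SERVO_FRAME = {
        "servo sub-frame 1": {"A burst": 5, "B burst": 5},
        "servo sub-frame 2": {"C burst": 4, "D burst": 4},
    }

    stripes_per_frame = sum(count for sub_frame in SERVO_FRAME.values() for count in sub_frame.values())
    print(stripes_per_frame)  # 18 servo patterns per servo frame (arrangement 5, 5, 4, 4)

    frames_per_meter = 5000   # lower bound stated for an LTO Ultrium format tape
    print(frames_per_meter * stripes_per_frame)  # at least 90,000 servo patterns per meter per servo band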
Regarding the head tilt angle during the running of the magnetic tape, in a case where the magnetic head includes a plurality of modules, the angle θ described with reference to FIGS. 1 to 3 can be specified for a randomly selected module. The angle θ at the start of running of the magnetic tape, θinitial, can be set to 0° or more or more than 0°. As θinitial is larger, a change amount of the effective distance between the servo signal reading elements with respect to a change amount of the angle θ increases, and accordingly, a large θinitial is preferable from a viewpoint of adjustment ability for adjusting the effective distance between the servo signal reading elements according to the dimension change in the width direction of the magnetic tape. From this viewpoint, θinitial is preferably 1° or more, more preferably 5° or more, and even more preferably 10° or more. Meanwhile, regarding an angle (generally referred to as a "lap angle") formed by a surface of the magnetic layer and a contact surface of the magnetic head in a case where the magnetic tape runs and comes into contact with the magnetic head, keeping a deviation of this angle in the tape width direction small is effective in improving uniformity of the friction in the tape width direction which is generated by the contact between the magnetic head and the magnetic tape during the running of the magnetic tape. In addition, it is desirable to improve the uniformity of the friction in the tape width direction from a viewpoint of position followability and the running stability of the magnetic head. From a viewpoint of reducing the deviation of the lap angle in the tape width direction, θinitial is preferably 45° or less, more preferably 40° or less, and even more preferably 35° or less. Regarding the change of the angle θ during the running of the magnetic tape, while the magnetic tape is running in the magnetic tape device in order to record data on the magnetic tape and/or to reproduce data recorded on the magnetic tape, in a case where the angle θ of the magnetic head changes from θinitial at the start of running, a maximum change amount Δθ of the angle θ during the running of the magnetic tape is the larger value of Δθmax and Δθmin calculated by the following equations. A maximum value of the angle θ during the running of the magnetic tape is θmax, and a minimum value thereof is θmin. In addition, "max" is an abbreviation for maximum, and "min" is an abbreviation for minimum.
Δθmax = θmax − θinitial
Δθmin = θinitial − θmin
In one embodiment, the Δθ can be more than 0.000°, and is preferably 0.001° or more and more preferably 0.010° or more, from a viewpoint of adjustment ability for adjusting the effective distance between the servo signal reading elements according to the dimension change in the width direction of the magnetic tape. In addition, from a viewpoint of ease of ensuring synchronization of recorded data and/or reproduced data between a plurality of magnetic head elements during data recording and/or reproducing, the Δθ is preferably 1.000° or less, more preferably 0.900° or less, even more preferably 0.800° or less, still preferably 0.700° or less, and still more preferably 0.600° or less. In the examples shown in FIGS. 2 and 3, the axis of the element array is tilted toward the magnetic tape running direction. However, the present invention is not limited to such an example.
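A short sketch of the maximum change amount Δθ defined by the equations above; the angle values are placeholders.

    def max_change_amount(theta_initial_deg, running_angles_deg):
        """Delta-theta: the larger of (theta_max - theta_initial) and (theta_initial - theta_min)."""
        theta_max = max(running_angles_deg)
        theta_min = min(running_angles_deg)
        return max(theta_max - theta_initial_deg, theta_initial_deg - theta_min)

    # Placeholder head tilt angles (degrees) sampled during running.
    theta_initial = 15.000
    running_angles = [15.000, 15.120, 14.950, 15.240, 15.180]
    print(f"delta-theta = {max_change_amount(theta_initial, running_angles):.3f} deg")  # 0.240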
The present invention also includes an embodiment in which the axis of the element array is tilted in a direction opposite to the magnetic tape running direction in the magnetic tape device. The head tilt angle θinitial at the start of the running of the magnetic tape can be set by a control device or the like of the magnetic tape device. Regarding the head tilt angle during the running of the magnetic tape, FIG. 7 is an explanatory diagram of a method for measuring the angle θ during the running of the magnetic tape. The angle θ during the running of the magnetic tape can be obtained, for example, by the following method. In a case where the angle θ during the running of the magnetic tape is obtained by the following method, it is assumed that the angle θ changes within a range of 0° to 90° during the running of the magnetic tape. That is, in a case where the axis of the element array is tilted toward the magnetic tape running direction at the start of running of the magnetic tape, the element array is not tilted during the running of the magnetic tape so that the axis of the element array comes to tilt toward the direction opposite to the magnetic tape running direction, and in a case where the axis of the element array is tilted toward the direction opposite to the magnetic tape running direction at the start of running of the magnetic tape, the element array is not tilted during the running of the magnetic tape so that the axis of the element array comes to tilt toward the magnetic tape running direction. A phase difference (that is, time difference) ΔT of reproduction signals of the pair of servo signal reading elements 1 and 2 is measured. The measurement of ΔT can be performed by a measurement unit provided in the magnetic tape device. A configuration of such a measurement unit is well known. A distance L between a central portion of the servo signal reading element 1 and a central portion of the servo signal reading element 2 can be measured with an optical microscope or the like. In a case where a running speed of the magnetic tape is defined as a speed v, the distance in the magnetic tape running direction between the central portions of the two servo signal reading elements is L sin θ, and a relationship of L sin θ = v × ΔT is satisfied. Therefore, the angle θ during the running of the magnetic tape can be calculated by the formula θ = arcsin(v × ΔT/L). The right drawing of FIG. 7 shows an example in which the axis of the element array is tilted toward the magnetic tape running direction. In this example, the phase difference (that is, time difference) ΔT of a phase of the reproduction signal of the servo signal reading element 2 with respect to a phase of the reproduction signal of the servo signal reading element 1 is measured. In a case where the axis of the element array is tilted toward the direction opposite to the running direction of the magnetic tape, θ can be obtained by the method described above, except for measuring ΔT as the phase difference (that is, time difference) of the phase of the reproduction signal of the servo signal reading element 1 with respect to the phase of the reproduction signal of the servo signal reading element 2. For a measurement pitch of the angle θ, that is, a measurement interval of the angle θ in a tape longitudinal direction, a suitable pitch can be selected according to a frequency of tape width deformation in the tape longitudinal direction.
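A minimal sketch of the angle calculation described above, with placeholder values for the running speed v, the phase difference ΔT, and the element distance L:

    import math

    def head_tilt_angle_deg(v_m_per_s: float, delta_t_s: float, distance_l_m: float) -> float:
        """Head tilt angle from L sin(theta) = v x delta-T, that is, theta = arcsin(v x delta-T / L)."""
        return math.degrees(math.asin(v_m_per_s * delta_t_s / distance_l_m))

    # Placeholder values: 5 m/s running speed, 2.6 us phase difference between the reproduction
    # signals of the two servo signal reading elements, 50 um between their central portions.
    print(f"theta = {head_tilt_angle_deg(5.0, 2.6e-6, 50.0e-6):.1f} deg")  # about 15.1 deg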
As an example, the measurement pitch can be, for example, 250 μm. Configuration of Magnetic Tape Device A magnetic tape device 10 shown in FIG. 8 controls a recording and reproducing head unit 12 in accordance with a command from a control device 11 to record and reproduce data on a magnetic tape MT. The magnetic tape device 10 has a configuration of detecting and adjusting a tension applied in a longitudinal direction of the magnetic tape from spindle motors 17A and 17B and driving devices 18A and 18B which rotatably control a magnetic tape cartridge reel and a winding reel. The magnetic tape device 10 has a configuration in which the magnetic tape cartridge 13 can be mounted. The magnetic tape device 10 includes a cartridge memory read and write device 14 capable of performing reading and writing with respect to the cartridge memory 131 in the magnetic tape cartridge 13. An end portion or a leader pin of the magnetic tape MT is pulled out from the magnetic tape cartridge 13 mounted on the magnetic tape device 10 by an automatic loading mechanism or manually and passes on a recording and reproducing head through guide rollers 15A and 15B so that a surface of a magnetic layer of the magnetic tape MT comes into contact with a surface of the recording and reproducing head of the recording and reproducing head unit 12, and accordingly, the magnetic tape MT is wound around the winding reel 16. The rotation and torque of the spindle motor 17A and the spindle motor 17B are controlled by a signal from the control device 11, and the magnetic tape MT runs at a random speed and tension. A servo pattern previously formed on the magnetic tape can be used to control the tape speed and control the head tilt angle. A tension detection mechanism may be provided between the magnetic tape cartridge 13 and the winding reel 16 to detect the tension. The tension may be controlled by using the guide rollers 15A and 15B in addition to the control by the spindle motors 17A and 17B. The cartridge memory read and write device 14 is configured to be able to read and write information of the cartridge memory 131 according to commands from the control device 11. As a communication system between the cartridge memory read and write device 14 and the cartridge memory 131, for example, an International Organization for Standardization (ISO) 14443 system can be used. The control device 11 includes, for example, a controller, a storage unit, a communication unit, and the like. The recording and reproducing head unit 12 is composed of, for example, a recording and reproducing head, a servo tracking actuator for adjusting a position of the recording and reproducing head in a track width direction, a recording and reproducing amplifier 19, and a connector cable for connecting to the control device 11. The recording and reproducing head is composed of, for example, a recording element for recording data on a magnetic tape, a reproducing element for reproducing data of the magnetic tape, and a servo signal reading element for reading a servo signal recorded on the magnetic tape. For example, one or more of each of the recording element, the reproducing element, and the servo signal reading element are mounted in one magnetic head. Alternatively, each element may be separately provided in a plurality of magnetic heads according to a running direction of the magnetic tape. The recording and reproducing head unit 12 is configured to be able to record data on the magnetic tape MT according to a command from the control device 11.
In addition, the data recorded on the magnetic tape MT can be reproduced according to a command from the control device 11. The control device 11 has a mechanism of controlling the servo tracking actuator so as to obtain a running position of the magnetic tape from a servo signal read from a servo band during the running of the magnetic tape MT and position the recording element and/or the reproducing element at a target running position (track position). The control of the track position is performed by feedback control, for example. The control device 11 has a mechanism of obtaining a servo band spacing from servo signals read from two adjacent servo bands during the running of the magnetic tape MT. The control device 11 can store the obtained information of the servo band spacing in the storage unit inside the control device 11, the cartridge memory 131, an external connection device, and the like. In addition, the control device 11 can change the head tilt angle according to the dimensional information in the width direction of the magnetic tape during the running. Accordingly, it is possible to bring the effective distance between the servo signal reading elements closer to or match the spacing of the servo bands. The dimensional information can be obtained by using the servo pattern previously formed on the magnetic tape. For example, by doing so, the angle θ formed by the axis of the element array with respect to the width direction of the magnetic tape can be changed during the running of the magnetic tape in the magnetic tape device according to the dimensional information of the magnetic tape in the width direction obtained during the running. The head tilt angle can be adjusted, for example, by feedback control. Alternatively, for example, the head tilt angle can also be adjusted by a method disclosed in JP2016-524774A or US2019/0164573A1. EXAMPLES Hereinafter, the invention will be described with reference to examples. However, the invention is not limited to the embodiments shown in the examples. "Parts" and "%" described below indicate "parts by mass" and "% by mass". In addition, steps and evaluations described below are performed in an environment of an atmosphere temperature of 23° C.±1° C., unless otherwise noted. "eq" described below indicates equivalent, a unit not convertible into an SI unit. Ferromagnetic Powder In Table 1, "BaFe" is a hexagonal barium ferrite powder (coercivity Hc: 196 kA/m, average particle size (average plate diameter): 24 nm). In Table 1, "SrFe1" is a hexagonal strontium ferrite powder produced by the following method. 1,707 g of SrCO3, 687 g of H3BO3, 1,120 g of Fe2O3, 45 g of Al(OH)3, 24 g of BaCO3, 13 g of CaCO3, and 235 g of Nd2O3 were weighed and mixed in a mixer to obtain a raw material mixture. The obtained raw material mixture was dissolved in a platinum crucible at a melting temperature of 1,390° C., and a tap hole provided on the bottom of the platinum crucible was heated while stirring the melt, and the melt was tapped in a rod shape at approximately 6 g/sec. The tap liquid was rolled and cooled with a water cooling twin roller to prepare an amorphous body. 280 g of the prepared amorphous body was put into an electronic furnace, heated to 635° C. (crystallization temperature) at a rate of temperature rise of 3.5° C./min, and held at the same temperature for 5 hours, and hexagonal strontium ferrite particles were precipitated (crystallized).
Then, the crystallized material obtained as described above including the hexagonal strontium ferrite particles was coarse-pulverized with a mortar, 1000 g of zirconia beads having a particle diameter of 1 mm and 800 ml of an acetic acid aqueous solution having a concentration of 1% were added to a glass bottle, and a dispersion process was performed in a paint shaker for 3 hours. After that, the obtained dispersion liquid and the beads were separated and put in a stainless steel beaker. The dispersion liquid was left at a liquid temperature of 100° C. for 3 hours, subjected to a dissolving process of a glass component, precipitated with a centrifugal separator, decantation was repeated for cleaning, and drying was performed in a heating furnace at a furnace inner temperature of 110° C. for 6 hours, to obtain hexagonal strontium ferrite powder. Regarding the hexagonal strontium ferrite powder obtained as described above, an average particle size was 18 nm, an activation volume was 902 nm3, an anisotropy constant Ku was 2.2×105J/m3, and a mass magnetization σs was 49 A·m2/kg. 12 mg of a sample powder was collected from the hexagonal strontium ferrite powder obtained as described above, the element analysis of a filtrate obtained by the partial dissolving of this sample powder under the dissolving conditions described above was performed by the ICP analysis device, and a surface layer portion content of a neodymium atom was obtained. Separately, 12 mg of a sample powder was collected from the hexagonal strontium ferrite powder obtained as described above, the element analysis of a filtrate obtained by the total dissolving of this sample powder under the dissolving conditions described above was performed by the ICP analysis device, and a bulk content of a neodymium atom was obtained. The content (bulk content) of the neodymium atom in the hexagonal strontium ferrite powder obtained as described above with respect to 100 atom % of iron atom was 2.9 atom %. In addition, the surface layer portion content of the neodymium atom was 8.0 atom %. A ratio of the surface layer portion content and the bulk content, "surface layer portion content/bulk content", was 2.8, and it was confirmed that the neodymium atom is unevenly distributed on the surface layer of the particles. A crystal structure of the hexagonal ferrite shown by the powder obtained as described above was confirmed by scanning CuKα ray under the conditions of a voltage of 45 kV and intensity of 40 mA and measuring an X-ray diffraction pattern under the following conditions (X-ray diffraction analysis). The powder obtained as described above showed a crystal structure of magnetoplumbite type (M type) hexagonal ferrite. In addition, a crystal phase detected by the X-ray diffraction analysis was a magnetoplumbite type single phase.
PANalytical X'Pert Pro diffractometer, PIXcel detector
Soller slit of incident beam and diffraction beam: 0.017 radians
Fixed angle of dispersion slit: ¼ degrees
Mask: 10 mm
Scattering prevention slit: ¼ degrees
Measurement mode: continuous
Measurement time per 1 stage: 3 seconds
Measurement speed: 0.017 degrees per second
Measurement step: 0.05 degree
In Table 1, "SrFe2" is a hexagonal strontium ferrite powder produced by the following method. 1,725 g of SrCO3, 666 g of H3BO3, 1,332 g of Fe2O3, 52 g of Al(OH)3, 34 g of CaCO3, and 141 g of BaCO3 were weighed and mixed in a mixer to obtain a raw material mixture.
The obtained raw material mixture was dissolved in a platinum crucible at a melting temperature of 1,380° C., and a tap hole provided on the bottom of the platinum crucible was heated while stirring the melt, and the melt was tapped in a rod shape at approximately 6 g/sec. The tap liquid was rolled and cooled with a water cooling twin roller to prepare an amorphous body. 280 g of the obtained amorphous body was put into an electronic furnace, heated to 645° C. (crystallization temperature), and held at the same temperature for 5 hours, and hexagonal strontium ferrite particles were precipitated (crystallized). Then, the crystallized material obtained as described above including the hexagonal strontium ferrite particles was coarse-pulverized with a mortar, 1,000 g of zirconia beads having a particle diameter of 1 mm and 800 ml of an acetic acid aqueous solution having a concentration of 1% were added to a glass bottle, and a dispersion process was performed in a paint shaker for 3 hours. After that, the obtained dispersion liquid and the beads were separated and put in a stainless steel beaker. The dispersion liquid was left at a liquid temperature of 100° C. for 3 hours, subjected to a dissolving process of a glass component, precipitated with a centrifugal separator, decantation was repeated for cleaning, and drying was performed in a heating furnace at a furnace inner temperature of 110° C. for 6 hours, to obtain hexagonal strontium ferrite powder. Regarding the hexagonal strontium ferrite powder obtained as described above, the average particle size was 19 nm, the activation volume was 1,102 nm³, the anisotropy constant Ku was 2.0 × 10⁵ J/m³, and the mass magnetization σs was 50 A·m²/kg. In Table 1, "ε-iron oxide" is an ε-iron oxide powder produced by the following method. 4.0 g of an ammonia aqueous solution having a concentration of 25% was added to a material obtained by dissolving 8.3 g of iron (III) nitrate nonahydrate, 1.3 g of gallium (III) nitrate octahydrate, 190 mg of cobalt (II) nitrate hexahydrate, 150 mg of titanium (IV) sulfate, and 1.5 g of polyvinyl pyrrolidone (PVP) in 90 g of pure water, while stirring by using a magnetic stirrer under the condition of an atmosphere temperature of 25° C., and the mixture was stirred for 2 hours while maintaining the atmosphere temperature of 25° C. A citric acid aqueous solution obtained by dissolving 1 g of citric acid in 9 g of pure water was added to the obtained solution and stirred for 1 hour. The powder precipitated after the stirring was collected by centrifugal separation, washed with pure water, and dried in a heating furnace at a furnace inner temperature of 80° C. 800 g of pure water was added to the dried powder and the powder was dispersed in water again, to obtain a dispersion liquid. The obtained dispersion liquid was heated to a liquid temperature of 50° C., and 40 g of an ammonia aqueous solution having a concentration of 25% was added dropwise while stirring. The stirring was performed for 1 hour while holding the temperature of 50° C., and then 14 mL of tetraethoxysilane (TEOS) was added dropwise and stirred for 24 hours. 50 g of ammonium sulfate was added to the obtained reaction solution, the precipitated powder was collected by centrifugal separation, washed with pure water, and dried in a heating furnace at a furnace inner temperature of 80° C. for 24 hours, and a precursor of ferromagnetic powder was obtained. The heating furnace at a furnace inner temperature of 1,000° C.
was filled with the obtained precursor of ferromagnetic powder in the atmosphere and subjected to heat treatment for 4 hours. The heat-treated precursor of ferromagnetic powder was put into a sodium hydroxide (NaOH) aqueous solution having a concentration of 4 mol/L, the liquid temperature was held at 70° C., stirring was performed for 24 hours, and accordingly, a silicic acid compound which was an impurity was removed from the heat-treated precursor of ferromagnetic powder. After that, by the centrifugal separation process, the ferromagnetic powder obtained by removing the silicic acid compound was collected and washed with pure water, and ferromagnetic powder was obtained. The composition of the obtained ferromagnetic powder was confirmed by Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES), and a Ga, Co, and Ti substitution type ε-iron oxide (ε-Ga0.28Co0.05Ti0.05Fe1.62O3) was obtained. In addition, the X-ray diffraction analysis was performed under the same conditions as disclosed regarding SrFe1 above, and it was confirmed from the peaks of the X-ray diffraction pattern that the obtained ferromagnetic powder has a single-phase crystal structure which is an ε phase not including a crystal structure of an α phase or a γ phase (ε-iron oxide type crystal structure). Regarding the obtained ε-iron oxide powder, the average particle size was 12 nm, the activation volume was 746 nm³, the anisotropy constant Ku was 1.2 × 10⁵ J/m³, and the mass magnetization σs was 16 A·m²/kg. The activation volume and the anisotropy constant Ku of the hexagonal strontium ferrite powder and the ε-iron oxide powder are values obtained by the method described above regarding each ferromagnetic powder by using an oscillation sample type magnetic-flux meter (manufactured by Toei Industry Co., Ltd.). The mass magnetization σs is a value measured using a vibrating sample magnetometer (manufactured by Toei Industry Co., Ltd.) at a magnetic field strength of 15 kOe.
Example 1
Preparation of Alumina Dispersion
3.0 parts of 2,3-dihydroxynaphthalene (manufactured by Tokyo Chemical Industry Co., Ltd.), 31.3 parts of a 32% solution (the solvent is a mixed solvent of methyl ethyl ketone and toluene) of a polyester polyurethane resin including a SO3Na group as a polar group (UR-4800 manufactured by Toyobo Co., Ltd., polar group amount: 80 meq/kg), and 570.0 parts of a mixed solution of methyl ethyl ketone and cyclohexanone (mass ratio of 1:1) as a solvent were mixed with 100.0 parts of alumina powder (HIT-80 manufactured by Sumitomo Chemical Co., Ltd.) having a gelatinization ratio of 65% and a Brunauer-Emmett-Teller (BET) specific surface area of 20 m²/g, and the mixture was dispersed in the presence of zirconia beads by a paint shaker for 5 hours. After the dispersion, the dispersion liquid and the beads were separated by a mesh and an alumina dispersion was obtained.
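As an illustrative use of the activation volume and anisotropy constant values reported above for the three powders (this calculation is not part of the examples themselves), the thermal stability factor KuV/kBT at the stated atmosphere temperature of 23° C. can be estimated as in the following sketch.

```python
# Illustrative estimate of the thermal stability factor Ku*V/(kB*T) for the
# ferromagnetic powders described above, using the reported anisotropy
# constants Ku and activation volumes V at 23 degrees C (296 K).
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 296.0           # 23 degrees C in kelvin

powders = {
    # name: (Ku in J/m^3, activation volume in nm^3)
    "SrFe1": (2.2e5, 902.0),
    "SrFe2": (2.0e5, 1102.0),
    "epsilon-iron oxide": (1.2e5, 746.0),
}

for name, (ku, v_nm3) in powders.items():
    v_m3 = v_nm3 * 1e-27  # 1 nm^3 = 1e-27 m^3
    factor = ku * v_m3 / (K_B * T)
    print(f"{name}: Ku*V/(kB*T) ~ {factor:.0f}")
# Prints roughly 49, 54, and 22, respectively.
```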
Magnetic Layer Forming Composition
Magnetic Liquid
Ferromagnetic powder (see Table 1): 100.0 parts
SO3Na group-containing polyurethane resin: 14.0 parts
(weight-average molecular weight: 70,000, SO3Na group: 0.2 meq/g)
Cyclohexanone: 150.0 parts
Methyl ethyl ketone: 150.0 parts
Abrasive Solution
Alumina dispersion prepared above: 6.0 parts
Silica Sol (Projection Formation Agent Liquid)
Colloidal silica (average particle size: 120 nm): 2.0 parts
Methyl ethyl ketone: 1.4 parts
Other Components
Stearic acid: 2.0 parts
Stearic acid amide: 0.2 parts
Butyl stearate: 2.0 parts
Polyisocyanate (CORONATE (registered trademark) L manufactured by Tosoh Corporation): 2.5 parts
Finishing Additive Solvent
Cyclohexanone: 200.0 parts
Methyl ethyl ketone: 200.0 parts
Non-Magnetic Layer Forming Composition
α-Iron oxide powder (average particle volume: see Table 1): 80.0 parts
Carbon black (average particle size: 20 nm, pH: see Table 1): 20.0 parts
Electron beam curable vinyl chloride copolymer: 13.0 parts
Electron beam curable polyurethane resin: 6.0 parts
Phenylphosphonic acid: 3.0 parts
Cyclohexanone: 140.0 parts
Methyl ethyl ketone: 170.0 parts
Butyl stearate: 2.0 parts
Stearic acid: 1.0 part
Back Coating Layer Forming Composition
Non-magnetic inorganic powder (α-iron oxide powder): 80.0 parts
(average particle size: 0.15 μm, average acicular ratio: 7, BET specific surface area: 52 m²/g)
Carbon black (average particle size: 20 nm): 20.0 parts
Carbon black (average particle size: 100 nm): 3.0 parts
Vinyl chloride copolymer: 13.0 parts
Sulfonic acid group-containing polyurethane resin: 6.0 parts
Phenylphosphonic acid: 3.0 parts
Cyclohexanone: 140.0 parts
Methyl ethyl ketone: 170.0 parts
Stearic acid: 3.0 parts
Polyisocyanate (CORONATE (registered trademark) L manufactured by Tosoh Corporation): 5.0 parts
Methyl ethyl ketone: 400.0 parts
Preparation of Each Layer Forming Composition
The magnetic layer forming composition was prepared by the following method. The magnetic liquid was prepared by dispersing (bead-dispersing) the above components by using a batch type vertical sand mill for 24 hours. As dispersion beads, zirconia beads having a bead diameter of 0.5 mm were used. The prepared magnetic liquid, the abrasive solution, and the other components (silica sol, other components, and finishing additive solvent) were mixed with each other and bead-dispersed for 5 minutes by using the sand mill, and then a treatment (ultrasonic dispersion) was performed with a batch type ultrasonic device (20 kHz, 300 W) for 0.5 minutes. After that, the obtained mixed solution was filtered by using a filter having a hole diameter of 0.5 μm, and the magnetic layer forming composition was prepared. The non-magnetic layer forming composition was prepared by the following method. The components excluding the lubricant (butyl stearate and stearic acid) were kneaded and diluted with an open kneader, and then dispersed with a transverse beads mill disperser. Then, the lubricant (butyl stearate and stearic acid) was added, and the mixture was stirred and mixed with a dissolver stirrer to prepare the non-magnetic layer forming composition. The back coating layer forming composition was prepared by the following method. The components excluding the lubricant (stearic acid), the polyisocyanate, and the methyl ethyl ketone (400.0 parts) were kneaded and diluted with an open kneader, and then dispersed with a transverse beads mill disperser.
Then, the lubricant (stearic acid), the polyisocyanate, and the methyl ethyl ketone (400.0 parts) were added, and the mixture was stirred and mixed with a dissolver stirrer to prepare the back coating layer forming composition.
Manufacturing of Magnetic Tape and Magnetic Tape Cartridge
The non-magnetic layer forming composition was applied to a biaxially stretched support made of polyethylene naphthalate having a thickness of 4.1 μm so that the thickness after drying was 0.7 μm, dried, and then irradiated with an electron beam having an energy of 40 kGy at an acceleration voltage of 125 kV. The magnetic layer forming composition was applied onto that so that the thickness after drying was 0.1 μm and dried, and the back coating layer forming composition was applied to the surface of the support opposite to the surface where the non-magnetic layer and the magnetic layer were formed, so that the thickness after drying was 0.3 μm, and dried. Then, a calender process was performed by using a 7-stage calender roll configured of only metal rolls, at a calender speed of 80 m/min, a linear pressure of 294 kN/m, and a calender temperature (surface temperature of the calender rolls) of 80° C. Then, a heat treatment was performed in an environment with an ambient temperature of 70° C. for 36 hours. After the heat treatment, slitting was performed to obtain a width of ½ inch. The surface of the magnetic layer of the magnetic tape having a width of ½ inch thus obtained was subjected to a burnishing process and a wiping process. The burnishing process and the wiping process were performed in a process device having the configuration shown in FIG. 1 of JP-H06-52544A, by using a commercially available abrasive tape (product name MA22000 manufactured by Fujifilm Holdings Corporation, abrasive: diamond/Cr2O3/α-iron oxide) as the abrasive tape, a commercially available sapphire blade (manufactured by Kyocera Corporation, width of 5 mm, length of 35 mm, angle of the distal end of 60 degrees) as the blade for grinding, and a commercially available wiping material (product name WRP736 manufactured by Kuraray Co., Ltd.) as the wiping material. For the process conditions, the process conditions in Example 12 of JP-H06-52544A were used, except that the burnishing process tensions applied during the burnishing process to the central region and to the region in the vicinity of the edge of the surface of the magnetic layer were set to the values described in Table 1. After that, a servo signal was recorded on the magnetic layer of the obtained magnetic tape with a commercially available servo writer. Accordingly, a magnetic tape including data bands, servo bands, and guide bands in the disposition according to the linear tape-open (LTO) Ultrium format in the magnetic layer, and including servo patterns (timing-based servo patterns) having the disposition and the shape according to the LTO Ultrium format on the servo bands, was obtained. The servo pattern formed by doing so is a servo pattern disclosed in Japanese Industrial Standards (JIS) X6175:2006 and Standard ECMA-319 (June 2001). The total number of servo bands is five, and the total number of data bands is four.
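As a quick arithmetic check on the layer structure described above (an illustration only, not part of the examples), the nominal post-drying layer thicknesses can be summed and compared with the overall tape thickness reported in the evaluation section below.

```python
# Sum of the nominal post-drying layer thicknesses described above, in micrometers.
layers_um = {
    "polyethylene naphthalate support": 4.1,
    "non-magnetic layer": 0.7,
    "magnetic layer": 0.1,
    "back coating layer": 0.3,
}
total_um = sum(layers_um.values())
print(f"total tape thickness ~ {total_um:.1f} um")  # ~5.2 um, matching the measured value below
```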
A magnetic tape (length of 960 m) on which the servo signal was recorded as described above was wound around the reel of the magnetic tape cartridge (LTO Ultrium 8 data cartridge), and a leader tape according to Article 9 of Section 3 of standard European Computer Manufacturers Association (ECMA)-319 (June 2001) was bonded to an end thereof by using a commercially available splicing tape. By doing so, a magnetic tape cartridge in which the magnetic tape was wound around the reel was manufactured. Examples 2 to 24 and Comparative Examples 1 to 10 A magnetic tape and a magnetic tape cartridge were obtained by the method described in Example 1, except that the items shown in Table 1 were changed as shown in Table 1. For Examples 18 to 24, the step after recording the servo signal was changed as follows. That is, the heat treatment was performed after recording the servo signal. On the other hand, in Examples 1 to 17 and Comparative Examples 1 to 10, since such heat treatment was not performed, “None” was shown in the column of “heat treatment condition” in Table 1. For Examples 18 to 24, the magnetic tape (length of 970 m) after recording the servo signal as described in Example 1 was wound around a core for the heat treatment and heat-treated in a state of being wound around the core. As the core for heat treatment, a solid core member (outer diameter: 50 mm) formed of a resin and having a value of a bending elastic modulus shown in Table 1 was used, and the tension in a case of the winding was set as a value shown in Table 1. The heat treatment temperature and heat treatment time in the heat treatment were set to values shown in Table 1. The weight absolute humidity in the atmosphere in which the heat treatment was performed was 10 g/kg Dry air. After the heat treatment, the magnetic tape and the heat treatment core were sufficiently cooled, and then the magnetic tape was detached from the heat treatment core and wound around a core for temporary winding. As the core for temporary winding, a solid core member having the same outer diameter and formed of the same material as the core for heat treatment was used, and the tension at the time of winding was set as 0.6 N. After that, the magnetic tape having the final product length (960 m) was wound around the reel of the magnetic tape cartridge (LTO Ultrium 8 data cartridge) from the core for temporary winding, the remaining length of 10 m was cut out, and a leader tape according to Article 9 of Section 3 of standard European Computer Manufacturers Association (ECMA)-319 (June 2001) was bonded to an end of a cut side thereof by using a commercially available splicing tape. By doing so, a magnetic tape cartridge in which the magnetic tape was wound around the reel was manufactured. For each of the Examples and Comparative Examples, four magnetic tape cartridges were manufactured, one was used for the evaluation of running stability below, and the other three were used for the evaluations (1) to (3) of the magnetic tape. Evaluation of Running Stability In an environment with a temperature of 40° C. and a relative humidity of 10%, the running stability was evaluated by the following method. The temperature and the humidity described above are examples of the temperature and the humidity in the high temperature and low humidity environment. In addition, the following head tilt angle is used as an exemplary value of an angle that can be used in a case of performing the recording and/or reproducing of data at different head tilt angles. 
Therefore, the temperature and the humidity of the environment and the head tilt angle in a case of performing the recording of the data on the magnetic tape and the reproducing of the recorded data according to one aspect of the present invention are not limited to the above values and the following values. Using each of the magnetic tape cartridges of the examples and the comparative examples, data recording and reproducing were performed using the magnetic tape device having the configuration shown in FIG. 8. The arrangement order of the modules included in the recording and reproducing head mounted on the recording and reproducing head unit is "recording module-reproducing module-recording module" (total number of modules: 3). The number of magnetic head elements in each module is 32 (Ch0 to Ch31), and the element array is configured by sandwiching these magnetic head elements between the pair of servo signal reading elements. The reproducing element width of the reproducing element included in the reproducing module is 0.8 μm. By the following method, the recording and reproducing of data and the evaluation of the running stability during the reproducing were performed four times in total, sequentially changing the head tilt angle in the order of 0°, 15°, 30°, and 45°. The head tilt angle is the angle θ formed by the axis of the element array of the reproducing module with respect to the width direction of the magnetic tape at the start of each run. The angle θ was set by the control device of the magnetic tape device at the start of each run of the magnetic tape, and the head tilt angle was fixed during each run. The magnetic tape cartridge was set in the magnetic tape device and the magnetic tape was loaded. Next, while performing servo tracking, the recording and reproducing head unit recorded pseudo random data having a specific data pattern on the magnetic tape. The tension applied in the tape longitudinal direction at that time was a constant value. At the same time as the recording of the data, the value of the servo band spacing over the entire tape length was measured every 1 m of the longitudinal position and recorded in the cartridge memory. Next, while performing servo tracking, the recording and reproducing head unit reproduced the data recorded on the magnetic tape. The tension applied in the tape longitudinal direction at that time was a constant value. The running stability was evaluated using, as an indicator, the standard deviation of a reading position PES (Position Error Signal) in the width direction based on the servo signal obtained by the servo signal reading element during the reproducing (hereinafter referred to as "σPES"). PES is obtained by the following method. In order to obtain the PES, the dimensions of the servo pattern are required. The standard dimensions of the servo pattern vary depending on the LTO generation. Therefore, first, an average distance AC between the corresponding four stripes of the A burst and the C burst and an azimuth angle α of the servo pattern are measured using a magnetic force microscope or the like. The average time between the 5 stripes corresponding to the A burst and the B burst over the length of 1 LPOS word is defined as a. The average time between the 4 stripes corresponding to the A burst and the C burst over the length of 1 LPOS word is defined as b.
At this time, the value defined by AC×(½−a/b)/(2×tan(α)) represents a reading position PES (Position Error Signal) in the width direction based on the servo signal obtained by the servo signal reading element over the length of 1 LPOS word. Regarding the magnetic tape, an end on a side wound around a reel of the magnetic tape cartridge is referred to as an inner end, an end on the opposite side thereof is referred to as an outer end, the outer end is set to 0 m, and in a region in a tape longitudinal direction over a length of 30 m to 200 m, the standard deviation of PES (σPES) obtained by the method described above was calculated. The arithmetic average of σPES obtained during four times of recording and reproducing in total is shown in the column of “σPES” in Table 1. In a case where the σPES is less than 70 nm, it can be determined that the running stability is excellent. Evaluation of Magnetic Tape (1) Edge Portion Ra, Central Portion Ra, Ra Ratio (Central Portion Ra/Edge Portion Ra) The magnetic tape was extracted from each of the magnetic tape cartridges of Examples and Comparative Examples, the edge portion Ra and the central portion Ra were obtained by the method described above, and the Ra ratio (central portion Ra/edge portion Ra) was calculated from the obtained values. (2) Standard Deviation of Curvature of Magnetic Tape in Longitudinal Direction The magnetic tape was taken out from the magnetic tape cartridge, and the standard deviation of the curvature of the magnetic tape in the longitudinal direction was determined by the method described above. (3) Tape Thickness 10 tape samples (length: 5 cm) were cut out from any part of the magnetic tape extracted from each of the magnetic tape cartridges of Examples and Comparative Examples, and these tape samples were stacked to measure the thickness. The thickness was measured using a digital thickness gauge of a Millimar 1240 compact amplifier manufactured by MARH and a Millimar 1301 induction probe. The value (thickness per tape sample) obtained by calculating 1/10 of the measured thickness was defined as the tape thickness. For all of the magnetic tape, the tape thickness was 5.2 μm. The result described above is shown in Table 1 (Tables 1-1 and 1-2). 
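Before turning to Table 1, the following is a minimal sketch (an illustration only, using made-up measurement arrays and assumed servo pattern dimensions) of how the reading position PES defined above, and its standard deviation σPES, could be computed from the quantities AC, α, a, and b.

```python
import math
import numpy as np

def pes(ac_nm: float, alpha_rad: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Reading position PES per LPOS word: AC * (1/2 - a/b) / (2 * tan(alpha)).

    ac_nm:     average distance AC between the corresponding four stripes of
               the A burst and the C burst (nm)
    alpha_rad: azimuth angle of the servo pattern (radians)
    a, b:      average times between the A/B-burst and A/C-burst stripes over
               the length of 1 LPOS word (same arbitrary time unit)
    """
    return ac_nm * (0.5 - a / b) / (2.0 * math.tan(alpha_rad))

# Hypothetical measurements representing the 30 m to 200 m region of the tape.
rng = np.random.default_rng(0)
ac_nm = 76_000.0                           # assumed AC distance, nm
alpha = math.radians(6.0)                  # assumed azimuth angle
b = np.full(5_000, 10.0)                   # assumed A-to-C burst times
a = 5.0 + rng.normal(0.0, 0.001, b.size)   # A-to-B burst times with jitter

sigma_pes_nm = np.std(pes(ac_nm, alpha, a, b))
print(f"sigma PES ~ {sigma_pes_nm:.0f} nm")  # compare against the 70 nm criterion above
```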
TABLE 1-1 (Examples 1 to 24)
Columns, in order: ferromagnetic powder; burnishing process tension on the central region (gf); burnishing process tension on the region in the vicinity of the edge (gf); central portion Ra (nm); edge portion Ra (nm); Ra ratio (central portion Ra/edge portion Ra); average particle volume of the α-iron oxide powder of the non-magnetic layer forming composition (μm³); pH of the carbon black of the non-magnetic layer forming composition; heat treatment conditions (temperature/time/bending elastic modulus of the core/tension during winding around the core); standard deviation of curvature (mm/m); σPES (nm).
Example 1: BaFe; 122; 100; 1.12; 1.50; 0.75; 2.0 × 10⁻⁶; 5.0; None; 6; 54
Example 2: BaFe; 111; 100; 1.30; 1.50; 0.87; 2.0 × 10⁻⁶; 5.0; None; 6; 53
Example 3: BaFe; 111; 107; 1.30; 1.37; 0.95; 2.0 × 10⁻⁶; 5.0; None; 6; 54
Example 4: BaFe; 124; 111; 1.10; 1.30; 0.85; 2.0 × 10⁻⁶; 5.0; None; 6; 53
Example 5: BaFe; 131; 117; 1.00; 1.20; 0.83; 2.0 × 10⁻⁶; 5.0; None; 6; 53
Example 6: BaFe; 156; 131; 0.75; 1.00; 0.75; 2.0 × 10⁻⁶; 5.0; None; 6; 52
Example 7: BaFe; 156; 140; 0.75; 0.90; 0.83; 2.0 × 10⁻⁶; 5.0; None; 6; 53
Example 8: BaFe; 156; 151; 0.75; 0.79; 0.95; 2.0 × 10⁻⁶; 5.0; None; 6; 52
Example 9: BaFe; 233; 210; 0.30; 0.40; 0.75; 2.0 × 10⁻⁶; 5.0; None; 6; 53
Example 10: BaFe; 233; 228; 0.30; 0.32; 0.94; 2.0 × 10⁻⁶; 5.0; None; 6; 52
Example 11: BaFe; 150; 131; 0.70; 0.90; 0.78; 1.0 × 10⁻⁶; 5.0; None; 6; 46
Example 12: BaFe; 156; 140; 0.65; 0.80; 0.81; 1.0 × 10⁻⁶; 5.0; None; 6; 44
Example 13: BaFe; 156; 151; 0.65; 0.69; 0.94; 1.0 × 10⁻⁶; 5.0; None; 6; 45
Example 14: BaFe; 156; 140; 0.65; 0.80; 0.81; 1.0 × 10⁻⁶; 5.0; None; 6; 42
Example 15: BaFe; 156; 151; 0.65; 0.69; 0.94; 1.0 × 10⁻⁶; 5.0; None; 6; 43
Example 16: BaFe; 156; 140; 0.60; 0.75; 0.80; 1.0 × 10⁻⁶; 3.5; None; 6; 35
Example 17: BaFe; 145; 141; 0.70; 0.74; 0.95; 1.0 × 10⁻⁶; 3.5; None; 6; 34
Example 18: BaFe; 156; 140; 0.60; 0.75; 0.80; 1.0 × 10⁻⁶; 3.5; 50° C./5 hours/0.8 GPa/0.6 N; 5; 33
Example 19: BaFe; 156; 140; 0.60; 0.75; 0.80; 1.0 × 10⁻⁶; 3.5; 60° C./5 hours/0.8 GPa/0.6 N; 4; 33
Example 20: BaFe; 156; 140; 0.60; 0.75; 0.80; 1.0 × 10⁻⁶; 3.5; 70° C./5 hours/0.8 GPa/0.6 N; 3; 32
Example 21: BaFe; 156; 140; 0.60; 0.75; 0.80; 1.0 × 10⁻⁶; 3.5; 70° C./15 hours/0.8 GPa/0.8 N; 2; 30
Example 22: SrFe1; 111; 100; 1.15; 1.35; 0.85; 1.0 × 10⁻⁶; 3.5; 70° C./15 hours/0.8 GPa/0.8 N; 2; 30
Example 23: SrFe2; 111; 100; 1.15; 1.35; 0.80; 1.0 × 10⁻⁶; 3.5; 70° C./15 hours/0.8 GPa/0.8 N; 2; 30
Example 24: ε-iron oxide powder; 111; 100; 1.15; 1.35; 0.85; 1.0 × 10⁻⁶; 3.5; 70° C./15 hours/0.8 GPa/0.8 N; 2; 30
TABLE 1-2 (Comparative Examples 1 to 10; columns are the same as in Table 1-1)
Comparative Example 1: BaFe; 100; 100; 1.60; 1.60; 1.00; 5.0 × 10⁻⁶; 7.5; None; 6; 80
Comparative Example 2: BaFe; 105; 105; 1.50; 1.50; 1.00; 5.0 × 10⁻⁶; 7.5; None; 6; 86
Comparative Example 3: BaFe; 117; 117; 1.30; 1.30; 1.00; 5.0 × 10⁻⁶; 7.5; None; 6; 90
Comparative Example 4: BaFe; 100; 117; 1.60; 1.30; 1.23; 5.0 × 10⁻⁶; 7.5; None; 6; 87
Comparative Example 5: BaFe; 111; 117; 1.40; 1.30; 1.08; 5.0 × 10⁻⁶; 7.5; None; 6; 88
Comparative Example 6: BaFe; 117; 100; 1.30; 1.60; 0.81; 5.0 × 10⁻⁶; 7.5; None; 6; 83
Comparative Example 7: BaFe; 140; 162; 1.00; 0.80; 1.25; 5.0 × 10⁻⁶; 7.5; None; 6; 85
Comparative Example 8: BaFe; 175; 175; 0.70; 0.70; 1.00; 5.0 × 10⁻⁶; 7.5; None; 6; 86
Comparative Example 9: BaFe; 175; 140; 0.70; 1.00; 0.70; 5.0 × 10⁻⁶; 7.5; None; 6; 90
Comparative Example 10: BaFe; 300; 280; 0.20; 0.25; 0.80; 5.0 × 10⁻⁶; 7.5; None; 6; 93
From the results shown in Table 1, it can be confirmed that the magnetic tapes of the examples showed excellent running stability in a case where the magnetic tape was caused to run at different head tilt angles in the high temperature and low humidity environment.
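As context for the head tilt angles used in the running stability evaluation above, the following is a minimal geometric sketch (not the control algorithm of the disclosure) of how tilting the element array axis by an angle θ changes the effective width-direction distance between the two servo signal reading elements, which is the quantity the control device adjusts to follow the servo band spacing. The element spacing along the array axis is an assumed value.

```python
import math

def head_tilt_angle(measured_servo_band_spacing_um: float,
                    element_spacing_um: float) -> float:
    """Return the tilt angle (degrees) of the element array axis from the tape
    width direction such that the effective width-direction distance between
    the two servo signal reading elements, element_spacing * cos(theta),
    matches the measured servo band spacing."""
    ratio = measured_servo_band_spacing_um / element_spacing_um
    ratio = max(-1.0, min(1.0, ratio))   # guard against measurement noise
    return math.degrees(math.acos(ratio))

# Example: elements spaced 2,900 um along the array axis (assumed value);
# the servo band spacing read during running has shrunk to 2,895 um.
theta = head_tilt_angle(2895.0, 2900.0)
print(f"set head tilt angle to {theta:.2f} degrees")
```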
The magnetic tape cartridge was manufactured by the method described in Example 1, except that, during the manufacturing of the magnetic tape, after forming the coating layer by applying the magnetic layer forming composition, a homeotropic alignment process was performed by applying a magnetic field having a magnetic field strength of 0.3 T in a vertical direction with respect to a surface of a coating layer, while the coating layer of the magnetic layer forming composition is wet, and then, the drying was performed to form the magnetic layer. A sample piece was cut out from the magnetic tape taken out from the magnetic tape cartridge. For this sample piece, a vertical squareness ratio was obtained by the method described above using a TM-TRVSM5050-SMSL type manufactured by Tamagawa Seisakusho Co., Ltd. as an oscillation sample type magnetic-flux meter and it was 0.60. The magnetic tape was also taken out from the magnetic tape cartridge of Example 1, and the vertical squareness ratio was obtained in the same manner for the sample piece cut out from the magnetic tape, and it was 0.55. The magnetic tapes taken out from the above two magnetic tape cartridges were attached to each of the ½-inch reel testers, and the electromagnetic conversion characteristics (signal-to-noise ratio (SNR)) were evaluated by the following methods. As a result, regarding the magnetic tape manufactured by performing the homeotropic alignment process, a value of SNR 2 dB higher than that of the magnetic tape manufactured without the homeotropic alignment process was obtained. In an environment of a temperature of 23° C. and a relative humidity of 50%, a tension of 0.7 N was applied in the longitudinal direction of the magnetic tape, and recording and reproduction were performed for 10 passes. A relative speed of the magnetic head and the magnetic tape was set as 6 m/sec. The recording was performed by using a metal-in-gap (MIG) head (gap length of 0.15 μm, track width of 1.0 μm) as the recording head and by setting a recording current as an optimal recording current of each magnetic tape. The reproduction was performed using a giant-magnetoresistive (GMR) head (element thickness of 15 nm, shield interval of 0.1 μm, reproducing element width of 0.8 μm) as the reproduction head. The head tilt angle was set to 0°. A signal having a linear recording density of 300 kfci was recorded, and the reproduction signal was measured with a spectrum analyzer manufactured by ShibaSoku Co., Ltd. In addition, the unit kfci is a unit of linear recording density (cannot be converted to SI unit system). As the signal, a sufficiently stabilized portion of the signal after starting the running of the magnetic tape was used. One aspect of the invention is advantageous in a technical field of various data storages.
159,201
11862215
DETAILED DESCRIPTION The following disclosure describes various embodiments for spike current suppression in a memory array. At least some embodiments herein relate to a memory device having a memory array that uses a cross-point architecture. In one example, the memory array is a resistive RAM (RRAM) cross-point memory array, or a ferroelectric RAM (FeRAM) cross-point memory array. Other memory types may be used. In one example, the memory device stores data used by a host device (e.g., a computing device of an autonomous vehicle, an artificial intelligence (AI) engine, or other computing device that accesses data stored in the memory device). In one example, the memory device is a solid-state drive mounted in an electric vehicle. In some memory arrays (e.g., a cross-point memory array), current discharges through a memory cell may result in current spikes (e.g., relatively high current discharge through the memory cell in a relatively short time period), which may cause damage to the memory cell. For example, current discharge that occurs when a chalcogenide memory cell snaps can result in amorphization of the memory cell. Such spikes may result from internal discharge within the memory array. In one example, this is the discharge of parasitic capacitances within the memory array. Current spikes due to internal discharge may be particularly problematic. In one example, memory cells are selected by generating voltages on word and bit lines of the memory array. When the memory cell is selected, a large current spike can flow through the cell. The spike is caused by parasitic capacitances that have accumulated charge during operation of the memory device. The charge is discharged as a current spike that can cause damage to the memory cell. In one example, the memory cell is a chalcogenide-based self-selecting memory cell that snaps when selected (e.g., the cell is in a SET state). A selection spike results from discharge of parasitic capacitances coupled to the word and/or bit line that are used to select the memory cell. Memory cells that use both a select device and a memory storage element (e.g., phase change memory) can suffer from similar problems. This selection spike can be a root cause of several reliability mechanisms. This is particularly true for memory cells that are located near a decoder, for which spike current is typically greater. For example, the selection spikes cause reliability mechanisms such as read disturb and/or endurance degradation. In one example, various voltages of a memory array may be altered to perform access operations. The various voltage alterations may cause charge in the memory array to build up, for example, in the parasitic capacitances associated with the array (e.g., the parasitic capacitances of the access lines of the memory array). In some cases, the built-up charge may discharge through a selected memory cell. For example, a memory cell may become conductive based on being selected (e.g., when accessed, such as when a voltage across the memory cell crosses a threshold voltage of the memory cell), which may allow built-up charge on the access lines coupled with the memory cell to discharge through the cell in a current spike (e.g., a current spike having a peak magnitude of at least 100 microamps, such as 200-300 microamps). The memory cell may be degraded or worn out in proportion to the number and magnitude of current spikes experienced by the memory cell over time. In one example, a memory array uses self-selecting chalcogenide memory cells. 
As cells are selected, word and bit lines are charged up to select the cells. This can cause capacitive coupling to adjacent word or bit lines for adjacent cells. Over time, this capacitive coupling causes electrical charge to accumulate in various parasitic capacitances (e.g., such as mentioned above). When the memory cell is selected and snaps (e.g., during a read operation), the accumulated electrical charge flows through the memory cell as a current spike. In some cases, current spikes may be higher for memory cells located close or near to a via that connects to an access line driver (e.g., a near electrical distance (ED)) than for memory cells located far from the via/driver (e.g., a far ED). For example, discharge through a memory cell with a near ED may be more severe due to a relatively lower resistance path between the memory cell and the charge built up in parasitic capacitances along the entire length of the access line, which may result in a higher amount of current through the memory cell when the memory cell becomes conductive (e.g., a relatively higher magnitude current spike) than for memory cells with far ED, which may be more separated from charge built up along farther away portions of the access line (e.g., charge built up far along the access line on the other side of the via). To address these and other technical problems, one or more resistors are used to screen electrical discharge from portions of an access line other than the portion being used to access a memory cell. The screening of the electrical discharge by the one or more resistors reduces the extent of electrical discharge that would occur in the absence of the resistors (e.g., the lack of such resistors in prior devices). The physical configuration of the resistors can be customized depending, for example, on the location of the access line in a memory array. In one example, each resistor is a portion of a resistive film located between the access line and a via that is electrically connected to a driver used to drive voltages on the access line when selecting the memory cell. In one example, the access line is a word line of a cross-point memory array. The one or more resistors are configured to increase the resistance of a circuit path through which parasitic capacitance(s) of the cross-point memory array may discharge so that the magnitude of any current spike is reduced. The magnitude of the current spike is lower as compared to prior approaches in which the resistors are not used (e.g., the resistors increase the resistance of the RC discharge circuit, which reduces the current spike). Also, the use of the one or more resistors has minimal impact on the ability to bias and deliver current to the word line for normal memory cell operations such as reading, writing, etc. In one embodiment, an access line is split into left and right portions (e.g., left and right word line or bit line portions). Each portion is electrically connected to a via, which a driver uses to generate a voltage on the access line. To reduce electrical discharge associated with current spikes, a first resistor is located between the left portion and the via, and a second resistor is located between the right portion and the via. In some embodiments, spike current suppression is implemented by using a socket structure that is formed in an access line, as discussed in more detail below (see, e.g.,FIG.14and the related discussion below). 
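Before turning to those structures, a rough illustration of why series resistance between the via and each access line portion suppresses the spike is useful: in a simplified lumped-RC view (not the actual circuit of the disclosure), the peak of the discharge through a just-selected cell scales inversely with the total series resistance, while the discharge time constant grows with it. All numbers below are assumptions for illustration.

```python
# Simplified lumped-RC view of the selection spike: charge stored on the
# parasitic capacitance of an access line discharges through the selected
# (snapped) cell. All values are illustrative assumptions.
def spike_estimate(delta_v, c_parasitic, r_path, r_screen=0.0):
    """Return (peak current in amps, RC time constant in seconds)."""
    r_total = r_path + r_screen
    i_peak = delta_v / r_total   # initial (peak) discharge current
    tau = r_total * c_parasitic  # exponential decay time constant
    return i_peak, tau

delta_v = 3.0        # volts of built-up bias across the discharge path (assumed)
c_parasitic = 2e-12  # farads of access-line parasitic capacitance (assumed)
r_path = 12_000.0    # ohms: cell plus line resistance for a near-ED cell (assumed)
r_screen = 30_000.0  # ohms: added screening resistor (assumed)

without = spike_estimate(delta_v, c_parasitic, r_path)
with_r = spike_estimate(delta_v, c_parasitic, r_path, r_screen)
print(f"peak {without[0]*1e6:.0f} uA -> {with_r[0]*1e6:.0f} uA; "
      f"tau {without[1]*1e9:.1f} ns -> {with_r[1]*1e9:.1f} ns")
```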
In one embodiment, a socket of the access line is filled with a conductive layer, and two resistive films are formed in the access line on each side of the conductive layer. In other embodiments, spike current suppression is implemented by using one or more charge screening structures, as discussed in more detail below (see, e.g.,FIGS.26and27, and the related discussion below). In one embodiment, the charge screening structures are formed by integrating insulating layers into interior regions of an access line (e.g., insulating layers extending laterally in the middle of certain portions of the access line). The insulating layers vertically split the access line into top and bottom conductive portions. For those memory cells that are located overlying and/or underlying one of the insulating layers, the resistance of the electrical path to each memory cell is increased because the thickness of the top or bottom conductive portion is less than the thickness of those portions of the access line where the insulating layers are not present. Thus, during a spike discharge, charge is choked by the higher resistance path to the memory cell. For example, this suppresses spike current that may occur when one of the overlying and/or underlying memory cells is selected (e.g., a chalcogenide memory cell snaps). One advantage of the charge screening structure is that via resistance does not need to be increased, so that delivery of current to memory cells located far from the via is minimally affected. For example, the top and bottom conductive portions are both used for far current delivery, such that the combined electrical paths have a resistance that is substantially similar to that of the portions of the access line without insulating layers. In one embodiment, a memory device includes a memory array having a cross-point memory architecture. The memory array has an access line configured to access memory cells of the memory array. The access line has a first portion and a second portion on opposite sides of a central region of the access line. The first portion is configured to access a first memory cell, and the second portion configured to access a second memory cell. In one example, the access line is a word line or bit line, and the central region is in the middle of the word or bit line. In one example, the access line is split into left and right portions as mentioned above. One or more vias are electrically connected at the central region to the first portion and the second portion. In one example, a single via is used. In other examples, multiple vias can be used. A first resistor is located between the first portion of the access line and the via. The first resistor is configured so as to screen electrical discharge from the first portion when accessing the second memory cell. A second resistor is located between the second portion and the via. The second resistor is configured to screen electrical discharge from the second portion when accessing the first memory cell. A driver is electrically connected to the one or more vias. The driver is configured to generate a voltage on the first portion when accessing the first memory cell. The driver generates a voltage on the second portion when accessing the second memory cell. In one example, the driver is a word line or bit line driver. In one example, the driver is electrically connected to a single via in the middle of a word line, and a voltage is generated on both the first and second portions when accessing a single memory cell. 
The memory cell can be located on either the first or second portion. Various advantages are provided by embodiments described herein. In one advantage, current spikes that result during selection of a memory cell are suppressed by screening charge from far capacitances in a memory array (e.g., charge from far cells on a left portion of an access line in a left half tile used to access a near memory cell, and/or charge from a right portion of the access line in the right half tile). In one advantage, the resistors above can readily be added on an existing quilt architecture. In one advantage, use of the resistors above can be varied for different locations of the memory array. The layers used to form the memory cell stack can be the same for all portions of the memory array. Thus, the use of the spike current suppression as described herein can be transparent to memory cell structure. In one advantage, for a given level of tolerable current spike, tile size and thus memory density can be increased. In one advantage, various different resistor configurations can be combined and varied as desired for different portions of a memory array. In one advantage, the spike current suppression can be generally used for any cross-point technology. FIG.1shows a memory device101that implements spike current suppression in a memory array102of memory device101, in accordance with some embodiments. Memory device101includes memory controller120, which controls sensing circuitry122and bias circuitry124. Memory controller120includes processing device116and memory118. In one example, memory118stores firmware that executes on processing device116to perform various operations for memory device101. In one example, the operations include reading and writing to various memory cells of memory array102. The memory cells of memory array102include memory cells110and memory cells112. In one example, memory cells110are located in a left half tile and memory cells112are located in a right half tile of the memory array. Access lines130of memory array102are used to access memory cells110,112. In one example, access lines130are word lines and/or bit lines. In one example, each access line130is split in a central region (e.g., the middle of the access line) to have a left portion that accesses memory cells110and a right portion that accesses memory cells112. Bias circuitry124is used to generate voltages on access lines130. Vias134are used to electrically connect access lines130to bias circuitry124. In one example, a single via134is used to electrically connect a left portion and a right portion of each access line130to a word or bit line driver of bias circuitry124. In one example, a voltage is driven on a left portion of an access line130to access a memory cell110. In one example, the voltage is driven as part of a read or write operation performed in response to a command received from host device126. Sensing circuitry122is used to sense current flowing through memory cells110,112. In one example, sensing circuitry122senses a current that results from applying a voltage to a memory cell110during a read operation. In one embodiment, in order to suppress spike currents in memory array102, various resistors132are located between access lines130and vias134. The resistors132screen electrical discharge (e.g., as described above) from certain portions of access lines130that can occur when a memory cell110,112is accessed (e.g., when a chalcogenide memory cell snaps). 
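To make the role of resistors 132 concrete, the following toy model (with assumed resistance values, not values from the disclosure) compares the resistance seen by the drive current with the resistance seen by charge discharging from the far half of a split access line when a cell on the near half is selected; the far-half charge must cross both resistors, so it is screened more strongly than the drive path.

```python
# Toy model of a split access line: the via connects through r_first to the
# first (selected-side) portion and through r_second to the second portion.
# All values are illustrative assumptions.
r_first = 30_000.0   # ohms, resistor between the via and the first portion
r_second = 30_000.0  # ohms, resistor between the via and the second portion
r_line = 5_000.0     # ohms, distributed resistance of one line portion (assumed)
r_cell = 10_000.0    # ohms, selected (snapped) memory cell (assumed)

# Path for the drive current from the via to the selected cell on the first portion.
r_drive = r_first + r_line + r_cell

# Path for charge stored on the second portion discharging into the same cell:
# it must cross both resistors before reaching the cell.
r_discharge = r_second + r_first + r_line + r_cell

print(f"drive path ~ {r_drive/1e3:.0f} kOhm, "
      f"far-half discharge path ~ {r_discharge/1e3:.0f} kOhm")
```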
In one embodiment, memory device101selects write voltages for applying to memory cells110,112when performing write operations. In one embodiment, bias circuitry124is implemented by one or more voltage drivers. Bias circuitry124may further be used to generate read voltages for read operations performed on memory array102(e.g., in response to a read command from host device126). In one embodiment, sensing circuitry122is used to sense a state of each memory cell in memory array102. In one example, sensing circuitry122includes current sensors (e.g., sense amplifiers) used to detect a current caused by applying various read voltages to memory cells in memory array102. Sensing circuitry122senses a current associated with each of the memory cells110caused by applying the voltage. In one example, if sensing circuitry122determines that the respective current resulting from applying a read voltage to the memory cell is greater than a respective fixed threshold (e.g., a predetermined level of current or threshold current), then memory controller120determines that the memory cell has snapped. In one embodiment, memory cells110,112can be of different memory types (e.g., single level cell, or triple level cell). In one embodiment, memory controller120receives a write command from a host device126. The write command is accompanied by data (e.g., user data of a user of host device126) to be written to memory array102. In response to receiving the write command, controller120initiates a programming operation by applying voltages to memory cells110. Controller120determines respective currents resulting from applying the voltages. In one embodiment, controller120determines whether the existing programming state (e.g., logic state zero) and the target programming state (e.g., logic state zero) for each cell are equal. If the existing and target programming states are equal, then no write voltage is applied (e.g., this is a normal write mode). If the existing and target programming states are different, then a write voltage is applied to that particular memory cell. In one example, the write voltage is 3-8 volts applied across the memory cell by applying voltage biases to the word line and bit line used to select the cell. In one example, controller120may use write voltages (e.g., write pulses) to write a logic state to a memory cell, such as memory cell110,112during the write operation. The write pulses may be applied by providing a first voltage to a bit line and providing a second voltage to a word line to select the memory cell. Circuits coupled to access lines to which memory cells may be coupled may be used to provide the write voltages (e.g., access line drivers included in decoder circuits). The circuits may be controlled by internal control signals provided by a control logic (e.g., controller120). The resulting voltage applied to the memory cell is the difference between the first and second voltages. In some cases, the memory cell (e.g., a PCM cell) includes a material that changes its crystallographic configuration (e.g., between a crystalline phase and an amorphous phase), which in turn, determines a threshold voltage of the memory cell to store information. In other cases, the memory cell includes a material that remains in a crystallographic configuration (e.g., an amorphous phase) that may exhibit variable threshold voltages to store information. FIG.2shows resistors210,212used to implement spike current suppression for an access line of a memory array, in accordance with some embodiments. 
The access line has a first portion202and a second portion204(e.g., left and right portions as described above). The access line ofFIG.2is an example of an access line130of memory array102. Portion202is used to access memory cell206, and portion204is used to access memory cell208. Each portion202,204is typically used to access multiple memory cells (e.g., memory cells located in the memory array above and below the respective portion). Access line portions202,204are electrically connected to via214by resistors210,212. In one example, access line portions202,204are portions of a conductive layer in a memory array. In one example, resistors210,212are portions of a resistive film formed underlying the conductive layer and overlying via214. In one example, via214is a single via. In one example, via214is provided by multiple vias. Via214electrically connects driver216to access line portions202,204. Driver216is an example of bias circuitry124. In one example, driver216generates a read voltage on portion202in order to determine a state of memory cell206. In one example, driver216generates a read voltage on portion204in order to determine a state of memory cell208. Memory cells206,208may be formed using various memory cell types. In one example, the memory cell includes chalcogenide. In one example, the memory cell includes a select device, and a phase change material as a memory element. In one example, the memory cell is a self-selecting memory cell including chalcogenide. In one example, the memory cell is a resistive memory cell. FIG.3shows an access line split into left and right portions302,304for spike current suppression, in accordance with some embodiments. Left portion302is used to access memory cell308, and right portion304is used to access memory cell310. The access line provided by portions302,304is an example of an access line130ofFIG.1, or the access line ofFIG.2. In one embodiment, a split in the access line is provided in a central region306of the access line. In one example, the split is formed in the middle of the access line so that portions302and304are patterned to have substantially equal or identical lengths. In one example, portions302and304are patterned to have different lengths. Left and right portions302,304are electrically connected to via312by a resistive film318. Resistive film318has a section320located between left portion302of the access line and via312. Resistive film318has a section322located between right portion304of the access line and via312. In one example, each of sections320,322has a thickness of 1 to 20 nanometers. In one example, each of sections320,322, has a width of 10 to 200 nanometers. The width is indicated inFIG.3by the arrows corresponding to reference numbers320,322. In one example, resistive film318includes tungsten silicon nitride. In one example, resistive film318includes one or more of tungsten silicon nitride, titanium silicide nitride, tungsten nitride, titanium nitride, tungsten silicide, or cobalt silicide. The proportions of the foregoing materials can be varied for different memory arrays. In one embodiment, the split is a gap that physically separates portions302,304. In one example, the split includes a non-conductive material formed in central region306between portions302and304. In one example, the non-conductive material is an insulating oxide. In one example, the split is an unfilled space between portions302,304. Via312is electrically connected to transistor circuitry316, which is formed in a semiconductor substrate314. 
In one example, transistor circuitry316includes bias circuitry124. In one example, transistor circuitry316includes one or more voltage drivers to generate voltages on portions302,304of the access line shown inFIG.3. In one example, transistor circuitry316is formed using CMOS transistors. FIG.4shows a memory array in a cross-point architecture including various word line and bit line layers that provide access to memory cells arranged in multiple stacked decks, in accordance with some embodiments. The memory array includes various word lines and bit lines arranged orthogonally (e.g., perpendicularly) to one another. For example, word lines412,414are arranged perpendicularly to bit lines406,408. Word lines412,414are an example of access lines130ofFIG.1. Additionally and/or alternatively, bit lines406,408are an example of access lines130. The memory array includes various memory cells arranged in various decks (e.g., Decks0-3). Each deck includes memory cells. For example, Deck0includes memory cells402, and Deck1includes memory cells404. Memory cells402,404are an example of memory cells110. In one embodiment, each bit line406provides access to memory cells402,404, which are located above and below the respective bit line. Although not shown for purposes of simplified illustration, each of word lines412,414may incorporate resistors210,212described above. In one example, each of word lines412,414is split to have a left portion302and a right portion304, similarly as discussed above. In one example, each word line and/or bit line for any or all of the Decks0-3can include a split, such as discussed above forFIG.3. In one example, various configurations of resistors210,212can be used for different word lines and/or bit lines. In one example, the configuration for resistors210,212is determined based on an extent of electrical discharge associated with a given region of the memory array. In one embodiment, word line412is electrically connected to word line414by via410. Via410is an example of via134,214,312. Although not shown for purposes of simplified illustration, via410is electrically connected to a driver used to generate a voltage on word lines412,414. In one example, the driver is bias circuitry124or driver216. FIG.5shows word lines in a memory array electrically connected by a via, in accordance with some embodiments. In one embodiment, a word line that provides access to memory cells in a top deck of the memory array has left and right portions502,504, which are separated by a split506. Left and right portions502,504are an example of left and right portions302,304. Word line520provides access to memory cells in a bottom deck of the memory array. In one embodiment, a via electrically connects left and right portions502,504to word line520. In one example, the via includes conductive portions508,510,512, which are electrically connected by via514to a driver (not shown). In one example, each of conductive portions508,510,512corresponds to a conductive layer that is patterned and formed using, for example, a photoresist layer when manufacturing the memory array. In one example, conductive portion510is a landing pad for conductive portion508. In one embodiment, resistive film530electrically connects left and right portions502,504to conductive portion508. Resistive film530is an example of resistive film318. In one embodiment, a split (not shown) may be formed above via514in central region522of word line520. Word line520is an example of word line414. 
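The resistance contributed by a resistive film section such as 320, 322, or 530 can be estimated from its geometry: current crosses the film through its thickness over the overlap area between the access line portion and the via stack. The sketch below uses the thickness and width ranges given above, but the film resistivity and overlap length are placeholders, since the disclosure does not specify them.

```python
def film_resistance(rho_ohm_m: float, thickness_m: float,
                    width_m: float, overlap_len_m: float) -> float:
    """Resistance of a resistive film crossed through its thickness:
    R = rho * thickness / (width * overlap_length)."""
    return rho_ohm_m * thickness_m / (width_m * overlap_len_m)

# Illustrative numbers: a 10 nm thick, 100 nm wide film section with a 50 nm
# overlap length, and a resistivity of 1e-2 ohm*m (placeholder; the actual
# resistivity of tungsten silicon nitride depends on composition and
# deposition conditions and is not given in the disclosure).
r = film_resistance(1e-2, 10e-9, 100e-9, 50e-9)
print(f"film section resistance ~ {r/1e3:.0f} kOhm")
```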
FIG. 6 shows a memory device configured with drivers to generate voltages on access lines of a memory array 333, in accordance with some embodiments. For example, memory cells 206, 208 illustrated in FIG. 2 can be used in the memory cell array 333. The memory device of FIG. 6 includes a controller 331 that operates bit line drivers 337 and word line drivers 335 to access the individual memory cells (e.g., 206, 208) in the array 333. Controller 331 is an example of memory controller 120. Memory array 333 is an example of memory array 102. The bit line drivers 337 and/or the word line drivers 335 can be implemented by bias circuitry 124. In one example, each memory cell (e.g., 206, 208) in the array 333 can be accessed via voltages driven by a pair of a bit line driver and a word line driver, as illustrated in FIG. 7. FIG. 7 shows a memory cell 401 with a bit line driver 447 to generate a voltage on a bit line (wire 441), and a word line driver 445 to generate a voltage on a word line (wire 443), in accordance with some embodiments. For example, the bit line driver 447 drives a first voltage applied to a row of memory cells in the array 333; and the word line driver 445 drives a second voltage applied to a column of memory cells in the array 333. A memory cell 401 in the row and column of the memory cell array 333 is subjected to the voltage difference between the first voltage driven by the bit line driver 447 and the second voltage driven by the word line driver 445. When the first voltage is higher than the second voltage, the memory cell 401 is subjected to one voltage polarity (e.g., positive polarity); and when the first voltage is lower than the second voltage, the memory cell 401 is subjected to the opposite voltage polarity (e.g., negative polarity). For example, when the memory cell 401 is configured to be read with positive voltage polarity, the bit line driver 447 can be configured to drive a positive voltage. For example, when the memory cell 401 is configured to be read with negative voltage polarity, the word line driver 445 can be configured to drive a positive voltage. For example, during a write operation, both the bit line driver 447 and the word line driver 445 can drive voltages of differing magnitudes (e.g., to perform read or write steps). For example, the bit line driver 447 can be configured to drive a positive voltage with differing magnitudes; and the word line driver 445 can be configured to drive a negative voltage with differing magnitudes. The difference between the voltage driven by the bit line driver 447 and the voltage driven by the word line driver 445 corresponds to the voltage applied to the memory cell 401. In one example, the bit line drivers 337 can be used to drive parallel wires (e.g., 441) arranged in one direction and disposed in one layer of a cross-point memory; and the word line drivers 335 can be used to drive parallel wires (e.g., 443) arranged in another direction and disposed in another layer of the cross-point memory. The wires (e.g., 441) connected to the bit line drivers (e.g., 447) and the wires (e.g., 443) connected to the word line drivers (e.g., 445) run in the two layers in orthogonal directions. The memory cell array 333 is sandwiched between the two layers of wires; and a memory cell (e.g., 401) in the array 333 is formed at a cross point of the two wires (e.g., 441 and 443) in the integrated circuit die of the cross-point memory. FIG. 8 shows an example of a memory cell that includes a select device 610, in accordance with some embodiments. In one example, select device 610 includes a chalcogenide.
Memory cell602is an example of memory cells110,112; or memory cells206,208. Top electrode608conductively connects select device610to bit line604, and bottom electrode612conductively connects select device610to word line606. In one example, electrodes608,612are formed of a carbon material. Bit line604and word line606are each an example of an access line130. In one example, word line606and/or bit line604is split into left and right portions302,304as described herein. In one example, select device610includes a chalcogenide (e.g., chalcogenide material and/or chalcogenide alloy). Threshold voltage properties of the select device may be based on the voltage polarities applied to the memory cell. In one example, a logic state may be written to memory cell602, which may correspond to one or more bits of data. A logic state may be written to the memory cell by applying voltages of different polarities at different voltage and/or current magnitudes. The memory cell may be read by applying voltages of a single polarity. The writing and reading protocols may take advantage of different threshold voltages of the select device that result from the different polarities. The chalcogenide material of the select device may or may not undergo a phase change during reading and/or writing. In some cases, the chalcogenide material may not be a phase change material. In one embodiment, an apparatus includes: a memory array (e.g.,102,333) including an access line (e.g.,130) configured to access memory cells (e.g.,206,208;308,310) of the memory array, the access line having a first portion (e.g.,202,302) and a second portion (e.g.,204,304) on opposite sides of a central region (e.g.,306) of the access line, where the first portion is configured to access a first memory cell, and the second portion configured to access a second memory cell; at least one via (e.g.,214,312) electrically connected at the central region to the first portion and the second portion; a first resistor (e.g.,210) located between the first portion and the via, where the first resistor is configured to screen electrical discharge from the first portion when accessing the second memory cell; a second resistor (e.g.,212) located between the second portion and the via, where the second resistor is configured to screen electrical discharge from the second portion when accessing the first memory cell; and a driver (e.g.,216) electrically connected to the via, where the driver is configured to generate a voltage on the first portion to access the first memory cell, and to generate a voltage on the second portion to access the second memory cell. In one embodiment, the at least one via is a single via; the access line is a bit line or a word line; and the driver is a bit line driver or a word line driver. In one embodiment, the first resistor is provided by a first section (e.g.,320) of a resistive film (e.g.,318) overlying the via; and the second resistor is provided by a second section (e.g.,322) of the resistive film overlying the via. The central region includes a split in the access line overlying the via and between the first and second portions of the access line. In one embodiment, the resistive film includes tungsten silicon nitride. In one embodiment, the split is formed by removing a third portion of the access line to physically separate the first portion from the second portion; and prior to removing the third portion, the third portion is located between the first portion and the second portion. 
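As an illustration of the biasing behavior described above for FIGS. 7 and 8, the sketch below computes the voltage seen by a selected cell as the difference between the bit line and word line driver voltages, and applies an assumed polarity-dependent threshold for the select device. The threshold magnitudes, function names, and example voltages are assumptions chosen only for illustration, not values from the embodiments.

```python
# Minimal, illustrative sketch of the FIG. 7 bias relationship and the
# polarity-dependent thresholding described for the select device of FIG. 8.

def cell_bias(v_bit_line: float, v_word_line: float) -> tuple:
    """Voltage across the selected cell and its polarity, per the FIG. 7 description:
    the cell sees the difference between the two driver voltages."""
    v_cell = v_bit_line - v_word_line
    polarity = "positive" if v_cell > 0 else "negative" if v_cell < 0 else "none"
    return v_cell, polarity

def select_device_conducts(v_cell: float,
                           vth_positive: float = 3.0,   # assumed threshold magnitude
                           vth_negative: float = 4.0) -> bool:
    """True if the assumed polarity-dependent threshold magnitude is exceeded."""
    if v_cell > 0:
        return v_cell >= vth_positive
    return -v_cell >= vth_negative

# Example: +2.0 V on the bit line and -1.5 V on the word line gives a +3.5 V,
# positive-polarity bias, which exceeds the assumed positive threshold.
v, pol = cell_bias(2.0, -1.5)
print(v, pol, select_device_conducts(v))
```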
In one embodiment, the split includes: a non-conductive material configured to inhibit current discharge from flowing directly between the first and second portions of the access line; or an unfilled space between the first portion and the second portion. In one embodiment, the memory array is part of a memory device (e.g.,101); the access line is associated with a physical address within the memory array; and an access operation by a controller (e.g.,120) of the memory device to select the first memory cell addresses both the first and second portions of the access line. In one embodiment, an apparatus includes: an access line having a first portion (e.g.,302) and a second portion (e.g.,304), where the first portion is configured to access a memory cell (e.g.,308) of a memory array, and a gap physically separates the first portion and the second portion; a via (e.g.,312) electrically connected to the first portion and the second portion; and a resistive film (e.g.,318) having a first section between the first portion and the via, and a second section between the second portion and the via. In one embodiment, the apparatus further includes a driver (e.g., a driver in transistor circuitry316) electrically connected to the via, where the driver is configured to generate a voltage on the first portion to access the memory cell. In one embodiment, the gap is a split in the access line formed by removing a third portion of the access line to physically separate the first portion of the access line from the second portion. In one embodiment, a material forming the resistive film has a higher resistivity than a material forming the first and second portions of the access line. In one embodiment, the resistive film includes at least one of: tungsten silicon nitride; titanium silicide nitride; tungsten nitride; titanium nitride; tungsten silicide; or cobalt silicide. In one embodiment, each of the first and second portions is configured to access memory cells located above and below the respective portion. In one embodiment, the memory array has a cross-point architecture, and the memory cell is: a memory cell including chalcogenide; a memory cell including a select device, and a phase change material as a memory element; a self-selecting memory cell including chalcogenide (e.g., memory cell602); or a resistive memory cell. In one embodiment, the gap overlies a third section of the resistive film (e.g., the middle section of resistive film318located under central region306), and the third section is positioned between the first section and the second section. FIGS.9-12show various steps in the manufacture of a memory device that implements spike current suppression, in accordance with some embodiments. In one example, the memory device is memory device101. FIG.9shows a memory array902at an intermediate stage of manufacture. Memory array902includes various memory cells908. Each memory cell908includes a memory stack containing various layers of materials (e.g., chalcogenide, phase change material, etc.) corresponding to the memory cell technology that has been chosen for use. Memory cells908are an example of memory cells110,112; memory cells206,208; or memory cells308,310. Memory array902includes a via904that has been formed on a pad906. Memory array902as shown inFIG.9can be formed using conventional manufacturing techniques. As shown inFIG.10, a nitride layer1010is formed overlying a top surface of memory array902. 
In one example, nitride layer1010includes one or more of tungsten silicon nitride, titanium silicide nitride, tungsten nitride, or titanium nitride. In one example, one or more of tungsten silicide or cobalt silicide can be alternatively or additionally used. The proportions of the foregoing materials can be varied for different memory arrays. A word line1012is formed overlying nitride layer1010. In one example, word line1012is a conductive material. In one example, word line1012is tungsten. As shown inFIG.11, a hard mask1102is formed overlying word line1012. Then, a photoresist layer1104is formed overlying hard mask1102. As shown inFIG.12, photoresist layer1104is patterned and used to etch hard mask1102, word line1012, and nitride layer1010to provide opening1202overlying via904. In one example, a tungsten-only etch is used. After the above etch, photoresist layer1104and hard mask1102are removed. Subsequent manufacture of the memory device can be performed using conventional manufacturing techniques. Providing the opening1202splits word line1012into left and right portions. In one example, these portions correspond to left and right portions302,304. In one example, the remaining portion of nitride layer1010overlying via904provides resistive film318. In an alternative approach, nitride layer1010is not etched, so that it fully covers via904(e.g., similarly as shown inFIG.3). In one embodiment, the memory devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. In one embodiment, a transistor discussed herein (e.g., transistor of transistor circuitry316) may represent a field-effect transistor (FET) and comprise a three terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials (e.g., metals). In one example, each transistor is used in CMOS transistor circuitry formed at the top surface of a semiconductor wafer and underneath a memory array having multiple decks of memory cells. The source and drain may be conductive and may comprise a heavily-doped (e.g., degenerate) semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type, then the FET may be referred to as a n-type FET. If the channel is p-type, then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be on or activated when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. 
The transistor may be off or deactivated when a voltage less than the transistor's threshold voltage is applied to the transistor gate. FIG.13shows a method for manufacturing a memory device that implements spike current suppression, in accordance with some embodiments. For example, the method ofFIG.13can be used to form the split access line and resistive film ofFIG.3. In one example, the manufactured memory device is memory device101. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block1301, a via is formed in a memory array. In one example, the via is via904. In one example, the memory array is memory array902. At block1303, a resistive film is formed overlying the via. In one example, the resistive film is nitride layer1010. At block1305, an access line is formed overlying the resistive film. In one example, the access line is word line1012. At block1307, a photoresist layer is formed overlying the access line. In one example, the photoresist layer is photoresist layer1104. In one example, the photoresist layer is formed overlying a hard mask (e.g., hard mask1102). At block1309, the photoresist layer is patterned. In one example, the photoresist layer is patterned to use in etching that provides opening1202. At block1311, the access line is etched using the patterned photoresist layer to provide first and second portions of the access line. In one example, the access line is etched to split the access line into left and right portions302,304. In one embodiment, a method includes: forming a via (e.g., via312); forming a resistive film (e.g.,318) overlying the via; forming an access line (e.g., an access line that provides left and right portions302,304) overlying the resistive film; and patterning the access line to provide first and second portions. The patterning physically separates the first portion from the second portion (e.g., the patterning provides a split in the access line), and the first portion is configured to access a memory cell (e.g.,308) of a memory array. A first section of the resistive film is between the first portion and the via, and a second section of the resistive film is between the second portion and the via. In one embodiment, patterning the access line includes: forming a photoresist layer overlying the access line; patterning the photoresist layer; and performing an etch using the patterned photoresist layer to etch the access line. Performing the etch includes etching the access line to provide a split overlying the via and between the first and second portions (e.g., a split located in central region306and overlying via312). In one embodiment, performing the etch further includes etching the resistive film to physically separate the first and second sections. In one embodiment, the first and second sections of the resistive film each have a thickness of 1 to 20 nanometers; the first section has a width of 10 to 200 nanometers; and the second section has a width of 10 to 200 nanometers. In one embodiment, the memory array is part of a memory device (e.g.,101). 
The method further includes forming a transistor circuit (e.g., transistor circuitry316) located under the memory array and electrically connected to the via. The transistor circuit is configured to generate a voltage on the first portion to access the memory cell during a read or write operation, and the voltage is generated in response to a command received from a host device (e.g.,126) by a controller (e.g.,120) of the memory device. In some embodiments, spike current suppression is implemented by using a socket structure that is formed in an access line (e.g., formed in one or more word and/or bit lines of a memory array). In some embodiments, a socket of the access line is filled with a conductive layer, and two resistive films are formed in the access line on each side of the conductive layer (see, e.g.,FIG.14). In other embodiments, the socket of the access line is filled with a resistive layer (see, e.g.,FIGS.23-24), and the conductive layer and two resistive films are not used. In some embodiments, use of the socket structure above in a memory array can be combined with use of the split access line structure as described above (e.g., as described forFIGS.1-13). In one embodiment, the same access line may use both the split access line structure and the socket structure at various points in the access line. In other embodiments, each type of structure can be used on different access lines. In one embodiment, a memory device includes a memory array. The memory array includes access lines. Each of one or several access lines can be configured to access memory cells of the memory array, the access line having a first portion and a second portion on opposite sides of the access line. The first portion is configured to access a first memory cell, and the second portion is configured to access a second memory cell. A conductive layer is located between the first portion and the second portion. The conductive layer electrically connects the first portion to the second portion. A first resistor (e.g., a first resistive film integrated into the access line as a spacer) is located between the first portion and the conductive layer. A second resistor (e.g., a second resistive film integrated into the access line as a spacer) is located between the second portion and the conductive layer. One or more vias are located underlying the conductive layer, and electrically connected by the conductive layer to the first and second portions of the access line. In one embodiment, each of one or more access lines has a first portion and a second portion (e.g., left and right portions of a word line). The first portion is configured to access a first memory cell of a memory array (e.g., on a left side of the array). The second portion is configured to access a second memory cell of the memory array (e.g., on a right side of the array). A conductive layer is located between the first and second portions of the access line and has been formed in a socket of the access line. A first resistive film (e.g., tungsten silicon nitride) is integrated into the access line between the first portion and the conductive layer. A second resistive film (e.g., tungsten silicon nitride) is integrated into the access line between the second portion and the conductive layer. One or more vias are electrically connected through the conductive layer to the first and second portions of the access line. FIG.14shows an access line1415having two resistive films1420,1422. 
A conductive layer 1430 has been formed in a socket (see, e.g., socket 1702 of FIG. 17 below) of access line 1415, to implement spike current suppression, in accordance with some embodiments. Access line 1415 has a left portion 1402 and a right portion 1404 located on opposite sides of the access line. Conductive layer 1430 is located between left and right portions 1402, 1404. Conductive layer 1430 is, for example, tungsten. Resistive film 1420 is located between left portion 1402 and conductive layer 1430. Resistive film 1422 is located between right portion 1404 and conductive layer 1430. The material used to form resistive films 1420, 1422 has a higher resistivity than the material used to form left and right portions 1402, 1404. In one example, left and right portions 1402, 1404 are formed of tungsten. In one example, resistive films 1420, 1422 are formed of tungsten silicon nitride.

A via 1412 is located underlying conductive layer 1430. Conductive layer 1430 electrically connects via 1412 to left and right portions 1402, 1404. Transistor circuitry 1416 (e.g., a driver) is electrically connected to via 1412. In one embodiment, transistor circuitry 1416 is formed in semiconductor substrate 1414, which is located underlying a memory array including memory cells 1408, 1410. Left portion 1402 is used to access memory cell 1408. Right portion 1404 is used to access memory cell 1410. Transistor circuitry 1416 generates one or more voltages that are applied to access line 1415 through via 1412. The voltages are applied to access one or more memory cells using access line 1415. In one embodiment, access to the memory cells is accomplished in conjunction with applying one or more voltages to bit lines (not shown) of the memory array.

In one example, memory cells 1408, 1410 are similar to memory cells 110, 112, memory cells 206, 208, memory cells 402, 404, or memory cell 401. In one example, each access line 1415 is one of access lines 130. In one example, transistor circuitry 1416 is similar to transistor circuitry 316.

In one embodiment, additional resistive films can be integrated into access line 1415. In one embodiment, access line 1415 has an additional portion (not shown) electrically connected to left portion 1402 by an additional resistive film (not shown). For example, the additional portion and the additional resistive film are located to the left of memory cell 1408. In one example, each side of access line 1415 on opposite sides of via 1412 can have multiple portions separated by multiple resistive films (not shown). In other embodiments, a signal line (not shown) of a memory or other semiconductor device can have multiple portions (e.g., tungsten portions) separated by multiple resistive films (e.g., WSiN) such as the resistive films described above.

In one embodiment, the thickness of resistive films 1420, 1422 can be varied to control the magnitude of the resistance. In one embodiment, each resistive film 1420, 1422 has a different thickness. In one example, the thickness is selected to correspond to a characteristic of the respective portion of the access line 1415, and/or a respective characteristic of a particular region of the memory array, and/or a respective characteristic of memory cells accessed by the portion of the access line.
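As a rough illustration of how the spacer thickness controls the added resistance, the sketch below evaluates R = ρ·t/(W·H) for a resistive film such as 1420 or 1422, where current passing between a line portion and conductive layer 1430 crosses the spacer through its thickness over a cross-section set by the access line width and height. The tungsten silicon nitride resistivity and the dimensions are assumed values used only for illustration.

```python
# Minimal sketch: series resistance added by a resistive spacer film, assuming
# current flows through the spacer thickness with the access line width x height
# as the conducting cross-section. All values are assumptions.

def spacer_resistance(resistivity_ohm_m: float,
                      spacer_thickness_m: float,
                      line_width_m: float,
                      line_height_m: float) -> float:
    """R = rho * t / (W * H) for current crossing the spacer thickness."""
    return resistivity_ohm_m * spacer_thickness_m / (line_width_m * line_height_m)

RHO_WSIN = 1.0e-3   # ohm*m, assumed effective resistivity for tungsten silicon nitride
WIDTH = 50e-9       # assumed access line width
HEIGHT = 40e-9      # assumed access line height

for t_nm in (5, 20, 60):   # assumed spacer thicknesses
    r = spacer_resistance(RHO_WSIN, t_nm * 1e-9, WIDTH, HEIGHT)
    print(f"spacer thickness {t_nm} nm -> about {r:.2e} ohm")
```

Under these assumptions, a thicker spacer yields a proportionally larger series resistance, which is consistent with varying the film thickness to control the magnitude of the resistance as described above.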
In one embodiment, an apparatus includes: a memory array including an access line (e.g.,1415,1612) configured to access memory cells (e.g.,1408,1410) of the memory array, the access line having a first portion (e.g., left portion1402) and a second portion (e.g., right portion1404) on opposite sides of the access line, where the first portion is configured to access a first memory cell, and the second portion is configured to access a second memory cell; a conductive layer (e.g.,1430) between the first portion and the second portion, where the conductive layer electrically connects the first portion to the second portion; a first resistor (e.g.,1420) between the first portion and the conductive layer; a second resistor (e.g.,1422) between the second portion and the conductive layer; and at least one via (e.g.,1412) underlying the conductive layer, and electrically connected by the conductive layer to the first portion and the second portion. In one embodiment, the apparatus further includes a driver (e.g., a driver of transistor circuitry1416) electrically connected to the via, where the driver is configured to generate a voltage on the first portion to access the first memory cell, and to generate a voltage on the second portion to access the second memory cell. In one embodiment, the at least one via is a single via; the access line is a bit line or a word line; and the driver is a bit line driver or a word line driver. In one embodiment, the first resistor is a first resistive layer on an end of the first portion; and the second resistor is a second resistive layer on an end of the second portion. The conductive layer is formed in a socket (e.g.,1702) of the access line. The socket is overlying the via and between the first and second portions of the access line. In one embodiment, each of the first resistive layer and the second resistive layer includes tungsten silicon nitride. In one embodiment, the socket is formed by patterning and removing a third portion of the access line to physically separate the first portion from the second portion; and prior to removing the third portion, the third portion is located between the first portion and the second portion. In one embodiment, the memory array is part of a memory device; the access line is associated with a physical address within the memory array; and an access operation by a controller of the memory device to select the first memory cell addresses both the first and second portions of the access line. In one embodiment, an apparatus includes: an access line having a first portion and a second portion, where the first portion is configured to access a memory cell of a memory array; a conductive layer between the first portion and the second portion; a first resistive film (e.g.,1420,1902) between the first portion and the conductive layer; a second resistive film (e.g.,1422,1904) between the second portion and the conductive layer; and a via electrically connected, by the conductive layer, to the first portion and the second portion. In one embodiment, the apparatus further includes a driver electrically connected to the via, where the driver is configured to generate a voltage on the first portion to access the memory cell. In one embodiment, the conductive layer is located in a socket between the first portion and the second portion; and the socket is formed by removing a third portion of the access line to physically separate the first portion of the access line from the second portion. 
In one embodiment, a material forming each of the first resistive film and the second resistive film has a higher resistivity than a material forming the first and second portions of the access line. In one embodiment, each of the first and second resistive films includes at least one of: tungsten silicon nitride; titanium silicide nitride; tungsten nitride; titanium nitride; tungsten silicide; or cobalt silicide. In one embodiment, each of the first and second portions is configured to access memory cells located above and below the respective portion. In one embodiment, the memory array has a cross-point architecture, and the memory cell is: a memory cell including chalcogenide; a memory cell including a select device, and a phase change material as a memory element; a self-selecting memory cell including chalcogenide; or a resistive memory cell. In one embodiment, the apparatus further includes a driver connected to the via, where: the access line further has a third portion located at an end of the access line, and overlying or underlying the memory cell; the apparatus further includes a third resistive film between the first portion and the third portion; and the third portion is electrically connected to the via by the first portion so that the driver can generate a voltage on the third portion for accessing the memory cell. FIGS.15-21show steps in the manufacture of a memory device that implements spike current suppression by forming two resistive films in an access line, and a conductive layer in a socket of the access line, in accordance with some embodiments. In one example, the memory device is memory device101. FIG.15shows a memory array1502at an intermediate stage of manufacture. Memory array1502includes various memory cells1508. Each memory cell1508includes a memory stack containing various layers of materials (e.g., chalcogenide, phase change material, etc.) corresponding to the memory cell technology that has been chosen for use (see, e.g.,FIG.8). Memory cells1508are an example of memory cells110,112; memory cells206,208; or memory cells1408,1410. Memory array1502includes a via1504. In some cases, via1504can be formed on a pad similar to pad906. Memory array1502as shown inFIG.15can be formed using conventional manufacturing techniques. As shown inFIG.16, an access line1612(e.g., a word line or bit line) is formed overlying a top surface of memory array1502. In one example, access line1612is tungsten. Other conductive materials may be used. An optional nitride layer1614is formed overlying access line1612. Nitride layer1614is, for example, a silicon nitride layer. In one embodiment, nitride layer1614is later used as an etch stop. A photoresist layer (not shown) is formed overlying nitride layer1614to use for patterning both nitride layer1614and access line1612. As shown inFIG.17, nitride layer1614and access line1612have been patterned by, for example, etching using the photoresist layer above. This patterning provides a socket1702in access line1612. Socket1702has a height1704measured from a bottom1706of socket1702to a top surface of nitride layer1614. If nitride layer1614is not used, height1704is measured to a top surface of access line1612. In various embodiments, socket1702can be filled with a conductive and/or resistive material that electrically connects the left and right portions of access line1612. In various embodiments, socket1702physically separates the left and right portions of access line1612. 
As shown inFIG.18, resistive layer1802is formed overlying the left and right portions of nitride layer1614, the left and right portions of access line1612, and filling part of the bottom of socket1702. In one example, resistive layer1802includes one or more of tungsten silicon nitride, titanium silicide nitride, tungsten nitride, or titanium nitride. In one example, one or more of tungsten silicide or cobalt silicide can be alternatively or additionally used. The proportions of the foregoing materials can be varied for different memory arrays. In one example, resistive layer1802is formed using a conformal deposition process (e.g., for forming sidewall spacers from resistive layer1802). As shown inFIG.19, resistive layer1802has been etched to provide resistive films1902,1904as spacers on sidewalls of the left and right portions of access line1612and nitride layer1614. In one example, each spacer has a thickness of 1 to 60 nanometers. As shown inFIG.20, a conductive layer2002is formed. A portion of conductive layer2002is formed in socket1702. In one embodiment, conductive layer2002is formed of the same material as access line1612. In one example, conductive layer2002is tungsten. In one example, conductive layer2002is formed by chemical vapor deposition. In one embodiment, conductive layer2002is formed of a different material than access line1612. As shown inFIG.21, the uppermost part of conductive layer2002is removed by performing chemical mechanical polishing (CMP) using silicon nitride layer1614as a stop layer. After performing the CMP, conductive portion2102remains in socket1702(e.g., completely filling the socket, or filling the socket by at least 85 percent by volume). Subsequent manufacture of the memory device can be performed using conventional manufacturing techniques. As mentioned above, access line1612is separated into left and right portions. In one example, these portions correspond to left and right portions1402,1404ofFIG.14. Conductive portion2102electrically connects each of the left and right portions of access line1612to via1504(through resistive films1902,1904, which are electrically in series). In one embodiment, the memory array ofFIG.15may be formed on a semiconductor substrate (e.g., substrate1414ofFIG.14), such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. In one embodiment, a transistor as used herein (e.g., a transistor of transistor circuitry1416ofFIG.14) may represent a field-effect transistor (FET) and comprise a three terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials (e.g., metals). In one example, each transistor is used in CMOS transistor circuitry formed at the top surface of a semiconductor wafer and underneath a memory array having multiple decks of memory cells. 
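To make the screening effect of the structure of FIGS. 14-21 concrete, the sketch below compares, using assumed resistances, the path from the driver to a cell on the selected half of the access line with the path that parasitic charge stored on the opposite half must take through both spacer films. None of the numbers are taken from the embodiments; they only illustrate that cross-half discharge sees roughly twice the film resistance of the drive path.

```python
# Minimal sketch (assumed numbers) of why the spacer films screen discharge between
# the two halves of the access line: charge on the unselected half must cross both
# spacer films in series, while the driver reaches the selected half through one film.

R_FILM = 5.0e3       # ohm, assumed resistance of one spacer film (e.g., 1902 or 1904)
R_HALF_LINE = 1.0e3  # ohm, assumed resistance of one half of the access line
R_PLUG = 50.0        # ohm, assumed resistance of the conductive portion and via

# Path from the driver (via) to a cell on the selected half:
r_drive = R_PLUG + R_FILM + R_HALF_LINE

# Path for parasitic charge stored on the opposite half to reach the selected cell:
r_discharge = R_HALF_LINE + R_FILM + R_FILM + R_HALF_LINE

print(f"drive path ~ {r_drive:.0f} ohm, cross-half discharge path ~ {r_discharge:.0f} ohm")
```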
FIG.22shows a method for manufacturing a memory device that implements spike current suppression by forming two resistive films and a conductive layer in a socket, in accordance with some embodiments. For example, the method ofFIG.22can be used to form socket1702ofFIG.17and resistive films1902,1904ofFIG.21. In one example, the manufactured memory device is memory device101. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block2201, a via is formed in a memory array. In one example, the via is via1412or1504. In one example, the memory array is memory array1502. At block2203, an access line is formed overlying the via. In one example, the access line is access line1612. At block2205, the access line is patterned to provide first and second portions. The patterning forms a socket. In one example, the socket is socket1702. In one example, the first and second portions are the left and right portions of access line1612. At block2207, a first resistive film and a second resistive film are formed. In one example, the first and second resistive films are spacers1902,1904. At block2209, a conductive layer is formed in the socket. In one example, the conductive layer is conductive layer2002. In one embodiment, a method includes: forming a via (e.g.,1504); forming an access line (e.g.,1612) overlying the via; patterning the access line to provide first and second portions of the access line. The patterning forms a socket (e.g.,1702) that physically separates the first portion and the second portion. The first portion is configured to access a memory cell (e.g.,1508) of a memory array. The method further includes: forming a first resistive film (e.g.,1902) on a sidewall of the first portion, and a second resistive film (e.g.,1904) on a sidewall of the second portion; and forming a conductive layer (e.g.,2002) in the socket. The conductive layer electrically connects each of the first and second portions of the access line to the via. In one embodiment, patterning the access line includes: forming a photoresist layer overlying the access line; patterning the photoresist layer; and performing an etch using the patterned photoresist layer to etch the access line, where performing the etch includes etching the access line to provide the socket. In one embodiment, each of the first and second resistive films has a thickness of 1 to 60 nanometers. In one embodiment, the memory array is part of a memory device. The method further includes forming a transistor circuit located under the memory array, where the transistor circuit is configured to generate, using an electrical connection to the via, a voltage on the first portion to access the memory cell during a read or write operation, and the voltage is generated in response to a command received from a host device by a controller of the memory device. In one embodiment, the method further includes, prior to patterning the access line, forming a silicon nitride layer (e.g.,1614) overlying the access line. Patterning the access line to form the socket includes etching a portion of the silicon nitride layer and the access line. 
In one embodiment, the method further includes: after forming the conductive layer in the socket, performing chemical mechanical polishing of the conductive layer using the silicon nitride layer as a stop layer. In one embodiment, the conductive layer is formed by chemical vapor deposition. In one embodiment, forming the first and second resistive films is performed by: forming a resistive layer (e.g.,1802) overlying the first and second portions of the access line, and overlying a bottom of the socket; and etching the resistive layer (see, e.g.,FIG.19) to provide the first and second resistive films as spacers on the respective sidewalls of the first and second portions of the access line. FIGS.23and24show steps in the manufacture of a memory device that implements spike current suppression by forming a resistive layer in a socket, in accordance with some embodiments. In one example, the memory device is memory device101. FIG.23shows memory array1502at an intermediate stage of manufacture. In one embodiment, memory array1502as shown inFIG.23can be formed similarly as described forFIGS.15-17above. As shown inFIG.23, a resistive layer2302is formed in socket1702. Resistive layer2302is used to electrically connect each of left and right portions of access line1612to via1504in the final memory device. In one example, resistive layer2302includes one or more of tungsten silicon nitride, titanium silicide nitride, tungsten nitride, or titanium nitride. In one example, one or more of tungsten silicide or cobalt silicide can be alternatively or additionally used. The proportions of the foregoing materials can be varied for different memory arrays. As shown inFIG.24, chemical mechanical polishing of resistive material2302is performed so that resistive portion2402remains in socket1702. Resistive portion2402fills socket1702to a height2404. In one embodiment, after the chemical mechanical polishing, resistive portion2402fills at least 50 percent of the volume of socket1702, where the volume is determined by height2404multiplied by an area of the bottom surface1706of socket1702(such as shown inFIG.17). FIG.25shows a method for manufacturing a memory device that implements spike current suppression by forming a resistive layer in a socket, in accordance with some embodiments. For example, the method ofFIG.25can be used to form resistive layer2302ofFIG.23. In one example, the manufactured memory device is memory device101. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block2501, a via is formed in a memory array. In one example, the via is via1504. In one example, the memory array is memory array1502. At block2503, an access line is formed overlying the via. In one example, the access line is access line1612. At block2505, the access line is patterned to provide first and second portions. The patterning forms a socket. In one example, the socket is socket1702. At block2507, a resistive layer is formed in the socket. In one example, the resistive layer is resistive layer2302. 
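The fill criterion described above for FIG. 24 can be checked with a simple volume calculation, sketched below, in which the socket volume is taken as height 2404 multiplied by the area of the bottom surface 1706. The socket dimensions and the occupied volume are assumed values, and the helper function is illustrative only.

```python
# Minimal sketch of the fill-fraction check described for FIGS. 23-24. The socket
# volume is computed as the fill height times the bottom-surface area, and the
# resistive portion should occupy at least 50 percent of that volume.
# All dimensions are assumptions for illustration.

def meets_fill_target(fill_volume_m3: float,
                      fill_height_m: float,
                      bottom_area_m2: float,
                      target: float = 0.5) -> bool:
    socket_volume = fill_height_m * bottom_area_m2
    return fill_volume_m3 / socket_volume >= target

# Example with assumed dimensions: a 100 nm x 100 nm socket bottom, filled to a
# height of 80 nm, with ~70% of that region actually occupied by resistive material.
area = 100e-9 * 100e-9
height = 80e-9
print(meets_fill_target(0.7 * height * area, height, area))
```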
In one embodiment, a method includes: forming a via; forming an access line overlying the via; patterning the access line to provide first and second portions of the access line, where the patterning forms a socket that physically separates the first portion and the second portion, and where the first portion is configured to access a memory cell of a memory array; and forming a resistive layer (e.g.,2302ofFIG.23) in the socket, where the resistive layer electrically connects each of the first and second portions of the access line to the via. In one embodiment, the resistive layer includes at least one of: tungsten silicon nitride; titanium silicide nitride; tungsten nitride; titanium nitride; tungsten silicide; or cobalt silicide. In one embodiment, patterning the access line includes: forming a photoresist layer overlying the access line; patterning the photoresist layer; and performing an etch using the patterned photoresist layer to etch the access line, where performing the etch includes etching the access line to provide the socket. In one embodiment, the method further includes forming a driver underlying the memory array, where the driver is electrically connected to the via and configured to generate a voltage on the first portion of the access line for accessing the memory cell during a read or write operation. In one embodiment, the method further includes: after forming the resistive layer in the socket, performing chemical mechanical polishing of the resistive layer. After the chemical mechanical polishing, the resistive layer fills at least 50 percent of a volume of the socket. In some embodiments, spike current suppression is implemented by one or more charge screening structures that are formed into one or more access lines of a memory array. Each charge screening structure includes an insulating layer that splits the access line into top and bottom portions, each electrically isolated from the other by the insulating layer (e.g., a thin insulator located in the middle of the access line). This increases the electrical resistance to memory cells of the memory array that are located above and/or below one of the insulating layers. For example, the increased resistance forms a resistive bottleneck that chokes charge flowing from parasitic capacitances of the memory array that might otherwise damage a memory cell that has been selected. In one example, the insulating layer is an oxide. In some embodiments, the use of one or more charge screening structures having insulating layers in the access line as described herein can be combined with use of the split access line structure as described above (e.g., as described forFIGS.1-13), and/or use of the socket structure as described above (e.g., as described forFIGS.14-25). In one embodiment, the same access line may use charge screening structures, the split access line structure, and/or the socket structure at various points in the access line. In other embodiments, each type of structure can be used on different access lines. In one embodiment, a memory device includes a memory array. The memory array includes access lines. Each of one or several access lines can be configured to access memory cells of the memory array, the access line having a first portion and a second portion on opposite sides (e.g., left and right sides) of the access line. The first portion is configured to access a first memory cell, and the second portion is configured to access a second memory cell. 
Each of the first and second portions includes one or more charge screening structures. In one embodiment, the charge screening structures are implemented as various screening portions located along the access line. A first screening portion of the access line is located in an electrical path between the far memory cells accessed by the first portion and the via(s). The first screening portion has a first insulating layer in an interior region (e.g., an oxide layer in the middle) of the access line (e.g., on a left side of the array). A second screening portion of the access line is located in an electrical path between the far memory cells accessed by the second portion and the via(s). The second screening portion has a second insulating layer in an interior region of the access line (e.g., on a right side of the array). Each screening portion increases a resistance of an electrical path to the near memory cells located above or below one of the insulating layers, so that spike current is suppressed. FIG.26shows an access line2602having charge screening structures that are used for spike current suppression, in accordance with some embodiments. The charge screening structures include screening portions2608,2611. Each screening portion2608,2611has a respective insulating layer2610,2612that splits the access line2602into an upper or top portion (e.g.,2660) and a lower or bottom portion (e.g.,2662). The upper portion in effect provides an upper resistor, and the lower portion in effect provides a lower resistor. The upper and lower resistors increase the resistance of the electrical path used to access near memory cells above and below the insulating layer2610,2612. For example, the resistance of each upper and lower resistor as used to access one of these near memory cells is greater than a resistance of a comparable length of the conductive portion of access line2602used to access far memory cells that are not located overlying or underlying an insulating layer. Other portions of access line2602include conductive portions2604,2606on opposite sides of access line2602. Conductive portion2604is, for example, located near distal end2601of access line2602. Access line2602is used to access various memory cells within a memory array. In one example, the memory array is memory array102ofFIG.1. These memory cells include, for example, memory cells2640,2642,2644,2646. Near memory cells2644,2646are located underlying insulating layers2610,2612. Far memory cells2640,2642are located in portions of access line2602that do not contain any such insulating layer. Although not shown, other memory cells can be located overlying insulating layers2610,2612(e.g., in a deck of the memory array above access line2602). Access line2602includes a central conductive portion2613. Via2654is located underlying central conductive portion2613, which electrically connects via2654to screening portions2608,2611and conductive portions2604,2606. An optional resistive layer2630is located between via2654and access line2602. In one example, resistive layer2630is formed of tungsten silicon nitride (WSiN). Via2654is electrically connected to transistor circuitry2650. Transistor circuitry2650includes one or more drivers used to generate voltages on access line2602for accessing various memory cells. Transistor circuitry2650is formed at a surface of semiconductor substrate2652. In one example, transistor circuitry2650is implemented using bias circuitry124ofFIG.1. 
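As a first-order illustration of the resistive bottleneck created by a screening portion such as 2608 or 2611, the sketch below compares the resistance of a line segment located over the insulating layer, where only the upper portion (e.g., 2660) or lower portion (e.g., 2662) conducts, with a full-thickness segment of the same length. The resistivity and dimensions are assumptions chosen only for illustration.

```python
# Minimal sketch of the resistance increase over an insulating layer: the same
# segment length has a smaller conducting cross-section, so its resistance rises.
# Dimensions and resistivity are assumptions; the oxide is assumed mid-thickness.

RHO_W = 1.0e-7      # ohm*m, assumed effective resistivity of the tungsten line
WIDTH = 100e-9      # assumed access line width
LENGTH = 200e-9     # assumed lateral length of the screening portion
T_LINE = 40e-9      # assumed full access line thickness
T_OXIDE = 10e-9     # assumed insulating layer thickness

def segment_resistance(conducting_thickness_m: float) -> float:
    return RHO_W * LENGTH / (WIDTH * conducting_thickness_m)

r_full = segment_resistance(T_LINE)                    # segment with no insulating layer
r_lower = segment_resistance((T_LINE - T_OXIDE) / 2)   # only the lower portion conducts
print(f"full line: {r_full:.1f} ohm, over insulator: {r_lower:.1f} ohm "
      f"({r_lower / r_full:.1f}x increase)")
```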
In one example, semiconductor substrate2652is similar to semiconductor substrate314ofFIG.3. In one embodiment, an apparatus includes: a memory array including memory cells (e.g.,2640,2642,2644,2646); an access line (e.g.,2602) configured to access the memory cells, the access line having a first conductive portion (e.g.,2604) and a second conductive portion (e.g.,2606) on opposite sides of the access line; at least one via electrically connected to the first conductive portion and the second conductive portion; a first screening portion (e.g.,2608) of the access line, the first screening portion located in an electrical path between the first conductive portion and the via, and the first screening portion including a first insulating layer (e.g.,2610) in an interior region of the access line; and a second screening portion (e.g.,2611) of the access line, the second screening portion located in an electrical path between the second conductive portion and the via, and the second screening portion including a second insulating layer (e.g.,2612) in an interior region of the access line. In one embodiment, the first screening portion further includes a first upper resistor above the first insulating layer, and a first lower resistor below the first insulating layer; and the second screening portion further includes a second upper resistor (e.g.,2660) above the second insulating layer, and a second lower resistor (e.g.,2662) below the second insulating layer. In one embodiment, the access line is formed by placing a top conductive layer overlying a bottom conductive layer; the first upper resistor is a portion of the top conductive layer overlying the first insulating layer; and the first lower resistor is a portion of the bottom conductive layer underlying the first insulating layer. In one embodiment, a first memory cell accessed by the access line is located underlying or overlying the first insulating layer, and a second memory cell accessed by the access line is located underlying or overlying the second insulating layer. In one embodiment, the apparatus further includes a central conductive portion (e.g.,2613) of the access line located between the first conductive portion and the second conductive portion. The via is located underlying the central conductive portion; and the first insulating layer and the second insulating layer do not extend into the central conductive portion. In one embodiment, the apparatus further includes a resistive layer (e.g.,2630) between the via and the central conductive portion. In one embodiment, the resistive layer includes tungsten silicon nitride. In one embodiment, each of the first insulating layer and the second insulating layer has a thickness of 1 to 15 nanometers. In one embodiment, the at least one via is a single via; and the access line is a bit line. In one embodiment, the memory array is part of a memory device; and an access operation by a controller of the memory device to select the first memory cell addresses both the first and second conductive portions of the access line. FIG.27shows an access line2702having insulating layers2710,2712,2714located in interior regions of access line2702and used for spike current suppression, in accordance with some embodiments. In one example, access line2702is similar to access line2602. Access line2702includes left portion2704, right portion2706, and central portion2713. Left portion2704and right portion2706are on opposite sides of central portion2713. 
Insulating layers 2710, 2714 are located in interior regions of the left portion 2704 of access line 2702. Insulating layer 2712 is located in an interior region of the right portion 2706 of access line 2702. Insulating layer 2714 is spaced apart from insulating layer 2710 and located towards distal end 2701 of access line 2702. In one example, insulating layer 2710 is located in the middle of access line 2702 (e.g., at a height equal to 40-60 percent of the thickness 2711 of access line 2702). In other examples, insulating layer 2710 can be located at varying (e.g., higher or lower) heights within the interior of access line 2702 in order to customize the resistance of the top and bottom portions of access line 2702 located above and below insulating layer 2710.

Access line 2702 is used to access memory cells of a memory array (e.g., memory array 102 of FIG. 1). These memory cells include memory cells 2740, 2742, 2743, 2744, 2746. For example, memory cell 2740 is located overlying insulating layer 2714. Memory cell 2744 is located underlying insulating layer 2714. Driver 2750 is electrically connected to via 2754. Driver 2750 generates one or more voltages on access line 2702 when accessing memory cells. Central portion 2713 electrically connects via 2754 to left and right portions 2704, 2706 of access line 2702. An optional resistive layer 2730 is located between via 2754 and central portion 2713. In one example, resistive layer 2730 is similar to resistive layer 2630 of FIG. 26.

Insulating layer 2712 is located at a height 2709 above a bottom 2707 of access line 2702. Insulating layer 2712 has a central longitudinal axis 2705. Height 2709 is determined by the distance between bottom 2707 and central longitudinal axis 2705. In one example, height 2709 is 30-70 percent of thickness 2711 of access line 2702.

In one example, access line 2702 provides access to memory cells located in a deck of the memory array above the access line 2702, and to memory cells in a deck below the access line 2702. The height 2709 of insulating layer 2712 can be adjusted so that insulating layer 2712 is positioned more closely to the deck that needs more resistance screening. In one example, a determination is made (e.g., during manufacture) of those decks in the memory array that have a greater need of resistance screening and/or susceptibility to spike currents. In response to this determination, the insulating layer 2712 is positioned more closely to that particular deck (or decks) to provide increased protection against spike currents.

Insulating layer 2712 has a lateral length 2703. In one example, the lateral length is 50-300 nanometers. In one embodiment, the access line may include one or more resistive layers 2760, 2762. In one example, resistive layers 2760, 2762 can be formed similarly as described above for resistive films 1420, 1422 of FIG. 14.

In one embodiment, an apparatus includes: an access line (e.g., 2702) having a first portion (e.g., 2704), a second portion (e.g., 2706), and a central portion (e.g., 2713). The first and second portions are on opposite sides of the central portion, and each of the first and second portions is configured to access at least one memory cell (e.g., 2743, 2746) of a memory array. The access line includes a first insulating layer (e.g., 2710) in the first portion and a second insulating layer (e.g., 2712) in the second portion. Each of the first and second insulating layers is located in an interior region of the access line.
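The placement trade-off for insulating layer 2712 can be illustrated with the small sketch below, which computes how much conducting metal remains above and below the layer when its central longitudinal axis is placed at different fractions of the access line thickness. The thickness values are assumptions; the 30, 50, and 70 percent points are taken only from the range stated above.

```python
# Minimal sketch of the placement trade-off for an interior insulating layer:
# its height above the bottom of the line sets how much conducting metal remains
# above and below it, and therefore how strongly each adjacent deck is screened.
# All values are assumptions for illustration.

T_LINE = 40e-9       # assumed access line thickness (e.g., thickness 2711)
T_OXIDE = 8e-9       # assumed insulating layer thickness

def remaining_thickness(height_fraction: float) -> tuple:
    """Conducting thickness below and above the insulating layer when its central
    longitudinal axis sits at `height_fraction` of the line thickness."""
    center = height_fraction * T_LINE
    below = max(center - T_OXIDE / 2, 0.0)
    above = max(T_LINE - center - T_OXIDE / 2, 0.0)
    return below, above

for frac in (0.3, 0.5, 0.7):      # the 30-70 percent range given above
    below, above = remaining_thickness(frac)
    print(f"axis at {frac:.0%}: {below*1e9:.0f} nm below, {above*1e9:.0f} nm above")

# A lower placement leaves less metal below the oxide (more resistance, i.e. more
# screening, toward the deck underneath), and vice versa.
```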
The apparatus further includes a via (e.g.,2754) electrically connected, by the central portion of the access line, to the first and second portions of the access line; and a driver (e.g.,2750) electrically connected to the via, wherein the driver is configured to generate a voltage on the first portion to access a first memory cell (e.g.,2743), the first memory cell located in a portion of the memory array underlying or overlying the first insulating layer, and to generate a voltage on the second portion to access a second memory cell, the second memory cell located in a portion of the memory array underlying or overlying the second insulating layer. In one embodiment, the access line is configured to access at least 1,000 memory cells of the memory array; a first group of 100 to 500 memory cells of the memory array is located underlying the first insulating layer; and a second group of 100 to 500 memory cells (e.g., a group including memory cell2746) of the memory array is located underlying the second insulating layer. In one embodiment, the access line has a thickness (e.g.,2711), a central longitudinal axis (e.g.,2705) of the second insulating layer (e.g.,2712) is located at a height (e.g.,2709) above a bottom (e.g.,2707) of the access line, and the height is 30 to 70 percent of the thickness. In one embodiment, each of the first and second insulating layers has a lateral length (e.g.,2703) of 50 to 300 nanometers. For example, the lateral length can be varied to adjust the resistance of the access line2702as needed to accommodate varying conditions of spike current discharge. In one embodiment, the access line further includes a third insulating layer (e.g.,2714) located in an interior region of the first portion of the access line, the third insulating layer spaced apart from the first insulating layer and towards a distal end (e.g.,2701) of the first portion; and the voltage generated on the first portion is used to access a third memory cell, the third memory cell located in a portion of the memory array underlying or overlying the third insulating layer. In one embodiment, each of the first and second insulating layers includes at least one of silicon nitride, an atomic layer deposition (ALD) oxide, or a thermal oxide. In one embodiment, the memory array has a cross-point architecture. In one embodiment, the first memory cell is: a memory cell including chalcogenide; a memory cell including a select device, and a phase change material as a memory element; a self-selecting memory cell including chalcogenide; or a resistive memory cell. FIGS.28-32show steps in the manufacture of a memory device that implements spike current suppression by forming one or more charge screening structures in an access line, in accordance with some embodiments. In one example, the memory device is memory device101. FIG.28shows a memory array2802at an intermediate stage of manufacture. Memory array2802includes various memory cells2807,2809. Each memory cell2807,2809includes a memory stack containing various layers of materials (e.g., chalcogenide, phase change material, etc.) corresponding to the memory cell technology that has been chosen for use (see, e.g.,FIG.8). Memory cells2807,2809are an example of memory cells110,112; memory cells206,208; or memory cells1408,1410. Memory array2802includes a via2804. In some cases, via2804can be formed on a pad similar to pad906. Memory array2802as shown inFIG.28can be formed using conventional manufacturing techniques. 
As shown inFIG.28, a resistive layer2806has been formed overlying memory array2802. In one example, resistive layer2806is a tungsten silicon nitride layer. In one example, resistive layer2806provides resistive layer2630ofFIG.26. A bottom conductive layer2808has been formed overlying resistive layer2806. Bottom conductive layer2808has a distal end2810. In one example, distal end2810corresponds to distal end2601ofFIG.26or2701ofFIG.27. In one example, bottom conductive layer2808is tungsten. Other conductive materials may be used. As shown inFIG.29, a photoresist layer has been formed overlying bottom conductive layer2808. The photoresist layer is patterned to provide an opening that exposes a portion of bottom conductive layer2808. After patterning, a portion2902of the photoresist layer is overlying a portion of memory cells2809, and a portion2904of the photoresist layer is overlying via2804. The exposed portion of bottom conductive layer2808is located overlying memory cells2807. As shown inFIG.30, the exposed portion of bottom conductive layer2808has been etched using the patterned photoresist layer. This etching provides opening3002in the top surface of the bottom conductive layer2808. Opening3002has, for example, a depth of 1-15 nanometers. In one example, the etching is a dry etch process used to remove a few nanometers of tungsten. The photoresist is stripped in situ. As shown inFIG.31, an insulating layer3102has been formed in opening3002. In one example, the insulating layer3102is silicon nitride, an atomic layer deposition oxide, or a thermal oxide. In one example, insulating layer3102has a thickness of less than 15 nanometers. In one example, an oxide is deposited, and chemical mechanical polishing is performed with a stop on the bottom conductive layer2808(e.g., tungsten). Other types of insulators can be formed in opening3002. Memory cells2807are located underlying insulating layer3102. As shown inFIG.32, a top conductive layer3202is formed overlying bottom conductive layer2808and insulating layer3102. In one example, top conductive layer3202is tungsten. In other examples, other conductive materials can be used. In one example, insulating layer3102provides insulating layer2610ofFIG.26or insulating layer2710ofFIG.27. In one example, top and bottom conductive layers3202,2808provide access line2602or2702. In one example, top and bottom conductive layers3202,2808provide a bit line for a memory array. In one example, top and bottom conductive layers3202,2808are used to form other bit lines (not shown) of the memory array. In one example, the other bit lines are formed by patterning top and bottom conductive layers3202,2808. FIG.33shows a cross-sectional view (taken along line AA, as illustrated) of the access line and memory array ofFIG.32. As illustrated, various bit lines3302have top and bottom portions separated by insulating layer3102. Bit lines3302are formed by patterning top and bottom conductive layers3202,2808. FIG.34shows a method for manufacturing a memory device that implements spike current suppression using one or more charge screening structures in an access line, in accordance with some embodiments. For example, the method ofFIG.34can be used to form the charge screening structures ofFIG.26or27. In one example, the manufactured memory device is memory device101. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. 
Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block3401, a memory array including memory cells and one or more vias is formed. In one example, the memory cells are memory cells2640,2642,2644,2646. In one example, the vias include via2654. At block3403, a bottom conductive layer is formed overlying the memory cells and the vias. In one example, the bottom conductive layer is bottom conductive layer2808. At block3405, an opening is formed in a top surface of the bottom conductive layer. In one example, an opening is formed in bottom conductive layer2808. At block3407, an insulating layer is formed in the opening. In one example, the insulating layer is insulating layer3102. At block3409, a top conductive layer is formed overlying the insulating layer and the bottom conductive layer. In one example, the top conductive layer is top conductive layer3202. In one embodiment, a method includes: forming a memory array including memory cells and at least one via; forming a first conductive layer (e.g.,2808) overlying the memory cells and the via, wherein the first conductive layer is electrically connected to the memory cells; forming an opening in a top surface of the first conductive layer; forming an insulating layer (e.g.,3102) in the opening, wherein a portion of the memory cells are located underlying the insulating layer; and forming a second conductive layer (e.g.,3202) overlying the insulating layer and the first conductive layer, wherein the first and second conductive layers provide an access line for accessing the memory cells. In one embodiment, the method further includes forming a driver (e.g.,2750) in a semiconductor substrate. The memory array is formed overlying the semiconductor substrate, and the driver is electrically connected to the via. The driver is configured to generate a voltage on the access line for selecting one or more of the memory cells. In one embodiment, the method further includes forming a resistive layer (e.g.,2806) between the via and the first conductive layer. In one embodiment, the method further includes: forming a photoresist layer overlying the first conductive layer; patterning the photoresist layer; and etching the first conductive layer using the patterned photoresist layer to provide the opening in the top surface of the first conductive layer. In one embodiment, a first portion (e.g.,2904) of the patterned photoresist layer is overlying the via, and a second portion (e.g.,2902) of the patterned photoresist layer is overlying a portion of the memory cells located at a distal end (e.g.,2810) of the first conductive layer. In one embodiment, the access line is a first one of a plurality of bit lines (e.g., bit lines3302ofFIG.33), other ones of the bit lines are used to access other memory cells in the memory array, and the plurality of bit lines is formed from the first conductive layer and the second conductive layer. In one embodiment, the formed opening has a depth of 1 to 15 nanometers. FIG.35shows an access line3502having multiple insulating layers located in an interior region of the access line and used for spike current suppression, in accordance with some embodiments. In one example, access line3502is access line2602or2702.
Access line3502includes various insulating layers arranged in parallel with respect to a vertical orientation, as illustrated. These insulating layers include insulating layer3510and3511. In one example, each of the insulating layers is similar to insulating layer2610or2710. The lateral length of each insulating layer can be varied to customize a resistance of access line3502at various points along the access line3502. In one embodiment, varying the lateral length of the insulating layers provides a gradient in the resistance of the top and/or bottom portions of access line3502that are above or below the insulating layers. For example, the resistance of bottom portion3520of access line3502that is overlying memory cell3540is less (due to a greater thickness of the conductive material of access line3502) than the resistance of bottom portion3521of access line3502that is overlying memory cell3544(due to a lesser thickness of the conductive material of access line3502). In one example, memory cell3544, which is nearer to via2654, is more susceptible to spike current damage than memory cell3540, which is further away from via2654. Thus, increased resistance to spike current damage is provided by a greater number of overlying insulating layers. Memory cell3540is less susceptible to spike current damage, and thus has a lower number of overlying insulating layers. In various embodiments, the number of insulating layers provided in parallel can vary between two or more as desired. Although only a left portion of the insulating layers is illustrated as having a gradient, the right portion of the insulating layers may also have a gradient. In addition, the length of each insulating layer can be varied. It is not required that the insulating layers be formed to have a symmetrical structure. In one example, memory cells overlying access line3502have a different susceptibility to spike current damage (e.g., due to a different type of memory cell or structure), such that the structure of the insulating layers closer to the top surface of access line3502is different than the structure of the insulating layers closer to the bottom surface of access line3502. In one embodiment, the vertical spacing between the insulating layers can also vary from one layer to another. In one example, the vertical spacing between each insulating layer is 5-30 nanometers. Various embodiments related to memory devices using an access line having one or more resistive layers for spike current suppression when accessing memory cells in a memory array are now described below. The generality of the following description is not limited by the various embodiments described above. As mentioned above, in some memory arrays (e.g., a cross-point memory array), current discharges through a memory cell may result in current spikes (e.g., relatively high current discharge through the memory cell in a relatively short time period), which may cause damage to the memory cell. A cross-point memory device typically uses the junction of two perpendicular metal lines (e.g., word lines and bit lines) to supply the voltages required to read and write individual memory cells. Some newer cross-point memory devices use a quilt architecture in which word line and bit line drivers are spread across a tile and each block of cells is defined by an electrical distance from the respective driver. 
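The resistance gradient described for FIG.35 can be illustrated with a simple segmented model in which the number of insulating layers stacked over a stretch of the access line reduces the conducting thickness there. The resistivity, dimensions, segment length, and coverage profile below are assumed example values, not numbers from the disclosure; the sketch only shows why the short path to a near memory cell gains proportionally more resistance than the already-long path to a far memory cell.

```python
# Illustrative model of the FIG. 35 gradient: stacked insulating layers thin the
# conducting cross-section of the access line, adding series resistance between
# the via and nearby cells. All values are assumed examples.

RHO = 5.6e-8       # ohm*m, assumed resistivity of the bulk conductor (tungsten-like)
WIDTH = 50e-9      # m, assumed access-line width
THICKNESS = 60e-9  # m, assumed total access-line thickness
T_INS = 8e-9       # m, assumed thickness of each insulating layer
SEG_LEN = 100e-9   # m, assumed line length per cell position

def segment_r(n_insulators):
    """Series resistance of one segment whose conducting thickness is reduced
    by n stacked insulating layers."""
    t_eff = THICKNESS - n_insulators * T_INS
    return RHO * SEG_LEN / (WIDTH * t_eff)

# Assumed coverage profile: more stacked insulating layers near the via
# (segment 0), tapering toward the distal end, mirroring the gradient above.
coverage = [3, 3, 2, 2, 1, 1, 0, 0]

plain = graded = 0.0
for i, n in enumerate(coverage):
    plain += segment_r(0)    # path resistance without any insulating layers
    graded += segment_r(n)   # path resistance with the graded insulating layers
    print(f"cell {i}: plain ~{plain:5.2f} ohm, graded ~{graded:5.2f} ohm "
          f"(+{(graded / plain - 1) * 100:3.0f}% from the insulating layers)")
```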
The electrical distance from a driver to a memory cell results in different cell characteristics (e.g., different set threshold voltages due to leakage and/or seasoning). Memory cells having an electrical distance closer to a driver are referred to herein as near memory cells. Memory cells having an electrical distance farther away from a driver are referred to herein as far memory cells. The severity of spike currents can depend on the electrical distance of a particular memory cell. In one example, a memory cell includes a selector device (e.g., using a chalcogenide), a memory device, and electrodes. During read operations of the cell, a high potential is applied to the cell. When the applied voltage is above the threshold voltage, the selector device snaps. The snapping of the selector device can instantaneously result in a transient spike current because of charge built up in the memory array and periphery circuits. Depending on the magnitude of the spike current, the memory device may undesirably change its state from set to reset. This can cause the technical problem of erroneous and/or unreliable data storage for a memory device. The spike current exponentially increases with higher current delivery. Thus, near memory cells with higher current delivery (e.g., because of the proximity of CMOS circuit drivers) are more significantly impacted by the spike current as compared to far memory cells. The impact of spike current is primarily noticed during the set read disturb test (a test for reading written cells) at probe. This results in lower wafer yield performance. Spike mitigation generally can be achieved by reducing capacitance and/or increasing resistance. Lowering the capacitance can be achieved by moving to lower resistance metal layers and/or making architectural changes. However, such changes typically need significant development and integration work. Increasing the resistance can be achieved in various ways. Some methods change one of the materials in the memory cell. Examples include nitrogen incorporation in electrodes, multi-layer WSiN/carbon, and electrode modifications. However, changing the memory cell stack itself can impact performance and/or reliability in some cases. To address technical problems associated with spike currents, various embodiments described below increase the resistance between memory cells and the drivers (e.g., CMOS drivers) that generate voltages on access lines used to access the memory cells. In some embodiments, resistance is increased by adding thin, high resistance films to metal access lines. In one embodiment, one or more resistive layers are integrated into an access line (e.g., metal word line) to provide a composite access line. In one example, the resistive layer is formed of tungsten silicon nitride (WSiN) and/or amorphous carbon. In various embodiments, the resistive layers are integrated into the access line such that the resistive layers are located at the top or bottom of the access line, and/or in an interior region of the access line. The volume occupied by the composite access line is substantially the same as for an access line (or portion of the access line) that does not include a resistive layer because the resistive layers are integrated into the access line structure (e.g., in the interior region and/or at the top or bottom of the access line). 
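A first-order way to see why near memory cells are hit harder by spike current, and why added line resistance helps them most, is to estimate the peak discharge current as the available voltage divided by the series resistance back to the driver. The sketch below uses that simple estimate; the voltage, capacitance, and per-cell resistance values are assumed examples, and the model ignores selector dynamics, inductance, and distributed capacitance.

```python
# First-order sketch: when the selector snaps, charge stored in the array and
# periphery discharges through the cell, limited mainly by the series resistance
# back to the driver. Peak current is approximated as V / R_series. All numbers
# are assumed example values, not values from the disclosure.

V_SNAP = 3.0            # V, assumed voltage across the path when the selector snaps
C_PARASITIC = 1e-12     # F, assumed lumped parasitic capacitance that discharges
R_DRIVER = 500.0        # ohm, assumed fixed driver/periphery resistance
R_PER_CELL = 50.0       # ohm, assumed access-line resistance per cell of electrical distance

def spike_estimate(electrical_distance_cells, extra_series_r=0.0):
    """Return (approximate peak current in amps, RC decay constant in seconds)."""
    r_path = R_DRIVER + R_PER_CELL * electrical_distance_cells + extra_series_r
    return V_SNAP / r_path, r_path * C_PARASITIC

for dist, label in ((2, "near cell"), (40, "far cell")):
    i0, tau = spike_estimate(dist)
    i1, _ = spike_estimate(dist, extra_series_r=300.0)  # e.g. with an added in-line structure
    print(f"{label:9s}: ~{i0 * 1e3:.2f} mA peak (tau ~{tau * 1e9:.2f} ns); "
          f"with +300 ohm in the line: ~{i1 * 1e3:.2f} mA")
```

Under this estimate the near cell sees the larger peak current, and the same added series resistance reduces its peak by a larger fraction than it does for the far cell.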
One advantage provided by a composite access line is that the access line modifies the resistance outside the memory cell stack, which has less impact on memory cell performance and/or reliability. The use of composite access lines provides flexibility to modulate the resistance. For example, the choice of the material used to form the resistive layer (e.g., tungsten silicon nitride film), the thickness of the film, and/or the location of the film provide multiple ways to modulate the resistance. The composite access lines are not limited to use in cross-point memory devices, but can also be used in other semiconductor devices that use metal lines. In one embodiment, a memory device includes an access line for accessing memory cells of a memory array. The access line is formed of a conductive material having a first resistivity, and the access line includes a resistive layer having a second resistivity greater than the first resistivity. The memory cells are located in a portion of the memory array underlying or overlying the resistive layer. The memory device further includes a via electrically connected to the access line, and a driver electrically connected to the via. The driver is configured to generate a voltage on the access line to access the memory cells. In one example, the resistive layer is a high resistance film of tungsten silicon nitride and/or carbon. In one example, a thin deposited carbon film can be formed on top of or underneath a thin deposited tungsten silicon nitride film to provide the resistive layer. In one example, each film is less than 10 nanometers in thickness. In one embodiment, the resistive layer is located at a top or bottom of the access line. In one embodiment, the resistive layer is located in an interior region of the access line. In one embodiment, the access line includes two or more resistive layers. In one example, a first resistive layer is vertically stacked above a second resistive layer in the access line. In one example, a first resistive layer is located at a bottom of the access line, and a second resistive layer is located at a top of the access line. FIG.36shows an access line3602in a memory array. Access line3602has resistive layers3610,3612used for spike current suppression, in accordance with some embodiments. During manufacture of a memory device, resistive layers3610,3612can be positioned as part of access line3602either vertically up or down, and/or horizontally left or right as desired for modulating resistance. For example, the resistance for a circuit path (e.g., used for cell selection when reading or writing a cell) from a driver (not shown) of transistor circuitry2650through via2654to a memory cell can be varied by the vertical and/or horizontal positioning of one or more of resistive layers3610,3612. In one example, access line3602is similar to access line2602ofFIG.26, except that resistive layers3610,3612are used instead of insulating layers2610,2612. The structure illustrated inFIG.36can be manufactured similarly as described above forFIG.26, except that resistive layers3610,3612are formed in place of insulating layers2610,2612. Access line3602is used to access various memory cells2640,2642,2644,2646of the memory array. Memory cells2644,2646are near memory cells relative to memory cells2640,2642, which are far memory cells. The electrical distance from a driver in transistor circuitry2650to these near memory cells is less than the electrical distance to the far memory cells. 
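One simple way to picture how a thin resistive film modulates resistance outside the memory cell stack is as a vertical series resistance that current crosses when passing between the access line and a cell, roughly R = resistivity x thickness / contact area. The sketch below uses that simplification; the resistivity values, film thicknesses, and contact area are assumed examples chosen only to show the scaling, not values given in the disclosure.

```python
# Sketch of the series resistance added by a thin resistive film integrated with
# the access line: current passing from the line into a cell crosses the film
# vertically, contributing roughly R = rho * t / A. All numbers are assumed.

CONTACT_AREA = (20e-9) ** 2   # m^2, assumed 20 nm x 20 nm contact footprint

ASSUMED_RESISTIVITY = {        # ohm*m, assumed values for illustration only
    "tungsten silicon nitride": 1e-3,
    "amorphous carbon": 1e-2,
}

def film_series_resistance(rho, thickness, area=CONTACT_AREA):
    """Vertical resistance of a thin film: resistivity * thickness / area."""
    return rho * thickness / area

for material, rho in ASSUMED_RESISTIVITY.items():
    for t_nm in (3, 5, 10):   # around the sub-10 nm film thicknesses noted above
        r = film_series_resistance(rho, t_nm * 1e-9)
        print(f"{material:24s} {t_nm:2d} nm: ~{r:,.0f} ohm per cell contact")
```

The same calculation shows the three knobs mentioned above: material (resistivity), film thickness, and, through which cells the film covers, the location of the added resistance.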
In one embodiment, resistive layers3610,3612are located overlying the near memory cells, but not the far memory cells. Resistive layers3610,3612are located in an interior region of access line3602. Access line3602is formed of a bulk conductive material (e.g., tungsten). As an example, resistive layer3612is located in the interior region of access line3602due to a portion3660of the bulk conductive material being located above resistive layer3612, and a portion3662of the bulk conductive material being located below resistive layer3612. In contrast, in other embodiments described below, a resistive layer can be located at a top or bottom of access line3602. Access line3602has a central portion3613and left and right portions3604,3606. Central portion3613is formed overlying via2654, which electrically connects access line3602to transistor circuitry2650. An optional resistive layer2630is located between via2654and access line3602. Resistive layer2630is formed on a bottom surface of access line3602and on a top surface of via2654. FIG.37shows an access line3702having multiple vertically-stacked resistive layers3710,3711and3712,3714located in an interior region of the access line3702, in accordance with some embodiments. Access line3702electrically connects to various memory cells including near memory cells3740,3741and3760,3761. Access line3702also electrically connects to far memory cells3742,3743and3762,3763. Resistive layer3710is located overlying near memory cells3740and far memory cells3742. Resistive layer3711is located underlying near memory cells3760and far memory cells3762. Resistive layer3712is located overlying near memory cells3741, but is not formed overlying far memory cells3743. Resistive layer3714is located underlying near memory cells3761, but is not formed underlying far memory cells3763. In one embodiment, one or more performance characteristics of far memory cells3742,3762is different than for far memory cells3743,3763(e.g., due to different memory cell stack materials and/or structure). The difference in positioning of resistive layers3710,3711relative to overlying or underlying memory cells as compared to resistive layers3712,3714is based on at least one of these performance characteristics (e.g., the resistance circuit path to a driver is designed to be different based on at least one characteristic). A central portion3713of access line3702is located overlying via3754. Central portion3713and via3754are formed in a socket region of a memory array. In one example, via3754electrically connects to a driver formed in CMOS circuitry (not shown), similarly as described above. Resistive layer3730is formed on a bottom surface of access line3702. Resistive layer3732is formed on a top surface of access line3702. In one example, resistive layers3730,3732are tungsten silicon nitride. Access line3702is formed of a conductive material. In one example, the conductive material is tungsten or another metal. The resistive layers have a resistivity greater than the resistivity of the conductive material used to form access line3702. In one example, the resistive layers are formed of tungsten silicon nitride and/or carbon. In one example, a layer of tungsten silicon nitride is formed on or under a layer of carbon to provide each resistive layer. In one embodiment, access line3702is formed by depositing two metal layers. Resistive layers3710,3712are formed in a bottom metal layer. Resistive layers3711,3714are formed in a top metal layer formed (e.g., by deposition) on the bottom metal layer.
In one example, the bottom metal layer is a second cut metal line in a cross-point memory array. In one example, the top metal layer is a first cut metal line in the cross-point memory array. FIG.38shows an access line3802having resistive layers3810,3811located at a top and bottom of the access line3802, in accordance with some embodiments. The resistive layer3810at the bottom is patterned so that the resistive layer is overlying near memory cells3740, but not far memory cells3742. Via3854electrically connects the memory cells to a driver (not shown). In one embodiment, access line3802is formed using two metal layers3850,3860(the positioning of the two metal layers is indicated by a dashed line). For example, top metal layer3850is formed on bottom metal layer3860. Resistive layer3832is formed on a top surface of access line3802. More specifically, because resistive layer3811is located at the top of access line3802, resistive layer3832is formed directly on the top surface of resistive layer3811. Resistive layer3830is formed on a bottom surface of access line3802. More specifically, because resistive layer3810is located at the bottom of access line3802, a portion of resistive layer3830is formed directly on the bottom surface of resistive layer3810. The remaining portion of resistive layer3830is formed on a bottom surface of the bulk conductive material used to form access line3802. A portion of resistive layer3830is formed on top of via3854. FIG.39shows an access line3802having resistive layers3910,3911located at a bottom and top of access line3802, in accordance with some embodiments. The resistive layer3911at the top is patterned so that the resistive layer3911is underlying near memory cells3760, but not far memory cells3762. A portion of resistive layer3832is located in direct contact with a top surface of resistive layer3911. Resistive layer3930is on a bottom surface of access line3802. Resistive layer3930has an opening so that via3954directly contacts resistive layer3910. Resistive layers3810,3811ofFIG.38and/or resistive layers3911,3910ofFIG.39can be formed of various metal oxides or metal nitrides. In one example, the resistive layer is formed of tungsten silicon nitride. The thickness of each resistive layer can be varied as desired. In one example, the thickness of the layer is 3-10 nanometers. In one embodiment, resistive layers3830,3832can be formed of similar materials and/or thicknesses as used for resistive layers3810,3811or3910,3911. FIGS.40-42show steps in the manufacture of a memory device (e.g., a cross-point memory array) having an access line including one or more resistive layers for spike current suppression, in accordance with some embodiments.FIG.40shows the memory device at an initial stage of manufacture. Resistive layer4030has been formed overlying memory cells3740,3742. Resistive layer4030is formed on top of via4054, which is located in a socket region4002of a memory array. Resistive layer4010has been formed on top of resistive layer4030. FIG.41shows the memory device after resistive layer4010is patterned to provide resistive layer4110. Resistive layer4110is patterned and formed by removing a portion of resistive layer4010that is overlying far memory cells3742. In one example, the patterning of the resistive layer (e.g., a thin carbon film) is performed using a photolithography step followed by a dry etch step to etch the new resistive layer. A wet clean step is used to remove any residuals.
A chemical mechanical polishing (CMP) step is used to planarize the new resistive layer. FIG.42shows the memory device after depositing a conductive material to form an access line4260. In one example, the conductive material is a metal. In one example, the conductive material is tungsten. After deposition, the access line (e.g., metal line) is processed using chemical mechanical polishing (CMP) to planarize the access line. After the CMP, a top surface of access line4260is planarized. In one example, access line4260is formed by sputtering or chemical vapor deposition (CVD). Access line4260is an example of access line3802, and via4054is an example of via3854. Resistive layer4110is an example of resistive layer3810, and resistive layer4030is an example of resistive layer3830. Other resistive layers in access line4260can be formed similarly as described above. In some embodiments, access line4260can be formed by depositing two or more layers of metal. FIGS.43-44show methods for manufacturing a memory device including an access line having one or more resistive layers, in accordance with some embodiments. For example, the method ofFIG.43or44can be used to form the access line and resistive layers ofFIGS.36-39. In one example, the manufactured memory device is memory device101. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. Referring toFIG.43, at block4301, a memory array including memory cells and one or more vias is formed. In one example, memory cells2640,2642,2644,2646and via2654are formed as part of a memory array. At block4303, a bottom conductive layer is formed overlying the memory cells and the vias. In one example, access line3602is formed by depositing two tungsten layers (e.g., a bottom tungsten layer and a top tungsten layer). At block4305, a resistive layer is formed on the bottom conductive layer. In one example, resistive layers3610,3612are formed on the bottom tungsten layer, and then a top tungsten layer is deposited on the resistive layers3610,3612. At block4307, a top conductive layer is formed on the resistive layer and the bottom conductive layer. In one example, the top tungsten layer is formed on resistive layers3610,3612. In one embodiment, a method comprises: forming a memory array comprising memory cells and at least one via; forming a first conductive layer (e.g., a first deposited tungsten layer) overlying the memory cells and the via, wherein the first conductive layer is electrically connected to the memory cells; forming a resistive layer (e.g., resistive layer3610) on a top surface of the first conductive layer, wherein a portion of the memory cells are located underlying the resistive layer; and forming a second conductive layer (e.g., a second deposited tungsten layer) overlying the resistive layer and the first conductive layer. The first and second conductive layers are configured in an access line (e.g., access line3602) for accessing the memory cells.
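For reference, the FIG.43 flow can be written down as a simple ordered list of steps. The sketch below is only bookkeeping of the sequence described above; the step labels and the list representation are illustrative, not a fabrication recipe.

```python
# A bookkeeping sketch of the FIG. 43 flow (blocks 4301-4307) as an ordered list
# of steps. The labels summarize the described sequence and are not fab recipes.

FIG43_FLOW = (
    "form memory array with memory cells and via(s)",                        # block 4301
    "deposit bottom conductive layer (e.g. tungsten) over cells and vias",    # block 4303
    "form resistive layer(s) on the bottom conductive layer",                 # block 4305
    "deposit top conductive layer over the resistive and bottom layers",      # block 4307
)

def describe(flow):
    """Print the steps in order, one per line."""
    for step_no, step in enumerate(flow, start=1):
        print(f"step {step_no}: {step}")

if __name__ == "__main__":
    describe(FIG43_FLOW)
```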
In one embodiment, the method further comprises: forming a driver in a semiconductor substrate (e.g.,2652), wherein the memory array is formed overlying the semiconductor substrate, and the driver is electrically connected to the via (e.g.,2654). The driver is configured to generate a voltage on the access line for selecting one or more of the memory cells (e.g.,2640,2642,2644,2646). In one embodiment, the resistive layer is a first resistive layer, and the method further comprises forming a second resistive layer (e.g.,2630) between the via and the first conductive layer. In one embodiment, the first resistive layer comprises tungsten silicon nitride or carbon, and the second resistive layer comprises tungsten silicon nitride. In one embodiment, the method further comprises: forming a photoresist layer overlying the first conductive layer; patterning the photoresist layer; and etching the first conductive layer using the patterned photoresist layer to provide an opening at the top surface of the first conductive layer. Forming the resistive layer on the top surface of the first conductive layer comprises forming the resistive layer in the opening. In one embodiment, the access line is a first one of a plurality of bit lines, other ones of the bit lines are used to access other memory cells in the memory array, and the plurality of bit lines is formed at least in part from the first conductive layer and the second conductive layer. Referring toFIG.44, at block4401, one or more vias (e.g.,3754) are formed in a memory array. The memory array includes memory cells (e.g.,3740,3741,3742,3743). At block4403, a conductive material is deposited to form an access line (e.g.,3702,3802,4260). The conductive material is deposited overlying the via (e.g.,4054) and the memory cells. At block4405, a resistive layer (e.g.,3710,3714,3810,3811,4110) is formed as part of the access line. The memory cells are located underlying or overlying the resistive layer. In one embodiment, a method comprises: forming at least one via (e.g.,4054) in a memory array; and forming an access line (e.g.,4260) overlying the via for accessing first memory cells of the memory array, wherein the access line is formed of a conductive material having a first resistivity. Forming the access line comprises: depositing the conductive material; and forming a first resistive layer (e.g.,4110) as part of the access line. The first resistive layer has a second resistivity greater than the first resistivity, and the first memory cells (e.g.,3740) are located underlying or overlying the resistive layer. In one embodiment, the conductive material is deposited on the first resistive layer. In one embodiment, the first resistive layer (e.g., resistive layer3811ofFIG.38) is deposited on the conductive material. In one embodiment, the first resistive layer comprises at least one of tungsten silicon nitride or carbon. In one embodiment, the method further comprises depositing a second resistive layer (e.g.,4030) on the at least one via and on the first memory cells. The first resistive layer is formed on the second resistive layer. In one embodiment, the second resistive layer is tungsten silicon nitride. In one embodiment, the method further comprises depositing a second resistive layer (e.g.,3832ofFIG.38) on the first resistive layer. The first memory cells (e.g.,3760) are formed on the second resistive layer. In one embodiment, the first resistive layer (e.g.,3710,3712) is located in an interior region of the access line. 
The method further comprises forming a second resistive layer (e.g.,3711,3714) as part of the access line. The second resistive layer is located in the interior region of the access line and overlying the first resistive layer. In one embodiment, the method further comprises forming a driver. The driver is configured to generate a voltage on the access line to access the first memory cells. The first memory cells have first electrical distances from the driver. The memory array has second memory cells having second electrical distances from the driver, and the second electrical distances are greater than the first electrical distances. Forming the first resistive layer comprises patterning and etching the first resistive layer to remove a first portion of the first resistive layer overlying the second memory cells. In one embodiment, patterning and etching the first resistive layer (e.g.,3810,4110) further comprises removing a second portion of the first resistive layer overlying the at least one via (e.g.,3854,4054). In one embodiment, an apparatus comprises: an access line for accessing first memory cells of a memory array, wherein the access line is formed of a conductive material having a first resistivity, the access line comprises a resistive layer having a second resistivity greater than the first resistivity, and the first memory cells are located in a first portion of the memory array underlying or overlying the resistive layer; a via electrically connected to the access line; and a driver electrically connected to the via, wherein the driver is configured to generate a voltage on the access line to access the first memory cells. In one embodiment, the resistive layer (e.g.,3810,3811,3910,3911) is located at a top or bottom of the access line (e.g.,3802). In one embodiment, the resistive layer (e.g.,3710,3711,3712,3714) is located in an interior region of the access line (e.g.,3702). In one embodiment, the conductive material is tungsten, and the resistive layer is formed of at least one of tungsten silicon nitride or carbon. In one embodiment, the first memory cells have respective first electrical distances from the driver; the memory array has second memory cells having respective second electrical distances from the driver; the second electrical distances are greater than the first electrical distances; and the resistive layer is patterned and etched so that a portion of the resistive layer underlying or overlying the second memory cells is removed. In one embodiment, each of the first memory cells is: a memory cell comprising chalcogenide; a memory cell comprising a select device, and a phase change material as a memory element; a self-selecting memory cell comprising chalcogenide; or a resistive memory cell. In one embodiment, the resistive layer is a first resistive layer, the access line further comprises a second resistive layer, and the second resistive layer is located overlying or underlying the first resistive layer. In one embodiment, at least one of the first resistive layer or the second resistive layer is located at a top or bottom of the access line. In one embodiment, the resistive layer comprises at least one of tungsten silicon nitride or carbon. In one embodiment, the apparatus further comprises a tungsten silicon nitride layer (e.g.,3730,3732,3830,3832) located on a top or bottom surface of the access line, wherein a portion of the tungsten silicon nitride layer is located between the first memory cells and the access line. 
In one embodiment, a volume of the conductive material of the access line is at least 70 percent of a total volume of the access line. In one embodiment, the access line is formed by: sputtering or chemical vapor deposition (CVD); and after the sputtering or CVD, chemical mechanical polishing (CMP) so that a top surface of the access line is planarized. In one embodiment, an apparatus comprises: an access line (e.g.,3602) having a first portion, a second portion, and a central portion (e.g.,3613), wherein: the first and second portions are on opposite sides of the central portion, and each of the first and second portions is configured to access at least one memory cell of a memory array; and the access line includes a first resistive layer (e.g.,3610) in the first portion and a second resistive layer (e.g.,3612) in the second portion, each of the first and second resistive layers is configured as part of the access line; at least one via electrically connected, by the central portion of the access line, to the first and second portions of the access line; and a driver electrically connected to the at least one via. The driver is configured to generate a voltage on the first portion to access a first memory cell, the first memory cell located in a portion of the memory array underlying or overlying the first resistive layer, and to generate a voltage on the second portion to access a second memory cell, the second memory cell located in a portion of the memory array underlying or overlying the second resistive layer. In one embodiment, the central portion is located in a socket region of the memory array, and the at least one via is located in the socket region. In one embodiment, the access line is configured to access at least 1,000 memory cells of the memory array; a first group of 100 to 500 memory cells of the memory array is located overlying or underlying the first resistive layer; and a second group of 100 to 500 memory cells of the memory array is located overlying or underlying the second resistive layer. In one embodiment, a central longitudinal axis of at least one of the first resistive layer or the second resistive layer is located at a height above a bottom of the access line, and the height is 30 to 70 percent of a thickness of the access line. In one embodiment, each of the first and second resistive layers (e.g.,3810,3811) is located at a top or bottom of the access line. In one embodiment, each of the first and second resistive layers has a lateral length of at least 50 nanometers. In one embodiment, each of the first and second resistive layers is formed of at least one of tungsten silicon nitride, carbon, a metal oxide, or a metal nitride. In one embodiment, each of the first and second resistive layers has a thickness of 3 to 10 nanometers. In one embodiment, the first memory cell is: a memory cell comprising chalcogenide; a memory cell comprising a select device, and a phase change material as a memory element; a self-selecting memory cell comprising chalcogenide; or a resistive memory cell. The description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. In this description, various functions and/or operations of a memory device may be described as being performed by or caused by software code to simplify description. However, those skilled in the art will recognize what is meant by such expressions is that the functions and/or operations result from execution of the code by one or more processing devices, such as a microprocessor, Application-Specific Integrated Circuit (ASIC), graphics processor, and/or a Field-Programmable Gate Array (FPGA). Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry (e.g., logic circuitry), with or without software instructions. Functions can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are not limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by a computing device. The memory device as described above can include one or more processing devices (e.g., processing device116), such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device. Routines executed to implement memory operations may be implemented as part of an operating system, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions (sometimes referred to as computer programs). Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface). The computer programs typically include one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects. A computer-readable medium can be used to store software and data which when executed by a computing device causes the device to perform various methods for a memory device (e.g., read or write operations). The executable software and data may be stored in various places including, for example, ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer to peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer to peer networks at different times and in different communication sessions or in a same communication session. 
The data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a computer-readable medium in entirety at a particular instance of time. Examples of computer-readable media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, solid-state drive storage media, removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMs), Digital Versatile Disks (DVDs), etc.), among others. The computer-readable media may store the instructions. Other examples of computer-readable media include, but are not limited to, non-volatile embedded devices using NOR flash or NAND flash architectures. Media used in these architectures may include un-managed NAND devices and/or managed NAND devices, including, for example, eMMC, SD, CF, UFS, and SSD. In general, a non-transitory computer-readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a computing device (e.g., a computer, mobile device, network device, personal digital assistant, manufacturing tool having a controller, any device with a set of one or more processors, etc.). A “computer-readable medium” as used herein may include a single medium or multiple media (e.g., that store one or more sets of instructions). In various embodiments, hardwired circuitry may be used in combination with software and firmware instructions to implement various functions of a memory device. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by a computing device. Various embodiments set forth herein can be implemented for memory devices that are used in a wide variety of different types of computing devices. As used herein, examples of a “computing device” include, but are not limited to, a server, a centralized computing platform, a system of multiple computing processors and/or components, a mobile device, a user terminal, a vehicle, a personal communications device, a wearable digital device, an electronic kiosk, a general purpose computer, an electronic document reader, a tablet, a laptop computer, a smartphone, a digital camera, a residential domestic appliance, a television, or a digital music player. Additional examples of computing devices include devices that are part of what is called “the internet of things” (IOT). Such “things” may have occasional interactions with their owners or administrators, who may monitor the things or modify settings on these things. In some cases, such owners or administrators play the role of users with respect to the “thing” devices. In some examples, the primary mobile device (e.g., an Apple Phone) of a user may be an administrator server with respect to a paired “thing” device that is worn by the user (e.g., an Apple watch). In some embodiments, the computing device can be a computer or host system, which is implemented, for example, as a desktop computer, laptop computer, network server, mobile device, or other computing device that includes a memory and a processing device. 
The host system can include or be coupled to a memory sub-system (e.g., memory device101) so that the host system can read data from or write data to the memory sub-system. The host system can be coupled to the memory sub-system via a physical host interface. In general, the host system can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. In some embodiments, the computing device is a system including one or more processing devices. Examples of the processing device can include a microcontroller, a central processing unit (CPU), special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), a system on a chip (SoC), or another suitable processor. In one example, a computing device is a controller of a memory system. The controller includes a processing device and memory containing instructions executed by the processing device to control various operations of the memory system. Although some of the drawings illustrate a number of operations in a particular order, operations which are not order dependent may be reordered and other operations may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art and so do not present an exhaustive list of alternatives. In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
131,124
11862216
DETAILED DESCRIPTION Technical solutions in some embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are merely some but not all embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present disclosure shall be included in the protection scope of the present disclosure. Unless the context requires otherwise, throughout the description and the claims, the term “comprise” and other forms thereof such as the third-person singular form “comprises” and the present participle form “comprising” are construed as having an open and inclusive meaning, i.e., “including, but not limited to.” In the description of the specification, the terms such as “one embodiment,” “some embodiments,” “exemplary embodiments,” “example,” “specific example,” or “some examples” are intended to indicate that specific features, structures, materials, or characteristics related to the embodiment(s) or example(s) are included in at least one embodiment or example of the present disclosure. Schematic representations of the above terms do not necessarily refer to the same embodiment(s) or example(s). In addition, the specific features, structures, materials, or characteristics may be included in any one or more embodiments or examples in any suitable manner. Hereinafter, the terms “first” and “second” are used for descriptive purposes only, and are not to be construed as indicating or implying the relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined with “first” or “second” may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present disclosure, the term “a plurality of” or “the plurality of” means two or more unless otherwise specified. In a shift register provided by the embodiments of the present disclosure, transistors used in the shift register may be thin film transistors (TFTs), field effect transistors (e.g., metal oxide semiconductor field-effect transistors (MOS-FETs)), or other switching devices with the same characteristics. The embodiments of the present disclosure are all described by considering an example in which the transistors are the thin film transistors. In the shift register provided by the embodiments of the present disclosure, a control electrode of each thin film transistor used in the shift register is a gate of the thin film transistor, a first electrode thereof is one of a source and a drain of the thin film transistor, and a second electrode thereof is the other of the source and the drain of the thin film transistor. Since the source and the drain of the thin film transistor may be symmetrical in structure, there may be no difference in structure between the source and the drain of the thin film transistor. That is, there may be no difference in structure between the first electrode and the second electrode of the thin film transistor in the embodiments of the present disclosure. For example, in a case where the transistor is a P-type transistor, the first electrode of the transistor is a source, and the second electrode thereof is a drain. For example, in a case where the transistor is an N-type transistor, the first electrode of the transistor is a drain, and the second electrode thereof is a source.
In the embodiments of the present disclosure, a capacitor may be a capacitor device fabricated separately through a process. For example, the capacitor is realized by fabricating special capacitor electrodes, and each capacitor electrode of the capacitor may be made of a metal layer, a semiconductor layer (e.g., polysilicon doped with impurities), or the like. Alternatively, the capacitor may be realized through a parasitic capacitor between transistors, or through a parasitic capacitor between a transistor itself and another device or wire, or through a parasitic capacitor between lines of a circuit itself. In the shift register provided by the embodiments of the present disclosure, “a first node”, “a second node” and the like do not represent actual components, but represent junctions of related electrical connections in a circuit diagram. That is, these nodes are nodes equivalent to the junctions of the related electrical connections in the circuit diagram. In the shift register provided in the embodiments of the present disclosure, a “low voltage” refers to a voltage that can make an operated P-type transistor included in the shift register turned on, and cannot make an operated N-type transistor included in the shift register turned on (i.e., the N-type transistor is turned off); correspondingly, a “high voltage” refers to a voltage that can make the operated N-type transistor included in the shift register turned on, and cannot make the operated P-type transistor included in the shift register turned on (i.e., the P-type transistor is turned off). As shown inFIG.1, some embodiments of the present disclosure provide a display apparatus1000. The display apparatus1000may be a television, a mobile phone, a computer, a notebook computer, a tablet computer, a personal digital assistant (PDA), an on-board computer, etc. As shown inFIG.1, the display apparatus1000includes a frame1100, and a display panel1200, a circuit board, a display driver integrated circuit, and other electronic components that are disposed in the frame1100. The display panel1200may be an organic light-emitting diode (OLED) display panel, a quantum dot light-emitting diode (QLED) display panel, or a micro light-emitting diode (micro LED) display panel, which is not limited in the embodiments of the present disclosure. Some embodiments of the present disclosure will be schematically described below by considering an example in which the display panel1200is the OLED display panel. In some embodiments, as shown inFIG.2, the display panel1200has a display area AA and a peripheral area BB disposed on at least one side of the display area AA.FIG.2shows an example in which the peripheral area BB is disposed around the display area AA. Referring toFIGS.2and3, the display area AA of the display panel1200is provided with sub-pixels P of a plurality of light-emitting colors therein. The sub-pixels P of the plurality of light-emitting colors include at least first sub-pixels of which the light-emitting color is a first color, second sub-pixels of which the light-emitting color is a second color, and third sub-pixels of which the light-emitting color is a third color, and the first color, the second color, and the third color are three primary colors (e.g., red, green, and blue). For convenience of description, the embodiments of the present disclosure will be described by considering an example in which the plurality of sub-pixels P are arranged in a matrix form.
In this case, sub-pixels P arranged in a line in a horizontal direction X are referred to as sub-pixels P in a same row, and sub-pixels P arranged in a line in a vertical direction Y are referred to as sub-pixels P in a same column. Referring toFIGS.3and4A, each sub-pixel P includes a pixel driving circuit100. Control electrodes of transistors in the pixel driving circuits100located in a same row are coupled to a same gate line GL, and first electrodes (e.g., sources) of the transistors of the pixel driving circuits100located in a same column are coupled to a same data line DL. In some embodiments, referring toFIG.4A, the pixel driving circuit100includes a driving transistor and six switching transistors. The driving transistor and the six switching transistors may use low-temperature polysilicon thin film transistors, or use oxide thin film transistors, or use both the low-temperature polysilicon thin film transistors and the oxide thin film transistors. An active layer of the low-temperature polysilicon thin film transistor uses low-temperature polysilicon (LTPS), and an active layer of the oxide thin film transistor uses oxide semiconductor such as indium gallium zinc oxide or indium gallium tin oxide. The low-temperature polysilicon thin film transistor has the advantages of high mobility and high charging rate, and the oxide thin film transistor has the advantages of low leakage current. The low-temperature polysilicon thin film transistor and the oxide thin film transistor are integrated into a display panel to produce a low-temperature polycrystalline oxide (LTPO) display panel, which may realize low-frequency drive, reduce power consumption, and improve display quality by utilizing the advantages of both the low temperature polysilicon thin film transistor and the oxide thin film transistor. With reference toFIGS.2,4A,4B,4C, and4D, the pixel driving circuit100included in the LTPO display panel will be schematically described below by considering an example in which the pixel driving circuit100includes seven transistors T1′ to T7′ and one capacitor CST. In the following description, the pixel driving circuit100is any of the pixel driving circuits100located in the sub-pixels P in an N-th row, and N is a positive integer. For example, as shown inFIG.4A, the pixel driving circuit100includes seven transistors T1′ to T7′ and one capacitor CST. In the pixel driving circuit100, a control electrode of a first transistor T1′ is coupled to a reset signal terminal RESET, control electrodes of a fourth transistor T4′ and a seventh transistor T7′ are both coupled to a first scan signal terminal GATE1, and a control electrode of a second transistor T2′ is coupled to a second scan signal terminal GATE2. The first transistor T1′ is a reset transistor, the second transistor T2′, the fourth transistor T4′, and the seventh transistor T7′ are scan transistors, and the first transistor T1′, the second transistor T2′, the fourth transistor T4′ and the seventh transistor T7′ are all N-type oxide TFTs. A control electrode of a third transistor T3′ is coupled to an end of the capacitor CST, and control electrodes of a fifth transistor T5′ and a sixth transistor T6′ are both coupled to an enable signal terminal EM. The third transistor T3′ is a driving transistor, the fifth transistor T5′ and the sixth transistor T6′ are switching transistors, and the third transistor T3′, the fifth transistor T5′ and the sixth transistor T6′ are all P-type low-temperature polysilicon TFTs. 
In this case, high charge mobility, high stability and high scalability may be achieved at low production costs in combination with the advantages of both the high stability at the low refresh rate and the low production costs of the oxide TFTs and the advantages of the high mobility of the LTPS-TFTs. It will be noted that first scan signal terminals GATE1of the pixel driving circuits100in the sub-pixels in the N-th row are coupled to a gate line GL(N), second scan signal terminals GATE2of the pixel driving circuits100in the sub-pixels in the N-th row are coupled to a gate line GL(N−1), and reset signal terminals RESET of the pixel driving circuits100in the sub-pixels in the N-th row are coupled to the gate line GL(N−1). Of course, the second scan signal terminals GATE2and the reset signal terminals RESET may be coupled to two gate lines GL, respectively, and a gate line GL coupled to the reset signal terminals RESET and a gate line GL coupled to the second scan signal terminals GATE2may be respectively driven by different gate driver circuits200. Referring toFIG.4B, a frame period of the pixel driving circuit100includes a reset phase S1′, a scanning phase S2′, and a light-emitting phase S3′. In the reset phase S1′, the first transistor T1′ is turned on under control of a reset signal Reset from the reset signal terminal RESET, the second transistor T2′ is turned on under control of a second scan signal Gate2from the second scan signal terminal GATE2, and voltages at a first node N1′ and a second node N2′ are reset to be initialization voltage signals. In the scanning phase S2′, the fourth transistor T4′ and the seventh transistor T7′ are both turned on under control of a first scan signal Gate1from the first scan signal terminal GATE1, the third transistor T3′ is turned on under control of the voltage at the second node N2′, and a data signal from a data signal terminal DATA is written into the capacitor CST. In the light-emitting phase S3′, the fifth transistor T5′ and the sixth transistor T6′ are turned on under control of an enable signal Em from the enable signal terminal EM, and the third transistor T3′ is turned on under control of the voltage at the second node N2′, so as to output a driving current signal to an element to be driven400. However, the above-mentioned pixel driving circuit needs to be driven by scan signals (i.e., high-voltage signals) suitable for the N-type transistors, and the scan transistors are all oxide TFTs, which have charge mobility lower than charge mobility of low-temperature polysilicon TFTs and a poor writing ability. Therefore, an output ability of the gate driver circuit needs to be improved. For example, as shown inFIG.4C, the pixel driving circuit100includes seven transistors T1′ to T7′ and one capacitor CST. In the pixel driving circuit100, control electrodes of a second transistor T2′ and a seventh transistor T7′ are respectively coupled to a first scan signal terminal GATE1and a third scan signal terminal GATE3, and control electrodes of a first transistor T1′ and a fourth transistor T4′ are coupled to a second scan signal terminal GATE2. The first transistor T1′, the second transistor T2′, the fourth transistor T4′, and the seventh transistor T7′ are all scan transistors, the first transistor T1′ and the fourth transistor T4′ are P-type low-temperature polysilicon TFTs, and the second transistor T2′ and the seventh transistor T7′ are N-type oxide TFTs.
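A compact way to view the frame period of FIG.4B described above is as a truth table of which switching transistors conduct in each phase, applying the convention stated earlier that an N-type transistor turns on at a high gate voltage and a P-type transistor at a low gate voltage. The per-phase signal levels in the sketch below are assumptions chosen to reproduce the described behavior (in particular, the enable signal EM is assumed high when the P-type T5′/T6′ are off), and the driving transistor T3′, whose gate is node N2′ rather than a scan line, is noted separately; this is an illustrative model, not the disclosed timing diagram.

```python
# Truth-table sketch of the FIG. 4B frame period: which switching transistors
# conduct in each phase. N-type oxide TFTs conduct at a high gate voltage,
# P-type LTPS TFTs at a low gate voltage. Signal levels per phase are assumed.

HIGH, LOW = 1, 0

# transistor -> (controlling terminal, transistor type)
SWITCHES = {
    "T1'": ("RESET", "N"),
    "T2'": ("GATE2", "N"),
    "T4'": ("GATE1", "N"),
    "T7'": ("GATE1", "N"),
    "T5'": ("EM",    "P"),
    "T6'": ("EM",    "P"),
}

# Assumed signal levels per phase, consistent with the narrative above.
PHASES = {
    "reset S1'":          {"RESET": HIGH, "GATE2": HIGH, "GATE1": LOW,  "EM": HIGH},
    "scanning S2'":       {"RESET": LOW,  "GATE2": LOW,  "GATE1": HIGH, "EM": HIGH},
    "light-emitting S3'": {"RESET": LOW,  "GATE2": LOW,  "GATE1": LOW,  "EM": LOW},
}

def is_on(transistor_type, gate_level):
    """N-type conducts on a high gate voltage, P-type on a low gate voltage."""
    return gate_level == HIGH if transistor_type == "N" else gate_level == LOW

for phase, signals in PHASES.items():
    on = sorted(name for name, (terminal, ttype) in SWITCHES.items()
                if is_on(ttype, signals[terminal]))
    print(f"{phase:20s}: conducting switches -> {', '.join(on)}"
          "  (plus T3' when its gate node allows)")
```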
A control electrode of a third transistor T3′ is coupled to an end of the capacitor CST, and control electrodes of a fifth transistor T5′ and a sixth transistor T6′ are both coupled to an enable signal terminal EM. The third transistor T3′ is a driving transistor, and the fifth transistor T5′ and the sixth transistor T6′ are switching transistors. In this case, by combining the advantages of the oxide TFTs (high stability at a low refresh rate and low production costs) with the advantage of the LTPS-TFTs (high mobility), high charge mobility, high stability and high scalability may be achieved at low production costs. It will be noted that first scan signal terminals GATE1of the pixel driving circuits100in the sub-pixels in the N-th row are coupled to a gate line GL(N−1) transmitting an N-type scan signal, second scan signal terminals GATE2of the pixel driving circuits100in the sub-pixels in the N-th row are coupled to a gate line GL(N) transmitting a P-type scan signal, and third scan signal terminals GATE3of the pixel driving circuits100in the sub-pixels in the N-th row are coupled to another gate line GL(N) transmitting an N-type scan signal. Referring toFIG.4D, a frame period of the pixel driving circuit100includes a reset phase S1′, a scanning phase S2′, and a light-emitting phase S3′. In the reset phase S1′, the second transistor T2′ is turned on under control of a first scan signal Gate1from the first scan signal terminal GATE1, and a voltage at a second node N2′ is reset to be an initialization voltage signal. In the scanning phase S2′, the first transistor T1′ is turned on under control of a second scan signal Gate2from the second scan signal terminal GATE2, and a voltage at a first node N1′ is reset to be an initialization voltage signal; the fourth transistor T4′ is turned on under control of the second scan signal Gate2from the second scan signal terminal GATE2, the seventh transistor T7′ is turned on under control of a third scan signal Gate3from the third scan signal terminal GATE3, and a data signal from the data signal terminal DATA is written into the capacitor CST. In the light-emitting phase S3′, the fifth transistor T5′ and the sixth transistor T6′ are turned on under control of an enable signal Em from the enable signal terminal EM, and the third transistor T3′ is turned on under control of the voltage at the second node N2′, so as to output a driving current signal to an element to be driven400. However, since the above pixel driving circuit100needs to be driven by scan signals suitable for the N-type transistor (i.e., high-voltage signals) and scan signals suitable for the P-type transistor (i.e., low-voltage signals), the gate driver circuit needs to provide both a high-voltage scan signal and a low-voltage scan signal. As shown inFIG.2, the peripheral area BB of the display panel1200is provided with the gate driver circuit200and a data driver circuit300therein. In some embodiments, the gate driver circuit200may be disposed on a side in an extending direction of the gate lines GL, and the data driver circuit300may be disposed on a side in an extending direction of the data lines DL, so as to drive the pixel driving circuits100in the display panel1200for display. In some embodiments, the gate driver circuit200is a gate driver integrated circuit (IC).
In some other embodiments, the gate driver circuit200is a gate driver on array (GOA) circuit, that is, the gate driver circuit200is directly integrated on an array substrate of the display panel1200. Compared with a case where the gate driver circuit200is set to be the gate driver IC, setting the gate driver circuit200as the GOA circuit may reduce manufacturing costs of the display panel1200and reduce a frame size of the display panel1200, so as to realize a narrow frame design. The following embodiments are all described by considering an example in which the gate driver circuit200is the GOA circuit. It will be noted thatFIGS.2and3are only schematic and described by considering an example in which the gate driver circuit200is disposed on a single side of the peripheral area BB of the display panel1200, and the gate lines GL are sequentially driven row by row from the single side, i.e., single-sided driving. In some embodiments, the gate driver circuits200may be respectively disposed on two sides, in an extending direction of the gate lines GL, of the peripheral area BB of the display panel1200, and the gate lines GL are sequentially driven row by row from two sides simultaneously by the two gate driver circuits200, i.e., double-sided driving. In some other embodiments, the gate driver circuits200may be respectively disposed on two sides, in the extending direction of the gate lines GL, of the peripheral area BB of the display panel1200, and the gate lines GL are sequentially driven row by row from two sides alternately by the two gate driver circuits200, i.e., alternate driving. The following embodiments of the present disclosure are all described by considering an example of the single-sided driving. In some embodiments of the present disclosure, as shown inFIG.3, the gate driver circuit200includes at least two shift registers RS that are cascaded. Referring toFIG.3, the gate driver circuit200includes N shift registers RS (RS1, RS2, . . . , RS(N)) that are cascaded. In this case, the N shift registers RS (RS1, RS2, . . . , RS(N)) that are cascaded are connected to N gate lines (GL1, GL2, . . . , GL(N)) in one-to-one correspondence, where N is a positive integer. In some embodiments, as shown inFIGS.3and10, in the shift registers RS (RS1, RS2, . . . , RS(N)) in the gate driver circuit200, a scan signal output terminal OUTPUT1and a cascade signal output terminal OUTPUT2are set separately. A gate scan signal is output to a gate line GL connected to the shift register through the scan signal output terminal OUTPUT1, and a cascade signal is output through the cascade signal output terminal OUTPUT2. For example, in every two adjacent shift registers RS, a signal input terminal INPUT of a latter-stage shift register RS is coupled to the cascade signal output terminal OUTPUT2of a former-stage shift register RS, and the signal input terminal INPUT of a first-stage shift register RS1is coupled to an initialization signal terminal STV. Some embodiments of the present disclosure provide a shift register RS. As shown inFIG.5, the shift register RS includes an input circuit1, a first control circuit2, a second control circuit3, and an output circuit4. The input circuit1is coupled to the signal input terminal INPUT, a first voltage signal terminal VGH and a first node N1. The input circuit1is configured to transmit, under control of an input signal Input from the signal input terminal INPUT, a first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1.
For example, in a case where a voltage of the input signal Input transmitted by the signal input terminal INPUT is a low voltage, the input circuit1may be turned on under control of the low voltage of the input signal Input from the signal input terminal INPUT to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1. For example, in a charging phase S2(referring toFIG.13), the voltage of the input signal Input transmitted by the signal input terminal INPUT is the low voltage, the input circuit1is turned on under the control of the low voltage of the input signal Input from the signal input terminal INPUT to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that a voltage at the first node N1is raised. The first control circuit2is coupled to the first node N1, a first clock signal terminal CK1, a second voltage signal terminal VGL and a second node N2. The first control circuit2is configured to transmit, under control of a first clock signal Ck1from the first clock signal terminal CK1and the voltage at the first node N1, a second voltage signal Vgl from the second voltage signal terminal VGL to the second node N2. For example, in a case where a voltage of the first clock signal Ck1from the first clock signal terminal CK1is a high voltage, and the voltage at the first node N1is a high voltage, the first control circuit2may be turned on under control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1and the high voltage at the first node N1to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the second node N2. For example, in the charging phase S2(referring toFIG.13), the voltage of the first clock signal Ck1transmitted by the first clock signal terminal CK1is the high voltage, the voltage at the first node N1is the high voltage, and the first control circuit2is turned on under the control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1and the high voltage at the first node N1to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the second node N2, so that a voltage at the second node N2is lowered. The second control circuit3is coupled to the second node N2, a second clock signal terminal CK2, and a third node N3. The second control circuit3is configured to transmit, under control of the voltage at the second node N2, a second clock signal Ck2from the second clock signal terminal CK2to the third node N3. For example, in a case where the voltage at the second node N2is a low voltage, the second control circuit3may be turned on under control of the low voltage at the second node N2to transmit the second clock signal Ck2from the second clock signal terminal CK2to the third node N3. For example, in an outputting phase S3(referring toFIG.13), the second clock signal Ck2is the low voltage, the voltage at the second node N2is the low voltage, and the second control circuit3is turned on under control of the low voltage at the second node N2to transmit the low voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3, so that a voltage at the third node N3is lowered. The output circuit4is coupled to the third node N3, the first voltage signal terminal VGH, and a scan signal output terminal OUTPUT1. 
The output circuit4is configured to transmit, under control of the voltage at the third node N3, the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1. For example, in a case where the voltage at the third node N3is a low voltage, the output circuit4may be turned on under control of the low voltage at the third node N3to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1. For example, in the outputting phase S3(referring toFIG.13), the voltage at the third node N3is the low voltage, and the output circuit4is turned on under control of the low voltage at the third node N3to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1, so that the scan signal output terminal OUTPUT1of the shift register RS outputs the scan signal. It will be noted that the second voltage signal terminal VGL is configured to transmit a direct current low-voltage signal (low voltage), and the first voltage signal terminal VGH is configured to transmit a direct current high-voltage signal (high voltage). It can be seen from the above that in the shift register RS provided by some embodiments of the present disclosure, the input circuit1is turned on under the control of the low voltage of the input signal Input from the signal input terminal INPUT to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1; the first control circuit2transmits, under the control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1and the high voltage at the first node N1, the second voltage signal Vgl from the second voltage signal terminal VGL to the second node N2; the second control circuit3transmits, under the control of the low voltage at the second node N2, the low voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3; and the output circuit4transmits, under the control of the low voltage at the third node N3, the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1, so that the scan signal output terminal OUTPUT1of the shift register RS outputs the scan signal. In this way, in the outputting phase S3(referring toFIG.13), the output circuit4transmits the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1, so as to output the scan signal. Since the shift register RS outputs the scan signal through a constant voltage terminal (the first voltage signal terminal VGH), compared with a case where the scan signal is output through a square wave pulse signal terminal (whose output voltage alternates between a low voltage and a high voltage), an influence of the load of the scan signal output terminal OUTPUT1on the voltage signal output by the shift register RS may be reduced. The voltage signal output by the scan signal output terminal OUTPUT1of the shift register RS is more stable, which may improve display stability. In addition, according to the driving requirements of the pixel driving circuit100, a magnitude of the voltage of the first voltage signal Vgh of the first voltage signal terminal VGH is controlled, so that the driving requirements of the pixel driving circuit may be met.
For example, for the LTPO pixel driving circuit shown inFIG.4A, the scan transistors (i.e., T2′, T4′, and T7′) and the reset transistor (i.e., T1′) are N-type transistors, and need to be turned on at a high voltage. The voltage of the first voltage signal Vgh from the first voltage signal terminal VGH is controlled to be the high voltage, so that the high voltage required for turning on the scan transistors and the reset transistor may be output through the above shift register RS. For example, for the LTPO pixel driving circuit shown inFIG.4C, the second transistor T2′ and the seventh transistor T7′ are N-type transistors, and need to be turned on at a high voltage. The voltage of the first voltage signal Vgh from the first voltage signal terminal VGH is controlled to be the high voltage, so that the high voltage required for turning on the second transistor T2′ and the seventh transistor T7′ may be output through the above shift register RS. It will be noted that the shift register RS provided by the embodiments of the present disclosure is not only used in the LTPO pixel driving circuits shown inFIGS.4A and4C, but also used in other LTPO pixel driving circuits100in which at least a part of the scan transistors use N-type transistors. In some embodiments, as shown inFIGS.5and6, the input circuit1is further coupled to the second voltage signal terminal VGL and the second node N2. The input circuit1is further configured to transmit, under control of the input signal Input from the signal input terminal INPUT and the voltage at the second node N2, the second voltage signal Vgl from the second voltage signal terminal VGL to the first node N1. For example, in a case where the voltage of the input signal Input transmitted by the signal input terminal INPUT is a high voltage, and the voltage at the second node N2is a high voltage, the input circuit1may be turned on under control of the high voltage of the input signal Input from the signal input terminal INPUT and the high voltage at the second node N2to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the first node N1. For example, in a denoising phase S4(referring toFIG.13), the voltage of the input signal Input transmitted by the signal input terminal INPUT is the high voltage, the voltage at the second node N2is the high voltage, and the input circuit1is turned on under the control of the high voltage of the input signal Input from the signal input terminal INPUT and the high voltage at the second node N2to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the first node N1, so that the voltage at the first node N1is lowered. The first control circuit2is further coupled to the first voltage signal terminal VGH. The first control circuit2is further configured to transmit, under control of the first clock signal Ck1from the first clock signal terminal CK1and the voltage at the first node N1, the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2. For example, in a case where the voltage of the first clock signal Ck1transmitted by the first clock signal terminal CK1is a low voltage, and the voltage at the first node N1is a low voltage, the first control circuit2may be turned on under control of the low voltage of the first clock signal Ck1from the first clock signal terminal CK1and the low voltage at the first node N1to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2. 
For example, in the denoising phase S4(referring toFIG.13), the voltage of the first clock signal Ck1transmitted by the first clock signal terminal CK1is the low voltage, the voltage at the first node N1is the low voltage, and the first control circuit2is turned on under the control of the low voltage of the first clock signal Ck1from the first clock signal terminal CK1and the low voltage at the first node N1to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the voltage of the second node N2is raised. The second control circuit3is further coupled to the first voltage signal terminal VGH. The second control circuit3is further configured to transmit, under control of the voltage at the second node N2, the first voltage signal Vgh from the first voltage signal terminal VGH to the third node N3. For example, in a case where the voltage at the second node N2is a high voltage, the second control circuit3may be turned on under control of the high voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the third node N3. For example, in the denoising phase S4(referring toFIG.13), the voltage at the second node N2is the high voltage, and the second control circuit3is turned on under the control of the voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the third node N3, so that the voltage at the third node N3is raised. The output circuit4is further coupled to the second voltage signal terminal VGL. The output circuit4is further configured to transmit, under control of the voltage at the third node N3, the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1. For example, in a case where the voltage at the third node N3is a high voltage, the output circuit4may be turned on under control of the high voltage at the third node N3to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1. For example, in the denoising phase S4(referring toFIG.13), the voltage at the third node N3is the high voltage, the output circuit4is turned on under the control of the high voltage at the third node N3to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1, so as to stop the output of the scan signal, and perform a denoising processing on the scan signal output terminal OUTPUT1. 
It can be seen from the above that in the shift register RS provided by some embodiments of the present disclosure, the input circuit1is turned on under the control of the high voltage of the input signal Input from the signal input terminal INPUT and the high voltage at the second node N2to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the first node N1; the first control circuit2is turned on under the control of the low voltage of the first clock signal Ck1from the first clock signal terminal CK1and the low voltage at the first node N1to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2; the second control circuit3is turned on under the control of the voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the third node N3; and the output circuit4is turned on under the control of the high voltage at the third node N3to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1, so that the denoising processing is performed on the scan signal output terminal OUTPUT1. In this way, after the outputting phase S3(referring toFIG.13), the output circuit4transmits the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1, so as to perform denoising on the scan signal output terminal OUTPUT1continuously. The third node N3is further coupled to a cascade signal output terminal OUTPUT2, and the cascade signal output terminal OUTPUT2is configured to output a cascade signal to another shift register RS, so as to provide an input signal Input for the another shift register RS. In addition, the scan signal and the cascade signal are respectively output through the scan signal output terminal OUTPUT1and the cascade signal output terminal OUTPUT2, so that the scan signal and the cascade signal will not interfere with each other, and the output is relatively stable. In addition, the cascade signal output terminal OUTPUT2may further be configured to provide a P-type scan signal for the second scan signal terminal GATE2of the pixel driving circuit shown inFIG.4C. For example, for the LTPO pixel driving circuit shown inFIG.4C, the second transistor T2′ and the seventh transistor T7′ are N-type transistors, and need to be turned on at the high voltage. The voltage of the first voltage signal Vgh from the first voltage signal terminal VGH is controlled to be the high voltage, so that the high voltage required for turning on the second transistor T2′ and the seventh transistor T7′ may be output through the scan signal output terminal OUTPUT1of the above shift register RS. The first transistor T1′ and the fourth transistor T4′ are P-type transistors, and need to be turned on at a low voltage. The second clock signal Ck2from the second clock signal terminal CK2is controlled to be at the low voltage, so that the low voltage required for turning on the first transistor T1′ and the fourth transistor T4′ may be output through the cascade signal output terminal OUTPUT2of the above shift register RS. It will be noted that the shift register RS provided by the embodiments of the present disclosure is not only used in the LTPO pixel driving circuit shown inFIG.4C, but also used in other LTPO pixel driving circuits100in which at least a part of the scan transistors use N-type transistor(s) and P-type transistor(s). 
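To make the signal flow described above easier to follow, the following is a minimal behavioral sketch in Python (not part of the disclosure) that models the input circuit1, the first control circuit2, the second control circuit3and the output circuit4as node-level logic and steps them through the charging phase S2, the outputting phase S3and the denoising phase S4. The boolean level convention (True for the high voltage Vgh, False for the low voltage Vgl), the two-pass settling loop, the assumed per-phase signal values and all function and variable names are simplifying assumptions made for illustration only, not claimed circuit details.

# A minimal behavioral sketch (not part of the disclosure) of the shift register RS.
# True stands for the high voltage (Vgh) and False for the low voltage (Vgl).

def shift_register_step(inp, ck1, ck2, n1, n2):
    """Update the node levels for one phase, following the described behavior."""
    # Settle the coupled N1/N2 levels; two passes are enough for these phases.
    for _ in range(2):
        # First control circuit2: N2 is pulled to Vgl only when Ck1 and N1 are
        # both high; otherwise N2 is pulled to Vgh.
        n2 = not (ck1 and n1)
        # Input circuit1: a low Input pulls N1 to Vgh; Input high together with
        # N2 high pulls N1 to Vgl; otherwise N1 keeps its previous level.
        if not inp:
            n1 = True
        elif n2:
            n1 = False
    # Second control circuit3: with N2 low, Ck2 is passed to N3; with N2 high,
    # N3 is pulled to Vgh.
    n3 = ck2 if not n2 else True
    # Output circuit4: N3 low drives OUTPUT1 to Vgh (scan pulse); N3 high drives
    # OUTPUT1 to Vgl (denoising). The cascade output OUTPUT2 follows N3.
    output1 = not n3
    output2 = n3
    return n1, n2, n3, output1, output2


if __name__ == "__main__":
    # Assumed control levels per phase (charging S2, outputting S3, denoising S4).
    phases = [
        ("S2 charging",   dict(inp=False, ck1=True,  ck2=True)),
        ("S3 outputting", dict(inp=True,  ck1=True,  ck2=False)),
        ("S4 denoising",  dict(inp=True,  ck1=False, ck2=True)),
    ]
    n1, n2 = False, True  # assumed node levels after the reset phase S1
    for name, signals in phases:
        n1, n2, n3, out1, out2 = shift_register_step(n1=n1, n2=n2, **signals)
        print(f"{name}: N1={'H' if n1 else 'L'}, N2={'H' if n2 else 'L'}, "
              f"N3={'H' if n3 else 'L'}, OUTPUT1={'Vgh' if out1 else 'Vgl'}, "
              f"OUTPUT2={'H' if out2 else 'L'}")

Running this sketch prints OUTPUT1 at Vgh only in the outputting phase S3, and at Vgl in the charging phase S2and the denoising phase S4, which mirrors the operation summarized above.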
In some embodiments, as shown inFIGS.6,7and9, the shift register RS further includes a reset circuit5coupled to the first voltage signal terminal VGH, the second node N2, and a reset signal terminal RESET. The reset circuit5is configured to transmit, under control of a reset signal Reset from the reset signal terminal RESET, the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2. For example, as shown inFIGS.6,9and10, the reset circuit5includes a twenty-second transistor T22. A control electrode of the twenty-second transistor T22is coupled to the reset signal terminal RESET, a first electrode of the twenty-second transistor T22is coupled to the first voltage signal terminal VGH, and a second electrode of the twenty-second transistor T22is coupled to the second node N2. Based on this, before the shift register RS is charged, that is, before a voltage of the input signal Input of the signal input terminal INPUT changes to a low voltage, the reset circuit5may be turned on under control of the reset signal Reset from the reset signal terminal RESET to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the voltage at the second node N2is reset, which may prevent a large current during startup. For example, in a reset phase S1(referring toFIG.13), the reset circuit5may be turned on under control of the low voltage of the reset signal Reset from the reset signal terminal RESET to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the voltage at the second node N2is approximately equal to the voltage of the first voltage signal Vgh from the first voltage signal terminal VGH. In some embodiments, as shown inFIGS.6,7and9, the first control circuit2includes a first-level control sub-circuit21and a second-level control sub-circuit22. The first-level control sub-circuit21is coupled to the first node N1, the second node N2, the first voltage signal terminal VGH, the second voltage signal terminal VGL and a fourth node N4. The first-level control sub-circuit21is configured to transmit, under control of the voltage at the first node N1, the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, or the second voltage signal Vgl from the second voltage signal terminal VGL to the fourth node N4. For example, as shown inFIGS.8and10, the first-level control sub-circuit21includes a first transistor T1and a second transistor T2. A control electrode of the first transistor T1is coupled to the first node N1, a first electrode of the first transistor T1is coupled to the first voltage signal terminal VGH, and a second electrode of the first transistor T1is coupled to the second node N2. A control electrode of the second transistor T2is coupled to the first node N1, a first electrode of the second transistor T2is coupled to the second voltage signal terminal VGL, and a second electrode of the second transistor T2is coupled to the fourth node N4. The second-level control sub-circuit22is coupled to the first clock signal terminal CK1, the second node N2, the first voltage signal terminal VGH, and the fourth node N4. The second-level control sub-circuit22is configured to transmit, under control of the first clock signal Ck1from the first clock signal terminal CK1, a voltage at the fourth node N4or the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2. 
For example, as shown inFIGS.8and10, the second-level control sub-circuit22includes a third transistor T3and a fourth transistor T4. A control electrode of the third transistor T3is coupled to the first clock signal terminal CK1, a first electrode of the third transistor T3is coupled to the first voltage signal terminal VGH, and a second electrode of the third transistor T3is coupled to the second node N2. A control electrode of the fourth transistor T4is coupled to the first clock signal terminal CK1, a first electrode of the fourth transistor T4is coupled to the fourth node N4, and a second electrode of the fourth transistor T4is coupled to the second node N2. In the charging phase S2(referring toFIG.13), the input circuit1is turned on under the control of the low voltage of the input signal Input from the signal input terminal INPUT to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that the first node N1is at the high voltage; the first-level control sub-circuit21transmits, under control of the high voltage of the first node N1, the second voltage signal Vgl from the second voltage signal terminal VGL to the fourth node N4; and the second-level control sub-circuit22transmits, under control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1, the voltage at the fourth node N4to the second node N2, so that the second node N2is at the low voltage. In the outputting phase S3(referring toFIG.13), the input circuit1is turned on under the control of the low voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that the first node N1is at the high voltage; the first-level control sub-circuit21transmits, under control of the high voltage at the first node N1, the second voltage signal Vgl from the second voltage signal terminal VGL to the fourth node N4; and the second-level control sub-circuit22transmits, under control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1, the voltage at the fourth node N4to the second node N2, so that the second node N2is at the low voltage. It will be seen from the above that in the charging phase S2and the outputting phase S3(referring toFIG.13), the first node N1is continuously stabilized at the high voltage and the second node N2is continuously stabilized at the low voltage due to action of the first-level control sub-circuit21and the second-level control sub-circuit22. That is, in a frame including the outputting phase S3, a duration for which the first node N1is at the high voltage and a duration for which the second node N2is at the low voltage are both approximately twice a pulse width of the input signal Input from the signal input terminal INPUT. In this way, it may be designed that the output cascade signal is output in the outputting phase S3, so as to provide the input signal Input for another shift register RS. In some embodiments, as shown inFIGS.6,7and8, the second control circuit3includes a third-level control sub-circuit31and a fourth-level control sub-circuit32. The third-level control sub-circuit31is coupled to the second node N2, the first voltage signal terminal VGH, the second voltage signal terminal VGL and a fifth node N5. 
The third-level control sub-circuit31is configured to transmit, under control of the voltage at the second node N2, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to the fifth node N5. For example, as shown inFIGS.7and8, the third-level control sub-circuit31includes a fifth transistor T5and a sixth transistor T6. A control electrode of the fifth transistor T5is coupled to the second node N2, a first electrode of the fifth transistor T5is coupled to the first voltage signal terminal VGH, and a second electrode of the fifth transistor T5is coupled to the fifth node N5. A control electrode of the sixth transistor T6is coupled to the second node N2, a first electrode of the sixth transistor T6is coupled to the second voltage signal terminal VGL, and a second electrode of the sixth transistor T6is coupled to the fifth node N5. The fourth-level control sub-circuit32is coupled to the fifth node N5, the first voltage signal terminal VGH, the second clock signal terminal CK2and the third node N3. The fourth-level control sub-circuit is configured to transmit, under control of a voltage at the fifth node N5, the first voltage signal Vgh from the first voltage signal terminal VGH or the second clock signal Ck2from the second clock signal terminal CK2to the third node N3. For example, as shown inFIGS.7and8, the fourth-level control sub-circuit32includes a seventh transistor T7and an eighth transistor T8. A control electrode of the seventh transistor T7is coupled to the fifth node N5, a first electrode of the seventh transistor T7is coupled to the first voltage signal terminal VGH, and a second electrode of the seventh transistor T7is coupled to the third node N3. A control electrode of the eighth transistor T8is coupled to the fifth node N5, a first electrode of the eighth transistor T8is coupled to the second clock signal terminal CK2, and a second electrode of the eighth transistor T8is coupled to the third node N3. In the charging phase S2(referring toFIG.13), the third-level control sub-circuit31is turned on under control of the low voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the fifth node N5, so that the fifth node N5is at the high voltage; and the fourth-level control sub-circuit32transmits, under control of the high voltage at the fifth node N5, the high voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3, so that the third node N3is at the high voltage. In the outputting phase S3(referring toFIG.13), the third-level control sub-circuit31is turned on under control of the low voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the fifth node N5, so that the fifth node N5is at the high voltage; and the fourth-level control sub-circuit32transmits, under control of the high voltage at the fifth node N5, the low voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3, so that the third node N3is at a low voltage. 
It will be seen from the above that in the charging phase S2and the outputting phase S3(referring toFIG.13), the second node N2is continuously stabilized at the low voltage, so that the third-level control sub-circuit31continuously transmits the first voltage signal Vgh from the first voltage signal terminal VGH to the fifth node N5under control of the low voltage at the second node N2, and the fifth node N5is continuously stabilized at the high voltage; and the fourth-level control sub-circuit32continuously transmits the second clock signal Ck2from the second clock signal terminal CK2to the third node N3under control of the high voltage at the fifth node N5. That is, the voltage at the third node N3varies depending on the voltage of the second clock signal Ck2from the second clock signal terminal CK2. In this way, in the outputting phase S3, the low voltage of the second clock signal Ck2from the second clock signal terminal CK2may be output as the cascade signal, so as to provide the input signal Input for the another shift register RS. In some embodiments, as shown inFIGS.6,9, and10, the second control circuit3further includes a fifth-level control sub-circuit33. The fifth-level control sub-circuit33is coupled to the fifth node N5, the first voltage signal terminal VGH, the second voltage signal terminal VGL, the second clock signal terminal CK2, and the third node N3. The fifth-level control sub-circuit33is configured to transmit, under control of the voltage at the fifth node N5, the second clock signal Ck2from the second clock signal terminal CK2to the third node N3. For example, as shown inFIGS.9and10, the fifth-level control sub-circuit33includes a ninth transistor T9, a tenth transistor T10and an eleventh transistor T11. A control electrode of the ninth transistor T9is coupled to the fifth node N5, a first electrode of the ninth transistor T9is coupled to the first voltage signal terminal VGH, and a second electrode of the ninth transistor T9is coupled to a sixth node N6. A control electrode of the tenth transistor T10is coupled to the fifth node N5, a first electrode of the tenth transistor T10is coupled to the second voltage signal terminal VGL, and a second electrode of the tenth transistor T10is coupled to the sixth node N6. A control electrode of the eleventh transistor T11is coupled to the sixth node N6, a first electrode of the eleventh transistor T11is coupled to the second clock signal terminal CK2, and a second electrode of the eleventh transistor T11is coupled to the third node N3. In the charging phase S2(referring toFIG.13), the fifth-level control sub-circuit33is turned on under the control of the high voltage at the fifth node N5to transmit the second clock signal Ck2from the second clock signal terminal CK2to the third node N3. In the outputting phase S3(referring toFIG.13), the fifth-level control sub-circuit33is turned on under the control of the high voltage at the fifth node N5to transmit the second clock signal Ck2from the second clock signal terminal CK2to the third node N3. It will be seen from the above that in the charging phase S2and the outputting phase S3(referring toFIG.13), the fifth-level control sub-circuit33continuously transmits the second clock signal Ck2from the second clock signal terminal CK2to the third node N3under the control of the high voltage at the fifth node N5. That is, the fifth-level control sub-circuit33and the fourth-level control sub-circuit32are connected in parallel. 
In this way, it is conducive to a rapid response of the voltage at the third node N3to variation of the voltage of the second clock signal Ck2from the second clock signal terminal CK2. In some embodiments, as shown inFIGS.6and11, the output circuit4includes a twelfth transistor T12and a thirteenth transistor T13. A control electrode of the twelfth transistor T12is coupled to the third node N3, a first electrode of the twelfth transistor T12is coupled to the first voltage signal terminal VGH, and a second electrode of the twelfth transistor T12is coupled to the scan signal output terminal OUTPUT1. A control electrode of the thirteenth transistor T13is coupled to the third node N3, a first electrode of the thirteenth transistor T13is coupled to the second voltage signal terminal VGL, and a second electrode of the thirteenth transistor T13is coupled to the scan signal output terminal OUTPUT1. In the outputting phase S3(referring toFIG.13), the twelfth transistor T12transmits the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1under control of the low voltage at the third node N3. In the denoising phase S4(referring toFIG.13), the thirteenth transistor T13is turned on under control of the high voltage at the third node N3to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1, so as to perform the denoising processing on the scan signal output terminal OUTPUT1. In some other embodiments, as shown inFIGS.6,9, and10, the output circuit4includes an odd number of output sub-circuits that are connected in series. A first output sub-circuit is coupled to the third node N3, and a last output sub-circuit is coupled to the scan signal output terminal OUTPUT1. The first output sub-circuit is configured to transmit, under the control of the voltage at the third node N3, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to a next output sub-circuit adjacent thereto. The last output sub-circuit is configured to transmit, under control of a signal output by a previous output sub-circuit adjacent thereto, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1. Other output sub-circuits except the first and last output sub-circuits in the odd number of output sub-circuits are each configured to transmit, under control of a signal output by a previous output sub-circuit adjacent thereto, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to a next output sub-circuit adjacent thereto. In order to improve an output capability of the shift register RS, for example, as shown inFIGS.6,9, and10, the output circuit4includes the first output sub-circuit41, a second output sub-circuit42, and a third output sub-circuit43. The first output sub-circuit41is coupled to the third node N3, the first voltage signal terminal VGH, the second voltage signal terminal VGL, and a seventh node N7.
The first output sub-circuit41is configured to transmit, under the control of the voltage at the third node N3, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to the seventh node N7. For example, as shown inFIGS.9and10, the first output sub-circuit41includes a twelfth transistor T12and a thirteenth transistor T13. A control electrode of the twelfth transistor T12is coupled to the third node N3, a first electrode of the twelfth transistor T12is coupled to the first voltage signal terminal VGH, and a second electrode of the twelfth transistor T12is coupled to the seventh node N7. A control electrode of the thirteenth transistor T13is coupled to the third node N3, a first electrode of the thirteenth transistor T13is coupled to the second voltage signal terminal VGL, and a second electrode of the thirteenth transistor T13is coupled to the seventh node N7. The second output sub-circuit42is coupled to the seventh node N7, the first voltage signal terminal VGH, the second voltage signal terminal VGL, and an eighth node N8. The second output sub-circuit42is configured to transmit, under control of a voltage at the seventh node N7, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to the eighth node N8. For example, as shown inFIGS.9and10, the second output sub-circuit42includes a fourteenth transistor T14and a fifteenth transistor T15. A control electrode of the fourteenth transistor T14is coupled to the seventh node N7, a first electrode of the fourteenth transistor T14is coupled to the first voltage signal terminal VGH, and a second electrode of the fourteenth transistor T14is coupled to the eighth node N8. A control electrode of the fifteenth transistor T15is coupled to the seventh node N7, a first electrode of the fifteenth transistor T15is coupled to the second voltage signal terminal VGL, and a second electrode of the fifteenth transistor T15is coupled to the eighth node N8. The third output sub-circuit43is coupled to the eighth node N8, the first voltage signal terminal VGH, the second voltage signal terminal VGL, and the scan signal output terminal OUTPUT1. The third output sub-circuit43is configured to transmit, under control of a voltage at the eighth node N8, the first voltage signal Vgh from the first voltage signal terminal VGH or the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1. For example, as shown inFIGS.9and10, the third output sub-circuit43includes a sixteenth transistor T16and a seventeenth transistor T17. A control electrode of the sixteenth transistor T16is coupled to the eighth node N8, a first electrode of the sixteenth transistor T16is coupled to the first voltage signal terminal VGH, and a second electrode of the sixteenth transistor T16is coupled to the scan signal output terminal OUTPUT1. A control electrode of the seventeenth transistor T17is coupled to the eighth node N8, a first electrode of the seventeenth transistor T17is coupled to the second voltage signal terminal VGL, and a second electrode of the seventeenth transistor T17is coupled to the scan signal output terminal OUTPUT1. 
In the outputting phase S3(referring toFIG.13), the first output sub-circuit41transmits, under control of the low voltage at the third node N3, the first voltage signal Vgh from the first voltage signal terminal VGH to the seventh node N7; the second output sub-circuit42transmits, under control of the high voltage at the seventh node N7, the second voltage signal Vgl from the second voltage signal terminal VGL to the eighth node N8; and the third output sub-circuit43transmits, under control of the low voltage at the eighth node N8, the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1. Here, three levels of output sub-circuits are used, so that a gate of a transistor in the last output sub-circuit may be controlled by a signal output by a stabilized signal terminal, and the output is relatively stable. In addition, the three levels of output sub-circuits are used to improve the output capability of the output sub-circuits in a stepwise manner, so that the scan signal that meets the driving requirements of the pixel driving circuit may be output by using transistors with relatively small width-to-length ratios. In some embodiments, as shown inFIGS.7and9, the input circuit1includes a first initialization sub-circuit11and a second initialization sub-circuit12. The first initialization sub-circuit11is coupled to the first node N1, the second node N2, the first voltage signal terminal VGH, the second voltage signal terminal VGL and a ninth node N9. The first initialization sub-circuit11is configured to transmit, under the control of the voltage at the second node N2, the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, or the second voltage signal Vgl from the second voltage signal terminal VGL to the ninth node N9. For example, as shown inFIGS.7to11, the first initialization sub-circuit11includes an eighteenth transistor T18and a nineteenth transistor T19. A control electrode of the eighteenth transistor T18is coupled to the second node N2, a first electrode of the eighteenth transistor T18is coupled to the first voltage signal terminal VGH, and a second electrode of the eighteenth transistor T18is coupled to the first node N1. A control electrode of the nineteenth transistor T19is coupled to the second node N2, a first electrode of the nineteenth transistor T19is coupled to the second voltage signal terminal VGL, and a second electrode of the nineteenth transistor T19is coupled to the ninth node N9. The second initialization sub-circuit12is coupled to the signal input terminal INPUT, the first node N1, the first voltage signal terminal VGH, the second voltage signal terminal VGL, and the ninth node N9. The second initialization sub-circuit12is configured to transmit, under the control of the input signal Input from the signal input terminal INPUT, the first voltage signal Vgh from the first voltage signal terminal VGH or a voltage at the ninth node N9to the first node N1. For example, as shown inFIGS.7and9, the second initialization sub-circuit12includes a twentieth transistor T20and a twenty-first transistor T21. 
A control electrode of the twentieth transistor T20is coupled to the signal input terminal INPUT, a first electrode of the twentieth transistor T20is coupled to the first voltage signal terminal VGH, and a second electrode of the twentieth transistor T20is coupled to the first node N1; and a control electrode of the twenty-first transistor T21is coupled to the signal input terminal INPUT, a first electrode of the twenty-first transistor T21is coupled to the ninth node N9, and a second electrode of the twenty-first transistor T21is coupled to the first node N1. In the charging phase S2(referring toFIG.13), the second initialization sub-circuit12is turned on under the control of the low voltage of the input signal Input from the signal input terminal INPUT to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that the first node N1is at the high voltage. In the outputting phase S3(referring toFIG.13), the first initialization sub-circuit11transmits, under the control of the low voltage at the second node N2, the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that the first node N1is at the high voltage. It will be seen from the above that in the charging phase S2and the outputting phase S3(referring toFIG.13), the first node N1may be continuously stabilized at the high voltage. In the shift register in the embodiments of the present disclosure, the output circuit of the shift register RS includes an odd number of output sub-circuits, and each output sub-circuit includes a P-type transistor and an N-type transistor. In the shift register in the embodiments of the present disclosure, the first transistor T1, the third transistor T3, the fifth transistor T5, the seventh transistor T7, the ninth transistor T9, the eleventh transistor T11, the twelfth transistor T12, the fourteenth transistor T14, the sixteenth transistor T16, the eighteenth transistor T18, the twentieth transistor T20and the twenty-second transistor T22are all P-type transistors; and the second transistor T2, the fourth transistor T4, the sixth transistor T6, the eighth transistor T8, the tenth transistor T10, the thirteenth transistor T13, the fifteenth transistor T15, the seventeenth transistor T17, the nineteenth transistor T19and the twenty-first transistor T21are all N-type transistors. Some embodiments of the present disclosure further provide a driving method for a shift register RS, and the driving method is applied to the shift register RS in any of the above embodiments. As shown inFIG.13, a frame period includes a charging phase S2and an outputting phase S3, and the driving method for the shift register includes the following.
In the charging phase S2, the input circuit1transmits, under the control of the low voltage of the input signal Input of the signal input terminal INPUT, the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1; the first control circuit2transmits, under the control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1and the high voltage at the first node N1, the second voltage signal Vgl from the second voltage signal terminal VGL to the second node N2; the second control circuit3transmits, under the control of the low voltage at the second node N2, the high voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3; and the output circuit4transmits, under the control of the high voltage at the third node N3, the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1. In the outputting phase S3, the input circuit1transmits, under the control of the low voltage at the second node N2, the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1; the first control circuit2transmits, under the control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1and the high voltage at the first node N1, the second voltage signal from the second voltage signal terminal VGL to the second node N2; the second control circuit3transmits, under the control of the low voltage at the second node N2, the low voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3; and the output circuit4transmits, under the control of the low voltage at the third node N3, the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal output terminal OUTPUT1. In some embodiments, as shown inFIG.13, the frame period further includes a denoising phase S4, and the driving method for the shift register further includes the following. In the denoising phase S4, the input circuit1transmits, under the control of the high voltage of the input signal Input from the signal input terminal INPUT and the high voltage at the second node N2, the second voltage signal Vgl from the second voltage signal terminal VGL to the first node N1; the first control circuit2transmits, under the control of the low voltage of the first clock signal Ck1from the first clock signal terminal CK1and the low voltage at the first node N1, the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2; the second control circuit3transmits, under the control of the high voltage at the second node N2, the first voltage signal Vgh from the first voltage signal terminal VGH to the third node N3; and the output circuit4transmits, under the control of the high voltage at the third node, the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1. In some embodiments, as shown inFIG.13, the frame period further includes a reset phase S1, and the driving method for the shift register further includes the following. 
In the reset phase S1, the reset circuit5transmits, under the control of the reset signal Reset from the reset signal terminal RESET, the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2; the input circuit1transmits, under the control of the high voltage of the input signal Input from the signal input terminal INPUT and the high voltage at the second node N2, the second voltage signal Vgl from the second voltage signal terminal VGL to the first node N1; and the first control circuit2transmits, under the control of the low voltage at the first node N1, the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2. An operating process of the shift register RS shown inFIG.10in a process of driving a gate line will be described in detail below. The following description will be made by considering an example in which the first transistor T1, the third transistor T3, the fifth transistor T5, the seventh transistor T7, the ninth transistor T9, the eleventh transistor T11, the twelfth transistor T12, the fourteenth transistor T14, the sixteenth transistor T16, the eighteenth transistor T18, the twentieth transistor T20and the twenty-second transistor T22are all P-type transistors (regardless of an influence of threshold voltages of the transistors); the second transistor T2, the fourth transistor T4, the sixth transistor T6, the eighth transistor T8, the tenth transistor T10, the thirteenth transistor T13, the fifteenth transistor T15, the seventeenth transistor T17, the nineteenth transistor T19and the twenty-first transistor T21are all N-type transistors (regardless of an influence of threshold voltages of the transistors); the voltage transmitted by the first voltage signal terminal VGH is the high voltage, and the voltage transmitted by the second voltage signal terminal VGL is the low voltage. The "low voltage" can make the P-type transistors turned on, but cannot make the N-type transistors turned on (i.e., make the N-type transistors turned off). The "high voltage" can make the N-type transistors turned on, but cannot make the P-type transistors turned on (i.e., make the P-type transistors turned off). For example, in the following description, "0" represents the low voltage, and "1" represents the high voltage. In the reset phase S1, referring toFIGS.10and13, RESET is set to be 0 (RESET=0). In this case, the twenty-second transistor T22is turned on under the control of the low voltage of the reset signal Reset from the reset signal terminal RESET to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the second node N2is at the high voltage. The nineteenth transistor T19is turned on under the control of the high voltage at the second node N2to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the ninth node N9, so that the ninth node N9is at the low voltage; and the twenty-first transistor T21is turned on under the control of the high voltage of the input signal Input from the signal input terminal INPUT to transmit the low voltage at the ninth node N9to the first node N1, so that the voltage at the first node N1is the low voltage. The first transistor T1is turned on under the control of the low voltage at the first node N1to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the voltage at the second node N2is the high voltage.
It will be noted that the reset phase S1inFIG.13includes a phase in which RESET is set to be 1 (RESET=1). In this phase, the twenty-second transistor T22is turned off, the voltage at the second node N2has been reset, and the voltage at the second node N2in this phase is still the voltage of the first voltage signal Vgh (i.e., the high voltage) transmitted by the first voltage signal terminal VGH. In the charging phase S2, referring toFIGS.10and13, INPUT is set to be 0 (INPUT=0), RESET is set to be 1 (RESET=1), CK1is set to be 1 (CK1=1), and CK2is set to be 1 (CK2=1). In this case, the twenty-second transistor T22is turned off under the control of the high voltage of the reset signal Reset from the reset signal terminal RESET, so that it is ensured that the voltage at the second node N2is free from the reset signal Reset in the charging phase S2. The twentieth transistor T20is turned on under the control of the low voltage of the input signal Input from the signal input terminal INPUT to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that the first node N1is at the high voltage. The second transistor T2is turned on under the control of the high voltage at the first node N1to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the fourth node N4, so that the fourth node N4is at the low voltage. The fourth transistor T4is turned on under the control of the high voltage of the first clock signal Ck1from the first clock signal terminal CK1to transmit the low voltage at the fourth node N4to the second node N2, so that the second node N2is at the low voltage. The fifth transistor T5is turned on under the control of the low voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the fifth node N5, so that the fifth node N5is at the high voltage. The eighth transistor T8is turned on under the control of the high voltage at the fifth node N5to transmit the high voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3and the cascade signal output terminal OUTPUT2. The tenth transistor T10is turned on under the control of the high voltage at the fifth node N5to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the sixth node N6, so that the sixth node N6is at the low voltage; and the eleventh transistor T11is turned on under the control of the low voltage at the sixth node N6to transmit the high voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3and the cascade signal output terminal OUTPUT2. The thirteenth transistor T13is turned on under the control of the high voltage at the third node N3to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the seventh node N7, so that the seventh node N7is at the low voltage. The fourteenth transistor T14is turned on under the control of the low voltage at the seventh node N7to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the eighth node N8, so that the eighth node N8is at the high voltage. The seventeenth transistor T17is turned on under the control of the high voltage at the eighth node N8to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal output terminal OUTPUT1, so as to perform denoising on the scan signal output terminal OUTPUT1continuously.
In the outputting phase S3, referring toFIGS.10and13, INPUT is set to be 1 (INPUT=1), RESET is set to be 1 (RESET=1), CK1is set to be 1 (CK1=1), and CK2is set to be 0 (CK2=0). In this case, the twenty-second transistor T22is turned off under the control of the high voltage of the reset signal Reset from the reset signal terminal RESET, so that it is ensured that the voltage at the second node N2is free from the reset signal Reset in the outputting phase S3. The eighteenth transistor T18is turned on under the control of the low voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the first node N1, so that the first node N1is at the high voltage. The second transistor T2is turned on under the control of the high voltage at the first node N1to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the fourth node N4, so that the fourth node N4is at the low voltage. The fourth transistor T4is turned on under the control of the high voltage of the first clock signal Ck1of the first clock signal terminal CK1to transmit the low voltage at the fourth node N4to the second node N2, so that the second node N2is at the low voltage. The fifth transistor T5is turned on under the control of the low voltage at the second node N2to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the fifth node N5, so that the fifth node N5is at the high voltage. The eighth transistor T8is turned on under the control of the high voltage at the fifth node N5to transmit the low voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3and the cascade signal output terminal OUTPUT2. The tenth transistor T10is turned on under the control of the high voltage at the fifth node N5to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the sixth node N6, so that the sixth node N6is at the low voltage; and the eleventh transistor T11is turned on under the control of the low voltage at the sixth node N6to transmit the low voltage of the second clock signal Ck2from the second clock signal terminal CK2to the third node N3and the cascade signal output terminal OUTPUT2. The twelfth transistor T12is turned on under the control of the low voltage at the third node N3to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the seventh node N7, so that the seventh node N7is at the high voltage. The fifteenth transistor T15is turned on under the control of the high voltage at the seventh node N7to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the eighth node N8, so that the eighth node N8is at the low voltage. The sixteenth transistor T16is turned on under the control of the low voltage at the eighth node N8to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the scan signal terminal OUTPUT1, so as to output the scan signal. In the denoising phase S4, referring toFIGS.10and13, INPUT is set to be 1 (INPUT=1), RESET is set to be 1 (RESET=1), CK1is set to be 0 (CK1=0), and CK2is set to be 1 (CK2=1). In this case, the twenty-second transistor T22is turned off under the control of the high voltage of the reset signal Reset from the reset signal terminal RESET, so that it is ensured that the voltage at the second node N2is free from the reset signal Reset in the denoising phase S4. 
The third transistor T3is turned on under the control of the low voltage of the first clock signal Ck1from the first clock signal terminal CK1to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the second node N2is at the high voltage. The nineteenth transistor T19is turned on under the control of the high voltage at the second node N2to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the ninth node N9, so that the ninth node N9is at the low voltage. The twenty-first transistor T21is turned on under the control of the high voltage of the input signal Input from the signal input terminal INPUT to transmit the low voltage at the ninth node N9to the first node N1, so that the first node N1is at the low voltage. The first transistor T1is turned on under the control of the low voltage at the first node N1to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the second node N2, so that the second node N2is at the high voltage. The sixth transistor T6is turned on under the control of the high voltage at the second node N2to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the fifth node N5, so that the fifth node N5is at the low voltage. The seventh transistor T7is turned on under the control of the low voltage at the fifth node N5to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the third node N3, so that the third node N3is at the high voltage. The ninth transistor T9is turned on under the control of the low voltage at the fifth node N5to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the sixth node N6, so that the sixth node N6is at the high voltage; and the eleventh transistor T11is turned off under the control of the high voltage at the sixth node N6. The thirteenth transistor T13is turned on under the control of the high voltage at the third node N3to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the seventh node N7, so that the seventh node N7is at the low voltage. The fourteenth transistor T14is turned on under the control of the low voltage at the seventh node N7to transmit the first voltage signal Vgh from the first voltage signal terminal VGH to the eighth node N8, so that the eighth node N8is at the high voltage. The seventeenth transistor T17is turned on under the control of the high voltage at the eighth node N8to transmit the second voltage signal Vgl from the second voltage signal terminal VGL to the scan signal terminal OUTPUT1, so as to perform denoising on the scan signal terminal OUTPUT1. Some embodiments of the present disclosure further provide a gate driver circuit200. Referring toFIG.12, the gate driver circuit200includes at least two shift registers RS that are cascaded. In some embodiments, as shown inFIGS.3and12, in the shift registers RS (RS1, RS2, . . . , RS(N)) in the gate driver circuit200, the scan signal output terminal OUTPUT1and the cascade signal output terminal OUTPUT2are disposed separately. The gate scan signal Gate is output to a gate line GL connected to the shift register through the scan signal output terminal OUTPUT1, and the cascade signal is output to the gate line GL connected to the shift register through the cascade signal output terminal OUTPUT2. 
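Summarizing the four driving phases walked through above (before continuing with the cascading of the shift registers in the gate driver circuit 200), the signal levels can be collected into a small lookup table. The sketch below is a documentation aid under the "0"/"1" convention of the description, with None marking levels the text does not state explicitly; it is not a circuit-level simulation.

```python
# A compact restatement of the S1-S4 walkthrough above as a lookup table.
# "0" = low (Vgl), "1" = high (Vgh); None marks values the description above
# does not spell out explicitly.

PHASES = {
    #      INPUT  RESET  CK1    CK2    OUT1 (scan)  OUT2 (cascade)
    "S1": {"INPUT": 1, "RESET": 0, "CK1": None, "CK2": None, "OUT1": None, "OUT2": None},  # RESET returns to 1 later in S1
    "S2": {"INPUT": 0, "RESET": 1, "CK1": 1,    "CK2": 1,    "OUT1": 0,    "OUT2": 1},
    "S3": {"INPUT": 1, "RESET": 1, "CK1": 1,    "CK2": 0,    "OUT1": 1,    "OUT2": 0},
    "S4": {"INPUT": 1, "RESET": 1, "CK1": 0,    "CK2": 1,    "OUT1": 0,    "OUT2": None},
}

def scan_output(phase: str):
    """Scan signal level at OUTPUT1 for a given phase, per the walkthrough."""
    return PHASES[phase]["OUT1"]

# The scan pulse appears only in the outputting phase S3; in S2 and S4 the
# seventeenth transistor T17 holds OUTPUT1 at Vgl for denoising.
assert scan_output("S3") == 1 and scan_output("S2") == 0 and scan_output("S4") == 0
```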
For example, in every two adjacent shift registers RS, a signal input terminal INPUT of a latter-stage shift register RS is coupled to the cascade signal output terminal OUTPUT2of a previous-stage shift register RS, and the signal input terminal INPUT of a first-stage shift register RS1is coupled to an initialization signal terminal STV. In some embodiments, the gate driver circuit200further includes a first clock signal line LCK1, a second clock signal line LCK2and a third clock signal line LCK3. The first clock signal line LCK1is coupled to the first clock signal terminal CK1of each shift register RS, the second clock signal line LCK2is coupled to second clock signal terminals CK2of odd-numbered stages of shift registers RS, and the third clock signal line LCK3is coupled to second clock signal terminals CK2of even-numbered stages of shift registers RS. As shown inFIG.13, a signal N-CK2inFIG.13is a square wave pulse signal of the second clock signal terminal CK2of a next-stage shift register RS, and a rising edge of the signal N-CK2is aligned with a rising edge of the scan signal output terminal OUTPUT1of a current-stage shift register RS. For example, the second clock signal Ck2is a square wave pulse signal provided by the second clock signal line LCK2coupled to the odd-numbered stages of shift registers RS, and the signal N-CK2is a square wave pulse signal provided by the third clock signal line LCK3coupled to the even-numbered stages of shift registers RS. In addition, the gate driver circuit200in some embodiments of the present disclosure further includes a first voltage signal line LVGH and a second voltage signal line LVGL. The first voltage signal line LVGH is coupled to the first voltage signal terminal VGH of each shift register RS, and the second voltage signal line LVGL is coupled to the second voltage signal terminal VGL of each shift register RS. In the embodiments of the present disclosure, cascading manners of various stages of the shift registers RS in the gate driver circuit200and the connection manners of the various stages of the shift registers RS and the clock signal lines are not limited thereto. The forgoing descriptions are merely specific implementation manners of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or replacements that a person skilled in the art could conceive of within the technical scope of the present disclosure shall be included in the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
DESCRIPTION OF EMBODIMENTS Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. Variations and modifications related to the description of the embodiment can be made without departing from the scope of the present invention. FIG.1is a configuration diagram to illustrate a device according to an embodiment of the present invention. As shown inFIG.1, device1according to the embodiment of the present invention is configured to include an STT-MRAM (Spin Transfer Torque-Magnetoresistive Random Access Memory)2as an MRAM; an NV-CPU (Nonvolatile Central Processing Unit)3; an NV-FPGA (Nonvolatile Field-Programmable Gate Array)4; a power gating controller5that controls power supply to each memory cell in STT-MRAM2, NV-CPU3, and NV-FPGA4; and a access controller6that reads data from STT-MRAM2and stores the data in advance of reading, controlling an access to STT-MRAM2. Access controller6is provided as an intervention in access to STT-MRAM2, and these modules are connected with bus7. In device1according to the present invention, specifically, in the MCU (Microcontroller Unit), NV-CPU3transmits data to STT-MRAM2, allowing NV-FPGA4to read the data from STT-MRAM2; and NV-FPGA4transmits data to STT-MRAM2, allowing NV-CPU3to read the data from STT-MRAM2. That is, the following operations are performed: STT-MRAM2stores the results computed by NV-CPU3; using the results stored in STT-MRAM2, NV-FPGA4further performs computing and returns the results to STT-MRAM2; and NV-CPU3receives from STT-MRAM2the results computed by NV-FPGA4. In the embodiment of the present invention, it is possible to provide a microcomputer appropriate for a sensor node and the like which enables both high performance (for example, operating frequency of about 200 to 300 MHz) and low power consumption (for example, no more than 100 μW). As for the low power consumption, using a nonvolatile memory other than an MRAM may have a certain effect because it can reduce the standby power. However, in a case where a nonvolatile memory other than an MRAM is used, high-speed data-writing or -reading is impossible. Thus, to achieve high performance with several hundred MHz operating frequency, in the embodiment of the present invention, in the MCU as device1, an MRAM, preferably, STT-MRAM2is employed for a region to store data related to computing by the CPU and the FPGA. Device1may be referred to as a nonvolatile microcomputer chip, a nonvolatile microcomputer, or a nonvolatile microcontroller unit. STT-MRAM2is configured to include multiple memory cells separated into multiple regions including selection transistors and MTJs (Magnetic Tunneling Junctions). Preferably, STT-MRAM2is configured with multiple sub-array blocks and each of the blocks has a switch to turn ON/OFF the power from a power supply unit, not shown in the figure. Power gating controller5allows STT-MRAM2to be power-gated per block division. Here, the block division is a separated block in the multiple regions in the MRAM; inFIG.9, it refers to MRAM sub-arrays that constitute of MRAM 0 and MRAM 1. In NV-CPU3, the memory installed in the module is constituted of only nonvolatile memories. NV-CPU3has a switch to turn ON/OFF the power for the whole module from a power supply unit, not shown in the figure. 
Since NV-CPU3is constituted of nonvolatile memories, it is unnecessary to back up or write data in the CPU when the switch is turned ON/OFF (that is, whenever power-gated), and power-gating control can be performed. Naturally, since no data is backed up or written, there is no power consumption. It is especially effective in a device that intermittently executes a certain number of operations and enters into a standby state between processes, especially in an IoT sensor node, because no data back up or writing is required in power-gating control. NV-FPGA4is configured to include a nonvolatile memory. Each tile in NV-FPGA4has a switch for power-gating, which enables to turn ON/OFF the power from a power supply unit, not shown in the figure. In addition, when a DSP is installed in NV-FPGA4, the DSP has a switch to turn ON/OFF the power from a power supply unit to the DSP, not shown in the figure. Since NV-FPGA4is configured to include a nonvolatile memory, it is unnecessary to back up or write data in the FPGA when the switches are turned ON/OFF (that is, whenever power-gated), and also unnecessary to save or write configuration data. No need for backing up or writing data and the configuration data means no power consumption for that. A conventional device installed with a volatile FPGA and without a nonvolatile FPGA requires backing up and writing data and the configuration data whenever power-gated. However, the embodiment of the present invention, in which an FPGA is nonvolatile, does not require backing up or writing data and the configuration data whenever power-gated. Power gating controller5controls power supply to each MRAM sub-array in STT-MRAM2, NV-CPU3, and each tile and a DSP in NV-FPGA4and supplies power only to the designated modules in operation. Here, in STT-MRAM2, each memory cell is preferably configured with 2T-2MTJ including two selection transistors and two MTJs. The STT-MRAM includes 1T-1MTJ, 2T-2MTJ, and 4T-4MTJ. In order to perform power-gating, the 4T-4MTJ requires peripheral equipment for power-gating which causes power consumption, thus, unpreferable. On the other hand, the 1T-1MTJ and the 2T-2MTJ, are suitable because their cell configurations themselves have a power-gating function; and in order to enhance the performance, the 2T-2MTJ is more preferable from the view of the number of bits. In addition, 2T-2MTJ provided with WL, BL, /BL, SL, and /SL in each cell may be sufficient, however, 2T-2MTJ provided with WL, BL, /BL, and SL in each cell where SL and/SL are shared is the most preferable because it can suppress the lay-out size. Further, it can adjust the writing pulse width in response to writing characteristics of the MTJ, which can suppress the writing current to optimize it. A concept of device1according to an embodiment of the present invention will be described.FIGS.2A to2Care graphs to illustrate a concept of the present invention;FIG.2Ais a power versus time graph of a device based on a conventional CMOS-based configuration. When in active, the power is the sum total of dynamic and static power. When in standby, some static power is consumed. In the case with power-gating, as shown inFIG.2B, the static power is consumed only when dynamic power is consumed; and static power is not consumed when dynamic power is not consumed. However, additional power is consumed before and after being in active state to back up data in a volatile memory and to write the data into the volatile memory. 
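As a rough illustration of the block-level power gating described above (each MRAM sub-array, the NV-CPU, each NV-FPGA tile, and the DSP having its own power switch, with no backup step because the blocks are nonvolatile), a minimal sketch follows; the block names and interface are assumptions made for illustration, not the controller's actual implementation.

```python
# A minimal sketch (not the patent's implementation) of per-block power gating:
# each MRAM sub-array, the NV-CPU, each NV-FPGA tile and the DSP has its own
# power switch, and because every block is nonvolatile there is no
# backup/restore step around switching. Block names are illustrative.

class PowerGatingController:
    def __init__(self, blocks):
        # blocks: iterable of block names, e.g. MRAM sub-arrays, CPU, FPGA tiles, DSP
        self.powered = {name: False for name in blocks}

    def activate(self, *names):
        """Supply power only to the designated blocks; all others stay off."""
        for name in self.powered:
            self.powered[name] = name in names
        # No data backup or configuration save is triggered here: the MTJ-based
        # storage in each block retains its state across the power cycle.

    def standby(self):
        """Cut power to every block between intermittent operations."""
        for name in self.powered:
            self.powered[name] = False

pgc = PowerGatingController(["MRAM0", "MRAM1", "NV-CPU", "TILE0", "TILE1", "DSP0"])
pgc.activate("NV-CPU", "MRAM0")          # e.g. CPU-only portion of a computation
pgc.activate("TILE0", "DSP0", "MRAM0")   # e.g. FPGA-accelerated portion
pgc.standby()                            # idle between sensor-node operations
```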
Therefore, in the case a nonvolatile memory is used instead of a volatile memory, as shown inFIG.2C, the data back up or writing, required inFIG.2B, are unnecessary. Thus, this embodiment of the present invention can be implemented by using nonvolatile memories for all modules in device1. A usual IoT sensor node intermittently executes a certain number of operations and enters into a standby state between processes. Conventional CMOS-based architectures use volatile internal memories, which require data transfer between internal and external memories to back up data before turning off the power. The embodiment does not require the data back up. In addition, a device in which only nonvolatile memories are used for all modules (MCU) does not require external memories and does not need to transfer the stored data. Therefore, the power-gating technique can be effectively applied at a granular level and can actively cut wasteful power consumption. As NV-FPGA4is configured to include a nonvolatile memory, it is unnecessary to back up or write data in the FPGA whenever power-gated, and unnecessary to back up or write the configuration data. It is especially effective in a device that intermittently executes a certain number of operations and enters into a standby state between processes, especially in an IoT sensor node, because no data back up or writing is required in power-gating control. FIGS.3A and3Bare graphs to further illustrate a concept of the present invention. As shown inFIG.3A, a sequential processing on the CPU can only reduce power consumption of power-gating. However, as shown inFIG.3B, operations are accelerated by performing a part of operations on the CPU by an FPGA incorporated in the device, which enhances the performance. Thus, a sequential process (processing in order according to the sequence) in each operation interval is parallelly performed by the CPU and an FPGA-based accelerator (FPGA-ACC). Since the parallel processing significantly reduces the processing time (see “processing time reduction by FPGA-ACC” inFIG.3B), the duration of power gating (PG) becomes longer. As a result, both the static and dynamic power portions in the increased time of power gating by parallel processing, i.e., the time of the process time reduced by FPGA-ACC, become unnecessary. This removed unnecessary power is far greater than the increased power consumption due to the computing of the FPGA-ACC. Thus, to install an FPGA-ACC to a nonvolatile microcomputer configured with a NV-CPU and an MRAM can realize high computing performance and low power consumption. A shorter processing time can reduce the operating time of the MRAM, which consumes most of the power, thereby, achieving further lower power consumption. When NV-FPGA4can be connected to NV-CPU3with bus7, sequential processing can be parallelly processed by NV-CPU3and NV-FPGA4. In an IoT sensor node, in particular, as described later with reference toFIGS.4and5, it is preferable to suppress data amount in the terminal end sensor node110, for example, by obtaining a feature value or processing an image, and to transmit the data to a higher unit, cloud system140in order to avoid processing of the sensor data in cloud system140. In the embodiment of the present invention, NV-CPU3and NV-FPGA4parallelly performs sequential processing, which enables high computing performance and low power consumption. It is preferable to be applied to an IoT sensor node. 
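The argument of FIGS. 2A to 2C and 3A to 3B can be captured in a back-of-the-envelope model. The sketch below uses purely hypothetical numbers (they are not measurements of device 1) and only reproduces the structure of the comparison: leakage without power gating, backup/restore overhead with volatile memories, neither with nonvolatile memories, and a shorter active window when the FPGA-ACC takes over part of the processing.

```python
# A back-of-the-envelope model of the FIG. 2A-2C / FIG. 3A-3B argument above.
# All numbers are hypothetical placeholders, not measured values from the chip.

def average_power_uw(interval_ms, active_ms, dynamic_uw, static_uw,
                     power_gated=True, backup_restore_uj=0.0):
    """Average power over one intermittent operation interval."""
    active_uj = active_ms * 1e-3 * (dynamic_uw + static_uw)
    if power_gated:
        leakage_uj = 0.0                 # power cut outside the active window
        overhead_uj = backup_restore_uj  # nonzero only for volatile memories
    else:
        leakage_uj = (interval_ms - active_ms) * 1e-3 * static_uw
        overhead_uj = 0.0
    return (active_uj + leakage_uj + overhead_uj) / (interval_ms * 1e-3)

# Hypothetical workload: 10 ms of processing every 50 ms.
no_pg       = average_power_uw(50, 10, dynamic_uw=800, static_uw=200, power_gated=False)
pg_volatile = average_power_uw(50, 10, 800, 200, power_gated=True, backup_restore_uj=3.0)
pg_nonvol   = average_power_uw(50, 10, 800, 200, power_gated=True)   # no backup terms
pg_fpga_acc = average_power_uw(50,  4, 900, 200, power_gated=True)   # shorter active window
assert no_pg > pg_volatile > pg_nonvol > pg_fpga_acc
```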
Thus, it can reduce a sequential processing time between operation intervals, and enables further power-saving. Here, an FPGA configures an MTJ on a CMOS, enabling a nonvolatile FPGA with super low power consumption. As described above, a concept of the present invention is to realize a microcomputer appropriate for a sensor node and the like, which enables both high performance (for example, operating frequency of about 200 to 300 MHz) and low power consumption (for example, no more than 100 μW). As for the low power consumption, using a nonvolatile memory other than an MRAM may have a certain effect because it can reduce the standby power. However, in a case where a nonvolatile memory other than an MRAM is used, it is impossible to realize both high-speed data-writing and -reading and computing performance with several hundred MHz operating frequency. On the other hand, in a case where an MRAM is used, it is possible to realize high-speed performance with high-speed writing and reading function and low-power consumption by using nonvolatile memories, simultaneously. Thus, it exerts a great effect on a microcomputer as a device configured with an NV-CPU, an NV-FPGA, and an MRAM memory according to an embodiment of the present invention. Thus, in order to realize a microcomputer with both high performance and low power consumption appropriate for a sensor node, implementation of an MRAM as a memory to a microcomputer installed with a CPU and an FPGA requiring a high computing performance exerts a great effect. Next, a sensor node using the device inFIG.1and a system using it will be described.FIG.4is a configuration diagram to illustrate a system with sensor nodes;FIG.5is a configuration diagram to illustrate a sensor node. System100includes: one or more sensor nodes110to be installed in an indoor or outdoor structure or mounted on people or animals; a gateway (GW)120to connect one or more sensor nodes110to communications network130such as the Internet; and a higher unit such as a cloud system140to store and process various information transmitted from one or more sensor node110via communications network130. The sensor node110includes: a sensor element111to measure various physical quantity; an MCU112to process data from sensor element111into information; a communications unit113to output the information processed by MCU112and various control data to the outside; and a power supply114to convert natural energy, artificial vibrations, and the like, into power and store it. MCU112, a device with a configuration shown inFIG.1, can process data with low power and it is unnecessary to process sensor data by cloud system140. MCU112can suppress data amount in the terminal end sensor node110, for example, by obtaining a feature value or processing an image, and to transmit the data to a higher unit, cloud system140; and thus, it can significantly lower traffic amount. Next, a concrete configuration of device1will be explained.FIG.6is a configuration diagram embodyingFIG.1. 
As shown inFIG.6, an MCU10as a device includes: an STT-MRAM11; an NV-CPU12; an NV-FPGA13; an MEM4X access controller14; a bus15; a PMU (Performance Monitoring Unit)16; a system control (SYS CONTROL)17; a bus matrix (AHB-MATRIX)18; and peripheral equipment of a CPU in the MCU including: an ADC (Analog-Digital Converter)19; a timer20; a WDT (watchdog timer)21; a UART (Universal Asynchronous Receiver/Transmitter)22; a serial bus (for example, PC)23; an SPI (Serial Peripheral Interface)24; a GPIO (General-Purpose Input/Output)25; and a BIAS26. The peripheral equipment as a CPU is an example; another configuration is possible. FIG.7is a specific configuration diagram to illustrate STT-MRAM11. As shown inFIG.7, STT-MRAM11includes: a left-side array constituted of MRAM sub-arrays; and a right-side array similarly constituted of MRAM sub-arrays, and the MRAM itself is controlled by Control. In order to access data stored in a specified address of memories, the location (coordinates) of the memory cell with the data stored is specified on the basis of an input data address. In Xpredec and Ypredec, a signal corresponding to the coordinates indicating the location of the memory cell is generated on the basis of the address; the signal is converted into a complement signal and the like required for an actual access in Xdec and Ydec; and then the access to the subject memory cell is executed. Thus, conversion from an address to a memory location is performed in two stages. Ydec is arranged at both left and right sides of the left-side array and the right-side array because an operation to read memory data is different in left and right. The outer Ydec flows a constant current into a reading-subject memory cell and generates a voltage signal corresponding to a cell state (or, a resistance state of the MTJ). The voltage signal is amplified by a sense amplifier (SA) attached to the inner Ydec and data are extracted from the reading-subject memory cell. In the diagram, a solid arrow represents a control signal and a dotted arrow represents a data signal. STT-MRAM11is not provided with a switch for power-gating because there is no power supply line in each memory cell. As shown in the right side ofFIG.7, a left and right shared type WBT is used for each cell in STT-MRAM11, therefore, the cell area can be reduced. FIG.8is a diagram to illustrate an operation waveform of STT-MRAM11. For the clock (CLK), data are written by the Write enable signal and read by the Read enable signal. As described with reference toFIG.1, access controller6receives a data-reading instruction from NV-CPU3and determines whether or not the data have been read from STT-MRAM2in advance, and if the data have been already read, access controller6transmits the stored data to NV-CPU3. Specifically speaking, access controller6includes: an address-storing register, a multiplexer, multiple data-storing registers, and a comparator, not shown inFIG.1. Access controller6receives from NV-CPU3an input about an address in STT-MRAM2that is a reading destination and stores the address in the address-storing register. The multiplexer reads multiple destinations of STT-MRAM2specified in the address and stores the data read from STT-MRAM2in each data-storing register. 
Access controller6receives a new reading instruction together with a specified reading destination from NV-CPU3, compares an address stored in the address-storing register with the reading destination address by using the comparator, and determines whether or not the reading destination address has been read from STT-MRAM2in advance, and if it has been already read and stored in a data-storing register, access controller6transmits the stored data to NV-CPU3in response to the reading instruction. FIG.9is a diagram to illustrate an access controller (Accelerator)14between the CPU and MRAMs. It includes both a 16-bit instruction and a 32-bit instruction. All data are 32-bit size. InFIG.9, HADDR represents a memory address of MRAM11, accessed by CPU12, and HRDATA represents data stored in the accessed address. Access controller (Accelerator)14includes: a register (reg) to store HADDR; a comparator (cmp) to compare the HADDR with the next HADDR; a prefetch address generator; a MUX (Multiplexer) to select either HADDR or an output from an address generator and output it; registers (reg 0, reg 1) to store data read from MRAM11; and another MUX (Multiplexer) to select any read data and output it. Here, “prefetch” means to capture, or fetch, data in advance of the timing to actually use it. As shown inFIG.9, an arrow from an upper or lower side of the circuit block is a controlled input that is “0” or “1”, and MUXs have a function to select one of two inputs, either from left or right depending on the value “0” or “1”, and output it as is. Thus, access controller14: includes an address-storing register (reg) provided at an input side of STT-MRAM11, which receives an input about an address in STT-MRAM11that is a reading destination and stores the address; a multiplexer (Multiplexer) that reads multiple destinations of STT-MRAM11specified in the address-storing register (reg); multiple data-storing registers (reg 0, reg 1) that store data read from STT-MRAM11; and a comparator (cmp) that receives a reading instruction together with a specified reading destination and compares an address with an reading destination address stored in the address-storing register (reg). The data address to be used is passed from CPU12to access controller (Accelerator)14via HADDR; at that time, the address is stored in the left side register (reg) in the Accelerator ofFIG.9. The right side MUX in the Accelerator compares the data passed from HADDR with the data stored in reg, and if they do not correspond to each other, the right side MUX regards the HADDR value as an MRAM ADDR value, read data for two 16-bit instructions (16-bit×2=32-bit, two of them in parallel for one MRAM) in one time from both MRAM 0 and MRAM 1, and store them into reg 0 and reg 1. In a case 16-bit instructions stored in the consecutive memory addresses are consecutively executed, data for four instructions are captured into reg 0 and reg 1 in one time by the above-described processing, therefore, the data passed from CPU12via HADDR is compared with the data stored in reg, and if the above-described conditions are satisfied, the data captured in reg 0 and reg 1 in advance and corresponding to the address specified in HADDR is specified by a computing unit, “Output control” in the FIG., then used as an output to HRDATA, or a reading instruction from CPU12. 
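A simplified software model of this prefetch behaviour is sketched below; the aligned four-slot window, the dictionary-based memory, and the class interface are modelling assumptions made for illustration, not a register-transfer description of the circuit in FIG. 9.

```python
# Simplified model of the prefetch behaviour described above: on a miss, one
# wide access captures two 32-bit words (data for four 16-bit instructions)
# from MRAM 0 and MRAM 1 into reg 0 and reg 1; later reads inside that window
# are served from the registers without touching the MRAM.

class PrefetchAccessController:
    WINDOW = 4  # 16-bit slots captured per MRAM access (2 x 32-bit words)

    def __init__(self, mram):
        self.mram = mram          # memory modelled as a dict: address -> 16-bit value
        self.base = None          # address-storing register (reg)
        self.regs = []            # data-storing registers (reg 0, reg 1)
        self.mram_accesses = 0

    def read(self, haddr):
        # Comparator (cmp): is the requested address inside the captured window?
        if self.base is not None and self.base <= haddr < self.base + self.WINDOW:
            return self.regs[haddr - self.base]   # hit: fast path, no MRAM access
        # Miss: capture a new aligned window from the MRAM (prefetch).
        self.base = haddr - (haddr % self.WINDOW)
        self.regs = [self.mram[self.base + i] for i in range(self.WINDOW)]
        self.mram_accesses += 1
        return self.regs[haddr - self.base]

mem = {a: a * 10 for a in range(16)}
acc = PrefetchAccessController(mem)
for a in range(8):               # sequential 16-bit instruction fetches
    acc.read(a)
assert acc.mram_accesses == 2    # 8 consecutive fetches served by 2 MRAM accesses
```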
At that time, in response to a reading instruction from CPU12via HADDR, data are not passed from MRAM11but from reg 0 or reg 1; therefore, data are not returned at a possible transfer speed between the Accelerator and the MRAM (for example, 50 MHz) but returned at a possible transfer speed between CPU12and the Accelerator (for example, 200 MHz).FIG.11Aillustrates a series of flow; values used here, such as 50 MHz, 200 MHz, are examples. In a case 32-bit instructions stored in the consecutive memory addresses are consecutively executed, data for two instructions are captured into reg 0 and reg 1 in one time by the above-described processing, therefore, the processing same as the above is performed. In this case, the data is returned at 100 MHz.FIG.11Billustrates a series of flow; a value used here, such as 100 MHz, is an example. The access from CPU12to MRAM11is performed in multiplex and the read data are temporally saved in registers (reg 0, reg 1). If accesses to the same memory address are repeated, the data stored in registers is re-used instead of memory data.FIGS.10A to10Fare diagrams to illustrate a transition example of data transfer inFIG.9. A data request is executed before the CPU requires the data, and when the next instruction that is prepared for fetch is executed, the 16-bit instruction stored in the registers is performed. Thus, a high-speed instruction fetch is performed without interruption. InFIGS.10A to10F,FIG.10Aillustrates an initial state. InFIG.10B, a prefetch data request, data storage into a register in the access controller (accelerator circuit), and a fetch operation of instruction A are simultaneously executed. InFIG.10C, a fetch operation of instruction B is executed. InFIG.10D, a fetch operation of instruction C is executed. InFIG.10E, a fetch operation of instruction D is executed and the prefetch data requested inFIG.10Bhas been prepared at this time. InFIG.10F, another prefetch data request, data storage into another register in the access controller (accelerator circuit), and a fetch operation of instruction E are simultaneously executed. As the fetch preparation of the instruction to be next executed has been completed in the previous state, an instruction fetch can be executed at high speed without interruption. FIGS.11A and11Bare examples to illustrate data flowchart;FIG.11Ais a case in which a 16-bit instruction assigned to consecutive memory addresses;FIG.11Bis a case in which a 32-bit instruction assigned to consecutive memory addresses. As shown inFIG.11A, the 16-bit instruction assigned to consecutive memory addresses is fetched in series, enabling prefetch data for four instructions in advance, as shown inFIG.11A, an instruction fetch can be executed four times faster than the access speed to the MRAM. In addition, the 32-bit instruction assigned to consecutive memory addresses can be executed two times faster than the memory access speed by interleaving and performing the same control to store data for two 32-bit instruction in a register, as shown inFIG.11B. As a result, it is possible to conceal a bottleneck in the memory access and an instruction fetch can be appropriately executed at high speed depending on the length of a fetch instruction. Thus, speed enhancement has been achieved by so-called best effort manner. The degree of actual performance improvement depends on the program to be executed. 
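The quoted speed-ups follow from simple arithmetic: one MRAM access at 50 MHz captures 64 bits (two 32-bit words), so consecutive fetches can be served at 64 divided by the instruction width times the access rate. A small check, assuming these widths:

```python
# Worked numbers for the speed-up claim above: one 50 MHz MRAM access captures
# 64 bits, so consecutive fetches can be issued to the CPU at up to
# 64 / instruction_width times the MRAM access rate.

def effective_fetch_mhz(mram_access_mhz: float, instruction_bits: int,
                        bits_per_access: int = 64) -> float:
    return mram_access_mhz * bits_per_access / instruction_bits

assert effective_fetch_mhz(50, 16) == 200.0   # four 16-bit instructions per access
assert effective_fetch_mhz(50, 32) == 100.0   # two 32-bit instructions per access
```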
Specifically, it is possible even in a program in which random memory accesses due to conditional branching frequently occur; however, it is particularly effective in processing that executes sequential processing, such as an MCU for a sensor node application. In such processing, accesses to the memory are also regular; it therefore functions very effectively. As a result, effects similar to a cache can be obtained without increasing the area or the power overhead. Here, this access controller will be described in more detail. For example, a test chip for a nonvolatile VLSI processor using a 40 nm MOS/MTJ process has been fabricated. It can be designed by using an automatic design flow and a cell library for an MTJ-based NV-LIM LSI. An area overhead due to introduction of the accelerator circuit can be estimated as 13.6% on the basis of the number of gates in each block. Note that each block, separately designed for evaluation of overhead in this case, can be integrated and laid out as one circuit block. In that case, the area overhead would be expected to be even smaller. FIG. 12 is a diagram to illustrate a simulation waveform in the access controller (accelerator circuit). In this example, the following instructions are sequentially executed: (1) a 16-bit instruction assigned to consecutive memory addresses; (2) a branch instruction to access inconsecutive memory addresses; (3) a 32-bit instruction assigned to consecutive memory addresses. FIG. 12 reveals that the operating frequency dynamically changes from 50 MHz to 200 MHz depending on whether the transition of the memory address to be accessed satisfies the conditions for instruction fetch acceleration. FIG. 13 is a chart to compare the power consumption of a system configured with a CPU incorporating the access controller (accelerator circuit) and an MRAM with that of conventional systems. Here, the effects of easing the performance requirements for the MRAM have been checked as follows. Low-performance MRAM (LP-MRAM): reading/writing at 50 MHz. Middle-performance MRAM (MP-MRAM): reading/writing at 100 MHz. High-performance MRAM (HP-MRAM): reading/writing at 200 MHz. Note that all the MRAMs are designed in the same manner. FIG. 13 reveals that the MRAM consumes most of the total power consumption and that the higher the required performance, the more power the MRAM consumes. The access controller (accelerator circuit), which does not require a change of the performance requirements for the MRAM, can be used to enhance the system performance with only the power overhead of the accelerator circuit. Here, performances in the cases with and without the accelerator circuit are compared. In the evaluation, the area, power consumption, and processing performance are evaluated in three types of systems with MRAMs having the different performances described above, a system with a cache, and a system with the accelerator circuit.

TABLE 1
                         Conventional Example                                Present Example
Performance              w/LP-MRAM    w/MP-MRAM    w/HP-MRAM    w/cache      w/LP-MRAM
Area ratio               1.0          1.0          1.0          2.36         1.03
Voltage (V)              1.1          1.1          1.1          1.1          1.1
Frequency (MHz)          50           100          200          50/200       50/100/200
Peak perf. (MIPS)        49.56        99.12        198.24       198.24       198.24
Power (mW)               2.014        2.702        3.524        2.487        2.170
Peak efficiency ratio    1            1.49         2.29         3.24         3.71
Temperature range (°C)   0-100        30-100       70-100       0-100        0-100

As shown in TABLE 1, in the implementations with the middle- or high-performance MRAM, the efficiency decreases as the MRAM consumes more power, which narrows the temperature range ensuring the operation.
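The "Peak efficiency ratio" row of TABLE 1 can be recomputed directly from the MIPS and power columns, normalised to the LP-MRAM baseline; the short sketch below does so (the configuration labels are taken from the table).

```python
# Recomputing the "Peak efficiency ratio" row of TABLE 1 from the peak
# performance (MIPS) and power (mW) columns, normalised to the LP-MRAM case.

configs = {                     # (peak perf. in MIPS, power in mW), from TABLE 1
    "w/LP-MRAM":         (49.56, 2.014),
    "w/MP-MRAM":         (99.12, 2.702),
    "w/HP-MRAM":         (198.24, 3.524),
    "w/cache":           (198.24, 2.487),
    "present (LP-MRAM)": (198.24, 2.170),
}

base = configs["w/LP-MRAM"][0] / configs["w/LP-MRAM"][1]    # MIPS/mW of the baseline
for name, (mips, mw) in configs.items():
    ratio = (mips / mw) / base
    print(f"{name:<20} {mips / mw:6.1f} MIPS/mW  ratio {ratio:4.2f}")
# Prints ratios of roughly 1.00, 1.49, 2.29, 3.24 and 3.71, matching TABLE 1.
```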
In the implementation with a cache, the performance could be expected to be higher; however, the area overhead becomes very large. On the other hand, in the implementation with the accelerator circuit, an accelerative unit can be embedded with a small overhead area and the operating frequency of the CPU can be accelerated without changing performance requirements for the MRAM. As a result, area efficiency can be improved. Thus, comparing with the implementation with a conventional cache, its performance efficiency (MIPS/mW) improves from 2.29 times to 3.71 times, and reading and writing operations can be ensured in a wide temperature range. The performance of the access controller varies depending on programs that should be executed; however, it is revealed that a benchmark using some sample programs enables the access controller to perform at more than about 100 MHz even in a filter operation by relatively large capacity memory access, and that it very effectively performs in programs with relatively few memory accesses or branches. Next, the NV-CPU will be explained. All the flip-flops used for the NV-CPU are MTJ-based nonvolatile flip-flops. Since they are nonvolatile, there is no need to back up data for power-gating. FIG.14is a cross-sectional view to illustrate an MTJ device used in the NV-CPU. The MTJ device is embodied by the configuration of an MTJ provided above a CMOS substrate, the MTJ is formed by providing a pin layer, a barrier layer, and a free layer on the top metal layer on which CMOS is formed. An MTJ element has two different resistances depending on the spin direction. The MTJ element can maintain a resistance state without continuous power supply. Therefore, the MTJ device can be used as a 1-bit nonvolatile memory. FIG.15is an example to illustrate a nonvolatile flip-flop circuit.FIG.16is a diagram to illustrate a simulation waveform.FIG.17is a diagram to illustrate a flip-flop operation.FIG.18is a diagram to illustrate a writing operation.FIG.19is a diagram to illustrate a reading operation. The flip-flops are a master-slave type flip-flops, which can be divided into a master unit, a slave unit, and a nonvolatile memory unit. In the embodiment of the present invention, an MTJ element is used for constituting a nonvolatile memory unit. InFIG.16, “DATA” represents an input signal; “Q” and “QB” represent output signals (complement to each other, “B” stands for Bar, or complement); “CLK” and “CLKB” represent clock signals (complement to each other); “LB” represents a reading control signal, usually it is “1” but when it is “0”, a reading processing from the MTJ element to a memory unit of FF is performed; “WB” represents a writing control signal, usually it is “1” but when it is “0”, a writing processing from the memory unit of FF to the MTJ element is performed; “SB” represents a setting signal, usually it is “1” but when it is “0”, FF memory state is turned to “1” regardless of an input. The FF circuit can be mainly divided into three units: a master unit, a slave unit, a nonvolatile memory unit. The master unit captures an input signal DATA when a clock is “0” and the master unit passes the signal to the slave unit when the clock is “1”. The slave unit captures the DATA from the master unit and further outputs it to “Q” and “QB” when the clock is “1”. The slave unit performs nothing when the clock is “0”. The combination of the master unit and the slave unit works as a usual master-slave type D flip-flop. 
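The logical behaviour of this flip-flop, as described by the CLK, LB, WB, and SB signals above, can be summarized in a behavioural sketch; it models only the latch/store/restore logic, not the transistor-level circuit of FIG. 15, and the class interface is an assumption for illustration.

```python
# A behavioural sketch of the master-slave flip-flop with an MTJ backup, based
# on the signal descriptions above (LB, WB, SB active low).

class NonvolatileDFF:
    def __init__(self):
        self.master = 0   # master latch state
        self.q = 0        # slave latch state, drives Q / QB
        self.mtj = 0      # nonvolatile 1-bit backup (two complementary MTJs)

    def clock(self, clk: int, data: int):
        if clk == 0:
            self.master = data          # master captures DATA while CLK = 0
        else:
            self.q = self.master        # slave outputs the data while CLK = 1

    def control(self, lb: int = 1, wb: int = 1, sb: int = 1):
        if sb == 0:
            self.q = 1                  # set: FF state forced to "1"
        if wb == 0:
            self.mtj = self.q           # store: FF state -> MTJ pair
        if lb == 0:
            self.q = self.mtj           # restore: MTJ pair -> FF state

ff = NonvolatileDFF()
ff.clock(0, 1); ff.clock(1, 1)          # latch a 1 through master and slave
ff.control(wb=0)                        # store before power-off
ff.q = 0                                # power loss wipes the CMOS latch
ff.control(lb=0)                        # restore after power-on
assert ff.q == 1
```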
The nonvolatile memory unit includes: two MTJ elements that complementally store 1-bit memory; and a writing circuit that generates current to write data into the MTJ elements. The nonvolatile memory unit writes data captured in the slave unit into the nonvolatile memory or reads the data from the nonvolatile memory to the slave unit depending on the control signal LB or WB. Next, an NV-FPGA will be explained in detail.FIG.20is a diagram to illustrate details of an NV-FPGA. InFIG.20, the NV-FPGA is configured to 8 columns-21 rows; however, it can be freely set. Each tile in the FPGA has a power switch (PS) and a controller to turn ON/OFF the power, enabling each tile to be power-gated. FIG.21is a diagram to illustrate a tile configuration. The tile includes: a configurable logic block (CLB) having some logic elements (LEs); a connecting block (CB) to interface the CLB to some routing tracks; a switch block (SB) for signal routing; a configuration circuit (CFGC); and a controller. A logic element LE, for example, includes: a 6-input LUT circuit, a flip-flop (FF) circuit, and a multiplexer (MUX). Configuration data of the CLB, SB, and CB are written via the CFGC. Values in a truth table are written into an MTJ element in each LUT circuit to perform a predetermined logical computing. The FF circuit is constituted of a CMOS-FF unit and an MTJ element unit. When it operates as usual, data is read/written by using the CMOS-FF; immediately before turning off the power, a value of the CMOS-FF is written into the MTJ element unit; when the power turns on again, the value stored in the MTJ element unit is written back into the CMOS-FF. The CB connects any input/output pin of the CLB with any routing track on the basis of the configuration data. The SB connects each routing track with any neighbor tile on the basis of the configuration data. A routing switch, which is a basic component of the above-described CB and SB, is a circuit to control turning ON/OFF of a path transistor on the basis of memory data. The memory data are stored in an MTJ-based latch with an area efficiency. The path transistor is implemented using an NMOS switch. The controller is used to perform power-gating at block level. Each function block is optimally turned off. The switch block (SB) and the connection block (CB) are both configured to include a basic component referred to as a routing switch, which includes a nonvolatile storage area.FIG.22is an example to illustrate a circuit including a circuit with a routing switch. An output (Q) from a nonvolatile memory element is used to turn ON/OFF an NMOS path switch. The nonvolatile memory element includes: two inverters, two local write-control transistors, a sense amplifier using two MTJ devices. The routing information is complementally programmed. The sense amplifier reads a stored state M during the power on period without generating a steady DC current path, and is used to keep it as Q. Once the configuration data are programmed, no additional control transistor is required because the configuration data are unchanged. Note that the tile includes a decoder and a driver, which embodies a reconfigurable computing module after the fabrication. 
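A behavioural sketch of such a routing switch is given below: a complementary MTJ pair restored by a sense amplifier supplies one configuration bit Q, and Q gates an NMOS pass transistor between routing tracks. The method names are illustrative and the model is logical only.

```python
# Behavioural sketch of the routing switch described above: a nonvolatile
# latch (two complementary MTJs read by a sense amplifier) holds one
# configuration bit Q, and Q turns an NMOS pass switch on or off.

class RoutingSwitch:
    def __init__(self):
        self.mtj_pair = (0, 1)   # complementary stored state (M, M')
        self.q = 0               # latch output after restore

    def program(self, connect: bool):
        # Configuration write: the routing bit is stored complementarily.
        self.mtj_pair = (1, 0) if connect else (0, 1)

    def power_up(self):
        # Sense amplifier restores Q from the MTJ pair; no reload from outside.
        self.q = self.mtj_pair[0]

    def propagate(self, track_in):
        # NMOS pass switch: the signal crosses only when Q = 1.
        return track_in if self.q == 1 else None

sw = RoutingSwitch()
sw.program(connect=True)
sw.power_up()                    # configuration survives power gating
assert sw.propagate(1) == 1 and sw.propagate(0) == 0
```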
The configurable logic block (CLB) is configured to include a basic component referred to as a logic element, which is constituted of a nonvolatile Lookup Table circuit (nonvolatile LUT circuit) and a nonvolatile flip-flop (nonvolatile FF), both having a nonvolatile memory function.FIG.23is a block diagram to illustrate an example of a Lookup Table circuit. FIG.23is a block diagram to illustrate a 6-input LUT circuit. The 6-input LUT circuit is constituted of five components: a sense amplifier, a 64-to-1 NMOS selector, an MTJ configuration array, an NMOS reference tree, and a programmable reference resistor. A truth table for an arbitrary 6-input logical function is stored in series connected MTJ devices with 64 pairs in the MTJ configuration array, such as (R0, R64), (R1, R65), (R63, R127). The writing operation to store a logical function information into the MTJ devices is performed by activating a word line (WL) and a bit line (BL). It is almost same as a writing operation in a conventional magnetic RAM (MRAM). BL0and BL2are shared between the MTJ configuration array and a programmable calibration resistor. A writing access transistor Mwc is shared between the 64 MTJ pairs in the MTJ configuration array. The logical operation of the LUT circuit is completely different from the reading operation of the MRAM because neither the BL nor the WL are used in the operation. When an EN is set to high and both an NMOS selector and an NMOS reference tree are activated by complementary logic inputs X, a current IFand IREFrespectively pass through a pair of MTJ corresponding in the MTJ configuration array and the programmable calibration resistor. When a difference between IFand IREFis sensed, a complemental full swing outputs (Z, Z′) are generated by the sense amplifier. In order to ensure a sufficient sensing margin, series/parallel connected MTJ devices in the MTJ configuration array and the programmable calibration resistor are configured as follows. First, in the MTJ configuration array, when the stored data Y are 0, it is configured to (RAP, RAP); and when the stored data Y are 1, it is configured to (RP, RP). When a resistance value of the MTJ device follows Gaussian distribution N (R, σR2) (where R is an average value and σRis a standard deviation), the total resistance value of series connected MTJ devices follows N (2R, 2σR2). That means the resistance distribution can be narrowed to avoid an overlap of (RP, RP) state and (RAP, RAP) state. Next, in the programmable calibration resistor, the total resistance is adjusted to insert IREFin the middle of I (RP, RP) and I (RAP, RAP). As a MTJ device has two different resistance values, by using four MTJ devices (Rr0, Rr1, Rr2, Rr3), 16 different reference resistance values can be obtained. The total resistance value can be adjusted following the fluctuation of the IFcurrent level due to process variation. Note that RPrepresents a low resistance and RAPrepresents a high resistance in the MTJ device. FIG.24is an example to illustrate a nonvolatile flip-flop circuit. A nonvolatile flip-flop circuit (nonvolatile FF circuit) include: an NMOS-based differential-pair circuit (DPC), a cross coupled CMOS inverters, two MTJ devices, and an MTJ writing circuit. In a normal operation, complementary inputs (D, D′) from the NV-LUT circuit are stored in the cross coupled CMOS inverters. When WCKB is activated at low level, they are stored in the MTJ devices (M, M′) in a master latch. Behaviors of the master latch are as follows. 
FIG.25Ais a diagram to illustrate a THROUGH phase (CLK=1 and CLK′=0). As M1 and M4 are turned on, a load capacitance Cq′ is discharged to GND and M6 is turned on. As a result, a load capacitance Cq is charged, voltage at an output node q becomes VDD, and the output node q′ becomes 0 V. FIG.25Bis a diagram to illustrate a HOLD phase (CLK=0 and CLK′=1). As M3 is turned on, voltages at the output node (q, q′) are held in a cross-coupled CMOS inverters. At the same time, M1 and M2 are turned off so that the DPC does not operate. As a result, there is no direct current path from VDD to GND. FIG.25Cis a diagram to illustrate a STORE phase. When inputs (D, D′) are (1, 0) and WCKB is activated at low level, M10 and M13 are turned on by NOR gates and a writing current Iw is applied to MTJ devices. FIG.25Dis a diagram to illustrate a RESTORE phase. When RESB is activated at low level, M9 is turned on and voltages at the output node q and q′ are balanced. As a result, the clamped voltage is applied to each MTJ device. Then, sensing currents IMand IM′ are respectively penetrated through M and M′. When RESB is activated at high level, M9 is turned off and a difference between IMand IM′ is amplified by the cross-coupled CMOS inverters. In the nonvolatile FF circuit shown inFIG.24, data stored in the FF constituted of CMOS inverters, which are cross-coupled immediately before turning off the power, are written into MTJ elements, and the data are read again as the stored data from the MTJ elements by the CMOS inverters of FF, which are cross-coupled after turning on the power. Thus, no data back up/reloading via an external nonvolatile memory is required, enabling a prompt turning ON/OFF the power transition. Preferably, a DSP (Digital Signal Processor) is incorporated. Using a DSP enables even a relatively large volume computing. The DSP is also provided with a power switch (PS) and a controller to turn ON/OFF the power, enabling each tile to be power-gated. Similarly in the tile, a switch block and a connection block in the DSP are configured to include a basic component, referred to as a routing switch, which includes a nonvolatile storage area. As described above, each basic component in the NV-FPGA includes a nonvolatile memory, which stores the configuration data. In addition, the nonvolatile memory also stores a memory state of the flip-flop. Therefore, it is unnecessary to back up data in an external nonvolatile memory immediately before turning off the power or write back the data after turning on the power again, enabling easy turning ON/OFF the power. By writing a certain computing into a nonvolatile FPGA in advance and turning on the power as needed basis, computing can be immediately started and the CPU processing can be accelerated. In addition, turning off the power during the non-use period can avoid wasteful power consumption. In the circuit configuration of a DSP core inFIG.26, SEL [0] and SEL [1] represent control signals to select a function; A, B, and C represent an input; and OUT represents an output. It operates as a circuit to perform the following computing: when (SEL [0], SEL [1])=(0, 0), OUT=A×B; when (SEL [0], SEL [1])=(0, 1), OUT=A×B+C; when (SEL [0], SEL [1])=(1, 0), OUT=A+B; and (SEL [0], SEL [1])=(1, 1) is not used. Here, any circuit configuration can be used and other configurations are possible. 
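The select-function table of this DSP core can be restated as a small reference function (a functional sketch only; the hardware is a multiplier/adder datapath, not software):

```python
# The select-function table of the DSP core given above.

def dsp_core(sel0: int, sel1: int, a: int, b: int, c: int):
    if (sel0, sel1) == (0, 0):
        return a * b          # OUT = A x B
    if (sel0, sel1) == (0, 1):
        return a * b + c      # OUT = A x B + C (multiply-accumulate)
    if (sel0, sel1) == (1, 0):
        return a + b          # OUT = A + B
    raise ValueError("(SEL[0], SEL[1]) = (1, 1) is not used")

assert dsp_core(0, 0, 3, 4, 5) == 12
assert dsp_core(0, 1, 3, 4, 5) == 17
assert dsp_core(1, 0, 3, 4, 5) == 7
```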
Thus, NV-FPGA 4 preferably has one or more tiles to perform a part of the operations on CPU 3 and a DSP (Digital Signal Processor) to perform a part of the operations on CPU 3 faster than the tile, because they can satisfy both low power consumption and high performance, as shown in FIG. 3B.

Implementation Examples

Next, implementation examples will be explained. TABLE 2 provides specifications for chips actually fabricated.

TABLE 2
MOS Tech. Node             40-nm LVT, SVT, HVT
MTJ Tech. Node             39-nm perpendicular (electrically determined size)
Supply Voltage             1.1-1.3 V (Core), 1.8/3.3 V (Peripherals)
MRAM Capacity              64 kB (4 kB sub-array × 16)
MRAM Sub-Array Structure   2T-2MTJ cell, 256 cols. × 64 rows × 2
FPGA Capacity              1,176 six-input LUTs, 7 DSPs
Transistor Count           4.8M
MTJ Count                  1.5M

FIG. 27 is an image of a fabricated chip, including an STT-MRAM, an NV-CPU and its peripheral circuits, and an NV-FPGA. FIG. 28 is a diagram to illustrate a measurement waveform. A program counter operates in response to a 200 MHz clock (CLK), data are transmitted from NV-CPU 3 to NV-FPGA 4 via MRAM 2 in response to an enable signal from NV-CPU 3 to NV-FPGA 4, and data are transmitted from NV-FPGA 4 to NV-CPU 3 via MRAM 2 in response to an enable signal from NV-FPGA 4 to NV-CPU 3. The enable signals are control signals between NV-CPU 3 and NV-FPGA 4, and data flow from NV-CPU 3 to NV-FPGA 4 via MRAM 2 or from NV-FPGA 4 to NV-CPU 3 via MRAM 2. In the present example, data exchange at 200 MHz can be achieved between NV-CPU 3 and NV-FPGA 4. Here, a specific address region in MRAM 2 is reserved as a region to store data transmitted between NV-CPU 3 and NV-FPGA 4. Data to be input from NV-CPU 3 to NV-FPGA 4 are written into the region, a signal indicating completion of writing and completion of preparation for starting a calculation is passed from NV-CPU 3 to NV-FPGA 4, and NV-FPGA 4 starts computing using the data written into the above region. After computing, the results calculated by NV-FPGA 4 are passed to NV-CPU 3 in a similar way. As the address to store data transmitted between NV-CPU 3 and NV-FPGA 4 is predetermined, only a signal indicating completion of writing and completion of preparation for starting a calculation is transmitted between them, and it is unnecessary to pass the memory address storing the data related to the process. FIG. 29 is a shmoo plot. The vertical axis represents the core voltage (V) of the NV-CPU, the NV-FPGA, and the STT-MRAM, and the horizontal axis represents the operating frequency (MHz). According to the shmoo plot, the operational combinations of frequency and voltage have been checked in the range of no less than 100 MHz and no more than 204 MHz at a 2 MHz interval, and in the core voltage range from 1.05 V to 1.30 V at a 0.1 V interval. Operations in the white area in FIG. 29 have been validated. Operations at 100 MHz can be validated in the range of no less than 1.07 V and no more than 1.3 V at a 0.1 V interval. Operations at 202 MHz can be validated at 1.3 V. Since 100 MHz operations at a 1.1 V voltage and 200 MHz operations at a 1.3 V voltage have been validated, operations with a voltage above an approximate line or curve through these two points applied to each core can be ensured. The approximate line can be, for example, 2×10⁻³·f − V + 0.9 = 0 (f: frequency (MHz), V: voltage (V)). FIG. 30 is a graph to illustrate a relation between intermittent operation intervals and average power. It shows the results without power-gating (without PG), with power-gating (with PG), and with power-gating and also acceleration processing by FPGA (with PG & FPGA-ACC).
The horizontal axis represents the intermittent operation interval, and the average power at each intermittent operation interval of 10 ms, 20 ms, 30 ms, 40 ms, 50 ms, 60 ms, 70 ms, 80 ms, 90 ms, and 100 ms has been obtained. The data have been assumed to be processed by a Laplacian filter. Here, the active state and the inactive state in the NV-CPU and the NV-FPGA are repeated, and the time interval between the points of starting an operation and the next operation is referred to as the "intermittent operation interval". In the case without power-gating, the average power consumption is kept high, at approximately 1,000 μW, regardless of the intermittent operation interval. On the other hand, in the case with power-gating, the longer the intermittent operation interval, the less the average power consumption. In addition, the power-gating significantly reduces the power consumption. Further, in the case with power-gating and also acceleration processing by FPGA, the longer the intermittent operation interval, the less the average power consumption; furthermore, at the same intermittent operation interval, the FPGA reduces the power consumption far more than the case without the FPGA. When the intermittent operation interval is 50 msec, the average power consumption with power-gating is 100 μW, and in the case also with the FPGA it is 47.14 μW, which achieves a reduction in power consumption of 54% compared with the case without power-gating. The results indicated in FIG. 30 revealed that a microcontroller unit as a device according to the present example can be set with an intermittent operation interval of no more than 100 ms when it was fabricated. In addition, another viewpoint revealed that a microcontroller unit as a device according to the present example can be used with no more than 100 μW average power. More specifically, FIG. 30 indicates that a preferable intermittent operation interval is no more than 100 ms because intermittent operation intervals of 10 ms, 20 ms, 30 ms, 40 ms, 50 ms, 60 ms, 70 ms, 80 ms, 90 ms, and 100 ms can suppress the average power to the predetermined level. Thus, any suitable range within these ranges can be used. The intermittent operation interval is preferably no less than 10 ms and no more than 100 ms, more preferably no less than 10 ms and no more than 60 ms, still more preferably no less than 10 ms and no more than 50 ms. In the case where only power-gating is introduced, in consideration that the average power is no more than 100 μW, the preferable range follows the case with power-gating and FPGA. FIG. 31 is a graph to illustrate the power obtained by energy harvesting per energy source. From this graph, and judging from the power obtained from light such as solar rays, heat, vibration such as mechanical vibration, natural vibration, and artificial vibration, and high frequency, 100 (μW/cm² or μW/cm³) may be acceptable as a standard for the MCU used for an IoT sensor node driven by the power obtained by energy harvesting. Thus, FIG. 30 reveals that the intermittent operation interval with power-gating and also processing by FPGA using the nonvolatile CPU, the MRAM, and the nonvolatile FPGA can be no less than approximately 20 msec at 100 μW average power consumption. The upper limit of the intermittent operation interval can be freely set.
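As a rough consistency check of the "approximately 20 ms" bound, one can assume that with power gating the average power is dominated by the energy of one active burst divided by the intermittent operation interval; the burst energy can then be estimated from the 47.14 μW at 50 ms point and compared against a 100 μW harvesting budget. This is a simplification, not the analysis used for FIG. 30.

```python
# Rough consistency check of the 20 ms figure above. With power gating, the
# average power is approximated as (energy of one active burst) / (interval),
# with standby power assumed negligible.

burst_energy_uj = 47.14e-6 * 50e-3 * 1e6       # ~2.36 uJ per Laplacian-filter burst
min_interval_ms = burst_energy_uj * 1e-6 / 100e-6 * 1e3

print(f"estimated burst energy: {burst_energy_uj:.2f} uJ")
print(f"minimum interval for a 100 uW budget: {min_interval_ms:.1f} ms")
# Roughly 24 ms, consistent with the "approximately 20 ms" lower bound above.
```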
In addition, FIG. 30 reveals that the power-gating and also processing by FPGA achieves an average power consumption of no more than 100 μW and that a device with an intermittent operation interval of no less than 20 ms can be achieved, thereby providing a device for IoT. The results indicate that a device fabricated based on the present invention, in which the MRAM, the NV-CPU, and the NV-FPGA are configured with a nonvolatile memory for inactive units using an MTJ, can cut wasteful power consumption by using the power-gating technique, in which it is unnecessary to back up the data stored in a memory cell in the MRAM, the NV-CPU, and the NV-FPGA, and a power controller stops power supply to each module in the MRAM, the NV-CPU, and the NV-FPGA, or to inactive units. In addition, a reconfigurable computing module in the FPGA implements various signal processing at high speed. Further, an access controller enables an effective data transfer between the NV-CPU and the MRAM, which allows the whole system to operate at high speed. Thus, it has been found that a microcomputer as a device with low power and high performance can be provided. FIG. 32 is a graph to illustrate the power required for each processing by a Laplacian filter, a DCT (Discrete Cosine Transform), an FIR (Finite Impulse Response) filter, and an FFT (Fast Fourier Transform). In every case without processing by the FPGA, the MRAM as a memory consumes significant power; however, with processing by the FPGA, the power consumption in the MRAM can be significantly reduced. The reduced power is greater than the power consumed by the FPGA, thus achieving a great effect. TABLE 3 represents the number of tiles, DSPs, LUTs, and FFs used, the maximum operating frequency, and the power at 200 MHz in the processing by a Laplacian filter, a DCT (Discrete Cosine Transform), an FIR (Finite Impulse Response) filter, and an FFT (Fast Fourier Transform).

TABLE 3
                        Device utilization               Max frequency   Power @ 200 MHz operation
Configured application  Tiles   DSPs   LUTs   FFs        [MHz]           [mW]
Laplacian filter        63      1      501    325        228             3.21
DCT                     71      2      566    297        253             3.50
FIR filter              94      1      752    356        205             4.57
FFT                     38      2      302    199        236             1.94

The maximum operating frequency exceeds 200 MHz regardless of the kind of operation; in the DCT, it exceeds 250 MHz. The power at 200 MHz decreases in the order of the FIR filter, the DCT, the Laplacian filter, and the FFT. As for the kinds of function used for the operation, the number of times used increases in the order of the DSPs, the tiles, the FFs, and the LUTs. The present example will be compared with other conventional embodiments. FIG. 33 is a table comparing the present example with conventional examples (Non Patent Literatures 1 to 5). FIG. 34 is a graph to illustrate relations between the operating frequency and average power assuming it is used for an IoT application according to FIG. 33. The example with FPGA is according to the technique in Non Patent Literature 3. The operating frequency in Non Patent Literature 3 is 25 MHz while that in the present example is 200 MHz, which enables a high data-processing performance. The average power in the present example is 47.14 μW while those in the conventional examples are one or more orders of magnitude larger. Thus, a device according to the present example and embodiments of the present invention can, for the first time, provide a device with low power and high performance and a sensor node using the same. Although the present description uses the abbreviations "NV-CPU" and "NV-FPGA", they can be interpreted as "nonvolatile CPU" and "nonvolatile FPGA".
In addition, the term “memory cell”, as used for an NV-CPU, an NV-FPGA, and an MRAM, can also be referred to as a storage area. The NV-FPGA is the FPGA-ACC shown in FIG. 30, that is, an FPGA-based accelerator. Needless to say, if volatile units were used instead, the data stored in the corresponding areas of the CPU and the FPGA (FPGA-ACC) would have to be backed up and rewritten at power-gating. Concepts of embodiments of the present invention are as follows. First, a device includes: an MRAM configured to include multiple memory cells, separated into multiple regions, including selection transistors and MTJs; a nonvolatile CPU configured to include a nonvolatile memory; a nonvolatile FPGA-ACC configured to include a nonvolatile memory and to execute a part of the operations of the nonvolatile CPU; and a power-gating controller that controls power supply to each memory cell in the MRAM, the nonvolatile CPU, and the nonvolatile FPGA-ACC. This allows a configuration, as an FPGA-based accelerator, in which the nonvolatile FPGA-ACC and the nonvolatile CPU perform computing separately and in which the data related to the computing of the nonvolatile FPGA-ACC and the nonvolatile CPU are stored in the MRAM. First, since both the FPGA and the CPU are nonvolatile, it is unnecessary to back up or restore the data and the configuration data in the FPGA whenever power-gating is performed by the power-gating controller, and it is likewise unnecessary to back up or restore the data in the CPU (see FIGS. 2A to 2C). Second, the CPU and the FPGA-based accelerator (FPGA-ACC) process sequential processing (processing in order according to the sequence) in parallel between operation intervals. The parallel processing can significantly reduce the processing time, which allows for a longer power-gating (PG) time; thus, both static and dynamic power become unnecessary during the power-gating time gained by the parallel processing, that is, by the processing time saved by the FPGA-ACC. This saved power is far greater than the additional power consumed by the computing of the FPGA-ACC (see FIGS. 3A and 3B). Thus, providing a nonvolatile FPGA-ACC to a nonvolatile microcomputer configured with a nonvolatile CPU and an MRAM can realize high performance and low power consumption. A shorter processing time also reduces the operating time of the MRAM, which consumes most of the power, thereby achieving even lower power consumption. By making the nonvolatile CPU and the nonvolatile FPGA-ACC connectable to each other, sequential processing can be processed in parallel by the nonvolatile CPU and the nonvolatile FPGA-ACC (see FIG. 28). The FPGA, in particular, can undertake computing owing to its reconfigurability and is suitable for parallel processing with the CPU, which allows flexible computing in a sensor node. Second, the above-described device further includes an access controller that controls accesses to the MRAM by reading data in advance and backing up the data that are to be read from the MRAM. Such an access controller receives a data-reading instruction from the nonvolatile CPU and determines whether or not the data have already been read from the MRAM in advance; if the data have already been read, the access controller transmits the stored data to the nonvolatile CPU. This enables multiplexed access from the CPU to the MRAM and allows the read data to be temporarily saved in the accelerator.
When accesses to the same memory address are repeated, the temporarily-saved data are reused without being read again from the MRAM (see FIGS. 10A to 10F). Such a configuration of an accelerator can be embodied as shown in FIG. 9, for example. The access controller includes:
an address-storing register that receives an input of an MRAM address that is a reading destination, the address-storing register storing the address;
a multiplexer that outputs multiple reading destinations of the MRAM stored in the address-storing register to the MRAM for reading;
multiple data-storing registers that store data read from the MRAM in response to an input from the multiplexer; and
a comparator that receives a reading instruction together with a specified reading destination and compares an address related to the specified reading destination with a reading address stored in the address-storing register, wherein
the access controller receives a reading instruction together with a specified reading destination and outputs data already read and stored in any one of the data-storing registers in response to the reading instruction when the comparator determines that the data have been read from the MRAM in advance. In addition, the access controller further includes a prefetch address generator connected to the multiplexer, wherein the prefetch address generator generates an address including a reading destination address stored in the address-storing register. Third, a data transfer method between a CPU and an MRAM via an access controller includes:
the access controller receiving a data-reading instruction from the CPU together with a reading address;
the access controller reading data of multiple addresses including the reading address from the MRAM in advance;
the access controller receiving a data-reading instruction from the CPU together with a next reading address; and
the access controller determining whether or not the data have already been read and responding to the reading instruction, if the reading instruction is for already-read data, using the data read in advance without performing a data-reading from the MRAM. That is, the access controller can simultaneously perform a prefetch data request, data storage in the access controller, and a fetch operation, and can also perform fetch operations sequentially (see FIGS. 10A to 10F); thus, an instruction fetch can be executed at high speed without interruption. This is useful for sequential execution, for example in a sensor node application, and functions effectively owing to the regular access pattern to the memories. It can improve the system performance without changing the performance requirements for the MRAM, with only the power overhead of the access controller, which allows for a faster operating frequency of the CPU. The access controller, which can be embedded in a chip as an acceleration unit with a small area overhead, can implement a faster operating frequency of the CPU and guarantees writing and reading operations over a wide temperature range.
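The prefetch-and-compare behavior of the access controller can be summarized in a short behavioral sketch. This is a minimal illustration under assumed names (AccessController, prefetch_width) with a dictionary standing in for the MRAM; it is not the disclosed hardware.

    # Behavioral sketch of the access controller described above (hypothetical names).
    # It prefetches a small block of consecutive MRAM addresses and serves repeated
    # or nearby reads from its data-storing registers instead of re-reading the MRAM.

    class AccessController:
        def __init__(self, mram, prefetch_width=4):
            self.mram = mram                      # dict-like backing store standing in for the MRAM
            self.prefetch_width = prefetch_width  # number of data-storing registers
            self.base_addr = None                 # address-storing register
            self.registers = {}                   # data-storing registers: addr -> data

        def _prefetch(self, addr):
            """Read multiple addresses including `addr` from the MRAM in advance."""
            self.base_addr = addr
            self.registers = {a: self.mram[a]
                              for a in range(addr, addr + self.prefetch_width)
                              if a in self.mram}

        def read(self, addr):
            """Comparator: serve from the registers if already read, otherwise prefetch."""
            if self.base_addr is not None and addr in self.registers:
                return self.registers[addr]       # no MRAM access needed
            self._prefetch(addr)
            return self.registers[addr]

    mram = {a: a * 10 for a in range(16)}         # toy MRAM contents
    ctrl = AccessController(mram)
    print([ctrl.read(a) for a in (0, 1, 2, 1, 3, 4)])  # only two prefetches actually hit the MRAM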
Fourth, a processing method in a microcontroller including a nonvolatile CPU, an MRAM, and a nonvolatile FPGA-ACC as a reconfigurable computing module, wherein the MRAM is configured with a region to store data transmitted between the nonvolatile CPU and the nonvolatile FPGA-ACC, comprises:
the MRAM writing data into the region, the data being input from the nonvolatile CPU to the nonvolatile FPGA-ACC;
the nonvolatile CPU passing, to the nonvolatile FPGA-ACC, a signal indicating completion of preparation for the writing and the start of a calculation;
the nonvolatile FPGA-ACC starting an operation by using the data written in the region; and
the nonvolatile FPGA-ACC passing an operation result it has computed back to the nonvolatile CPU through the region. This method enables efficient processing in a microcontroller because it is unnecessary to pass information about the memory addresses storing the data required for processing among the nonvolatile CPU, the nonvolatile FPGA-ACC, and the MRAM. In the above-described processing method, in particular, it is preferable that power-gating control is performed for the nonvolatile CPU and the nonvolatile FPGA-ACC. The power-gating control supplies power only during computing, that is, it does not supply power during inactive intervals. Using the nonvolatile CPU and the nonvolatile FPGA-ACC removes the need for backing up or writing data when the power is turned ON or OFF. Thus, power-gating can reduce the average power consumption, and the longer the intermittent operation interval, the less power is consumed (see the result “with PG & FPGA-ACC” in FIG. 30). In the above-described processing method, in particular, it is preferable that the computing by the nonvolatile FPGA-ACC relates to any one of the processes of a Laplacian filter, a DCT (Discrete Cosine Transform), an FIR (Finite Impulse Response) filter, and an FFT (Fast Fourier Transform). In the case of a processing method in a microcontroller used as an IoT sensor node, in particular, as explained with reference to FIGS. 4 and 5, it is preferable to suppress the data amount in the terminal-end sensor node 110, for example by obtaining a feature value or processing an image, and to transmit the data to the higher-level cloud system 140 in order to avoid processing of the raw sensor data in the cloud system 140. Thus, the method is suitable for any one of these processes. In the above-described processing method, in particular, it is preferable that the nonvolatile CPU and the nonvolatile FPGA-based accelerator perform the sequential processing in parallel. The sequential processing performed in parallel by the nonvolatile CPU and the nonvolatile FPGA-ACC realizes high computing performance and low power consumption, and is thus preferably applied to an IoT sensor node.

REFERENCE SIGNS LIST
1: device
2, 11: STT-MRAM (MRAM)
3, 12: NV-CPU
4, 13: NV-FPGA
5: power-gating controller
6, 14: access controller
7: bus
100: system
110: sensor node
120: gateway (GW)
130: communications network
140: cloud system
60,243
11862218
DETAILED DESCRIPTION The present disclosure provides many different embodiments, or examples, for implementing different features of this disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein may likewise be interpreted accordingly. A magnetic tunnel junction (MTJ) includes first and second ferromagnetic films separated by a tunnel barrier layer. One of the ferromagnetic films (often referred to as a “reference layer”) has a fixed magnetization direction, while the other ferromagnetic film (often referred to as a “free layer”) has a variable magnetization direction. For MTJs with positive tunneling magnetoresistance (TMR), if the magnetization directions of the reference layer and free layer are in a parallel orientation, electrons will more likely tunnel through the tunnel barrier layer, such that the MTJ is in a low-resistance state. Conversely, if the magnetization directions of the reference layer and free layer are in an anti-parallel orientation, electrons will less likely tunnel through the tunnel barrier layer, such that the MTJ is in a high-resistance state. Consequently, the MTJ can be switched between two states of electrical resistance, a first state with a low resistance (RP: magnetization directions of the reference layer and the free layer are parallel) and a second state with a high resistance (RAP: magnetization directions of the reference layer and the free layer are anti-parallel). It is noted that MTJs can also have a negative TMR, e.g., lower resistance for anti-parallel orientation and higher resistance for parallel orientation, and though the following description is written in the context of positive TMR based MTJs, it will be appreciated the present disclosure is also applicable to MTJs with a negative TMR. Because of their binary nature, MTJs are used in memory cells to store digital data, with the low resistance state RPcorresponding to a first data state (e.g., logical “0”), and the high-resistance state RAPcorresponding to a second data state (e.g., logical “1”). 
To read data from such an MTJ memory cell, the MTJ's resistance RMTJ (which can vary between RP and RAP, depending on the data state that is stored) can be compared to a reference cell's resistance, RRef (where RRef, for example, is designed to be in between RP and RAP, for instance an average of the two). In some techniques, a given read voltage VRead is applied to the MTJ memory cell and the reference cell. This read voltage results in a read current flowing through the MTJ (IMTJ) and a reference current flowing through the reference cell (IRef). If the MTJ is in a parallel state, the read current IMTJ has a first value (IMTJ-P) greater than IRef; while if the MTJ is in an anti-parallel state, the read current IMTJ has a second value (IMTJ-AP) that is less than IRef. Thus, during a read operation, if IMTJ is greater than IRef, then a first digital value (e.g., “0”) is read from the MTJ memory cell. On the other hand, if IMTJ is less than IRef for the read operation, then a second digital value (e.g., “1”) is read from the MTJ memory cell. However, a read operation may sometimes also flip or significantly change the stored MTJ state, and the corresponding probability is called the Read Disturb Rate (RDR). The RDR, in turn, depends on the magnitude of the current passed through the MTJ (IMTJ) and the duration for which it is passed. Although a large read current would provide good signal separation between RP and RAP, a large read current may inadvertently overwrite the free layer in the MTJ. Also, the write current may need to be increased as a result of the large read current. A large write current would introduce more energy dissipation in the write operation, and may increase the chances of MTJ breakdown. Conversely, though a small read current would be less likely to overwrite the free layer, a small read current may provide poor signal separation between RP and RAP. As the size of the MTJ is scaled down, the resistance of the MTJ increases and exacerbates these read operation issues. The magnitude of the current passed through the MTJ (IMTJ) depends on the effective TMR of the MTJ memory cell. The effective TMR is affected not only by the resistance of the MTJ but also by the resistance of the write path, the access transistors, the read circuit, etc. In applications, the effective TMR can be much lower (for instance just one-third) than the actual TMR of the MTJ. In addition, as the sizes of the MTJ memory cell and the reference resistor are scaled down, the resistances of the MTJ memory cell RMTJ and the reference resistor RRef are scaled up accordingly for successive technology nodes. The difference in current between the MTJ memory cell and the reference cell, i.e., ΔI (ΔIP or ΔIAP, respectively, for an MTJ in the P-state and the AP-state), scales down. Hence, as technology nodes advance, the detected signal degrades significantly. In view of the above, the present disclosure provides reading circuits and techniques for reading MTJ memory cells that enhance the ratio of the read currents between the MTJ's P-state and AP-state beyond the ratio enabled by the effective TMR of an MTJ array, TMRarray, thereby improving the read disturb rate (RDR) while maintaining the pre-designed low write current. One or more non-linear resistors (NLRs) are added to the read system.
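As a quick illustration of the read comparison just described, the following sketch models the cell as two resistance states and decides the stored bit by comparing the cell current with the reference current; the resistance and voltage values are hypothetical placeholders rather than values from the disclosure.

    # Toy model of the MTJ read decision described above. Resistances and the read
    # voltage are hypothetical placeholders chosen only to show the comparison.

    R_P, R_AP = 5e3, 10e3          # parallel / anti-parallel resistance (ohms)
    R_REF = (R_P + R_AP) / 2       # reference designed between R_P and R_AP
    V_READ = 0.1                   # read voltage (volts)

    def read_bit(r_mtj, r_series=0.0):
        """Return the digital value sensed by comparing I_MTJ with I_Ref.

        r_series stands in for everything else in the path (access transistor,
        wiring, and optionally a series NLR), which lowers the effective TMR.
        """
        i_mtj = V_READ / (r_mtj + r_series)
        i_ref = V_READ / (R_REF + r_series)
        return 0 if i_mtj > i_ref else 1   # P-state -> larger current -> "0"

    print(read_bit(R_P))    # 0 (parallel, low resistance)
    print(read_bit(R_AP))   # 1 (anti-parallel, high resistance)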
In some embodiments, a first non-linear resistor (NLR) is connected in series with an MTJ memory cell to enhance the effective TMR by providing a greater resistance when the MTJ memory cell is in a high-resistance state (e.g., the AP-state) and a smaller resistance when the MTJ memory cell is in a low-resistance state (e.g., the P-state). The effective TMR can be designed to be even greater than the TMR of the MTJ itself. In some further embodiments, a second non-linear resistor (NLR) may also be added in series with a reference resistor to further improve readability. In some embodiments, the non-linear resistor (NLR) can have a current-controlled negative resistance, i.e., an S-type negative resistance (NR). An example IV characteristic curve of an S-type negative resistor is shown in FIG. 10. The S-type negative resistor may be a component (e.g., a forwardly biased thyristor, SCR, diac, triac, etc.) or an equivalent sub-circuit. FIG. 1 illustrates some embodiments of a magnetic tunnel junction (MTJ) memory cell 100 that can be used with various read techniques as provided herein. The MTJ memory cell 100 includes a magnetic tunnel junction (MTJ) memory element 102 and an access transistor 104. A bit line (BL) is coupled to one end of the MTJ memory element 102, and a source line (SL) is coupled to an opposite end of the MTJ memory element through the access transistor 104. Thus, application of a suitable word-line (WL) voltage to a gate electrode of the access transistor 104 couples the MTJ memory element 102 between the BL and the SL, and allows a bias to be applied over the MTJ memory element 102 through the BL and the SL. Consequently, by providing suitable bias conditions, the MTJ memory element 102 can be switched between two states of electrical resistance, a first state with a low resistance (the P-state, in which the magnetization directions of the reference layer and the free layer are parallel) and a second state with a high resistance (the AP-state, in which the magnetization directions of the reference layer and free layer are anti-parallel), to store data. In some embodiments, the MTJ memory element 102 comprises the reference layer 106 and a free layer 108 disposed over the reference layer 106 and separated from the reference layer 106 by a barrier layer 110. The reference layer 106 is a ferromagnetic layer that has a magnetization direction that is “fixed”. As an example, the magnetization direction of the reference layer 106 can be “up”, i.e., perpendicular to the plane of the reference layer 106 and pointing upwardly along the z-axis. The barrier layer 110, which can manifest as a thin dielectric layer or a non-magnetic metal layer in some cases, separates the reference layer 106 from the free layer 108. The barrier layer 110 can be a tunnel barrier which is thin enough to allow quantum mechanical tunneling of current between the reference layer 106 and the free layer 108. In some embodiments, the barrier layer 110 can comprise an amorphous barrier, such as aluminum oxide (AlOx) or titanium oxide (TiOx), or a crystalline barrier, such as magnesium oxide (MgO) or a spinel (e.g., MgAl2O4). The free layer 108 is capable of changing its magnetization direction between one of two magnetization states, which correspond to the binary data states stored in the memory cell. For example, in a first state, the free layer 108 can have an “up” magnetization direction in which the magnetization of the free layer 108 is aligned in parallel with the magnetization direction of the reference layer 106, thereby providing the MTJ memory element 102 with a relatively low resistance.
In a second state, the free layer 108 can have a “down” magnetization direction which is aligned anti-parallel with the magnetization direction of the reference layer 106, thereby providing the MTJ memory element 102 with a relatively high resistance. The magnetic directions disclosed herein could also be “flipped” or in-plane (e.g., pointing in the x and/or y directions), rather than up-down, depending on the implementation. In some embodiments, the free layer 108 may comprise a magnetic metal, such as iron, nickel, cobalt, boron, and alloys thereof, for example a CoFeB alloy ferromagnetic free layer. Although this disclosure is described largely in terms of MTJs, it is also to be appreciated that it is applicable to spin valve memory elements, which may use a magnetically soft layer as the free layer 108, a magnetically hard layer as the reference layer 106, and a non-magnetic barrier separating the magnetically hard layer and the magnetically soft layer. The barrier layer 110 of a spin valve is typically a non-magnetic metal. Examples of non-magnetic metals include, but are not limited to: copper, gold, silver, aluminum, lead, tin, titanium, and zinc; and/or alloys such as brass and bronze. A synthetic anti-ferromagnetic (SyAF) layer 105 is disposed under the reference layer 106, or at the side of the reference layer 106 opposite to the free layer 108. The SyAF layer 105 is made of ferromagnetic materials having constrained or “fixed” magnetization directions. This “fixed” magnetization direction can be achieved in some cases by an initializing exposure to a high magnetic field after the entire chip is manufactured. As an example, the SyAF layer 105 may comprise a pair of pinning layers including a first pinning layer 114 and a second pinning layer 118. The first pinning layer 114 and the second pinning layer 118 may have opposite magnetization directions, one of which is aligned with the magnetization direction of the reference layer 106. Using the same example given above, the first pinning layer has the same “up” magnetization direction as the reference layer. The second pinning layer has an opposite “down” magnetization direction, anti-parallel with the magnetization direction of the reference layer 106. An interlayer spacer layer 116 is disposed between the first pinning layer 114 and the second pinning layer 118. The interlayer spacer layer 116 can be an anti-parallel coupling (APC) layer that causes an interlayer exchange coupling (IEC) between the first pinning layer 114 and the second pinning layer 118 such that the first pinning layer 114 and the second pinning layer 118 have anti-parallel magnetic directions and stabilize each other. As an example, the interlayer spacer layer 116 may comprise ruthenium (Ru) or iridium (Ir). The first pinning layer 114 may include cobalt layers and nickel layers stacked one above another, (Co/Ni)m. The first pinning layer 114 may also be a cobalt palladium stack (Co/Pd)m or a cobalt platinum stack (Co/Pt)m, where m can be a positive integer. The second pinning layer 118 may comprise a reverse of the composition of the first pinning layer 114 with the same or a different number of layers. For example, the second pinning layer 118 may include nickel layers and cobalt layers stacked one above another, (Ni/Co)n, or a palladium cobalt stack (Pd/Co)n, or a platinum cobalt stack (Pt/Co)n, where n can be a positive integer. A transition layer 112 may be disposed between the first pinning layer 114 and the reference layer 106.
The transition layer112is made of non-magnetic materials and is configured as a buffer layer, a lattice match layer, and/or a diffusion barrier. As an example, the transition layer112may comprise tantalum (Ta), tungsten (W), molybdenum (Mo), Hafnium (Hf), or CoFeW. FIG.2illustrates a memory device200that includes a number of MTJ memory cells100according to some embodiments of the present disclosure. Each MTJ memory cell100includes an MTJ memory element102and an access transistor104. The MTJ memory cells100are arranged in M columns (bits), and N rows (words), and are labeled CROW-COLUMN inFIG.2. Word-lines (WL) extend along respective rows and are coupled to gate electrodes of the access transistors104along the respective rows. Bit lines (BL) and source lines (SL) extend along respective columns, with the BLs being coupled to the free layers of the MTJ memory elements102, and the SLs being coupled to the reference layers of the MTJ memory elements102through the access transistors104. For example, in Row 1 of the memory device200, the cells C1-1through CM-1form an M-bit data word accessible by activation of word-line WL1. Thus, when WL1is activated, data states can be written to or read from the respective cells C1-1through CM-1through bit lines BL1through BLMand/or by source lines SL1through SLM. Each column also has a sense amplifier (S/A) that is used to detect a stored data state from an accessed cell of the column during a read operation. Thus, the data in the accessed cells is sensed using sense amp circuits202(S/A C1through S/A CM) associated with columns 1 through M, respectively. For example, when WL1is activated (other WLs are deactivated), the bit lines (BL1through BLM, respectively) develop respective biases corresponding to the respective data states stored in the accessed memory cells (C1-1through CM-1, respectively); and the sense amps (S/A C1through S/A CM, respectively) detect the data states from the bit lines (BL1through BLM, respectively). During a typical write operation to Row 1, a voltage VWLis applied to a word-line WL1, wherein the VWLis typically greater than or equal to a threshold voltage of the access transistors104, thereby turning on the access transistors within Row 1 and coupling the bit lines BL1-BLMto the MTJ memory elements102in the accessed cells (e.g., memory cells C1-1through C1-M). Suitable voltages are applied to the bit lines BL1-BLMand source lines SL1-SLM, where the voltage on each bit line is representative of a data value to be written to the memory cell attached to that bit line. While Row1 is accessed, the word-lines of the other rows (WL2-WLN) remain off, such that the MTJ memory elements of the other cells remain isolated and are not written to or read from. During a typical read operation of Row 1, voltage VWLis again applied to word-line WL1to turn on the access transistors104and couple the bit lines BL1through BLMto the MTJ memory elements of the accessed cells (C1-1through C1-M). The MTJ memory elements then discharge charge through the access transistors104to the bit lines BL1through BLM, based on their stored states, thereby causing the bit line voltages BL1-BLMto change. The amount by which the bit line voltages change depends upon the state of the MTJ memory elements102being accessed. 
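As a rough behavioral sketch of the row access just described (not the disclosed circuit; the bit-line levels, the reference level, and the direction of the bit-line change are assumptions), one word line is activated and every bit line of the row is compared against a reference:

    # Behavioral sketch of reading one row of the array described above.
    # Cell states, voltages, and the reference level are hypothetical placeholders.

    # stored_bits[row][col]: True = AP-state (high resistance), False = P-state
    stored_bits = [
        [False, True, True, False],   # row 1
        [True,  False, True, True],   # row 2
    ]

    V_BL_P, V_BL_AP, V_REF = 0.30, 0.10, 0.20   # bit-line levels after discharge (volts)

    def read_row(row_index):
        """Activate one word line and sense every bit line against the reference."""
        word = []
        for cell_is_ap in stored_bits[row_index]:
            v_bl = V_BL_AP if cell_is_ap else V_BL_P   # discharge depends on the MTJ state
            word.append(1 if v_bl < V_REF else 0)      # sense amp comparison per column
        return word

    print(read_row(0))   # [0, 1, 1, 0]
    print(read_row(1))   # [1, 0, 1, 1]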
To determine whether the state of the MTJ memory elements being accessed is a “1” or a “0”, one differential input terminal of each sense amp 202 is coupled to the bit line of the column (e.g., S/A C1 is coupled to bit line BL1) and the other differential sense amp input is coupled to a reference voltage (e.g., reference bit line REFBL1 in this example). Depending upon whether the cell bit line BL1 is high or low relative to the reference voltage on REFBL1, the sense amp returns a “1” or a “0”. It will be appreciated that current can flow in various directions depending on the implementation. In some embodiments, read current flows from the BL to the SL. However, a backward read can also occur in other embodiments, in which read current flows from the SL to the BL. Also, the entire MTJ structure can be fabricated upside down, in which case it is called a top-pinning MTJ. Hence, in the case of a top-pinning MTJ, the BL is nearer the reference layer 106, and the SL is nearer the free layer 108. FIG. 3 illustrates a block diagram for some embodiments of a reading circuit 300 that can be used in the memory device 200 of FIG. 2. For simplicity, a single MTJ memory cell 100 is shown in FIG. 3, though it will be appreciated that additional memory cells can be arranged in parallel with the illustrated MTJ memory cell 100 via a bit line BL and a source line SL consistent with FIG. 2. The reading circuit 300 comprises a read bias circuit 302. During a read operation, the read bias circuit 302 provides a reading voltage Vread for the MTJ memory cell 100 and a reference cell 100′ and accordingly outputs an output signal. A current mirror circuit may be used as a load of the read bias circuit. A sense amplifier 304 may be used to generate a digital output signal by processing the output signals of the read bias circuit 302. For example, the read bias circuit 302 may sense a read current IMTJ flowing through the MTJ memory cell 100 and a reference current IRef flowing through the reference cell and generate a sensing voltage V_mtj and a reference voltage V_ref to feed into the sense amplifier 304. A read enable circuit 306 can pull up a voltage level (e.g., a voltage level on the bit line BL) during the read operation, and a pull-down circuit 308 can pull down a voltage level (e.g., a voltage level on the source line SL) during the read operation. A first non-linear resistor (NLR) device 310 is coupled to the MTJ memory cell 100 in series and provides a transmission path for the read current IMTJ. The first NLR device 310 may be connected between the read bias circuit 302 and the read enable circuit 306. The first NLR device 310 is configured to provide a resistance that adjusts the current flowing through the MTJ memory cell 100. The resistance of the first NLR device 310 may decrease as the voltage applied on the first NLR device 310 increases. In some embodiments, the first NLR device 310 is an S-type negative resistance (NR) device such as a forwardly biased thyristor (e.g., a silicon controlled rectifier (SCR), diac, triac, etc.). In some further embodiments, a second NLR device 312 is also coupled to the reference cell 100′ in series and provides adjustment for the reference current IRef. The second NLR device 312 may be connected between the read bias circuit 302 and the read enable circuit 306 in parallel with the first NLR device 310. The second NLR device 312 provides an adjustment to the reference current IREF such that the reference current IREF falls within the range between the read currents of the P-state and the AP-state.
The second NLR device 312 may have the same or similar features as the first NLR device 310. As an example, for an S-type negative resistance (NR) such as a forwardly biased thyristor (e.g., SCR, diac, triac, etc.), a reverse-biased Zener diode, or an equivalent transistor circuit, an additional NLR may not be needed for the reference cell, since the separation between RAP+RNLR and RP+rNLR should be large. On the other hand, the second NLR device can be more beneficial for a forward-biased conventional diode (e.g., a pn-diode or Schottky diode) or an equivalent transistor circuit, since RNLR and rNLR could be quite close, and therefore an NLR should be added in the read path of the reference cell as well. FIG. 4A and FIG. 4B illustrate schematic views of data paths 400a and 400b of the memory array in more detail. The data path 400a or 400b corresponds to a single column of the memory array of FIG. 2, albeit along with some standard additional circuitry which was omitted from FIG. 2 for simplicity. For clarity, the data path 400a or 400b is illustrated with only a single MTJ memory cell 100, though it will be appreciated that additional memory cells can be arranged in parallel with the illustrated MTJ memory cell 100 via BL and SL consistent with FIG. 2. The data path 400a includes an MTJ current path 402 and a reference current path 404, which are arranged in parallel with one another between VDD and VSS. A read bias circuit 302 can be a differential amplifier. The read bias circuit 302 may include a current mirror circuit including transistors M3, M2 used as a load for the MTJ current path 402 and the reference current path 404. Transistors M4 and M6 can be driven by the same input voltage V4 from an equalizer. A read enable circuit 306 may include transistors M5, M7 that respectively pull up a voltage level for the MTJ current path 402 and the reference current path 404 during the read operation. A pull-down circuit 308 may include transistors M8, M10 that respectively pull down a voltage level for the MTJ current path 402 and the reference current path 404 during the read operation. The read enable circuit 306 and the pull-down circuit 308 cut off the read circuit when the read operation is not required. A sense amplifier 304 may include a differential amplifier having transistors M11-M15. M13 and M14 are driven by the different voltages V_mtj and V_ref. M12 and M11 serve as a current mirror load. The voltage outputs of M13 and M14 are sensed at the respective drain terminals. V01 is fed into, for instance, an inverter which acts as a simple sense amplifier, shapes the waveform, and ensures the correct polarity of the output in this implementation. The sense amplifier 304 is configured to detect a data state from the MTJ memory cell 100 by comparing a voltage provided by the MTJ memory cell 100 (V_mtj) with a reference voltage (V_ref) provided by a reference cell 100′. Based on these voltages (V_mtj, V_Ref), the sense amplifier 304 provides an output voltage (V_Out) that is in one of two states, representing the logical “1” or logical “0” that was stored in the accessed memory cell 100. The MTJ current path 402 includes a first current mirror transistor M3, a first pull-up read-enable transistor M7, the MTJ memory cell 100 (including an MTJ memory element MTJ and a first access transistor M1), and a first pull-down read-enable transistor M8. A bit line (BL) and a source line (SL) are coupled to opposite ends of the MTJ memory cell 100.
The BL is coupled to the MTJ memory element MTJ, and the SL is coupled to the first access transistor M1 and is separated from the MTJ memory element MTJ by the first access transistor M1. The reference current path 404 includes a second current mirror transistor M2; a second pull-up read-enable transistor M5; the reference cell 100′ (including a reference MTJ memory element Ref, which can be implemented as a resistor with a fixed resistance in some embodiments, and a second access transistor M9); and a second pull-down read-enable transistor M10. A reference bit line (BLRef) and a reference source line (SLRef), which have lengths and resistances that are substantially equal to those of the BL and SL, are coupled to opposite ends of the reference cell 100′. The BLRef is coupled to the reference MTJ memory element Ref, and the SLRef is coupled to the second access transistor M9 and is separated from the reference MTJ memory element Ref by the second access transistor M9. Control signals are provided to a word-line node WL and a read-enable node RE to facilitate read and write operations. The word-line node WL may be biased by a voltage source V2, and the read-enable node RE may be biased by a voltage source V3 during read and write operations. The word-line node WL is coupled to respective gates of the first access transistor M1 and the second access transistor M9. The read-enable node RE is coupled to respective gates of the pull-up transistors M7, M5 and the pull-down transistors M8, M10. The read-enable node RE is typically low (e.g., 0 volts) during write operations, and is typically high (VDD) during read operations. A first NLR device 310 is coupled in the MTJ current path 402. The first NLR device 310 may be connected in series between the first pull-up read-enable transistor M7 and the first current mirror transistor M3. The first NLR device 310 is configured to provide a resistance that adjusts the current flowing through the MTJ current path 402. The resistance of the first NLR device 310 may decrease as the voltage applied on the first NLR device 310 increases, and thus increase the effective tunnel magnetoresistance (TMR) of the MTJ memory cell. The TMR of an MTJ memory cell is defined as (RAP−RP)/(RPath+RP+RMOS)=(IP−IAP)/IAP, where RAP is the electrical resistance of the MTJ element in the anti-parallel state; RP is the resistance of the MTJ element in the parallel state; RPath is the resistance of the write path; RMOS is the resistance of the access transistor; IP is the current in the parallel state; and IAP is the current in the anti-parallel state. Consider an MTJ with positive tunneling magnetoresistance (TMR) as an example for illustration. If the magnetization directions of the reference layer and free layer are in a parallel orientation, the MTJ is in a low-resistance state (P-state). If the magnetization directions of the reference layer and free layer are in an anti-parallel orientation, the MTJ is in a high-resistance state (AP-state). The insertion of the first NLR device 310 increases IP and decreases IAP, and thus increases the TMR. The first NLR device 310 provides a first resistance (rnlr) when the low-resistance P-state is read and a second resistance (Rnlr) greater than the first resistance (rnlr) when the high-resistance AP-state is read. Thus, the difference between IP and IAP is increased. The effective TMR becomes: {(RAP−RP)+(Rnlr−rnlr)}/(RPath+RP+RMOS+rnlr). The insertion of the first NLR device 310 also provides more margin to design the reference cell 100′.
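To make the effect of these expressions concrete, the short calculation below evaluates the effective TMR with and without the series NLR using the formulas above; all resistance values are hypothetical placeholders.

    # Numerical check of the effective-TMR expressions above (hypothetical values).

    R_P, R_AP = 5e3, 10e3        # MTJ resistance, parallel / anti-parallel (ohms)
    R_PATH, R_MOS = 1e3, 2e3     # path and access-transistor resistance (ohms)
    r_nlr, R_nlr = 0.5e3, 6e3    # NLR resistance seen in the P-state / AP-state (ohms)

    tmr_no_nlr = (R_AP - R_P) / (R_PATH + R_P + R_MOS)
    tmr_with_nlr = ((R_AP - R_P) + (R_nlr - r_nlr)) / (R_PATH + R_P + R_MOS + r_nlr)

    print(f"effective TMR without NLR: {tmr_no_nlr:.2%}")    # ~62%
    print(f"effective TMR with NLR   : {tmr_with_nlr:.2%}")  # ~124%, i.e., a larger read margin

Even a modest difference between the NLR's two resistance values roughly doubles the effective TMR in this toy setting, which is the design intent described above.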
The reference resistor Rref would then be in a range between RAP+Rnlr and RP+rnlr, instead of in the smaller range between RAP and RP. The reference cell 100′ has a reference resistance greater than the sum of the first resistance (RP) of the MTJ memory cell 100 and the first resistance (rnlr) of the first NLR device 310 and smaller than the sum of the second resistance (RAP) of the MTJ memory cell 100 and the second resistance (Rnlr) of the first NLR device 310. In addition, the insertion of the first NLR device 310 reduces the RDR for the forward read direction, as the read current for the AP-state is reduced. To maintain the same charging for the P-state, the read voltage needs to be increased. There are at least the following three ways: increase VRead; increase the gate voltage VG of the access transistor; or increase both VRead and VG. Similarly, in some further embodiments, a second NLR device 312 is also coupled to the reference cell 100′ in series and provides adjustment for the reference current Iref. The second NLR device 312 may be connected between the read bias circuit 302 and the read enable circuit 306 in parallel with the first NLR device 310. The second NLR device 312 may have the same or similar features as the first NLR device 310. FIG. 4B shows the data path 400b. Compared to the data path 400a in FIG. 4A, the second NLR device 312 is not present. Thus M4 is connected to M5, while M6 is separated from M7 by the first NLR device 310. As an example, for an S-type negative resistance (NR) such as a forward-biased thyristor (e.g., SCR, diac, triac, etc.), a reverse-biased Zener diode, or an equivalent transistor circuit, there should not be a need for an NLR for the reference cell 100′, since the separation between RAP+RNLR and RP+rNLR should be large; but for a forward-biased conventional diode (e.g., a pn-diode or Schottky diode) or an equivalent transistor circuit, since RNLR and rNLR could be quite close, an NLR should be added in the read path of the reference cell as well. Referring now to FIG. 5, a description of some embodiments of how the data paths 400a, 400b can operate during read operations is provided with regard to a timing/waveform diagram. FIG. 5 shows waveforms for two read operations on a single MTJ memory cell superimposed over one another to show how the current and voltage levels relate to one another. FIG. 6 shows waveforms for two read operations of a reading operation without an NLR device, for comparison purposes. For a first read operation, the MTJ is in a parallel state, such that the first read operation returns a low voltage (e.g., logical “0”). For the second read operation, the MTJ is in an anti-parallel state, such that the second read operation returns a high voltage (e.g., logical “1”). As shown in FIG. 5 and FIG. 6, when V(re) is active to enable the read operation, V(scr_gate) is active, and V_mtj changes in response to I(Mtj). The SA may generate V_out according to V01, which changes in response to V_mtj. For the comparison circuit without NLR devices shown in FIG. 6, IP is 50.6 μA and IAP is 44.1 μA, and thus the sensed TMR is around 14.74%. The read time is about 7.4 ns. The SCR gate voltage is tuned to make sure that the P-state current IP is the same, for comparison purposes. From the simulated waveforms of the disclosed reading operation shown in FIG. 5, IP is 50.4 μA and IAP is 32.9 μA, and thus the sensed TMR is around 53.19%. Also seen from the waveforms, the read time is about 5.4 ns. Thus, the AP-state current IAP of the disclosed reading operation of FIG. 5 is reduced, and the TMR for the proposed circuit is relatively high as compared with that of the reading operation of FIG. 6.
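The sensed-TMR percentages quoted above follow directly from the reported currents, as the small check below reproduces.

    # Sensed TMR computed from the read currents reported for FIG. 5 and FIG. 6,
    # using TMR = (I_P - I_AP) / I_AP.

    def sensed_tmr(i_p_uA, i_ap_uA):
        return (i_p_uA - i_ap_uA) / i_ap_uA

    print(f"without NLR: {sensed_tmr(50.6, 44.1):.2%}")  # about 14.74%
    print(f"with NLR   : {sensed_tmr(50.4, 32.9):.2%}")  # about 53.19%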
Also, the disclosed circuit can perform read operations at a higher read speed. FIG. 7 illustrates a cross-sectional view of some embodiments of an integrated circuit 700, which includes MTJ memory elements 102a, 102b disposed in an interconnect structure 704 of the integrated circuit 700. The integrated circuit 700 includes a substrate 706. The substrate 706 may be, for example, a bulk substrate (e.g., a bulk silicon substrate) or a silicon-on-insulator (SOI) substrate. The illustrated embodiment depicts one or more shallow trench isolation (STI) regions 708, which may include a dielectric-filled trench within the substrate 706. Two access transistors 710, 712 are disposed between the STI regions 708. The access transistors 710, 712 include access gate electrodes 714, 716, respectively; access gate dielectrics 718, 720, respectively; access sidewall spacers 722; and source/drain regions 724. The source/drain regions 724 are disposed within the substrate 706 between the access gate electrodes 714, 716 and the STI regions 708, and are doped to have a first conductivity type which is opposite a second conductivity type of the channel regions under the gate dielectrics 718, 720, respectively. The word line gate electrodes 714, 716 may be, for example, doped polysilicon or a metal, such as aluminum, copper, or combinations thereof. The word line gate dielectrics 718, 720 may be, for example, an oxide, such as silicon dioxide, or a high-κ dielectric material. The word line sidewall spacers 722 can be made of silicon nitride (e.g., Si3N4), for example. The interconnect structure 704 is arranged over the substrate 706 and couples devices (e.g., access transistors 710, 712) to one another. The interconnect structure 704 includes a plurality of IMD layers 726, 728, 730, and a plurality of metallization layers 732, 734, 736 which are layered over one another in alternating fashion. The IMD layers 726, 728, 730 may be made, for example, of a low-κ dielectric, such as un-doped silicate glass, or an oxide, such as silicon dioxide. The metallization layers 732, 734, 736 include metal lines 738, 740, 742, which are formed within trenches, and which may be made of a metal, such as copper or aluminum. Contacts 744 extend from the bottom metallization layer 732 to the source/drain regions 724 and/or gate electrodes 714, 716; and vias 746 extend between the metallization layers 732, 734, 736. The contacts 744 and the vias 746 extend through dielectric-protection layers 750, 752 (which can be made of dielectric material and can act as etch stop layers during manufacturing). The dielectric-protection layers 750, 752 may be made of an extreme low-κ dielectric material, such as SiC, for example. The contacts 744 and the vias 746 may be made of a metal, such as copper or tungsten, for example. The MTJ memory elements 102a, 102b, which are configured to store respective data states, are arranged within the interconnect structure 704 between neighboring metal layers. The MTJ memory element 102a includes an MTJ, including a pinning layer 114, a metallic interlayer 116, a reference layer 106, a barrier layer 110, and a free layer 108. FIG. 8 depicts some embodiments of a top view of FIG. 7's integrated circuit 700, as indicated by the cut-away lines shown in FIGS. 7-8. As can be seen, the MTJ memory elements 102a, 102b can have a square/rectangular or circular/elliptical shape when viewed from above in some embodiments.
In other embodiments, however, for example due to the practicalities of many etch processes, the corners of the illustrated square shape can become rounded, resulting in the MTJ memory elements 102a, 102b having a square shape with rounded corners, or having a circular shape. The MTJ memory elements 102a, 102b are arranged over metal lines 740, respectively, and have upper portions in direct electrical connection with the metal lines 742, respectively, without vias or contacts therebetween in some embodiments. In other embodiments, vias or contacts couple the upper portions to the metal lines 742. FIG. 9 illustrates a flowchart 900 of some embodiments of a method of reading from an MTJ memory cell. At act 902, a memory device is provided. The memory device includes a magnetic tunnel junction (MTJ) current path and a reference current path in parallel with the MTJ current path. The MTJ current path comprises an MTJ memory cell connected in series with a non-linear resistance device. In some embodiments, this memory device can, for example, correspond to the memory device and the data path illustrated in FIGS. 1-4B. At act 904, a reading voltage (VREAD) is provided to generate an MTJ current (IMTJ) through the MTJ current path and to generate a reference current (IREF) through the reference current path. In some embodiments, the MTJ current can correspond, for example, to the signal IMTJ in FIG. 5, and the reference current can correspond, for example, to the signal IRef in FIG. 5. At act 906, the reference current IREF and the MTJ current IMTJ are compared with one another to determine a status of the MTJ memory cell between a first data state having a first resistance and a second data state having a second resistance. The first data state differs from the second data state. At act 908, a differential current between the memory current path and the reference current path is sensed. A voltage detection signal is detected based on the sensed differential current. At act 910, the voltage detection signal is buffered to output a digital signal indicating a data state of the MTJ memory device. FIG. 10 shows an example load line analysis of a series connection of an MTJ memory cell 100 and a forward-biased SCR device as the first NLR device 310. The quiescent points for the P-state and the AP-state of the MTJ memory cell are shown in the figure as V1/RP and V1/RAP. The IV curve of a negative resistance device such as the SCR device has a region where a differential increase in voltage is accompanied by a differential decrease in current through the device, and vice versa, i.e., the IV characteristics have a negative slope. Note that this negative slope region is unstable. Therefore, the device operating points reside in the regions of positive slope on either side of the negative slope region. The operating points for the P-state and the AP-state are chosen to lie in different regions, one on either side of the negative-slope region. Other NLR devices used for the disclosed reading path may operate similarly. The SCR device offers a small resistance rSCR for the P-state of the MTJ, while the SCR device offers a large resistance RSCR for the AP-state. Therefore, the net resistance between the read voltage and ground for the P-state and the AP-state respectively becomes: RPath+RP+RMOS+rSCR and RPath+RAP+RMOS+RSCR. Therefore, the effective TMR without the SCR is (RAP−RP)/(RPath+RP+RMOS), while the new effective TMR after adding the SCR is {(RAP−RP)+(RSCR−rSCR)}/(RPath+RP+RMOS+rSCR).
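As a rough numerical companion to the load-line picture above, the sketch below uses a made-up piecewise IV curve for an S-type device and scans the series current for the operating point of each MTJ state; the curve shape and every value are hypothetical and do not model any real SCR.

    # Toy load-line solver for an MTJ in series with an S-type (current-controlled)
    # negative-resistance device. The point is only that the P-state and AP-state
    # settle on opposite sides of the unstable negative-slope region.

    def v_nlr(i):
        """Voltage across the hypothetical S-type device at current i (amps)."""
        i1, i2 = 2e-6, 6e-6
        if i < i1:
            return i * 200e3                                 # large-resistance branch (R_SCR-like)
        if i < i2:
            return 0.4 - 0.35 * (i - i1) / (i2 - i1)         # unstable negative-slope branch
        return 0.05 + (i - i2) * 5e3                         # small-resistance branch (r_SCR-like)

    def operating_point(v_read, r_linear):
        """Scan the series current for the point that best satisfies KVL."""
        best_i, best_err = 0.0, float("inf")
        for n in range(1, 20000):
            i = n * 1e-9                                     # 1 nA steps up to 20 uA
            err = abs(v_read - (i * r_linear + v_nlr(i)))
            if err < best_err:
                best_i, best_err = i, err
        return best_i

    V_READ = 0.5
    i_p = operating_point(V_READ, 30e3)    # R_path + R_P + R_MOS (hypothetical)
    i_ap = operating_point(V_READ, 150e3)  # R_path + R_AP + R_MOS (hypothetical)
    print(f"I_P  = {i_p * 1e6:.2f} uA  (NLR on its small-resistance branch)")
    print(f"I_AP = {i_ap * 1e6:.2f} uA  (NLR on its large-resistance branch)")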
In this way, the effective TMR can be improved, which makes it much easier to detect the difference in the read currents for the P-state and the AP-state. Thus, in some embodiments, the present application provides a memory device. The memory device includes a memory cell array comprising a plurality of magnetic tunnel junction (MTJ) memory cells arranged in columns and rows, a read bias circuit connected to the memory cell array and configured to provide a reading bias for an MTJ memory cell of the memory cell array, and a first non-linear resistance device connected in series between the MTJ memory cell and the read bias circuit. The first non-linear resistance device is configured to provide a first resistance when conducting a first current and a second resistance greater than the first resistance when conducting a second current smaller than the first current. In other embodiments, the present application provides a memory device. The memory device includes an MTJ memory cell configured to switch between a first data state and a second data state, a reference cell coupled in parallel with the MTJ memory cell, and a read bias circuit connected to the memory cell and the reference cell and configured to provide a reading bias respectively for the MTJ memory cell and the reference cell. The first data state has a first resistance and the second data state has a second resistance greater than the first resistance. The memory device further includes a first non-linear resistance device connected in series between the MTJ memory cell and the read bias circuit. The first non-linear resistance device is configured to provide a first resistance when conducting a first current and a second resistance greater than the first resistance when conducting a second current smaller than the first current. In yet other embodiments, the present disclosure provides a method for reading from a memory device. In the method, a magnetic tunnel junction (MTJ) current path is provided, and a reference current path is provided in parallel with the MTJ current path. The MTJ current path comprises an MTJ memory cell connected in series with a non-linear resistance device. A reading bias is provided to generate an MTJ current through the MTJ current path and to generate a reference current through the reference current path. The reference current and the MTJ current are compared to generate a digital signal indicating a data state of the MTJ memory cell. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
40,892
11862219
DETAILED DESCRIPTION The following disclosure provides different embodiments, or examples, for implementing features of the provided subject matter. Specific examples of components, materials, values, steps, arrangements, or the like, are described below to simplify the present disclosure. These are, of course, merely examples and are not limiting. Other components, materials, values, steps, arrangements, or the like, are contemplated. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. In accordance with some embodiments, a memory cell includes a write bit line, a write transistor and a read transistor. The write transistor is coupled between the write bit line and a first node. The read transistor is coupled to the write transistor by the first node. The write transistor is configured to set a stored data value of the memory cell by a write bit line signal that adjusts a polarization state of the read transistor. In some embodiments, the polarization state corresponds to the stored data value of the memory cell. In some embodiments, the read transistor includes a first gate terminal coupled to the write transistor by the first node, and a ferroelectric region having the polarization state that corresponds to the stored data value of the memory cell. In some embodiments, by using the ferroelectric region in the memory cell, the memory cell has less charge leakage at the first node compared to other approaches. In some embodiments, by using the ferroelectric region in the memory cell, the ferroelectric region is able to hold or maintain the polarization state even after voltage at the first node is removed thereby resulting in the memory cell having a longer data retention time and a larger memory window than other approaches. In some embodiments, by having at least a longer data retention time or a larger memory window than other approaches, the memory cell is refreshed less than other approaches resulting in less power consumption than other approaches. FIG.1is a block diagram of a memory cell array100, in accordance with some embodiments. In some embodiments, memory cell array100is part of an integrated circuit. Memory cell array100comprises an array of memory cells102[1,1],102[1,2], . . . ,102[2,2], . . . 
,102[M,N] (collectively referred to as “array of memory cells102A”) having M rows and N columns, where N is a positive integer corresponding to the number of columns in array of memory cells102A and M is a positive integer corresponding to the number of rows in array of memory cells102A. The rows of cells in array of memory cells102A are arranged in a first direction X. The columns of cells in array of memory cells102A are arranged in a second direction Y. The second direction Y is different from the first direction X. In some embodiments, the second direction Y is perpendicular to the first direction X. Each memory cell102[1,1],102[1,2], . . . ,102[2,2], . . . ,102[M,N] in array of memory cells102A is configured to store a corresponding bit of data. Array of memory cells102A is a dynamic random-access memory (DRAM) array including DRAM-like memory cells. In some embodiments, each memory cell in array of memory cells102A corresponds to a two transistor (2T) memory cell with 1-Ferroelectric field effect transistor (FeFET) as shown inFIGS.2A-2C. In some embodiments, each memory cell in array of memory cells102A corresponds to a three transistor (3T) memory cell with 1-FeFET as shown inFIGS.3A-3C. In some embodiments, each memory cell in array of memory cells102A corresponds to a four transistor (4T) memory cell with 1-FeFET as shown inFIGS.4A-4C. Different types of memory cells in array of memory cells102A are within the contemplated scope of the present disclosure. For example, in some embodiments, each memory cell in array of memory cells102A is a static random access memory (SRAM). In some embodiments, each memory cell in array of memory cells102A corresponds to a ferroelectric resistive random-access memory (FeRAM) cell. In some embodiments, each memory cell in array of memory cells102A corresponds to a magneto-resistive random-access memory (MRAM) cell. In some embodiments, each memory cell in array of memory cells102A corresponds to a resistive random-access memory (RRAM) cell. Other configurations of array of memory cells102A are within the scope of the present disclosure. Memory cell array100further includes M write word lines WWL[1], . . . WWL[M](collectively referred to as “write word line WWL”). Each row 1, . . . , M in array of memory cells102A is associated with a corresponding write word line WWL[1], . . . , WWL[M]. Each row of memory cells in array of memory cells102A is coupled with a corresponding write word line WWL[1], . . . , WWL[M]. For example, memory cells102[1,1],102[1,2], . . . ,102[1,N] in row 1 are coupled with write word line WWL[1]. Each write word line WWL extends in the first direction X. Memory cell array100further includes M read word lines RWL[1], . . . RWL[M](collectively referred to as “read word line RWL”). Each row 1, . . . , M in array of memory cells102A is associated with a corresponding read word line RWL[1], . . . , RWL[M]. Each row of memory cells in array of memory cells102A is coupled with a corresponding read word line RWL[1], . . . , RWL[M]. For example, memory cells102[1,1],102[1,2], . . . ,102[1,N] in row 1 are coupled with read word line RWL[1]. Each read word line RWL extends in the first direction X. Memory cell array100further includes N write bit lines WBL[1], . . . WBL[N](collectively referred to as “write bit line WBL”). Each column 1, . . . , N in array of memory cells102A is associated with a corresponding write bit line WBL[1], . . . , WBL[N]. 
Each column of memory cells in array of memory cells102A is coupled with a corresponding write bit line WBL[1], . . . , WBL[N]. For example, memory cells102[1,1],102[2,1], . . . ,102[M,1] in column 1 are coupled with write bit line WBL[1]. Each write bit line WBL extends in the second direction Y. Memory cell array100further includes N read bit lines RBL[1], . . . RBL[N] (collectively referred to as “read bit line RBL”). Each column 1, . . . , N in array of memory cells102A is associated with a corresponding read bit line RBL[1], . . . , RBL[N]. Each column of memory cells in array of memory cells102A is coupled with a corresponding read bit line RBL[1], . . . , RBL[N]. For example, memory cells102[1,1],102[2,1], . . . ,102[M,1] in column 1 are coupled with read bit line RBL[1]. Each read bit line RBL extends in the second direction Y. Other configurations of memory cell array100are within the scope of the present disclosure. Different configurations of at least write bit lines WBL, write word lines WWL, read bit lines RBL or read word lines RWL in memory cell array100are within the contemplated scope of the present disclosure. In some embodiments, memory cell array100includes additional write ports (write word lines WWL or write bit lines WBL) and/or read ports (read word lines RWL or read bit lines RBL). Furthermore, in some embodiments, array of memory cells102A includes multiple groups of different types of memory cells. By way of an illustrative example, a write operation is performed to memory cell102[1,1] located in row 1 and column 1 of array of memory cells102A. Row 1 includes memory cells102[1,1],102[1,2], . . . ,102[1,N] that are selected by write word line WWL[1]. Column 1 includes memory cells102[1,1],102[2,1], . . . ,102[M,1] that are selected for receiving a data signal and storing a binary bit of data by write bit line WBL[1]. Together, write word line WWL[1] and write bit line WBL[1] select and store a binary bit of data in memory cell102[1,1]. By way of an illustrative example, a read operation is performed to memory cell102[1,1] located in row 1 and column 1 of array of memory cells102A. Row 1 includes memory cells102[1,1],102[1,2], . . . ,102[1,N] that are selected by read word line RWL[1]. Column 1 includes memory cells102[1,1],102[2,1], . . . ,102[M,1] that are selected to access the stored binary bit of data by read bit line RBL[1]. Together, read word line RWL[1] and read bit line RBL[1] select and read the binary bit of data stored in memory cell102[1,1]. FIG.2Ais a circuit diagram of a memory cell200A, in accordance with some embodiments. Memory cell200A is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Components that are the same or similar to those in one or more ofFIGS.2A-2C,3A-3C,4A-4C(shown below) are given the same reference numbers, and detailed description thereof is thus omitted. For ease of illustration, some of the labeled elements ofFIGS.2A-2C,3A-3C,4A-4Care not labelled in each ofFIGS.2A-2C,3A-3C,4A-4C. In some embodiments, the memory cells ofFIGS.2A-2C,3A-3C,4A-4Cinclude additional elements not shown inFIGS.2A-2C,3A-3C,4A-4C. Memory cell200A is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell200A includes a write transistor M1, a read transistor M2, a write word line WWL, a read word line RWL, a write bit line WBL and a read bit line RBL. Write word line WWL corresponds to a write word line of write word lines WWL[1], . . .
, WWL[M], read word line RWL corresponds to a read word line of read word lines RWL[1], . . . , RWL[M], write bit line WBL corresponds to a write bit line of write bit lines WBL[1], . . . , WBL[N], and read bit line RBL corresponds to a read bit line of read bit lines RBL[1], . . . , RBL[N] ofFIG.1, and similar detailed description is therefore omitted. Write transistor M1includes a gate terminal coupled to write word line WWL, a drain terminal coupled to write bit line WBL, and a source terminal coupled to at least a gate terminal of read transistor M2by a node ND1. Write transistor M1is configured to write data in memory cell200A. Write transistor M1is enabled (e.g., turned on) or disabled (e.g., turned off) in response to a write word line signal on the write word line WWL. Write transistor M1is shown as a P-type Metal Oxide Semiconductor (PMOS) transistor. In some embodiments, write transistor M1is an N-type Metal Oxide Semiconductor (NMOS) transistor. Read transistor M2includes a drain terminal coupled to read word line RWL, a source terminal coupled to read bit line RBL, and a gate terminal coupled to the source terminal of write transistor M1. Read transistor M2is referred to as a ferroelectric field effect transistor (FeFET) device, as read transistor M2includes a ferroelectric region202positioned within the gate terminal of the read transistor M2. The ferroelectric region202is configured to have different polarization states based on the voltage applied to the gate of the read transistor M2. The polarization of the ferroelectric region202determines the conductivity (e.g., low resistance state or high resistance state) of read transistor M2which represents the data stored in read transistor M2. Data is stored by programming the ferroelectric region202to have different polarization states. The different polarization states create two different threshold voltage states (e.g., Vth) that correspond to a logic ‘1’ and a logic ‘0’. Due to the threshold voltage difference, the read transistor M2is configured to turn on at a specific gate voltage based on its logic state. In some embodiments, the difference between these gate voltages is referred to as memory window. The binary states of stored data in memory cell200A are encoded in the form of the polarization of the ferroelectric region202. The direction or value of the polarization (e.g., +P or −P) of the ferroelectric region202determines the resistance state (e.g., low or high) of the read transistor M2. In some embodiments, a low resistance state of the read transistor M2corresponds to the read transistor M2being turned on or conducting, and a high resistance state of the read transistor M2corresponds to the read transistor M2being turned off or not conducting. In some embodiments, a low resistance state of the read transistor M2corresponds to a first stored value (e.g., logic “0” or “1”), and a high resistance state of the read transistor M2corresponds to a second stored value (e.g., logic “1” or “0”) opposite from the first stored value. A voltage of the gate of the read transistor M2or node ND1controls the polarization states and corresponding electric field in the ferroelectric region202of read transistor M2. Write transistor M1is configured to write data by controlling the voltage of node ND1or the gate of read transistor M2thereby controlling the polarization states of the ferroelectric region202of read transistor M2.
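For illustration only, the storage mechanism described in the preceding paragraphs can be condensed into a minimal behavioral sketch. The sketch below is not part of the disclosed circuits; the names follow FIG. 2A, the active-low (PMOS) convention follows the example given for write transistor M1, and the mapping of polarization direction to resistance state and logic value is an assumption (the disclosure permits either assignment, provided the two states are opposite).

```python
# Minimal behavioral sketch of the write path of memory cell 200A (illustration only).
from dataclasses import dataclass

@dataclass
class FeFETCell2T:
    polarization: str = "-P"     # state of ferroelectric region 202: "+P" or "-P"

    def write(self, wwl: int, wbl: int) -> None:
        # A logical low on WWL enables the PMOS write transistor M1, so the WBL
        # level reaches node ND1 (the gate of read transistor M2) and programs
        # the polarization of ferroelectric region 202.
        if wwl == 0:
            self.polarization = "+P" if wbl == 1 else "-P"
        # With WWL at a logical high, M1 is off and the previously programmed
        # polarization is held even without a voltage at node ND1 (hold mode).

    def resistance_state(self) -> str:
        # The polarization direction sets the threshold voltage of M2; the two
        # thresholds give a low or high resistance state, and their separation
        # corresponds to the memory window.
        return "low" if self.polarization == "+P" else "high"

    def stored_value(self) -> int:
        # Assumed mapping: low resistance state -> logic 1, high -> logic 0.
        return 1 if self.resistance_state() == "low" else 0
```

Under these assumed conventions, a call with wwl=0 and wbl=1 leaves the cell in the low resistance state, while a call with wwl=1 has no effect, which models the hold mode discussed below.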
In some embodiments, if the write transistor M1is enabled or turned on, a voltage of the write bit line WBL is configured to control the voltage of the node ND1or the gate of read transistor M2. Thus, in some embodiments, the polarized state of the ferroelectric region202is controlled by the voltage of the write bit line WBL. In some embodiments, the voltage of the write bit line WBL corresponds to the data stored in memory cell200A. In some embodiments, the polarization state of the ferroelectric region202is maintained even after an electric field or a corresponding voltage at node ND1is removed, and the read transistor M2is a non-volatile transistor device. Read transistor M2is configured to read data stored in memory cell200A. In some embodiments, read transistor M2is configured to output data stored in memory cell200A based on whether read transistor M2is turned on or off. The polarization state of the ferroelectric region202determines whether read transistor M2is turned on or off. In some embodiments, write transistor M1and read transistor M2each include channel regions that are formed of a same type of material. In some embodiments, write transistor M1and read transistor M2each have channel regions that have a silicon body or bulk. Read transistor M2is shown as a PMOS transistor. In some embodiments, read transistor M2is an NMOS transistor. During a write operation of memory cell200A, the voltage of the write bit line WBL (e.g., data to be stored in memory cell200A) is set by a write driver circuit (not shown), and the write word line WWL is set to a logical low thereby turning on write transistor M1. In response to write transistor M1being turned on, the voltage of the write bit line WBL is applied to the gate of read transistor M2or node ND1. As the voltage of the write bit line WBL is applied to the gate of read transistor M2or node ND1, the write bit line voltage controls the polarization state of the ferroelectric region202and the corresponding data stored by read transistor M2. In other words, the voltage of the write bit line WBL is used to set the read transistor M2in a low resistance state (e.g., conducting) or a high resistance state (e.g., not conducting). Afterwards, the write word line WWL is set to a logical high thereby turning off write transistor M1. In response to write transistor M1being turned off, data stored in memory cell200A is held, and memory cell200A is in a hold mode. By using ferroelectric region202in memory cell200A, memory cell200A does not have charge leakage at node ND1compared to other approaches (such as DRAM). By using ferroelectric region202in memory cell200A, the non-volatile nature of the ferroelectric material in ferroelectric region202is able to hold or maintain the polarization state even after the voltage at node ND1is removed thereby resulting in a longer data retention time and a larger memory window than other approaches. By having at least a longer data retention time or a larger memory window than other approaches, memory cell200A is refreshed less than other approaches resulting in less power consumption than other approaches. In some embodiments, memory cell200A and memory cells200B-200C (FIGS.2B-2C) have a 2T memory cell structure that is compatible with complementary metal oxide semiconductor (CMOS) processes and is therefore scalable. During a read operation of memory cell200A, the voltage of the read bit line RBL is pre-discharged to a logical low, and the read word line RWL is raised to a logical high. 
In some embodiments, if the read transistor M2is in a low resistance state, then the read transistor M2is turned on or conducting, and the current from the read word line RWL through the read transistor M2to the read bit line RBL is sensed by a sense amplifier (not shown), and the data associated with the read transistor M2being in a low resistance state (e.g., “1” or “0”) is read out. In some embodiments, if the read transistor M2is in a high resistance state, then the read transistor M2is turned off or not conducting, and the current from the read word line RWL through the read transistor M2to the read bit line RBL is sensed by a sense amplifier (not shown), and the data associated with the read transistor M2being in a high resistance state (e.g., “0” or “1”) is read out. In this embodiment, the current through the read transistor M2is negligible since the read transistor M2is turned off. Afterwards, the read word line RWL is set to a logical low. Other transistor terminals for each of the transistors M1, M2, M1′ or M2′ (described below) of the present application are within the scope of the present disclosure. For example, reference to the drains and sources of a same transistor in the present disclosure can be changed to a source and a drain of the same transistor. Thus, for write transistor M1, reference to the drain and source of write transistor M1can be changed to the source and drain of write transistor M1, respectively. Similarly, for read transistor M2, reference to the drain and source of read transistor M2can be changed to the source and drain of read transistor M2, respectively. Other configurations or quantities of transistors in memory cell200A are within the scope of the present disclosure. FIG.2Bis a circuit diagram of a memory cell200B, in accordance with some embodiments. Memory cell200B is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell200B is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell200B includes a write transistor M1′, read transistor M2, write word line WWL, read word line RWL, write bit line WBL and read bit line RBL. Memory cell200B is a variation of memory cell200A ofFIG.2A, and similar detailed description is therefore omitted. In comparison with memory cell200A ofFIG.2A, write transistor M1′ replaces write transistor M1ofFIG.2A, and similar detailed description is therefore omitted. Write transistor M1′ is shown as a PMOS transistor. In some embodiments, write transistor M1′ is an NMOS transistor. In some embodiments, write transistor M1′ is similar to write transistor M1ofFIG.2A, and similar detailed description is therefore omitted. The operation of memory cell200B is similar to the operation of memory cell200A described above, and similar detailed description is therefore omitted. In comparison with write transistor M1ofFIG.2A, write transistor M1′ includes an oxide channel region210, and similar detailed description is therefore omitted. In some embodiments, one or more transistors having oxide channel regions of the present disclosure include thin film transistors (TFTs). In some embodiments, the oxide channel region210for write transistor M1′ includes an oxide semiconductor material including zinc oxide, cadmium oxide, indium oxide, IGZO, SnO2, TiO2, or combinations thereof, or the like. 
Other transistor types or oxide materials for write transistor M1′ are within the scope of the present disclosure. In some embodiments, by including write transistor M1′ with an oxide channel region210and an FeFET read transistor M2, memory cell200B has lower leakage current than other approaches that do not include an oxide channel region in the write transistor. In some embodiments, by reducing the leakage current of memory cell200B, memory cell200B has a longer data retention time than other approaches. By having a longer data retention time than other approaches, memory cell200B is refreshed less than other approaches resulting in less power consumption than other approaches. In some embodiments, by reducing the leakage current of memory cell200B, memory cell200B has less write disturbance errors than other approaches. Furthermore, since memory cell200B is similar to memory cell200A, memory cell200B also has the benefits discussed above with respect to memory cell200A. In some embodiments, the oxide channel region210,220,230or240of memory cell200B-200C,300B-300C and400B-400C (FIGS.2B-2C,3B-3C &4B-4C) can be integrated into a back end of line (BEOL) process thereby increasing the memory density of memory cell200B-200C,300B-300C and400B-400C. Other configurations, connections or quantities of transistors in memory cell200B are within the scope of the present disclosure. FIG.2Cis a circuit diagram of a memory cell200C, in accordance with some embodiments. Memory cell200C is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell200C is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell200C includes write transistor M1′, a read transistor M2′, write word line WWL, read word line RWL, write bit line WBL and read bit line RBL. Memory cell200C is a variation of memory cell200B ofFIG.2B, and similar detailed description is therefore omitted. In comparison with memory cell200B ofFIG.2B, read transistor M2′ replaces read transistor M2ofFIG.2B, and similar detailed description is therefore omitted. Read transistor M2′ is shown as a PMOS transistor. In some embodiments, read transistor M2′ is an NMOS transistor. In some embodiments, read transistor M2′ is similar to read transistor M2ofFIGS.2A-2B, and similar detailed description is therefore omitted. The operation of memory cell200C is similar to the operation of memory cell200A (described above) or memory cell200B, and similar detailed description is therefore omitted. In comparison with read transistor M2ofFIG.2B, read transistor M2′ includes an oxide channel region220, and similar detailed description is therefore omitted. In some embodiments, the oxide channel region220for read transistor M2′ includes an oxide semiconductor material including zinc oxide, cadmium oxide, indium oxide, IGZO, SnO2, TiO2, or combinations thereof, or the like. In some embodiments, the oxide channel region220of read transistor M2′ includes the same oxide semiconductor material as the oxide channel region210of write transistor M1′. In some embodiments, the oxide channel region220of read transistor M2′ includes a different oxide semiconductor material as the oxide channel region210of write transistor M1′. Other transistor types or oxide materials for read transistor M2′ are within the scope of the present disclosure. 
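The chain of effects attributed above to the oxide-channel write transistor (lower off-state leakage, a longer-held level at node ND1, fewer refresh operations, lower power) can be illustrated with a first-order estimate. The relation below is a generic dynamic-node approximation offered only for illustration and is not recited in the disclosure:

\[ t_{\text{hold}} \approx \frac{C_{ND1}\,\Delta V_{\text{margin}}}{I_{\text{off}}} \]

where \(C_{ND1}\) is the capacitance at node ND1, \(\Delta V_{\text{margin}}\) is the gate-voltage droop that can be tolerated before the two threshold states of the read transistor are no longer resolved, and \(I_{\text{off}}\) is the off-state leakage of the write transistor; under this approximation, halving \(I_{\text{off}}\) roughly doubles the hold time.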
In some embodiments, read transistor M2′ includes an oxide channel region220, and write transistor M1′ includes a silicon channel region having a silicon body or bulk similar to write transistor M1. In some embodiments, by including write transistor M1′ with an oxide channel region210and read transistor M2′ with an oxide channel region220and as an FeFET, memory cell200C has lower leakage current than other read transistor approaches. In some embodiments, by reducing the leakage current of memory cell200C, memory cell200C has the benefits discussed above with respect to memory cell200B. Furthermore, since memory cell200C is similar to memory cell200A, memory cell200C also has the benefits discussed above with respect to memory cell200A. Other configurations, connections or quantities of transistors in memory cell200C are within the scope of the present disclosure. FIG.3Ais a circuit diagram of a memory cell300A, in accordance with some embodiments. Memory cell300A is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell300A is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell300A includes write transistor M1, read transistor M2, write word line WWL, read word line RWL, write bit line WBL, read bit line RBL and a transistor M3. Memory cell300A is a variation of memory cell200A ofFIG.2A, and similar detailed description is therefore omitted. In comparison with memory cell200A ofFIG.2A, memory cell300A further includes transistor M3, and similar detailed description is therefore omitted. Transistor M3includes a source terminal coupled to read bit line RBL, a drain terminal coupled to the source terminal of read transistor M2, and a gate terminal configured to receive a control signal CS. In some embodiments, transistor M3is turned on or turned off in response to control signal CS. For example, in some embodiments, during a read operation of a selected memory cell, similar to memory cell300A, the selected memory cell includes a selected transistor M3, and unselected memory cells, similar to memory cell300A, include an unselected transistor M3. In these embodiments, selected transistor M3is turned on in response to a first value of control signal CS, and unselected transistors M3in corresponding unselected cells are turned off in response to a second value of control signal CS. In these embodiments, the second value of control signal CS is inverted from the first value of control signal CS. In these embodiments, the transistors M3in unselected memory cells are turned off thereby reducing leakage current. In comparison with memory cell200A ofFIG.2A, the source terminal of read transistor M2ofFIGS.3A-3Cis coupled with the drain terminal of transistor M3, and is therefore not directly coupled with the read bit line RBL as is shown inFIG.2A. Transistor M3ofFIGS.3A-3Bis enabled or disabled in response to a control signal CS. Transistor M3is configured to electrically couple/decouple read transistor M2to/from the read bit line RBL in response to control signal CS. For example, if control signal CS is logically low, transistor M3is enabled or turned on, and transistor M3thereby electrically couples the source of read transistor M2to the read bit line RBL. For example, if control signal CS is logically high, transistor M3is disabled or turned off, and transistor M3thereby electrically decouples the source of read transistor M2from the read bit line RBL. 
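For illustration only, the gating role of transistor M3 just described can be sketched as follows; the function is an assumed abstraction rather than the disclosed circuit, and the active-low polarity of control signal CS follows the PMOS example above.

```python
# Illustrative sketch of the read-path gating in memory cell 300A: transistor M3
# couples the source of read transistor M2 to RBL only when control signal CS
# selects the cell (assumed two-state abstraction, illustration only).
def read_current_reaches_rbl(cs: int, m2_resistance_state: str) -> bool:
    m3_on = (cs == 0)                               # PMOS M3: a logical low on CS enables it
    m2_conducting = (m2_resistance_state == "low")  # low resistance state conducts
    # Current can flow from RWL through M2 onto RBL only if both devices conduct,
    # so deasserting CS in unselected cells removes their leakage path to RBL.
    return m3_on and m2_conducting

# A selected cell in the low resistance state contributes read current; an
# unselected cell does not, regardless of its stored state.
assert read_current_reaches_rbl(cs=0, m2_resistance_state="low") is True
assert read_current_reaches_rbl(cs=1, m2_resistance_state="low") is False
```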
The operation of memory cell300A is similar to the operation of memory cell200A described above, and similar detailed description is therefore omitted. For example, in comparison with the write operation of memory cell200A ofFIG.2A, during the write operation of memory cell300A, transistor M3is disabled or turned off, and the operation of the other portions of memory cell300A are similar to the write operation of memory cell200A described above, and similar detailed description is therefore omitted. For example, in comparison with the read operation of memory cell200A ofFIG.2A, during the read operation of memory cell300A, transistor M3is enabled or turned on, and the operation of the other portions of memory cell300A are similar to the read operation of memory cell200A described above, and similar detailed description is therefore omitted. Transistor M3is shown as a PMOS transistor. In some embodiments, transistor M3is an NMOS transistor. In some embodiments, transistor M3and at least write transistor M1or read transistor M2, include channel regions that are formed of a same type of material. In some embodiments, transistor M3has a channel region that has a silicon body or bulk. In some embodiments, transistor M3and at least write transistor M1or read transistor M2, include channel regions that have a silicon body or bulk. In some embodiments, by including write transistor M1, read transistor M2(e.g., FeFET), and transistor M3, memory cell300A is similar to memory cell200A. In some embodiments, since memory cell300A is similar to memory cell200A, memory cell300A has the benefits discussed above with respect to memory cell200A. In some embodiments, memory cell300A and memory cells300B-300C (FIGS.3B-3C) have a 3T memory cell structure that is compatible with CMOS processes and is therefore scalable. Other transistor terminals for each of transistors M1, M2, M3, M1′, M2′ and M3′ of the present application are within the scope of the present disclosure. For example, reference to the drains and sources of a same transistor in the present disclosure can be changed to a source and a drain of the same transistor. Other configurations or quantities of transistors in memory cell300A are within the scope of the present disclosure. FIG.3Bis a circuit diagram of a memory cell300B, in accordance with some embodiments. Memory cell300B is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell300B is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell300B includes write transistor M1′, read transistor M2, write word line WWL, read word line RWL, write bit line WBL, read bit line RBL and transistor M3. Memory cell300B is a variation of memory cell300A ofFIG.3Aand memory cell200B ofFIG.2B, and similar detailed description is therefore omitted. For example, memory cell300B combines features similar to memory cell300A ofFIG.3Aand memory cell200B ofFIG.2B. In comparison with memory cell300A ofFIG.3A, write transistor M1′ ofFIG.2Breplaces write transistor M1ofFIG.3A, and similar detailed description is therefore omitted. Write transistor M1′ is described in memory cell200B ofFIG.2B, and similar detailed description is therefore omitted. Write transistor M1′ is shown as a PMOS transistor. In some embodiments, write transistor M1′ is an NMOS transistor. 
The operation of memory cell300B is similar to the operation of memory cell300A described above, and similar detailed description is therefore omitted. In some embodiments, by including write transistor M1′ with an oxide channel region210, read transistor M2(e.g., FeFET) and transistor M3, memory cell300B achieves benefits similar to the benefits discussed above with respect to memory cell300A and memory cell200B. Furthermore, since memory cell300B is similar to memory cell200A, memory cell300B also has the benefits discussed above with respect to memory cell200A. Other configurations, connections or quantities of transistors in memory cell300B are within the scope of the present disclosure. FIG.3Cis a circuit diagram of a memory cell300C, in accordance with some embodiments. Memory cell300C is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell300C is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell300C includes write transistor M1′, read transistor M2′, write word line WWL, read word line RWL, write bit line WBL, read bit line RBL and a transistor M3′. Memory cell300C is a variation of memory cell300B ofFIG.3B, and similar detailed description is therefore omitted. In comparison with memory cell300B ofFIG.3B, read transistor M2′ replaces read transistor M2ofFIG.3Band transistor M3′ replaces transistor M3ofFIG.3B, and similar detailed description is therefore omitted. Read transistor M2′ is described in memory cell200C ofFIG.2C, and similar detailed description is therefore omitted. Read transistor M2′ is shown as a PMOS transistor. In some embodiments, read transistor M2′ is an NMOS transistor. Transistor M3′ is shown as a PMOS transistor. In some embodiments, transistor M3′ is an NMOS transistor. In some embodiments, transistor M3′ is similar to transistor M3ofFIGS.3A-3B, and similar detailed description is therefore omitted. The operation of memory cell300C is similar to the operation of memory cell300A (described above) or memory cell300B, and similar detailed description is therefore omitted. In comparison with transistor M3ofFIG.3B, transistor M3′ includes an oxide channel region230, and similar detailed description is therefore omitted. In some embodiments, the oxide channel region230for transistor M3′ includes an oxide semiconductor material including zinc oxide, cadmium oxide, indium oxide, IGZO, SnO2, TiO2, or combinations thereof, or the like. In some embodiments, the oxide channel region230of transistor M3′ includes the same oxide semiconductor material as the oxide channel region210,220of at least write transistor M1′ or read transistor M2′. In some embodiments, the oxide channel region230of transistor M3′ includes a different oxide semiconductor material as the oxide channel region210,220of at least write transistor M1′ or read transistor M2′. Other transistor types or oxide materials for transistor M3′ are within the scope of the present disclosure. In some embodiments, one of read transistor M2′ or transistor M3′ includes an oxide channel region220or230, and the other of read transistor M2′ or transistor M3′ includes a silicon channel region having a silicon body or bulk similar to read transistor M2or transistor M3, respectively. 
In some embodiments, by including write transistor M1′ with an oxide channel region210, read transistor M2′ with an oxide channel region220and as an FeFET, and transistor M3′ with an oxide channel region230, memory cell300C achieves benefits similar to the benefits discussed above with respect to memory cell300A and memory cell200C. Furthermore, since memory cell300C is similar to memory cell200A, memory cell300C also has the benefits discussed above with respect to memory cell200A. Other configurations, connections or quantities of transistors in memory cell300C are within the scope of the present disclosure. FIG.4Ais a circuit diagram of a memory cell400A, in accordance with some embodiments. Memory cell400A is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell400A is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell400A includes write transistor M1, read transistor M2, write word line WWL, read word line RWL, write bit line WBL, read bit line RBL, transistor M3and a transistor M4. Memory cell400A is a variation of memory cell300A ofFIG.3A, and similar detailed description is therefore omitted. In comparison with memory cell300A ofFIG.3A, memory cell400A further includes transistor M4, and similar detailed description is therefore omitted. Transistor M4includes a drain terminal, a gate terminal and a source terminal. The drain terminal of transistor M4is coupled to read word line RWL. The gate terminal of transistor M4is coupled to the drain terminal of write transistor M1, the gate terminal of read transistor M2and node ND1. The source terminal of transistor M4is coupled to a node ND2. In some embodiments, node ND2is electrically coupled to a reference voltage supply. In some embodiments, the reference voltage supply has a reference voltage VSS. In some embodiments, the reference voltage supply corresponds to ground. Transistor M4ofFIGS.4A-4Cis enabled or disabled in response to a voltage of node ND1. In some embodiments, the voltage of node ND1corresponds to the write bit line signal, and thus transistor M4ofFIGS.4A-4Cis enabled or disabled in response to the write bit line signal. Transistor M4ofFIGS.4A-4Cis configured to electrically couple/decouple the read word line RWL to/from node ND2in response to the write bit line signal on the write bit line WBL. For example, if the write bit line signal is logically low, transistor M4is enabled or turned on, and transistor M4thereby electrically couples the read word line RWL to node ND2. For example, if the write bit line signal is logically high, transistor M4is disabled or turned off, and transistor M4thereby electrically decouples the read word line RWL from node ND2. In comparison with memory cell300A ofFIG.3A, the drain terminal of read transistor M2ofFIGS.4A-4Cis coupled with a reference voltage supply. In some embodiments, the reference voltage supply has a reference voltage VSS. In some embodiments, the reference voltage supply corresponds to ground. In comparison with memory cell300A ofFIG.3A, the gate terminal of transistor M3ofFIGS.4A-4Cis coupled with the read word line RWL. Transistor M3ofFIGS.4A-4Cis enabled or disabled in response to a read word line signal on the read word line RWL. Transistor M3ofFIGS.4A-4Cis configured to electrically couple/decouple read transistor M2to/from the read bit line RBL in response to the read word line signal on the read word line RWL.
For example, if the read word line signal is logically low, transistor M3is enabled or turned on, and transistor M3thereby electrically couples the source of read transistor M2to the read bit line RBL. For example, if the read word line signal is logically high, transistor M3is disabled or turned off, and transistor M3thereby electrically decouples the source of read transistor M2from the read bit line RBL. The operation of memory cell400A is similar to the operation of memory cell200A described above, and similar detailed description is therefore omitted. For example, in comparison with the write operation of memory cell200A ofFIG.2Aand memory cell300A ofFIG.3A, during the write operation of memory cell400A, transistor M4is enabled or disabled in response to the write bit line signal on the write bit line WBL, transistor M3is enabled or disabled in response to the read word line signal on the read word line RWL, and the operation of the other portions of memory cell400A are similar to the write operation of memory cell200A described above, and similar detailed description is therefore omitted. During a read operation of memory cell400A, the voltage of the read bit line RBL is pre-charged to a logical high, and the read word line RWL is lowered to a logical low causing transistor M3to be enabled or turned on. In some embodiments, if the read transistor M2ofFIGS.4A-4Cis in a low resistance state, then the read transistor M2is turned on or conducting, and the voltage of the read bit line RBL is pulled towards VSS by read transistor M2, and the voltage or current of the read bit line RBL is sensed by a sense amplifier (not shown), and the data associated with the read transistor M2being in a low resistance state (e.g., “1” or “0”) is read out. In some embodiments, if the read transistor M2ofFIGS.4A-4Cis in a high resistance state, then the read transistor M2is turned off or not conducting, and the voltage of the read bit line RBL is not pulled towards VSS by read transistor M2, and the voltage or current of the read bit line RBL is sensed by a sense amplifier (not shown), and the data associated with the read transistor M2being in a high resistance state (e.g., “0” or “1”) is read out. In this embodiment, the change in the voltage of the read bit line RBL is negligible since the read transistor M2is turned off. Afterwards, the read word line RWL is set to a logical high thereby causing transistor M3to turn off. Transistor M4is shown as a PMOS transistor. In some embodiments, transistor M4is an NMOS transistor. In some embodiments, transistor M4and at least write transistor M1, read transistor M2or transistor M3, include channel regions that are formed of a same type of material. In some embodiments, transistor M4has a channel region that has a silicon body or bulk. In some embodiments, by including write transistor M1, read transistor M2(e.g., FeFET), transistor M3and transistor M4, memory cell400A is similar to memory cell200A. In some embodiments, since memory cell400A is similar to memory cell200A, memory cell400A has the benefits discussed above with respect to memory cell200A. In some embodiments, memory cell400A and memory cells400B-400C (FIGS.4B-4C) have a 4T memory cell structure that is compatible with CMOS processes and is therefore scalable. Other transistor terminals for each of transistors M1, M2, M3, M4, M1′, M2′, M3′ and M4′ of the present application are within the scope of the present disclosure. For example, reference to the drains and sources of a same transistor in the present disclosure can be changed to a source and a drain of the same transistor.
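For illustration only, the pre-charge style of read operation described above for memory cell 400A can be summarized behaviorally; the rail values and the resolution of the RBL level into a logic value are assumptions consistent with the description, not the disclosed circuit itself.

```python
# Behavioral sketch of a read of memory cell 400A (illustration only): RBL is
# pre-charged high, RWL is driven low to enable PMOS transistor M3, and the cell
# pulls RBL toward VSS only if read transistor M2 is in its low resistance state.
VDD, VSS = 1.0, 0.0        # assumed rail values for illustration

def read_cell_400a(rwl: int, m2_resistance_state: str) -> float:
    rbl = VDD                                   # pre-charge the read bit line to a logical high
    m3_on = (rwl == 0)                          # M3 is gated by RWL and enabled at a logical low
    if m3_on and m2_resistance_state == "low":
        rbl = VSS                               # conducting M2 discharges RBL toward VSS
    return rbl                                  # RBL stays near VDD if M2 is not conducting

# The sense amplifier resolves the RBL level into the stored value; contrast with
# memory cell 200A, where RBL is instead pre-discharged low and the sense amplifier
# detects current supplied from RWL through a conducting M2.
```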
Other configurations or quantities of transistors in memory cell400A are within the scope of the present disclosure. FIG.4Bis a circuit diagram of a memory cell400B, in accordance with some embodiments. Memory cell400B is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell400B is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell400B includes write transistor M1′, read transistor M2, write word line WWL, read word line RWL, write bit line WBL, read bit line RBL, transistor M3and transistor M4. Memory cell400B is a variation of memory cell400A ofFIG.4Aand memory cell200B ofFIG.2B, and similar detailed description is therefore omitted. For example, memory cell400B combines features similar to memory cell400A ofFIG.4Aand memory cell200B ofFIG.2B. In comparison with memory cell400A ofFIG.4A, write transistor M1′ ofFIG.2Breplaces write transistor M1ofFIG.4A, and similar detailed description is therefore omitted. Write transistor M1′ is described in memory cell200B ofFIG.2B, and similar detailed description is therefore omitted. Write transistor M1′ is shown as a PMOS transistor. In some embodiments, write transistor M1′ is an NMOS transistor. The operation of memory cell400B is similar to the operation of memory cell400A described above, and similar detailed description is therefore omitted. In some embodiments, by including write transistor M1′ with an oxide channel region210and read transistor M2(e.g., FeFET), transistor M3and transistor M4, memory cell400B achieves benefits similar to the benefits discussed above with respect to memory cell400A and memory cell200B. Furthermore, since memory cell400B is similar to memory cell200A, memory cell400B also has the benefits discussed above with respect to memory cell200A. Other configurations, connections or quantities of transistors in memory cell400B are within the scope of the present disclosure. FIG.4Cis a circuit diagram of a memory cell400C, in accordance with some embodiments. Memory cell400C is an embodiment of a memory cell in array of memory cells102A ofFIG.1expressed in a schematic diagram, and similar detailed description is therefore omitted. Memory cell400C is usable as one or more memory cells in array of memory cells102A ofFIG.1. Memory cell400C includes write transistor M1′, read transistor M2′, write word line WWL, read word line RWL, write bit line WBL, read bit line RBL, transistor M3′ and a transistor M4′. Memory cell400C is a variation of memory cell400B ofFIG.4B, and similar detailed description is therefore omitted. In comparison with memory cell400B ofFIG.4B, read transistor M2′ replaces read transistor M2ofFIG.4B, transistor M3′ replaces transistor M3ofFIG.4Band transistor M4′ replaces transistor M4ofFIG.4B, and similar detailed description is therefore omitted. Read transistor M2′ is described in memory cell200C ofFIG.2C, and similar detailed description is therefore omitted. Read transistor M2′ is shown as a PMOS transistor. In some embodiments, read transistor M2′ is an NMOS transistor. Transistor M3′ is described in memory cell300C ofFIG.3C, and similar detailed description is therefore omitted. Transistor M3′ is shown as a PMOS transistor. In some embodiments, transistor M3′ is an NMOS transistor.
Transistor M4′ is shown as a PMOS transistor. In some embodiments, transistor M4′ is an NMOS transistor. In some embodiments, transistor M4′ is similar to transistor M4ofFIGS.4A-4B, and similar detailed description is therefore omitted. The operation of memory cell400C is similar to the operation of memory cell400A (described above) or memory cell400B, and similar detailed description is therefore omitted. In comparison with transistor M4ofFIG.4B, transistor M4′ includes an oxide channel region240, and similar detailed description is therefore omitted. In some embodiments, the oxide channel region240for transistor M4′ includes an oxide semiconductor material including zinc oxide, cadmium oxide, indium oxide, IGZO, SnO2, TiO2, or combinations thereof, or the like. In some embodiments, the oxide channel region240of transistor M4′ includes the same oxide semiconductor material as the oxide channel region210,220or230of at least write transistor M1′, read transistor M2′ or transistor M3′. In some embodiments, the oxide channel region240of transistor M4′ includes a different oxide semiconductor material as the oxide channel region210,220or230of at least write transistor M1′, read transistor M2′ or transistor M3′, respectively. Other transistor types or oxide materials for transistor M4′ are within the scope of the present disclosure. In some embodiments, one of read transistor M2′, transistor M3′ or transistor M4′ includes an oxide channel region220,230or240, and the other of read transistor M2′, transistor M3′ or transistor M4includes a silicon channel region having a silicon body or bulk similar to read transistor M2, transistor M3or transistor M4, respectively. In some embodiments, by including write transistor M1′ with an oxide channel region210, read transistor M2′ with an oxide channel region220and as an FeFET, transistor M3′ with an oxide channel region230and transistor M4′ with an oxide channel region240, memory cell400C achieves benefits similar to the benefits discussed above with respect to memory cell400A and memory cell200C. Furthermore, since memory cell400C is similar to memory cell200A, memory cell400C also has the benefits discussed above with respect to memory cell200A. Other configurations, connections or quantities of transistors in memory cell400C are within the scope of the present disclosure. FIG.5is a cross-sectional view of an integrated circuit500, in accordance with some embodiments. Integrated circuit500is an embodiment of read transistor M2and M2′ ofFIGS.2A-2C,3A-3C and4A-4C, and similar detailed description is therefore omitted. In some embodiments, integrated circuit500includes additional elements not shown for ease of illustration. Integrated circuit500is shown as a planar transistor; however, other transistors are within the scope of the present disclosure. In some embodiments, integrated circuit500is a fin field effect transistor (FinFET), a nanosheet transistor, a nanowire transistor, or the like. In some embodiments, integrated circuit500is an FeFET or the like, and is manufactured as part of a back end of line (BEOL) process. Integrated circuit500includes a substrate502. In some embodiments, substrate502is a p-type substrate. In some embodiments, substrate502is an n-type substrate. 
In some embodiments, substrate502includes an elemental semiconductor including silicon or germanium in crystal, polycrystalline, or an amorphous structure; a compound semiconductor including silicon carbide, gallium arsenic, gallium phosphide, indium phosphide, indium arsenide, and indium antimonide; an alloy semiconductor including SiGe, GaAsP, AlInAs, AlGaAs, GaInAs, GaInP, and GaInAsP; any other suitable material; or combinations thereof. In some embodiments, the alloy semiconductor substrate has a gradient SiGe feature in which the Si and Ge composition change from one ratio at one location to another ratio at another location of the gradient SiGe feature. In some embodiments, the alloy SiGe is formed over a silicon substrate. In some embodiments, substrate502is a strained SiGe substrate. In some embodiments, the semiconductor substrate has a semiconductor on insulator structure, such as a silicon on insulator (SOI) structure. In some embodiments, the semiconductor substrate includes a doped epi layer or a buried layer. In some embodiments, the compound semiconductor substrate has a multilayer structure, or the substrate includes a multilayer compound semiconductor structure. In some embodiments, integrated circuit500is a silicon transistor (e.g., has a silicon channel region (not labelled)), and substrate502has a silicon body or bulk. In some embodiments, integrated circuit500is an oxide transistor (e.g., has an oxide channel region210,220,230or240), and substrate502includes an oxide semiconductor material including zinc oxide, cadmium oxide, indium oxide, IGZO, SnO2, TiO2, or combinations thereof, or the like. Integrated circuit500further includes a drain region504and a source region506in substrate502. In some embodiments, at least a portion of source region506or a portion of drain region504extends above substrate502. In some embodiments, the source region506and the drain region504are embedded in substrate502. Drain region504is an embodiment of the drain terminal of read transistor M2and M2′ ofFIGS.2A-2C,3A-3C and4A-4C, and similar detailed description is therefore omitted. Source region506is an embodiment of the source terminal of read transistor M2and M2′ ofFIGS.2A-2C,3A-3C and4A-4C, and similar detailed description is therefore omitted. In some embodiments, the drain region504and source region506ofFIG.5are referred to as an oxide definition (OD) region which defines the source or drain diffusion regions of integrated circuit500or read transistor M2and M2′ ofFIGS.2A-2C,3A-3C and4A-4C, and similar detailed description is therefore omitted. In some embodiments, integrated circuit500is a P-type FeFET transistor, therefore the substrate502is an N-type region, the drain region504is a P-type active region having P-type dopants implanted in substrate502, and the source region506is a P-type active region having P-type dopants implanted in substrate502. In some embodiments, integrated circuit500is an N-type FeFET transistor, therefore the substrate502is a P-type region, the drain region504is an N-type active region having N-type dopants implanted in substrate502, and the source region506is an N-type active region having N-type dopants implanted in substrate502. In some embodiments, N-type dopants include phosphorus, arsenic or other suitable N-type dopants. In some embodiments, P-type dopants include boron, aluminum or other suitable p-type dopants. Integrated circuit500further includes an insulating layer510on substrate502.
In some embodiments, the insulating layer510is between the drain region504and the source region506. In some embodiments, the insulating layer510is a gate dielectric layer. In some embodiments, the insulating layer includes an insulating material including SiO, SiO2or combinations thereof, or the like. In some embodiments, insulating layer510includes a gate oxide or the like. Integrated circuit500further includes a metal layer512over the insulating layer510. In some embodiments, the metal layer512includes Cu, TiN, W or combinations thereof, or the like. In some embodiments, the metal layer512is a conductive layer including doped polysilicon. In some embodiments, integrated circuit500does not include metal layer512. Integrated circuit500further includes a ferroelectric layer520over at least the conductive layer512or the insulating layer510. In some embodiments, where integrated circuit500does not include metal layer512, ferroelectric layer520is on the insulating layer510. Ferroelectric layer520is an embodiment of ferroelectric region202ofFIGS.2A-2C,3A-3Cand4A4C, and similar detailed description is therefore omitted. In some embodiments, ferroelectric layer520includes a ferroelectric material. In some embodiments, a ferroelectric material includes HfO2, HfZrO, HfO, perovskite, SBT, PZT or combinations thereof, or the like. Ferroelectric layer520has polarization states P1or P2that correspond to polarization states P+ or P− inFIG.2A, and similar detailed description is therefore omitted. Polarization state P1points in a first direction Y. Polarization state P2points in a second direction (e.g., negative Y) opposite of the first direction Y. FIG.5shows both polarization states P1and P2. However, in some embodiments, due to the non-volatility of the ferroelectric layer520, once the polarization state P1or P2of integrated circuit500is set based on the gate voltage VG, integrated circuit500includes one of the polarization states P1or P2. The ferroelectric layer520creates a capacitance in integrated circuit500. Furthermore, the MOS transistor of integrated circuit500also has a capacitance. In some embodiments, the capacitance of the ferroelectric layer520and the capacitance of the MOS transistor are matched to operate integrated circuit500in a non-volatile mode. In some embodiments, the capacitance of the ferroelectric layer520is adjusted based on a thickness T1of the ferroelectric layer520. In some embodiments, by changing thickness T1, integrated circuit500can operate in a non-volatile mode or a volatile mode. In some embodiments, the thickness T1of the ferroelectric layer520ranges from about 3 nanometers (nm) to about 50 nm. In some embodiments, as the thickness T1increases, the ability of the ferroelectric layer520to preserve the hysteresis and bi-stable polarization states (e.g., P1or P2) is increased and the leakage current of integrated circuit500decreases. In some embodiments, as the thickness T1decreases, the ability of the ferroelectric layer520to preserve the hysteresis and bi-stable polarization states (e.g., P1or P2) is reduced and the leakage current of integrated circuit500increases. In some embodiments, integrated circuit500does not include the insulating layer510and metal layer512, and the ferroelectric layer520is directly on substrate502. In some embodiments, integrated circuit500does not include the insulating layer510, and the metal layer512is directly on substrate502. Integrated circuit500further includes a gate structure530over the ferroelectric layer520. 
The gate structure530includes a conductive material such as a metal or doped polysilicon (also referred to herein as “POLY”). In some embodiments, integrated circuit500is an embodiment of write transistor M1and M1′ ofFIGS.2A-2C,3A-3C and4A-4C. In these embodiments, integrated circuit500does not include the ferroelectric layer520. By being included in memory cell array100and memory circuit200A-200C,300A-300C and400A-400C discussed above with respect toFIGS.1,2A-2C,3A-3C and4A-4C, integrated circuit500operates to achieve the benefits discussed above with respect to memory cell array100and memory circuit200A-200C,300A-300C and400A-400C. FIG.6is a functional flow chart of a method600of manufacturing an integrated circuit (IC), in accordance with some embodiments. It is understood that additional operations may be performed before, during, and/or after the method600depicted inFIG.6, and that some other processes may only be briefly described herein. In some embodiments, other order of operations of method600is within the scope of the present disclosure. Method600includes exemplary operations, but the operations are not necessarily performed in the order shown. Operations may be added, replaced, changed order, and/or eliminated as appropriate, in accordance with the spirit and scope of disclosed embodiments. In some embodiments, one or more of the operations of method600is not performed. In some embodiments, the method600is usable to manufacture or fabricate at least memory cell array100(FIG.1), memory cell200A-200C,300A-300C or400A-400C (FIG.2A-2C,3A-3C or4A-4C) or integrated circuit500(FIG.5). In operation602of method600, the drain region504of a transistor is fabricated in substrate502. In some embodiments, the drain region of method600includes at least the drain of read transistor M2or M2′. In some embodiments, the transistor of method600includes at least read transistor M2or M2′. In some embodiments, the drain region is fabricated in a first well within the substrate, and the first well has a dopant opposite of the dopant of the drain region. In some embodiments, the transistor of method600includes at least transistor M1, M1′, M3, M3′, M4or M4′. In some embodiments, the drain region of method600includes at least the drain of transistor M1, M1′, M3, M3′, M4or M4′. In operation604of method600, the source region506of the transistor is fabricated in substrate502. In some embodiments, the source region of method600includes at least the source of read transistor M2or M2′. In some embodiments, the transistor of method600includes at least read transistor M2or M2′. In some embodiments, the source region is fabricated in the first well. In some embodiments, the source region of method600includes at least the source of transistor M1, M1′, M3, M3′, M4or M4′. In some embodiments, at least operation602or604includes the formation of source/drain features that are formed in the substrate. In some embodiments, in the formation of the source/drain features, a portion of the substrate is removed to form recesses, and a filling process is then performed by filling the recesses in the substrate. In some embodiments, the recesses are etched, for example, by a wet etching or a dry etching, after removal of a pad oxide layer or a sacrificial oxide layer. In some embodiments, the etch process is performed to remove a top surface portion of the active region. In some embodiments, the filling process is performed by an epitaxy or epitaxial (epi) process.
In some embodiments, the recesses are filled using a growth process which is concurrent with an etch process where a growth rate of the growth process is greater than an etch rate of the etch process. In some embodiments, the recesses are filled using a combination of growth process and etch process. For example, a layer of material is grown in the recess and then the grown material is subjected to an etch process to remove a portion of the material. Then a subsequent growth process is performed on the etched material until a desired thickness of the material in the recess is achieved. In some embodiments, the growth process continues until a top surface of the material is above the top surface of the substrate. In some embodiments, the growth process is continued until the top surface of the material is co-planar with the top surface of the substrate. In some embodiments, a portion of substrate502is removed by an isotropic or an anisotropic etch process. The etch process selectively etches substrate502without etching gate structure530. In some embodiments, the etch process is performed using a reactive ion etch (RIE), wet etching, or other suitable techniques. In some embodiments, a semiconductor material is deposited in the recesses to form the source/drain features. In some embodiments, an epi process is performed to deposit the semiconductor material in the recesses. In some embodiments, the epi process includes a selective epitaxy growth (SEG) process, CVD process, molecular beam epitaxy (MBE), other suitable processes, and/or combination thereof. The epi process uses gaseous and/or liquid precursors, which interact with a composition of the substrate. In some embodiments, the source/drain features include epitaxially grown silicon (epi Si), silicon carbide, or silicon germanium. Source/drain features of the IC device associated with gate structure530are in-situ doped or undoped during the epi process in some instances. When source/drain features are undoped during the epi process, source/drain features are doped during a subsequent process in some instances. The subsequent doping process is achieved by an ion implantation, plasma immersion ion implantation, gas and/or solid source diffusion, other suitable processes, and/or combination thereof. In some embodiments, source/drain features are further exposed to annealing processes after forming source/drain features and/or after the subsequent doping process. In some embodiments, source/drain features have n-type dopants that include phosphorus, arsenic or other suitable n-type dopants. In some embodiments, the n-type dopant concentration ranges from about 1×10¹² atoms/cm² to about 1×10¹⁴ atoms/cm². In some embodiments, source/drain features have p-type dopants that include boron, aluminum or other suitable p-type dopants. In some embodiments, the p-type dopant concentration ranges from about 1×10¹² atoms/cm² to about 1×10¹⁴ atoms/cm². In operation606of method600, an insulating layer510is fabricated on the substrate502. In some embodiments, at least fabricating the insulating layer510of operation606includes performing one or more deposition processes to form one or more dielectric material layers. In some embodiments, a deposition process includes a chemical vapor deposition (CVD), a plasma enhanced CVD (PECVD), an atomic layer deposition (ALD), or other process suitable for depositing one or more material layers. In operation608of method600, a conductive layer is deposited on the insulating layer510.
In some embodiments, the conductive layer of method600is metal layer512. In some embodiments, the conductive layer of operation608is formed using a combination of photolithography and material removal processes to form openings in an insulating layer (not shown) over the substrate. In some embodiments, the photolithography process includes patterning a photoresist, such as a positive photoresist or a negative photoresist. In some embodiments, the photolithography process includes forming a hard mask, an antireflective structure, or another suitable photolithography structure. In some embodiments, the material removal process includes a wet etching process, a dry etching process, an RIE process, laser drilling or another suitable etching process. The openings are then filled with conductive material, e.g., copper, aluminum, titanium, nickel, tungsten, or other suitable conductive material. In some embodiments, the openings are filled using CVD, PVD, sputtering, ALD or other suitable formation process. In operation610of method600, a ferroelectric layer520is formed on at least the insulating layer510or the conductive layer (metal layer512). In some embodiments, at least operation606or608is not performed. In some embodiments, operations606and608are not performed, and the ferroelectric layer520is formed directly on substrate502. In some embodiments, operation606is not performed and the conductive layer (e.g., metal layer512) is deposited on substrate502. In some embodiments, operation608is not performed and the ferroelectric layer520is deposited on insulating layer510. In operation612of method600, a gate region530of the transistor is fabricated. In some embodiments, fabricating the gate region includes performing one or more deposition processes to form one or more conductive material layers. In some embodiments, fabricating the gate regions includes forming gate electrodes. In some embodiments, gate regions are formed using a doped or non-doped polycrystalline silicon (or polysilicon). In some embodiments, the gate regions include a metal, such as Al, Cu, W, Ti, Ta, TiN, TaN, NiSi, CoSi, other suitable conductive materials, or combinations thereof. FIG.7is a flowchart of a method700of operating a circuit, in accordance with some embodiments. In some embodiments,FIG.7is a flowchart of method700of operating a memory circuit, such as memory cell array100ofFIG.1or memory cell200A-200C,300A-300C or400A-400C (FIG.2A-2C,3A-3C or4A-4C) or integrated circuit500(FIG.5). It is understood that additional operations may be performed before, during, and/or after the method700depicted inFIG.7, and that some other processes may only be briefly described herein. In some embodiments, other order of operations of method700is within the scope of the present disclosure. Method700includes exemplary operations, but the operations are not necessarily performed in the order shown. Operations may be added, replaced, changed order, and/or eliminated as appropriate, in accordance with the spirit and scope of disclosed embodiments. In some embodiments, one or more of the operations of method700is not performed. In operation702of method700, a write operation of a memory cell is performed. In some embodiments, the memory cell of method700includes memory cell200A-200C,300A-300C or400A-400C. In some embodiments, the memory cell of method700includes at least a memory cell of memory cell array100. In some embodiments, operation702includes at least operation704,706,708or710. 
In operation704of method700, a write bit line signal is set on a write bit line WBL. In some embodiments, the write bit line signal of method700includes a write bit line signal of write bit line WBL. In some embodiments, the write bit line signal corresponds to a stored data value in the memory cell. In operation706of method700, a write transistor is turned on in response to a write word line signal thereby electrically coupling the write bit line WBL to a gate of a read transistor. In some embodiments, the write transistor of method700includes at least write transistor M1or M1′. In some embodiments, the read transistor of method700includes at least read transistor M2or M2′. In some embodiments, the gate of read transistor of method700includes at least the gate terminal of read transistor M2or M2′. In some embodiments, the write word line signal of method700includes a write word line signal of write word line WWL. In some embodiments, the read transistor of method700includes integrated circuit500. In some embodiments, the write transistor of method700includes integrated circuit500. In operation708of method700, the stored data value of the memory cell is set by adjusting a polarization state of the read transistor thereby turning on or off the read transistor. In some embodiments, the polarization state of the read transistor of method700includes the polarization state P+ or P− of at least read transistor M2or M2′. In some embodiments, the polarization state of the read transistor of method700includes the polarization state P1or P2of integrated circuit500. In some embodiments, the polarization state corresponds to the stored data value of the memory cell. In operation710of method700, the write transistor is turned off in response to the write word line signal thereby electrically decoupling the write bit line and the gate of the read transistor from each other. In some embodiments, operation710further includes holding the stored data value in the memory cell. In operation712of method700, a read operation of the memory cell is performed. In some embodiments, operation712includes at least operation714,716,718or720. In operation714of method700, a voltage of a read bit line RBL is pre-discharged to a first voltage (VSS) or the voltage of the read bit line RBL is pre-charged to a second voltage (VDD) different from the first voltage. In some embodiments, the first voltage of method700includes reference voltage VSS. In some embodiments, the second voltage of method700includes supply voltage VDD. In operation716of method700, a voltage of a read word line RWL is adjusted from a third voltage to a fourth voltage. In some embodiments, the voltage of the read word line RWL is the read word line signal. In some embodiments, the third voltage of method700includes a voltage of a logically high signal. In some embodiments, the third voltage of method700includes a supply voltage VDD. In some embodiments, the fourth voltage of method700includes a voltage of a logically low signal. In some embodiments, the fourth voltage of method700includes a reference voltage VSS. In operation718of method700, the voltage of the read bit line is sensed in response to adjusting the voltage of the read word line from the third voltage to the fourth voltage thereby outputting the stored data value in the memory cell. 
In some embodiments, rather than sensing the voltage of the read bit line, operation718includes sensing the current of the read bit line in response to adjusting the voltage of the read word line from the third voltage to the fourth voltage thereby outputting the stored data value in the memory cell. In some embodiments, the stored data value of the memory cell has a first logical value corresponding to a first resistance state of the read transistor, or a second logical value corresponding to a second resistance state of the read transistor. In some embodiments, the second logical value is opposite of the first logical value. In some embodiments, the second resistance state is opposite of the first resistance state. In some embodiments, the first logical value is one of logical 1 or logical 0, and the second logical value is the other of logical 0 or logical 1. In some embodiments, the first resistance state is one of the low resistance state or the high resistance state and the second resistance state is the other of the high resistance state or the low resistance state. In some embodiments, adjusting the voltage of the read word line RWL from the third voltage to the fourth voltage of operation718comprises turning on a first transistor in response to a first control signal or the voltage of the read word line being the fourth voltage thereby electrically coupling the read bit line to a source of the read transistor. In some embodiments, the first transistor of method700includes transistor M3or M3′. In some embodiments, the first control signal of method700includes control signal CS. In some embodiments, the source of the read transistor of method700includes the source terminal of read transistor M2or M2′. In operation720of method700, the voltage of the read word line is adjusted from the fourth voltage to the third voltage. In some embodiments, adjusting the voltage of the read word line from the fourth voltage to the third voltage of operation720comprises turning off the first transistor in response to the first control signal or the voltage of the read word line being the third voltage thereby electrically decoupling the read bit line and the source of the read transistor from each other. By operating method700, the memory circuit operates to achieve the benefits discussed above with respect to memory cell array100ofFIG.1or memory cell200A-200C,300A-300C or400A-400C (FIG.2A-2C,3A-3C or4A-4C) or integrated circuit500(FIG.5). While method700was described above with reference to a single memory cell of memory cell array100, it is understood that method700applies to each row and each column of memory cell array100, in some embodiments. Furthermore, the various PMOS or NMOS transistors shown inFIG.2A-2C,3A-3C or4A-4Care shown as being of a particular dopant type (e.g., N-type or P-type) for illustration purposes. Embodiments of the disclosure are not limited to a particular transistor type, and one or more of the PMOS or NMOS transistors shown inFIG.2A-2C,3A-3C or4A-4Ccan be substituted with a corresponding transistor of a different transistor/dopant type. Similarly, the low or high logical value of various signals used in the above description is also for illustration. Embodiments of the disclosure are not limited to a particular logical value when a signal is activated and/or deactivated. Selecting different logical values is within the scope of various embodiments. Selecting different numbers of transistors inFIG.2A-2C,3A-3C or4A-4Cis within the scope of various embodiments.
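To make the sequencing of method700 concrete, the following is a minimal behavioral sketch in Python (an illustrative aid, not part of the original disclosure). It models only the logical effect of operations 704-720; the voltage constants, the class structure, and the mapping between polarization state, resistance state, and sensed value are simplifying assumptions rather than a description of the claimed circuit.

```python
# Behavioral sketch of method700 described above (operations 704-720).
# VDD/VSS values and the stored-value mapping are illustrative assumptions.

VDD, VSS = 1.0, 0.0

class GainCell:
    """Two-transistor gain cell: write transistor M1 drives the gate of a
    ferroelectric read transistor M2 whose polarization stores the bit."""

    def __init__(self):
        self.polarization = 0                # corresponds to the stored data value

    def write(self, data_bit):
        wbl = VDD if data_bit else VSS       # 704: set the write bit line signal
        storage_node = wbl                   # 706: WWL turns on M1, coupling WBL to the gate of M2
        self.polarization = data_bit         # 708: polarization state set, turning M2 on or off
        # 710: WWL turns off M1; the cell holds the stored data value
        return storage_node

    def read(self):
        rbl = VDD                            # 714: pre-charge RBL (pre-discharging to VSS is the other option)
        rwl = VSS                            # 716: RWL adjusted from the third voltage (VDD) to the fourth (VSS)
        if self.polarization:                # low-resistance state: M2 (and M3, if present) conducts
            rbl = rwl                        # 718: RBL is pulled toward RWL; the change is sensed
        sensed = 1 if rbl == VSS else 0
        # 720: RWL adjusted back from the fourth voltage to the third voltage
        return sensed

cell = GainCell()
cell.write(1)
print("sensed data value:", cell.read())     # -> 1
```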
It will be readily seen by one of ordinary skill in the art that one or more of the disclosed embodiments fulfill one or more of the advantages set forth above. After reading the foregoing specification, one of ordinary skill will be able to effect various changes, substitutions of equivalents and various other embodiments as broadly disclosed herein. It is therefore intended that the protection granted hereon be limited only by the definition contained in the appended claims and equivalents thereof. One aspect of this description relates to a memory cell. The memory cell includes a write bit line, a read word line, and a write transistor coupled between the write bit line and a first node. In some embodiments, the memory cell further includes a read transistor coupled to the write transistor by the first node. In some embodiments, the read transistor includes a ferroelectric layer, a drain terminal of the read transistor coupled to the read word line, and a source terminal of the read transistor coupled to a second node. In some embodiments, the write transistor is configured to set a stored data value of the memory cell by a write bit line signal that adjusts a polarization state of the read transistor, the polarization state corresponding to the stored data value. In some embodiments, the write transistor includes a drain terminal of the write transistor coupled to the write bit line; a source terminal of the write transistor coupled to the first node and the read transistor; and a gate terminal of the write transistor coupled to a write word line. In some embodiments, the read transistor further includes a gate terminal of the read transistor coupled to the source terminal of the write transistor by the first node, and the gate terminal of the read transistor is on the ferroelectric layer. In some embodiments, the source terminal of the read transistor is coupled to a read bit line by the second node. In some embodiments, the memory cell further includes a first transistor coupled to the read transistor. In some embodiments, the first transistor includes a drain terminal of the first transistor coupled to the source terminal of the read transistor by the second node; a source terminal of the first transistor coupled to a read bit line; and a gate terminal of the first transistor. In some embodiments, the gate terminal of the first transistor is configured to receive a control signal. In some embodiments, the read transistor includes a channel region of the read transistor; a gate insulating layer over the channel region of the read transistor; and a gate layer on the ferroelectric layer, where the ferroelectric layer is between the gate insulating layer and the gate layer. Another aspect of this description relates to a memory cell. The memory cell includes a write bit line, a write word line, a read word line, and a write transistor of a first type. In some embodiments, the write transistor is coupled to the write bit line, the write word line and a first node. In some embodiments, the write transistor is configured to be enabled or disabled in response to a write word line signal. In some embodiments, the memory cell further includes a read transistor of the first type.
In some embodiments, the read transistor includes a drain terminal of the read transistor coupled to the read word line, a gate terminal of the read transistor coupled to the write transistor by the first node, and a ferroelectric layer having a polarization state that corresponds to a stored data value in the memory cell. In some embodiments, the write transistor is configured to set the stored data value in the memory cell by a write bit line signal that adjusts the polarization state of the ferroelectric layer. In some embodiments, the read transistor further includes a source terminal of the read transistor coupled to a second node. In some embodiments, the source terminal of the read transistor is coupled to a read bit line by the second node. In some embodiments, the memory cell further includes a first transistor of the first type, coupled to the read transistor. In some embodiments, the first transistor includes a drain terminal of the first transistor coupled to the source terminal of the read transistor by the second node; a source terminal of the first transistor coupled to a read bit line; and a gate terminal of the first transistor configured to receive a control signal. In some embodiments, the write transistor includes an oxide channel region; and the read transistor includes a silicon channel region. In some embodiments, the write transistor includes an oxide channel region; and the read transistor includes another oxide channel region. In some embodiments, the read transistor further includes a gate insulating layer over a channel region of the read transistor; and a gate layer on the ferroelectric layer. In some embodiments, the ferroelectric layer is between the gate insulating layer and the gate layer. In some embodiments, the ferroelectric layer includes a ferroelectric material including HfO2, HfZrO, HfO or combinations thereof. Still another aspect of this description relates to a method of operating a memory cell. The method includes performing a read operation of the memory cell, where performing the read operation of the memory cell includes: pre-discharging a voltage of a read bit line to a first voltage or pre-charging the voltage of the read bit line to a second voltage different from the first voltage, adjusting a voltage of a read word line from a third voltage to a fourth voltage, sensing the voltage of the read bit line in response to adjusting the voltage of the read word line from the third voltage to the fourth voltage thereby outputting a stored data value in the memory cell, and adjusting the voltage of the read word line from the fourth voltage to the third voltage. In some embodiments, adjusting the voltage of the read word line from the third voltage to the fourth voltage includes turning on a first transistor in response to a first control signal or the voltage of the read word line being the fourth voltage thereby electrically coupling the read bit line to a source of a read transistor. In some embodiments, adjusting the voltage of the read word line from the fourth voltage to the third voltage includes turning off a first transistor in response to a first control signal or the voltage of the read word line being the third voltage thereby electrically decoupling the read bit line and a source of a read transistor from each other.
In some embodiments, the stored data value of the memory cell has a first logical value corresponding to a first resistance state of a read transistor, or a second logical value corresponding to a second resistance state of the read transistor, the second logical value being opposite of the first logical value, the second resistance state being opposite of the first resistance state. In some embodiments, the method further includes performing a write operation of the memory cell. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
81,001
11862220
DETAILED DESCRIPTION OF THE EMBODIMENTS FIG.1is a circuit diagram of a memory cell MC of a memory device according to an example embodiment of the inventive concept. Referring toFIG.1, the memory cell MC may include a ferroelectric field effect transistor (FeFET) T1 and a field effect transistor (FET) T2. The FeFET T1 may include a first gate structure G1, a first source/drain SD1, and a second source/drain SD2. The first source/drain SD1 may be grounded, and the second source/drain SD2 may be electrically connected to a bit line BL. The first gate structure G1 may be electrically connected to a third source/drain SD3 of the FET T2. The FET T2 may include a second gate structure G2, the third source/drain SD3, and a fourth source/drain SD4. The third source/drain SD3 of the FET T2 may be electrically connected to the first gate structure G1 of the FeFET T1. The fourth source/drain SD4 may be electrically connected to a cell word line CWL. The second gate structure G2 may be electrically connected to a selection word line SWL. FIG.2is a circuit diagram of a memory device100according to an example embodiment of the inventive concept. Referring toFIG.2, the memory device100may include: a first memory cell MC1, a second memory cell MC2, a third memory cell MC3, a fourth memory cell MC4, a fifth memory cell MC5, a sixth memory cell MC6, a seventh memory cell MC7, an eighth memory cell MC8, and a ninth memory cell MC9; a first bit line BL1, a second bit line BL2, and a third bit line BL3; a first cell word line CWL1, a second cell word line CWL2, and a third cell word line CWL3; and a first selection word line SWL1, a second selection word line SWL2, and a third selection word line SWL3. AlthoughFIG.2illustrates nine memory cells, three bit lines, three cell word lines, and three selection word lines, the numbers of memory cells, bit lines, cell word lines, and selection word lines included in the memory device100are not limited thereto and may be variously modified. The second source/drain SD2 of the FeFET T1 of each of the first memory cell MC1, the fourth memory cell MC4, and the seventh memory cell MC7 may be electrically connected to the first bit line BL1. The fourth source/drain SD4 of the FET T2 of each of the first memory cell MC1, the fourth memory cell MC4, and the seventh memory cell MC7 may be electrically connected to the first cell word line CWL1. The second source/drain SD2 of the FeFET T1 of each of the second memory cell MC2, the fifth memory cell MC5, and the eighth memory cell MC8 may be electrically connected to the second bit line BL2. The fourth source/drain SD4 of the FET T2 of each of the second memory cell MC2, the fifth memory cell MC5, and the eighth memory cell MC8 may be electrically connected to the second cell word line CWL2. The second source/drain SD2 of the FeFET T1 of each of the third memory cell MC3, the sixth memory cell MC6, and the ninth memory cell MC9 may be electrically connected to the third bit line BL3. The fourth source/drain SD4 of the FET T2 of each of the third memory cell MC3, the sixth memory cell MC6, and the ninth memory cell MC9 may be electrically connected to the third cell word line CWL3. The second gate structure G2 of the FET T2 of each of the first memory cell MC1, the second memory cell MC2, and the third memory cell MC3 may be electrically connected to the first selection word line SWL1.
The second gate structure G2 of the FET T2 of each of the fourth memory cell MC4, the fifth memory cell MC5, and the sixth memory cell MC6 may be electrically connected to the second selection word line SWL2. The second gate structure G2 of the FET T2 of each of the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may be electrically connected to the third selection word line SWL3. FIG.3is a circuit diagram of the memory device according to an example embodiment of the inventive concept during a write operation. Referring toFIG.3, to perform a write operation on the fifth memory cell MC5, a switching voltage Vs may be applied to the second selection word line SWL2. On the contrary, 0 V may be applied to the first selection word line SWL1 and the third selection word line SWL3. Therefore, the FET T2 of each of the fourth memory cell MC4, the fifth memory cell MC5, and the sixth memory cell MC6 may be turned on, and the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may be turned off. Accordingly, the first gate structure G1 of the FeFET T1 of each of the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may be in a floating state. Therefore, a polarization state of the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may be not changed. In addition, a write voltage VW may be applied to the second cell word line CWL2. On the contrary, 0 V may be applied to the first cell word line CWL1 and the third cell word line CWL3. In addition, 0 V may be applied to the first bit line BL1, the second bit line BL2, and the third bit line BL3. When the FET T2 of each of the fourth memory cell MC4 and the sixth memory cell MC6 is turned on, 0 V may be applied to the first gate structure G1 of the FeFET T1. As 0 V is applied to the first source/drain SD1, the first gate structure G1, and the second source/drain SD2 of the FeFET T1 of each of the fourth memory cell MC4 and the sixth memory cell MC6, the polarization state of the fourth memory cell MC4 and the sixth memory cell MC6 may be not changed. When the fifth memory cell MC5 is turned on, the write voltage VW may be applied to the first gate structure G1 of the FeFET T1. As 0 V is applied to the first source/drain SD1 and second source/drain SD2 of the FeFET T1 and the write voltage VW is applied to the first gate structure G1 of the FeFET T1, the polarization state of the fifth memory cell MC5 may be changed. Accordingly, the write operation may be performed only on the fifth memory cell MC5, and may not affect the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the fourth memory cell MC4, the sixth memory cell MC6, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9. That is, write disturbs among the memory cells may be prevented. FIG.4is a circuit diagram of the memory device100according to an example embodiment of the inventive concept during a read operation. Referring toFIG.4, to perform the read operation on the fifth memory cell MC5, the switching voltage Vs may be applied to the second selection word line SWL2. 
On the contrary, 0 V may be applied to the first selection word line SWL1 and the third selection word line SWL3. Therefore, the FET T2 of each of the fourth memory cell MC4, the fifth memory cell MC5, and the sixth memory cell MC6 may be turned on, and the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may be turned off. Accordingly, the first gate structure G1 of the FeFET T1 of each of the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may be in a floating state. Accordingly, a current may not flow between the second source/drain SD2 and the first source/drain SD1 of each of the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9. In addition, a read voltage VRmay be applied to the second cell word line CWL2. On the contrary, 0 V may be applied to the first cell word line CWL1 and the third cell word line CWL3. In addition, a drain voltage VDmay be applied to the second bit line BL2. On the contrary, 0 V may be applied to the first bit line BL1 and the third bit line BL3. When the FET T2 of each of the fourth memory cell MC4 and the sixth memory cell MC6 is turned on, 0 V may be applied to the first gate structure G1 of the FeFET T1. As 0 V is applied to the first source/drain SD1, the first gate structure G1, and the second source/drain SD2 of the FeFET T1 of each of the fourth memory cell MC4 and the sixth memory cell MC6, a current may not flow between the second source/drain SD2 and the first source/drain SD1 of the FeFET T1 of each of the fourth memory cell MC4 and the sixth memory cell MC6. When the fifth memory cell MC5 is turned on, the read voltage VRmay be applied to the first gate structure G1 of the FeFET T1. As 0 V and the drain voltage VDare respectively applied to the first source/drain SD1 and the second source/drain SD2 of the FeFET T1 and the read voltage VRis applied to the first gate structure G1 of the FeFET T1, a current may flow between the second source/drain SD2 and the first source/drain SD1 of the FeFET T1 of the fifth memory cell MC5. Accordingly, the read operation may be performed only on the fifth memory cell MC5, and the first memory cell MC1, the second memory cell MC2, the third memory cell MC3, the fourth memory cell MC4, the sixth memory cell MC6, the seventh memory cell MC7, the eighth memory cell MC8, and the ninth memory cell MC9 may not affect the read operation. That is, read disturbs among the memory cells may be prevented. FIG.5Ais a top-plan view of the memory device100according to an example embodiment of the inventive concept.FIG.5Bis a cross-sectional view of the memory device100according to an example embodiment of the inventive concept, taken along line A-A′ shown inFIG.5A.FIG.5Cis a cross-sectional view of the memory device100according to an example embodiment of the inventive concept, taken along line B-B′ shown inFIG.5B. Referring toFIGS.5A to5C, the memory device100according to an example embodiment may include or may be formed of a substrate110. The substrate110may include a semiconductor material such as a Group IV semiconductor material, a Group III-V semiconductor material, or a Group II-VI semiconductor material.
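Before turning to the details of the physical structure, the write and read biasing of FIGS.3 and4 described above can be summarized in a short illustrative sketch (not part of the original disclosure). The voltage labels Vs, VW, VR, and VD follow the description; the dictionary layout and the selection logic are simplifying assumptions used only to show that the fifth memory cell MC5 alone is written or read under this biasing.

```python
# Sketch of the FIG. 3 / FIG. 4 selection scheme described above for the
# 3x3 array (MC1..MC9): SWL lines select rows, while CWL and BL lines run
# along columns. Only the cell whose SWL carries Vs AND whose CWL carries
# VW (write) or VR (read) is affected. The layout below is illustrative.

VS, VW, VR, VD = "Vs", "Vw", "Vr", "Vd"   # symbolic voltages from the description

def selected_cells(op, target_row=2, target_col=2):
    """Return the set of (row, col) cells whose polarization is written
    (op='write') or which conduct read current (op='read')."""
    swl = {r: (VS if r == target_row else 0) for r in (1, 2, 3)}
    cwl = {c: ((VW if op == "write" else VR) if c == target_col else 0) for c in (1, 2, 3)}
    bl  = {c: (VD if (op == "read" and c == target_col) else 0) for c in (1, 2, 3)}

    hits = set()
    for r in (1, 2, 3):
        for c in (1, 2, 3):
            fet_on = swl[r] == VS                  # FET T2 turned on by its selection word line
            gate_v = cwl[c] if fet_on else None    # otherwise the FeFET gate floats
            if op == "write" and gate_v == VW:
                hits.add((r, c))                   # only this cell's polarization changes
            if op == "read" and gate_v == VR and bl[c] == VD:
                hits.add((r, c))                   # only this cell carries read current
    return hits

print(selected_cells("write"))   # {(2, 2)} -> only MC5 is written
print(selected_cells("read"))    # {(2, 2)} -> only MC5 is read
```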
The Group IV semiconductor material may include or may be formed of, for example, silicon (Si), germanium (Ge), or Si—Ge. The Group III-V semiconductor material may include or may be formed of, for example, gallium arsenide (GaAs), indium phosphide (InP), gallium phosphide (GaP), indium arsenide (InAs), indium antimonide (InSb), or indium gallium arsenide (InGaAs). The Group II-VI semiconductor material may include or may be formed of, for example, zinc telluride (ZnTe) or cadmium sulfide (CdS). The memory device100may further include the FeFET T1 on the substrate110. The FeFET T1 may include the first gate structure G1 on the substrate110, the first source/drain SD1 at one side of the first gate structure G1, and the second source/drain SD2 at another side of the first gate structure G1. In some embodiments, as shown inFIG.5C, a bottom portion of the first gate structure G1 may be recessed into the substrate110. In other words, the FeFET T1 may be a recessed transistor. When the bottom portion of the first gate structure G1 is recessed into the substrate110, an area of a ferroelectric layer124may increase. Accordingly, a polarization distribution of the ferroelectric layer124may decrease. Therefore, a distribution of threshold voltages of the FeFET T1 may decrease. Thus, a distribution of operation characteristics of the memory device100may decrease. In another embodiment, unlike shown inFIG.5C, the first gate structure G1 may be on a planar surface of the substrate110. That is, the FeFET T1 may include a planar transistor. In another embodiment, unlike shown inFIG.5C, the first gate structure G1 may be on a protruding fin structure on the substrate110. That is, the FeFET T1 may include a fin-type transistor. The first gate structure G1 may include the ferroelectric layer124and a gate layer126stacked on the substrate110. The ferroelectric layer124may include a ferroelectric material. The ferroelectric material may include or may be formed of, for example, hafnium oxide (HfO2), doped HfO2, for example, Si-doped HfO2or Al-doped HfO2, zirconium dioxide (ZrO2), doped ZrO2, for example, lithium (Li)-doped ZrO2or magnesium (Mg)-doped ZrO2, HfxZr1-xO2(0<x<1), or ATiO3(where A includes barium (Ba), strontium (Sr), calcium (Ca), or lead (Pb)). The gate layer126may include or be formed of polysilicon, a metal, or a metal nitride. The gate layer126may include or may be, for example, tungsten (W). In some embodiments, the gate layer126may include a bottom portion126aon the ferroelectric layer124and a top portion126bon the bottom portion126a. In some embodiments, the bottom portion126aof the gate layer126may be recessed into the substrate110, and the top portion126bof the gate layer126may be not recessed into the substrate110. The first gate structure G1 may further include a first gate dielectric layer122between the ferroelectric layer124and the substrate110. The first gate dielectric layer122may include or may be formed of silicon oxide (SiO2), silicon nitride (SiN), or a high-k material. The high-k material may include or may be formed of, for example, aluminum oxide (Al2O3), HfO2, yttrium oxide (Y2O3), zirconium dioxide (ZrO2), titanium oxide (TiO2), or a combination thereof. In some embodiments, the first gate dielectric layer122may extend in a second horizontal direction (the Y direction).
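As a rough illustration (not part of the original disclosure) of why recessing the bottom portion of the first gate structure G1 into the substrate increases the area of the ferroelectric layer124, the following sketch compares a planar gate footprint with a recessed one. The geometry is simplified to two added sidewalls along the gate width, and all dimensions are arbitrary example values.

```python
# Simplified geometric comparison: a recessed gate adds sidewall area to the
# ferroelectric layer relative to a planar gate with the same footprint.
# Only two sidewalls are counted; dimensions are arbitrary example values.

def ferroelectric_area(width_m, length_m, recess_depth_m=0.0):
    """Planar footprint area plus the two sidewall areas added by the recess."""
    return width_m * length_m + 2 * width_m * recess_depth_m

planar   = ferroelectric_area(width_m=20e-9, length_m=30e-9)
recessed = ferroelectric_area(width_m=20e-9, length_m=30e-9, recess_depth_m=15e-9)
print(f"planar: {planar:.3e} m^2, recessed: {recessed:.3e} m^2")
```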
It will be understood that when an element is referred to as being “connected” or “coupled” to or “on” another element, it can be directly connected or coupled to or on the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, or as “contacting” or “in contact with” another element, there are no intervening elements present at the point of contact. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As shown inFIG.5C, the first source/drain SD1 and the second source/drain SD2 may be in the substrate110. The first source/drain SD1 and the second source/drain SD2 may include doped regions in the substrate110. In another embodiment, unlike shown inFIG.5C, the first source/drain SD1 and the second source/drain SD2 may be on the substrate110. In this case, the first source/drain SD1 and the second source/drain SD2 may include doped epitaxial layers on the substrate110. The first source/drain SD1 and the second source/drain may include a doped semiconductor material. The memory device100may further include a device isolation layer120. The device isolation layer120may electrically isolate two FeFETs T1 from each other. The two FeFETs T1 extend in the second horizontal direction (the Y direction) and are adjacent to each other in a first horizontal direction (the X direction). The device isolation layer120may include or may be silicon oxide, silicon nitride, a low-k material, or a combination thereof. The low-k material may include or may be formed of, for example, fluorinated tetraethylorthosilicate (FTEOS), hydrogen silsesquioxane (HSQ), bis-benzocyclobutene (BCB), tetramethyl orthosilicate (TMOS), octamethylcyclotetrasiloxane (OMCTS), hexamethyldisiloxane (HMDS), trimethylsilyl borate (TMSB), diacetoxyditertiarybutosiloxane (DADBS), trimethylsilyl phosphate (TMSP), polytetrafluoroethylene (PTFE), Tonen SilaZene (TOSZ), fluoride silicate glass (FSG), polypropylene oxide, carbon doped silicon oxide (CDO), organo silicate glass (OSG), SiLK, amorphous fluorinated carbon, silica aerogel, silica xerogel, mesoporous silica, or a combination thereof. The memory device100may further include a ground line GND that is in contact with the first source/drain SD1. The ground line GND may extend in the second horizontal direction (the Y direction). The ground line GND may include a metal or a metal nitride. For example, the ground line GND may include or may be formed of tungsten (W), aluminum (Al), copper (Cu), gold (Au), silver (Ag), titanium (Ti), titanium nitride (TiN), tantalum nitride (TaN), or a combination thereof. The memory device100may further include a bit line BL that is in contact with the second source/drain SD2. The bit line BL may extend in the second horizontal direction (the Y direction). In some embodiments, the ground line GND and the bit line BL may extend in parallel to each other. The bit line BL may include or may be formed of a metal or a metal nitride. For example, the bit line BL may include or may be formed of W, Al, Cu, Au, Ag, Ti, TiN, TaN, or a combination thereof. The memory device100may further include a first interlayer insulating layer130covering the ground line GND and the bit line BL and surrounding the top portion126bof the gate layer126. 
The first interlayer insulating layer130may extend between the ground line GND and the top portion126bof the gate layer126, between the ground line GND and a channel connection layer152c, between the bit line BL and the top portion126bof the gate layer126, and between the bit line BL and the channel connection layer152c. The first interlayer insulating layer130may also extend between the ground line GND and the selection word line SWL, and between the bit line BL and the selection word line SWL. The first interlayer insulating layer130may include or may be formed of silicon oxide, silicon nitride, a low-k material, or a combination thereof. The low-k material may include, for example, fluorinated tetraethylorthosilicate (FTEOS), hydrogen silsesquioxane (HSQ), bis-benzocyclobutene (BCB), tetramethylorthosilicate (TMOS), octamethylcyclotetrasiloxane (OMCTS), hexamethyldisiloxane (HMDS), trimethylsilyl borate (TMSB), diacetoxyditertiarybutosiloxane (DADBS), trimethylsilyl phosphate (TMSP), polytetrafluoroethylene (PTFE), Tonen SilaZene (TOSZ), Fluoride Silicate Glass (FSG), polypropylene oxide, carbon doped silicon oxide (CDO), organo silicate glass (OSG), SiLK, amorphous fluorinated carbon, silica aerogel, silica xerogel, mesoporous silica, or a combination thereof. As shown inFIG.5B, the memory device100may further include a first channel152aextending in the vertical direction (the Z direction) from the first gate structure G1. The first channel152amay extend in the vertical direction (the Z direction) from the top portion126bof the gate layer126to the cell word line CWL. The first channel152amay include or may be formed of a semiconductor material, for example, Si, Ge, and the like. In another embodiment, the first channel152amay include a transition metal dichalcogenide (TMD). For example, the first channel152amay include or may be formed of MX2, where M may include molybdenum (Mo), tungsten (W), or copper (Cu). X may include sulfur (S), selenium (Se), or tellurium (Te). In another embodiment, the first channel152amay include an oxide semiconductor material. For example, the first channel152amay include or may be formed of indium zinc oxide (IZO), zinc tin oxide (ZTO), yttrium zinc oxide (YZO), indium gallium zinc oxide (IGZO), or the like. In some embodiments, as shown inFIG.5B, the memory device100may further include a second channel152bextending in the vertical direction (the Z direction) from the first gate structure G1. The second channel152bmay extend in the vertical direction (the Z direction) from the top portion126bof the gate layer126to the cell word line CWL. The second channel152bmay extend in parallel to the first channel152a. In some embodiments, the second channel152bmay include a material that is the same as the first channel152a. The second channel152bmay include or may be formed of a semiconductor material, for example, Si, Ge, and the like. In another embodiment, the second channel152bmay include a TMD. For example, the second channel152bmay include or may be formed of MX2, where M may include Mo, W, or Cu. X may include S, Se, or Te. In another embodiment, the second channel152bmay include an oxide semiconductor material. For example, the second channel152bmay include or may be formed of IZO, ZTO, YZO, IGZO, or the like. In some embodiments, the memory device100may further include a channel connection layer152cconnecting the first channel152ato the second channel152band on the top portion126bof the gate layer126of the first gate structure G1. 
As shown inFIG.5C, the channel connection layer152cmay extend in a first horizontal direction (the X direction). In some embodiments, as shown inFIG.5C, the channel connection layer152cextending in the first horizontal direction (the X direction) may contact the top portions126bof the gate layers126of the plurality of first gate structures G1. However, in other embodiments, unlike shown inFIG.5C, the channel connection layer152cmay be cut such that one channel connection layer152cmay contact the top portion126bof the gate layer126of only one of the first gate structures G1. In some embodiments, the channel connection structure152cmay include materials that are the same as materials of the first channel152aand the second channel152b. The channel connection layer152cmay include or may be formed of a semiconductor material, for example, Si, Ge, and the like. In other embodiments, the channel connection layer152cmay include a TMD. For example, the channel connection layer152cmay include MX2, where M may include Mo, W, or Cu. X may include S, Se, or Te. In another embodiment, the channel connection layer152cmay include an oxide semiconductor material. For example, the channel connection layer152cmay include or may be formed of IZO, ZTO, YZO, IGZO, or the like. The memory device100may further include the selection word line SWL at a side of the first channel152a. The selection word line SWL may extend in the first horizontal direction (the X direction). The ground line GND and the bit line BL may extend in the second horizontal direction (the Y direction). The first horizontal direction (the X direction) may be not parallel to the second horizontal direction (the Y direction). AlthoughFIGS.5A to5Cillustrate that the first horizontal direction (the X direction) is perpendicular to the second horizontal direction (the Y direction), in another embodiment, unlike inFIG.5C, the first horizontal direction (the X direction) may be not perpendicular to the second direction (the Y direction). In some embodiments, the selection word line SWL may pass between the first channel152aand the second channel152b. The selection word line SWL may include a metal or a metal nitride. For example, the selection word line SWL may include or may be formed of W, Al, Cu, Au, Ag, Ti, TiN, TaN, or a combination thereof. The memory device100may further include a second gate dielectric layer154between the first channel152aand the selection word line SWL. In some embodiments, the second gate dielectric layer154may further extend between the second channel152band the selection word line SWL. In some embodiments, the second gate dielectric layer154may further extend between the channel connection layer152cand the selection word line SWL. In other words, the second gate dielectric layer154may contact two side surfaces and a bottom surface of the selection word line SWL. The second gate dielectric layer154may include: a portion on the first channel152a, extending in the vertical direction (the Z direction) from the channel connection layer152cto the cell word line CWL; a portion on the second channel152b, extending in the vertical direction (the Z direction) from the channel connection layer152cto the cell word line CWL; and a portion on the channel connection layer152c, extending in the second horizontal direction (the Y direction) from the first channel152ato the second channel152b. 
The portion of the second gate dielectric layer154that is in contact with the channel connection layer152cmay extend in the first horizontal direction (the X direction). The second gate dielectric layer154may include SiO2, SiN, or a high-k material. The high-k material may include, for example, aluminum oxide (Al2O3), HfO2, yttrium oxide (Y2O3), zirconium dioxide (ZrO2), titanium oxide (TiO2), or a combination thereof. The memory device100may further include the FET T2, which is vertically arranged (in the Z direction) on the FeFET T1. The FET T2 may include the first channel152a, the second channel152b, the channel connection layer152c, and the second gate dielectric layer154. The channel connection layer152cmay form the second gate structure G2 of the FET T2. The channel connection layer152c(i.e., the second gate structure G2) is electrically connected to the first gate structure G1 of the FeFET T1. The first channel152aand the second channel152bmay include a drain region and a source region. Accordingly, the first channel152aand the second channel152bcorrespond to the third source/drain SD3 and the fourth source/drain SD4 of the FET T2. The third source/drain SD3 of the FET T2 may be electrically connected to the first gate structure G1 of the FeFET T1. The fourth source/drain SD4 may be electrically connected to a cell word line CWL. The second gate structure G2 may be electrically connected to the selection word line SWL. The memory device100may further include a second interlayer insulating layer140on the first interlayer insulating layer130. The second interlayer insulating layer140may contact the first channel152aand the second channel152b. The second interlayer insulating layer140may include or may be formed of silicon oxide, silicon nitride, a low-k material, or a combination thereof. The low-k material may include, for example, fluorinated tetraethyl orthosilicate (FTEOS), hydrogen silsesquioxane (HSQ), bis-benzocyclobutene (BCB), tetramethylorthosilicate (TMOS), octamethylcyclotetrasiloxane (OMCTS), hexamethyldisiloxane (HMDS), trimethylsilyl borate (TMSB), diacetoxyditertiarybutosiloxane (DADBS), trimethylsilyl phosphate (TMSP), polytetrafluoroethylene (PTFE), Tonen SilaZene (TOSZ), fluoride silicate glass (FSG), polypropylene oxide, carbon doped silicon oxide (CDO), organo silicate glass (OSG), SiLK, amorphous fluorinated carbon, silica aerogel, silica xerogel, mesoporous silica, or a combination thereof. The memory device100may further include a third gate dielectric layer158on a top surface of the selection word line SWL. The third gate dielectric layer158may be between the selection word line SWL and the cell word line CWL. The third gate dielectric layer158may extend in the first horizontal direction (the X direction). The third gate dielectric layer158may include a material that is the same as or different from the material of the second gate dielectric layer154. The third gate dielectric layer158may include or may be formed of silicon oxide, silicon nitride, a low-k material, or a combination thereof.
The low-k material may include, for example, fluorinated tetraethylorthosilicate (FTEOS), hydrogen silsesquioxane (HSQ), bis-benzocyclobutene (BCB), tetramethylorthosilicate (TMOS), octamethylcyclotetrasiloxane (OMCTS), hexamethyldisiloxane (HMDS), trimethylsilyl borate (TMSB), diacetoxyditertiarybutosiloxane (DADBS), trimethylsilyl phosphate (TMSP), polytetrafluoroethylene (PTFE), Tonen SilaZene (TOSZ), fluoride silicate glass (FSG), polypropylene oxide, carbon doped silicon oxide (CDO), organo silicate glass (OSG), SiLK, amorphous fluorinated carbon, silica aerogel, silica xerogel, mesoporous silica, or a combination thereof. The high-k material may include, for example, aluminum oxide (Al2O3), HfO2, yttrium oxide (Y2O3), zirconium dioxide (ZrO2), titanium oxide (TiO2), or a combination thereof. The memory device100may further include the cell word line CWL on the top of the first channel152a. The cell word line CWL may be on the top of the second channel152b. The cell word line CWL may be on the top of the second gate dielectric layer154. The cell word line CWL may be on the top of the third gate dielectric layer158. The cell word line CWL may extend in the second horizontal direction (the Y direction). The cell word line CWL may extend in parallel to the ground line GND and the bit line BL. The cell word line CWL may include or may be formed of a metal or a metal nitride. For example, the cell word line CWL may include W, Al, Cu, Au, Ag, Ti, TiN, TaN, or a combination thereof. The memory device100may further include a third interlayer insulating layer160filling a gap between the cell word lines CWL adjacent to each other. The third interlayer insulating layer160may be on the third gate dielectric layer158. The third interlayer insulating layer160may include or may be formed of silicon oxide, silicon nitride, or a low-k material. The low-k material may include, for example, fluorinated tetraethylorthosilicate (FTEOS), hydrogen silsesquioxane (HSQ), bis-benzocyclobutene (BCB), tetramethylorthosilicate (TMOS), octamethylcyclotetrasiloxane (OMCTS), hexamethyldisiloxane (HMDS), trimethylsilyl borate (TMSB), diacetoxyditertiarybutosiloxane (DADBS), trimethysilyl phosphate (TMSP), polytetrafluoroethylene (PTFE), Tonen SilaZene (TOSZ), fluoride silicate glass (FSG), polypropylene oxide, carbon doped silicon oxide (CDO), organo silicate glass (OSG), SiLK, amorphous fluorinated carbon, silica aerogel, silica xerogel, mesoporous silica, or a combination thereof. FIGS.6A,7A,8A,9A,10A, and11Aare top-plan views illustrating a method of manufacturing a memory device, according to an embodiment of the inventive concept.FIGS.6B,7B,8B,9B,10B, and11Bare cross-sectional views illustrating a method of manufacturing a memory device, taken along line A-A′ shown inFIGS.6A,7A,8A,9A,10A, and11A.FIGS.6C,7C,8C,9C,10C, and11Care cross-sectional views illustrating a method of manufacturing a memory device, taken along line B-B′ shown inFIGS.6A,7A,8A,9A,10A, and11A. Referring toFIGS.6A to6C, a device isolation trench120T extending in the second horizontal direction (the Y direction) may be formed in the substrate110. A device isolation layer120may be formed in the device isolation trench120T. In addition, a first line trench122T extending in the second horizontal direction (the Y direction) may be formed in the substrate110. The first gate dielectric layer122may be formed in the first line trench122T. 
A portion of the first gate dielectric layer122may be etched, and the ferroelectric layer124and the bottom portion126aof the gate layer126may be formed in the etched portion. Referring toFIGS.7A to7C, the first source/drain SD1 and the second source/drain SD2 may be formed in the substrate110. Next, the ground line GND on the first source/drain SD1 and the bit line BL on the second source/drain SD2 may be formed. Next, the first interlayer insulating layer130may be formed on the first gate dielectric layer122, the ferroelectric layer124, the bottom portion126aof the gate layer126, the ground line GND, and the bit line BL. Referring toFIGS.8A to8C, the top portion126bof the gate layer, which penetrates through the first interlayer insulating layer130and contacts the bottom portion126aof the gate layer, may be formed. Next, the second interlayer insulating layer140may be formed on the top portion126bof the gate layer and the first interlayer insulating layer130. Next, a stop layer145may be formed on the second interlayer insulating layer140. The stop layer145may include a material different from materials of the first interlayer insulating layer130and the second interlayer insulating layer140. For example, when the first interlayer insulating layer130and the second interlayer insulating layer140include silicon oxide, the stop layer145may include silicon nitride. Referring toFIGS.9A to9C, a second line trench140T extending in the first horizontal direction (the X direction) may be formed in the stop layer145and the second interlayer insulating layer140. The channel layer152may be formed on a bottom surface and two side walls of the second line trench140T and a top surface of the stop layer145. Next, the second gate dielectric layer154may be formed on the channel layer152. Next, a selection word line layer SWLp may be formed on the second gate dielectric layer154. Referring toFIGS.9A to9CandFIGS.10A to10C, the stop layer145, the channel layer152, the second gate dielectric layer154, and the selection word line layer SWLp may be ground such that the second interlayer insulating layer140is exposed. The first channel152a, the second channel152b, and the channel connection layer152cmay be formed from the channel layer152. The selection word line SWL may be formed from the selection word line layer SWLp. As a top portion of the selection word line SWL is selectively etched, a level of the top of the selection word line SWL may be lower than a level of the top of the second interlayer insulating layer140in the vertical direction (the Z direction). In some embodiments, a level of the top end of the selection word line SWL may be lower than a level of the top of the first channel152aand a level of the top of the second channel152bin the vertical direction (the Z direction). In some embodiments, the level of the top of the selection word line SWL may be lower than a level of the top of the second gate dielectric layer154in the vertical direction (the Z direction). Referring toFIGS.11A to11C, the third gate dielectric layer158may be formed on the top of the selection word line SWL. For example, the third gate dielectric layer158may be formed on the selection word line SWL, the second gate dielectric layer154, the first channel152a, the second channel152b, and the second interlayer insulating layer140. Next, a top portion of the third gate dielectric layer158may be removed such that the second interlayer insulating layer140is exposed.
By doing so, the third gate dielectric layer158may be formed in a space surrounded by the second gate dielectric layer154and the selection word line SWL. Referring toFIGS.5A to5C, the third interlayer insulating layer160may be formed on the third gate dielectric layer158. Furthermore, the cell word line CWL penetrating through the third interlayer insulating layer160may be formed on the second interlayer insulating layer140, the first channel152a, the second channel152b, the second gate dielectric layer154, and the third gate dielectric layer158. The memory device100shown inFIGS.5A to5Cmay be manufactured according to the manufacturing methods described with reference toFIGS.5A,6A,7A,8A,9A,10A, and11A,FIGS.5B,6B,7B,8B,9B,10B, and11B, andFIGS.5C,6C,7C,8C,9C,10C, and11C. While the inventive concept has been particularly shown and described with reference to embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims.
35,169
11862221
DETAILED DESCRIPTION Memory devices may experience various conditions when operating as part of electronic devices such as mobile devices, personal computers, wireless communication devices, servers, internet-of-things (IoT) devices, vehicles or vehicle components, etc. In some cases, one or more memory cells of a memory device may become imprinted, which may refer to various conditions where a memory cell of a memory device becomes predisposed toward storing one logic state over another, resistant to being written to a different logic state (e.g., a logic state different than a stored logic state prior to a write operation), or both. A likelihood of a memory cell becoming imprinted with a logic state may be related to a duration of storing a logic state, or a temperature of the memory cell while storing a logic state, or both, among other factors or combinations of factors. In some examples, a memory device may experience imprinting from being exposed to an elevated temperature over a duration, such as being located in a hot vehicle, located in direct sunlight, or other environments, where such conditions may be referred to as a static bake (e.g., when one or more memory cells are maintained at a particular logic state during the elevated temperature exposure). In some cases, a static bake may imprint (e.g., thermally imprint) memory cells such that they become biased toward or stuck in a first state (e.g., a physical state corresponding to a first logic state) over another state (e.g., a physical state corresponding to a second logic state). In some examples, memory cells may store logic states, or may be in physical states (e.g., a charge state, a material state) that may be associated with data or may not be associated with data, in an as-manufactured condition. The memory cells may experience some amount of imprinting prior to the memory device being installed in a system or operated in the system, such as imprinting over time while idle or unpowered in a warehouse, which may cause degraded performance or failures upon initial (or later) operation. In some examples, imprinting may be inadvertently or maliciously caused by operating parameters or access patterns, among other techniques. Imprinted memory cells may be associated with adverse performance when compared with non-imprinted memory cells. For example, imprinted memory cells may resist charge flow during access operations (e.g., during a read operation, during a write operation), may resist changes in polarization during access operations, may resist changes in material properties such as changes in atomic distribution or arrangement, changes in electrical resistance, or changes in threshold voltage, or may be associated with other behaviors that are different than non-imprinted memory cells (e.g., behaviors that are asymmetric with respect to different logic states). For example, when a write operation is performed on an imprinted memory cell in an effort to write a target logic state, the memory cell may not store the target logic state, or a memory device may otherwise be unable to read the memory cell as storing the target state (e.g., despite a write operation being performed), which may result in access errors (e.g., write errors, read errors) or data corruption, among other issues.
Although some imprinted memory cells may be recovered (e.g., unimprinted, unstuck, repaired, normalized, equalized) by applying recovery pulses (e.g., voltage pulses, current pulses) to the memory cells, some techniques for imprint recovery may be associated with relatively high power consumption, or relatively high peak current that can affect the memory cells or other components or both, among other adverse characteristics. In accordance with examples as disclosed herein, a memory device may be configured to perform an imprint recovery procedure that includes applying one or more recovery pulses (e.g., voltage pulses) to memory cells, where each recovery pulse is associated with a voltage polarity and includes a first portion (e.g., a first duration) with a first voltage magnitude and a second portion (e.g., a second duration, following the first duration) with a second voltage magnitude that is lower than the first voltage magnitude. In some examples (e.g., for an FeRAM architecture), the first voltage magnitude may correspond to a voltage that imposes a polarization on a memory cell (e.g., on a ferroelectric capacitor, a polarization corresponding to the associated voltage polarity, a saturation polarization) and the second voltage magnitude may correspond to a voltage magnitude that is high enough to maintain the polarization (e.g., to prevent a reduction of polarization) of the memory cell. Maintaining the polarization of the memory cell for a duration of the recovery pulse may support the memory cell returning to a non-imprinted (e.g., normalized, equalized, symmetric) state and, by reducing the recovery pulse to the second voltage magnitude, power consumption is reduced compared to maintaining the recovery pulse at the first voltage magnitude, among other benefits. In some examples, such recovery techniques may include staggering (e.g., offsetting) the durations of recovery pulses applied to different memory cells to reduce peak power consumption (e.g., peak current draw) as compared with examples in which such durations are aligned or otherwise overlapping, among other benefits. Features of the disclosure are initially described in the context of systems, dies, and memory cell properties with reference toFIGS.1through4. Features of the disclosure are further described in the context of switch and hold biasing techniques with reference toFIGS.5and6. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to switch and hold biasing for memory cell imprint recovery as described with reference toFIGS.7through10. FIG.1illustrates an example of a system100that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The system100may include a host device105, a memory device110, and a plurality of channels115coupling the host device105with the memory device110. The system100may include one or more memory devices110, but aspects of the one or more memory devices110may be described in the context of a single memory device (e.g., memory device110). The system100may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a vehicle, or other systems. 
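Returning to the recovery-pulse scheme outlined above, the following sketch (an illustrative aid, not part of the original disclosure) shows a two-level recovery pulse, a first portion at a higher magnitude followed by a second portion at a lower hold magnitude, and a simple staggering of pulse start times across groups of memory cells. All magnitudes, durations, and offsets are hypothetical placeholders, not values from this disclosure.

```python
# Illustrative two-level ("switch and hold") recovery pulse and staggered
# scheduling. Voltages are in volts and times in seconds; all values are
# hypothetical placeholders used only to show the shape of the scheme.

def recovery_pulse(t, v_high=2.0, v_hold=1.0, t_high=1e-6, t_hold=9e-6, t_start=0.0):
    """Voltage applied at time t: a first portion at the higher magnitude that
    imposes the polarization, then a second portion at a lower magnitude that
    is just high enough to maintain it (reducing power versus holding v_high)."""
    dt = t - t_start
    if 0.0 <= dt < t_high:
        return v_high
    if t_high <= dt < t_high + t_hold:
        return v_hold
    return 0.0

def staggered_starts(num_groups, offset=2e-6):
    """Offset the pulse start times of different memory-cell groups so their
    high-magnitude portions do not overlap, reducing peak current draw."""
    return [g * offset for g in range(num_groups)]

if __name__ == "__main__":
    starts = staggered_starts(num_groups=4)
    t = 0.5e-6
    print([recovery_pulse(t, t_start=s) for s in starts])   # only one group is at v_high
```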
For example, the system100may illustrate aspects of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, or the like. The memory device110may be a component of the system100that is operable to store data for one or more other components of the system100. Portions of the system100may be examples of the host device105. The host device105may be an example of a processor (e.g., circuitry, processing circuitry, a processing component) within a device that uses memory to execute processes (such circuitry is hereinafter also referred to in the specification and claims as a "processor"), such as within a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, a system on a chip (SoC), or some other stationary or portable electronic device, among other examples. In some examples, the host device105may refer to the hardware, firmware, software, or a combination thereof that implements the functions of an external memory controller120. In some examples, the external memory controller120may be referred to as a host (e.g., host device105). A memory device110may be an independent device or a component that is operable to provide physical memory addresses/space that may be used or referenced by the system100. In some examples, a memory device110may be configurable to work with one or more different types of host devices. Signaling between the host device105and the memory device110may be operable to support one or more of: modulation schemes to modulate the signals, various pin configurations for communicating the signals, various form factors for physical packaging of the host device105and the memory device110, clock signaling and synchronization between the host device105and the memory device110, timing conventions, or other functions. The memory device110may be operable to store data for the components of the host device105. In some examples, the memory device110(e.g., operating as a secondary-type device to the host device105, operating as a dependent-type to the host device105) may respond to and execute commands provided by the host device105through the external memory controller120. Such commands may include one or more of a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands. The host device105may include one or more of an external memory controller120, a processor125, a basic input/output system (BIOS) component130, or other components such as one or more peripheral components or one or more input/output controllers. The components of the host device105may be coupled with one another using a bus135. The processor125may be operable to provide functionality (e.g., control functionality) for the system100or the host device105. The processor125may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these components. In such examples, the processor125may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or an SoC, among other examples.
In some examples, the external memory controller120may be implemented by or be a part of the processor125. The BIOS component130may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system100or the host device105. The BIOS component130may also manage data flow between the processor125and the various components of the system100or the host device105. The BIOS component130may include instructions (e.g., a program, software) stored in one or more of read-only memory (ROM), flash memory, or other non-volatile memory. The memory device110may include a device memory controller155and one or more memory dies160(e.g., memory chips) to support a capacity (e.g., a desired capacity, a specified capacity) for data storage. Each memory die160(e.g., memory die160-a, memory die160-b, memory die160-N) may include a local memory controller165(e.g., local memory controller165-a, local memory controller165-b, local memory controller165-N) and a memory array170(e.g., memory array170-a, memory array170-b, memory array170-N). A memory array170may be a collection (e.g., one or more grids, one or more banks, one or more tiles, one or more sections) of memory cells, with each memory cell being operable to store one or more bits of data. A memory device110including two or more memory dies160may be referred to as a multi-die memory or a multi-die package or a multi-chip memory or a multi-chip package. The device memory controller155may include components (e.g., circuitry, logic) operable to control operation of the memory device110. The device memory controller155may include the hardware, the firmware, or the instructions that enable the memory device110to perform various operations and may be operable to receive, transmit, or execute commands, data, or control information related to the components of the memory device110. The device memory controller155may be operable to communicate with one or more of the external memory controller120, the one or more memory dies160, or the processor125. In some examples, the device memory controller155may control operation of the memory device110described herein in conjunction with the local memory controller165of the memory die160. A local memory controller165(e.g., local to a memory die160) may include components (e.g., circuitry, logic) operable to control operation of the memory die160. In some examples, a local memory controller165may be operable to communicate (e.g., receive or transmit data or commands or both) with the device memory controller155. In some examples, a memory device110may not include a device memory controller155, and a local memory controller165or the external memory controller120may perform various functions described herein. As such, a local memory controller165may be operable to communicate with the device memory controller155, with other local memory controllers165, or directly with the external memory controller120, or the processor125, or a combination thereof. 
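As an illustration of the organization described above, the following is a minimal sketch in Python of a memory device with multiple dies, each pairing a local controller with an array, where commands from an external controller are routed to a die's local controller. The class and method names and the behavior are chosen for illustration only and are not taken from the specification.

# Minimal, illustrative sketch (not from the specification) of the memory
# hierarchy described above: a memory device with one or more dies, each
# pairing a local controller with a memory array.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MemoryArray:
    """Collection of memory cells; modeled here as a flat list of single-bit cells."""
    num_cells: int
    cells: List[int] = field(default_factory=list)

    def __post_init__(self):
        self.cells = [0] * self.num_cells


@dataclass
class LocalMemoryController:
    """Local controller (in the role of a local memory controller 165) for one die's array."""
    array: MemoryArray

    def execute(self, command: str, address: int, data: Optional[int] = None) -> Optional[int]:
        if command == "write":
            self.array.cells[address] = data
            return None
        if command == "read":
            return self.array.cells[address]
        raise ValueError(f"unsupported command: {command}")


@dataclass
class MemoryDevice:
    """Memory device (in the role of a memory device 110); the device memory
    controller role is reduced here to routing commands to a die's local controller."""
    dies: List[LocalMemoryController]

    def handle(self, die: int, command: str, address: int, data: Optional[int] = None):
        # In a device without a dedicated device memory controller, an external
        # controller could communicate with a local controller directly instead.
        return self.dies[die].execute(command, address, data)


# Example: an external controller writing and then reading back one cell on die 0.
device = MemoryDevice(dies=[LocalMemoryController(MemoryArray(num_cells=1024)) for _ in range(2)])
device.handle(die=0, command="write", address=42, data=1)
assert device.handle(die=0, command="read", address=42) == 1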
Examples of components that may be included in the device memory controller155or the local memory controllers165or both may include receivers for receiving signals (e.g., from the external memory controller120), transmitters for transmitting signals (e.g., to the external memory controller120), decoders for decoding or demodulating received signals, encoders for encoding or modulating signals to be transmitted, or various other components operable for supporting described operations of the device memory controller155or local memory controller165or both. The external memory controller120may be operable to enable communication of information (e.g., data, commands, or both) between components of the system100(e.g., between components of the host device105, such as the processor125, and the memory device110). The external memory controller120may process (e.g., convert, translate) communications exchanged between the components of the host device105and the memory device110. In some examples, the external memory controller120, or other component of the system100or the host device105, or its functions described herein, may be implemented by the processor125. For example, the external memory controller120may be hardware, firmware, or software, or some combination thereof implemented by the processor125or other component of the system100or the host device105. Although the external memory controller120is depicted as being external to the memory device110, in some examples, the external memory controller120, or its functions described herein, may be implemented by one or more components of a memory device110(e.g., a device memory controller155, a local memory controller165) or vice versa. The components of the host device105may exchange information with the memory device110using one or more channels115. The channels115may be operable to support communications between the external memory controller120and the memory device110. Each channel115may be an example of a transmission medium that carries information between the host device105and the memory device110. Each channel115may include one or more signal paths (e.g., a transmission medium, a conductor) between terminals associated with the components of the system100. A signal path may be an example of a conductive path operable to carry a signal. For example, a channel115may be associated with a first terminal (e.g., including one or more pins, including one or more pads) at the host device105and a second terminal at the memory device110. A terminal may be an example of a conductive input or output point of a device of the system100, and a terminal may be operable to act as part of a channel. Channels115(and associated signal paths and terminals) may be dedicated to communicating one or more types of information. For example, the channels115may include one or more command and address (CA) channels186, one or more clock signal (CK) channels188, one or more data (DQ) channels190, one or more other channels192, or a combination thereof. In some examples, signaling may be communicated over the channels115using single data rate (SDR) signaling or double data rate (DDR) signaling. In SDR signaling, one modulation symbol (e.g., signal level) of a signal may be registered for each clock cycle (e.g., on a rising or falling edge of a clock signal). In DDR signaling, two modulation symbols (e.g., signal levels) of a signal may be registered for each clock cycle (e.g., on both a rising edge and a falling edge of a clock signal). 
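As an illustration of the difference between SDR and DDR signaling described above, the following sketch computes a peak transfer rate under the assumption of binary modulation (one bit per symbol per DQ line). The clock frequency and the number of DQ lines are hypothetical values chosen for the example, not values taken from the specification.

# Illustrative only: peak transfer rate for SDR vs. DDR signaling on a data channel.
def peak_transfer_rate_bits_per_s(clock_hz: float, num_dq_lines: int, double_data_rate: bool) -> float:
    # DDR registers a modulation symbol on both the rising and falling clock edges,
    # so it carries two symbols per clock cycle; SDR carries one.
    symbols_per_cycle = 2 if double_data_rate else 1
    return clock_hz * symbols_per_cycle * num_dq_lines


# Hypothetical example values (not taken from the specification):
clock_hz = 200e6       # 200 MHz clock on the CK channel
num_dq_lines = 16      # 16 DQ signal paths
print(peak_transfer_rate_bits_per_s(clock_hz, num_dq_lines, double_data_rate=False))  # 3.2e9 bits/s
print(peak_transfer_rate_bits_per_s(clock_hz, num_dq_lines, double_data_rate=True))   # 6.4e9 bits/s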
In some cases, one or more memory cells of a memory array170may become imprinted, which may refer to various conditions where a memory cell becomes predisposed toward storing one logic state over another, resistant to being written to a different logic state (e.g., a logic state different than a logic state stored prior to a write operation), or both. A likelihood of a memory cell becoming imprinted with a logic state may be related to a duration of storing a logic state (e.g., a continuous duration, an uninterrupted duration), a temperature of the memory cell while storing a logic state, inadvertent or malicious access patterns, or other factors. Although some imprinted memory cells may be recovered (e.g., unimprinted, unstuck, repaired, normalized, equalized) by applying recovery pulses (e.g., voltage pulses, current pulses) to the memory cells, some techniques for imprint recovery may be associated with relatively high power consumption. In accordance with examples as disclosed herein, a memory device110(e.g., a device memory controller155, a local memory controller165) may be configured to perform an imprint recovery procedure that includes applying one or more recovery pulses (e.g., voltage pulses) to memory cells, where each recovery pulse is associated with a voltage polarity and includes a first portion (e.g., a first duration) with a first voltage magnitude and a second portion (e.g., a second duration, following the first duration) with a second voltage magnitude that is lower than the first voltage magnitude. In some examples (e.g., for an FeRAM configuration), the first voltage magnitude may correspond to a voltage that imposes a saturation polarization on a memory cell (e.g., on a ferroelectric capacitor, a polarization corresponding to the associated voltage polarity) and the second voltage magnitude may correspond to a voltage magnitude that is high enough to maintain the saturation polarization (e.g., to prevent a reduction of polarization) of the memory cell. Maintaining the saturation polarization of the memory cell for a duration of the recovery pulse may support the memory cell returning to a non-imprinted (e.g., equalized, symmetric) state and, by reducing the recovery pulse to the second voltage magnitude, power consumption (e.g., of the memory device110, of the system100) is reduced compared to maintaining the recovery pulse at the first voltage magnitude. In some examples, such recovery techniques may include staggering (e.g., offsetting) the first durations of recovery pulses applied to different memory cells to reduce peak power consumption (e.g., peak current draw by the memory device110) as compared with examples in which such first durations are aligned or otherwise overlapping. FIG.2illustrates an example of a memory die200that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The memory die200may be an example of the memory dies160described with reference toFIG.1. In some examples, the memory die200may be referred to as a memory chip, a memory device, or an electronic memory apparatus. The memory die200may include one or more memory cells205that may each be programmable to store different logic states (e.g., programmed to one of a set of two or more possible states). For example, a memory cell205may be operable to store one bit of information at a time (e.g., a logic 0 or a logic 1). 
In some examples, a memory cell205(e.g., a multi-level memory cell) may be operable to store more than one bit of information at a time (e.g., a logic 00, logic 01, logic 10, a logic 11). In some examples, the memory cells205may be arranged in an array, such as a memory array170described with reference toFIG.1. In some examples, a memory cell205may store a state (e.g., a polarization state, a dielectric charge) representative of the programmable states in a capacitor. The memory cell205may include a logic storage component, such as capacitor240, and a switching component245(e.g., a cell selection component). A first node of the capacitor240may be coupled with the switching component245and a second node of the capacitor240may be coupled with a plate line220. The switching component245may be an example of a transistor or any other type of switch device that selectively establishes or de-establishes electronic communication between two components. In FeRAM architectures, the memory cell205may include a capacitor240(e.g., a ferroelectric capacitor) that includes a ferroelectric material to store a charge (e.g., a polarization) representative of the programmable state. In some other examples, a memory cell205may store a logic state using a configurable material, which may be referred to as a memory element, a memory storage element, a material element, a material memory element, a material portion, a polarity-written material portion, and others. A configurable material of a memory cell205may have one or more variable and configurable characteristics or properties (e.g., material states) that are representative of (e.g., correspond to) different logic states. For example, a configurable material may take different forms, different atomic configurations, different degrees of crystallinity, different atomic distributions, or otherwise maintain different characteristics. In some examples, such characteristics may be associated with different electrical resistances, different threshold voltages, or other properties that are detectable or distinguishable during a read operation to identify a logic state stored by the configurable material. In some examples, a configurable material may refer to a chalcogenide-based storage component. For example, a chalcogenide storage element may be used in phase change memory (PCM) cells or self-selecting memory cells. Chalcogenide storage elements may be examples of resistive memories or thresholding memories. The memory die200may include access lines (e.g., word lines210, digit lines215, and plate lines220) arranged in a pattern, such as a grid-like pattern. An access line may be a conductive line coupled with a memory cell205and may be used to perform access operations on the memory cell205. In some examples, word lines210may be referred to as row lines. In some examples, digit lines215may be referred to as column lines or bit lines. References to access lines, row lines, column lines, word lines, digit lines, bit lines, or plate lines, or their analogues, are interchangeable without loss of understanding. Memory cells205may be positioned at intersections of the word lines210, the digit lines215, or the plate lines220. Operations such as reading and writing may be performed on memory cells205by activating access lines such as a word line210, a digit line215, or a plate line220. 
By biasing a word line210, a digit line215, and a plate line220(e.g., applying a voltage to the word line210, digit line215, or plate line220), a single memory cell205may be accessed at their intersection. The intersection of a word line210and a digit line215in a two-dimensional or in a three-dimensional configuration may be referred to as an address of a memory cell205. Activating a word line210, a digit line215, or a plate line220may include applying a voltage to the respective line. Accessing the memory cells205may be controlled through a row decoder225, a column decoder230, or a plate driver235, or a combination thereof. For example, a row decoder225may receive a row address from the local memory controller265and activate a word line210based on the received row address. A column decoder230may receive a column address from the local memory controller265and activate a digit line215based on the received column address. A plate driver235may receive a plate address from the local memory controller265and activate a plate line220based on the received plate address. Selecting or deselecting the memory cell205may be accomplished by activating or deactivating the switching component245. The capacitor240may be in electronic communication with the digit line215using the switching component245. For example, the capacitor240may be isolated from digit line215when the switching component245is deactivated, and the capacitor240may be coupled with digit line215when the switching component245is activated. The sense component250may determine a state (e.g., a polarization state, a charge) stored on the capacitor240of the memory cell205and determine a logic state of the memory cell205based on the detected state. The sense component250may include one or more sense amplifiers to amplify the signal output of the memory cell205. The sense component250may compare the signal received from the memory cell205across the digit line215to a reference255(e.g., a reference voltage, a reference line). The detected logic state of the memory cell205may be provided as an output of the sense component250(e.g., to an input/output260), and may indicate the detected logic state to another component of a memory device (e.g., a memory device110) that includes the memory die200. The local memory controller265may control the operation of memory cells205through the various components (e.g., row decoder225, column decoder230, plate driver235, and sense component250). The local memory controller265may be an example of the local memory controller165described with reference toFIG.1. In some examples, one or more of the row decoder225, column decoder230, and plate driver235, and sense component250may be co-located with the local memory controller265. The local memory controller265may be operable to receive one or more of commands or data from one or more different memory controllers (e.g., an external memory controller120associated with a host device105, another controller associated with the memory die200), translate the commands or the data (or both) into information that can be used by the memory die200, perform one or more operations on the memory die200, and communicate data from the memory die200to a host (e.g., a host device105) based on performing the one or more operations. The local memory controller265may generate row signals and column address signals to activate the target word line210, the target digit line215, and the target plate line220. 
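As an illustration of the access flow described above (row decoder, column decoder, plate driver, cell selection, and sense comparison against a reference), the following is a minimal sketch. The signal levels, the reference value, and the mapping of a higher digit line signal to a logic 1 are illustrative assumptions rather than a description of the patented circuitry.

# Minimal sketch (hypothetical values) of the read access flow described above.
REFERENCE_VOLTAGE = 0.5  # arbitrary normalized reference (in the role of reference 255)


def read_cell(cell_signal_levels, row_address, col_address, reference=REFERENCE_VOLTAGE):
    """cell_signal_levels[row][col] holds the normalized signal a selected cell
    would develop on its digit line during a read."""
    word_line = row_address    # row decoder role: row address -> activated word line
    digit_line = col_address   # column decoder role: column address -> activated digit line
    # (The plate driver is assumed to have biased the corresponding plate line.)

    # Activating the word line closes the cell's switching component, coupling the
    # capacitor to the digit line; a signal then develops on that digit line.
    developed_signal = cell_signal_levels[word_line][digit_line]

    # The sense component compares the developed signal with the reference and
    # outputs a detected logic state.
    return 1 if developed_signal > reference else 0


# Example: a 2x2 toy array where cell (0, 1) develops a "high" signal.
signals = [[0.2, 0.8],
           [0.3, 0.1]]
assert read_cell(signals, row_address=0, col_address=1) == 1
assert read_cell(signals, row_address=1, col_address=0) == 0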
The local memory controller265also may generate and control various signals (e.g., voltages, currents) used during the operation of the memory die200. In general, the amplitude, the shape, or the duration of an applied voltage or current discussed herein may be varied and may be different for the various operations discussed in operating the memory die200. The local memory controller265may be operable to perform one or more access operations on one or more memory cells205of the memory die200. Examples of access operations may include a write operation, a read operation, a refresh operation, a precharge operation, or an activate operation, among others. In some examples, access operations may be performed by or otherwise coordinated by the local memory controller265in response to various access commands (e.g., from a host device105). The local memory controller265may be operable to perform other access operations not listed here or other operations related to the operating of the memory die200that are not directly related to accessing the memory cells205. In some cases, environmental conditions (e.g., a static bake) may shift or change a programmable characteristic of a memory cell205. For example, in an FeRAM application, a static bake may shift or alter the polarization capacity, coercivity, or other aspect of charge mobility of the memory cell205, which may cause the memory cell205to become biased toward a specific logic state (e.g., biased toward being written to or read as a logic 1 state, biased toward being written to or read as a logic 0 state). In a memory application using a configurable material (e.g., material memory elements), these or other conditions may cause a variable and configurable characteristic or property to resist being changed in response to write operations, such as a resistance to being programmed with a different atomic configuration, a resistance to being programmed with a different degree of crystallinity, a resistance to being programmed with a different atomic distribution, or a resistance to being programmed with some other characteristic associated with a different logic state. Such changes in a programmable characteristic may be referred to as an imprinting, and may cause read or write behavior that is different than when imprinting has not occurred (e.g., asymmetric behavior with respect to logic states). For example, when a write operation, intended to change a logic state of a memory cell, is performed on an imprinted memory cell205having an initial state, the memory cell205may remain or return to the initial (e.g., imprinted) state, or may be otherwise read as storing the initial state. For example, if a memory cell205is imprinted in the 0 logic state, the memory cell205may continue to remain in the 0 logic state, or continue to be read as storing the logic 0 state, after an attempt to write the memory cell205with a logic 1 state (e.g., after performing a write operation corresponding to the logic 1 state). 
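The imprint behavior described above can be illustrated with a toy model in which a cell that has become imprinted with a logic state resists a normal write and continues to read as the imprinted state until a recovery procedure is applied. The model below is a deliberately simplified sketch of that behavior, not a description of the underlying cell physics.

# Toy model (illustrative only) of memory cell imprint and recovery.
class ToyCell:
    def __init__(self, stored: int):
        self.stored = stored
        self.imprinted_state = None  # None means the cell is not imprinted

    def bake(self):
        """Static bake / long storage: the currently stored state becomes imprinted."""
        self.imprinted_state = self.stored

    def write(self, value: int):
        # A normal write fails to flip an imprinted cell out of its imprinted state.
        if self.imprinted_state is not None and value != self.imprinted_state:
            return  # cell remains in (or returns to) the imprinted state
        self.stored = value

    def read(self) -> int:
        return self.stored

    def recover(self):
        """Imprint recovery procedure restores normal write behavior."""
        self.imprinted_state = None


cell = ToyCell(stored=0)
cell.bake()               # cell becomes imprinted with logic 0
cell.write(1)
assert cell.read() == 0   # the write to logic 1 does not take effect
cell.recover()
cell.write(1)
assert cell.read() == 1   # after recovery, the write succeeds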
In accordance with examples as disclosed herein, components of a memory die200(e.g., a local memory controller265, a row decoder225, a column decoder230, a plate driver235) may be configured to perform an imprint recovery procedure that includes applying one or more recovery pulses (e.g., voltage pulses) to memory cells205, where each recovery pulse is associated with a voltage polarity and includes a first portion (e.g., a first duration) with a first voltage magnitude and a second portion (e.g., a second duration, following the first duration) with a second voltage magnitude that is lower than the first voltage magnitude. In some examples (e.g., for an FeRAM configuration), the first voltage magnitude may correspond to a voltage that imposes a saturation polarization on a memory cell205(e.g., on a ferroelectric capacitor240, a polarization corresponding to the associated voltage polarity) and the second voltage magnitude may correspond to a voltage magnitude that is high enough to maintain the saturation polarization (e.g., to prevent a reduction of polarization) of the memory cell205. Maintaining the saturation polarization of the memory cell205for a duration of the recovery pulse may support the memory cell205returning to a non-imprinted (e.g., equalized, symmetric) state and, by reducing the recovery pulse to the second voltage magnitude, power consumption (e.g., of the memory die200) is reduced compared to maintaining the recovery pulse at the first voltage magnitude. In some examples, such recovery techniques may include staggering (e.g., offsetting) the first durations of recovery pulses applied to different memory cells205(e.g., to different rows of memory cells205, to different columns of memory cells205, to different sections of memory cells205) to reduce peak power consumption (e.g., peak current draw by the memory die200) as compared with examples in which such first durations are aligned or otherwise overlapping. FIGS.3A and3Billustrate examples of non-linear electrical properties of a ferroelectric memory cell with hysteresis plots300-aand300-bin accordance with examples as disclosed herein. The hysteresis plots300-aand300-bmay illustrate examples of a writing process and a reading process, respectively, for a memory cell205employing a ferroelectric capacitor240as described with reference toFIG.2. The hysteresis plots300-aand300-bdepict the charge, Q, stored on the ferroelectric capacitor240as a function of a voltage difference Vcap, between the terminals of the ferroelectric capacitor240(e.g., when charge is permitted to flow into or out of the ferroelectric capacitor according to the voltage difference Vcap). For example, the voltage difference Vcapmay represent the difference in voltage between a plate line side of the capacitor240and a digit line side of the capacitor240(e.g., a difference between a voltage at a plate node and a voltage at a bottom node, which may be referred to as Vplate-Vbottom, as illustrated inFIG.2). A ferroelectric material is characterized by an electric polarization where the material may maintain a non-zero electric charge in the absence of an electric field. Examples of ferroelectric materials include barium titanate (BaTiO3), lead titanate (PbTiO3), lead zirconium titanate (PZT), and strontium bismuth tantalate (SBT). Ferroelectric capacitors240described herein may include these or other ferroelectric materials. 
Electric polarization within a ferroelectric capacitor240results in a net charge at the surface of the ferroelectric material, and attracts opposite charge through the terminals of the ferroelectric capacitor240. Thus, charge may be stored at the interface of the ferroelectric material and the capacitor terminals. As depicted in the hysteresis plot300-a, a ferroelectric material used in a ferroelectric capacitor240may maintain a positive or negative polarization when there is no net voltage difference between the terminals of the ferroelectric capacitor240. For example, the hysteresis plot300-aillustrates two possible polarization states, a charge state305-aand a charge state310-a, which may represent a negatively saturated polarization state and a positively saturated polarization state, respectively. The charge states305-aand310-amay be at a physical condition illustrating remnant polarization (Pr) values, which may refer to the polarization or charge that remains upon removing the external bias (e.g., voltage). According to the example of the hysteresis plot300-a, the charge state305-amay represent a logic 0 when no voltage difference is applied across the ferroelectric capacitor240, and the charge state310-amay represent a logic 1 when no voltage difference is applied across the ferroelectric capacitor240. In some examples, the logic values of the respective charge states or polarization states may be reversed or interpreted in an opposite manner to accommodate other schemes for operating a memory cell205. A logic 0 or 1 may be written to the memory cell by controlling the electric polarization of the ferroelectric material, and thus the charge on the capacitor terminals, by applying a net voltage difference across the ferroelectric capacitor240. For example, the voltage315may be a voltage equal to or greater than a positive saturation voltage, and applying the voltage315across the ferroelectric capacitor240may result in charge accumulation until the charge state305-bis reached (e.g., writing a logic 0). Upon removing the voltage315from the ferroelectric capacitor240(e.g., applying a zero net voltage across the terminals of the ferroelectric capacitor240), the charge state of the ferroelectric capacitor240may follow the path320shown between the charge state305-band the charge state305-aat zero voltage across the capacitor. In other words, charge state305-amay represent a logic 0 state at an equalized voltage across a ferroelectric capacitor240that has been positively saturated. Similarly, voltage325may be a voltage equal to or lesser than a negative saturation voltage, and applying the voltage325across the ferroelectric capacitor240may result in charge accumulation until the charge state310-bis reached (e.g., writing a logic 1). Upon removing the voltage325from the ferroelectric capacitor240(e.g., applying a zero net voltage across the terminals of the ferroelectric capacitor240), the charge state of the ferroelectric capacitor240may follow the path330shown between the charge state310-band the charge state310-aat zero voltage across the capacitor. In other words, charge state310-amay represent a logic 1 state at an equalized voltage across a ferroelectric capacitor240that has been negatively saturated. In some examples, the voltage315and the voltage325, representing saturation voltages, may have the same magnitude, but opposite polarity across the ferroelectric capacitor240.
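As a compact restatement of the write behavior described above, the relationship may be sketched as follows (the symbols Vw for the applied write voltage and Vsat for the saturation voltage magnitude are assumed here for illustration and do not appear in the figures):

V_w \geq +V_{\mathrm{sat}} :\quad Q \rightarrow Q_{305\text{-}b} \;\xrightarrow{\,V_w \rightarrow 0\,}\; Q_{305\text{-}a} \quad (\text{remnant state, logic } 0)
V_w \leq -V_{\mathrm{sat}} :\quad Q \rightarrow Q_{310\text{-}b} \;\xrightarrow{\,V_w \rightarrow 0\,}\; Q_{310\text{-}a} \quad (\text{remnant state, logic } 1), \qquad \lvert V_{315} \rvert = \lvert V_{325} \rvert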
Although the example of hysteresis plot300-aillustrates a logic 0 corresponding to charge state305-a, and a logic 1 corresponding to charge state310-a, logic states may correspond to different charge states in some examples, such as a logic 0 corresponding to charge state310-aand a logic 1 corresponding to charge state305-a, among other examples. To read, or sense, the stored state of a ferroelectric capacitor240, a voltage may also be applied across the ferroelectric capacitor240. In response to the applied voltage, the subsequent charge Q stored by the ferroelectric capacitor changes, and the degree of the change may depend on the initial polarization state, the applied voltages, intrinsic or other capacitance on access lines, and other factors. In other words, the charge state or access line voltage resulting from a read operation may depend on whether the charge state305-a, or the charge state310-a, or some other charge state was initially stored, among other factors. The hysteresis plot300-billustrates examples of access operations for reading stored charge states (e.g., charge states305-aand310-a). In some examples, a read voltage335may be applied, for example, as a voltage difference via a plate line220and a digit line215as described with reference toFIG.2. The hysteresis plot300-bmay illustrate read operations where the read voltage335is a positive voltage difference Vcap(e.g., where Vplate-Vbottomis positive). A positive read voltage across the ferroelectric capacitor240may be referred to as a "plate high" read operation, where a plate line220is taken initially to a high voltage, and a digit line215is initially at a low voltage (e.g., a ground voltage). Although the read voltage335is shown as a positive voltage across the ferroelectric capacitor240, in alternative access operations a read voltage may be a negative voltage across the ferroelectric capacitor240, which may be referred to as a "plate low" read operation. The read voltage335may be applied across the ferroelectric capacitor240while a memory cell205is selected (e.g., by activating a switching component245via a word line210as described with reference toFIG.2). Upon applying the read voltage335to the ferroelectric capacitor240, charge may flow into or out of the ferroelectric capacitor240via the associated digit line215and plate line220, and, in some examples, different charge states or access line voltages may result depending on whether the ferroelectric capacitor240was at the charge state305-a(e.g., a logic 0) or at the charge state310-a(e.g., a logic 1), or some other charge state. When performing a read operation on a ferroelectric capacitor240at the charge state305-a(e.g., a logic 0), additional positive charge may accumulate across the ferroelectric capacitor240, and the charge state may follow path340until reaching the charge and voltage of the charge state305-c. The amount of charge flowing through the capacitor240may be related to the intrinsic or other capacitance of a digit line215(e.g., intrinsic capacitance of the digit line215, capacitance of a capacitor or capacitive element coupled with the digit line215, or a combination thereof), or other access line (e.g., a signal line opposite an amplifier, such as a charge transfer sensing amplifier, from a digit line215).
In a "plate high" read configuration, a read operation associated with the charge states305-aand305-c, or more generally a read operation associated with the logic 0 state, may be associated with a relatively small amount of charge transfer (e.g., compared to a read operation associated with the charge states310-aand310-c, or more generally, compared to reading the logic 1 state). As shown by the transition between the charge state305-aand the charge state305-c, the resulting voltage350across the ferroelectric capacitor240may be a relatively large positive value due to the relatively large change in voltage at the capacitor240for the given change in charge. Thus, upon reading a logic 0 in a "plate high" read operation, the digit line voltage, equal to the difference of a plate line voltage, VPL, and Vcap(e.g., Vplate-Vbottom) at the charge state305-c, may be a relatively low voltage. Such a read operation may not change the remnant polarization of the ferroelectric capacitor240that stored the charge state305-aand thus, after performing the read operation, the ferroelectric capacitor240may return to the charge state305-avia path340when the read voltage335is removed (e.g., by applying a zero net voltage across the ferroelectric capacitor240, by equalizing the voltage across the ferroelectric capacitor240). Thus, performing a read operation with a positive read voltage on a ferroelectric capacitor240with a charge state305-amay be considered a non-destructive read process. In some cases, a rewrite operation may not be involved or may be omitted in such scenarios. When performing the read operation on the ferroelectric capacitor240at the charge state310-a(e.g., a logic 1), the stored charge may reverse polarity or may not reverse polarity as a net positive charge accumulates across the ferroelectric capacitor240, and the charge state may follow the path360until reaching the charge and voltage of the charge state310-c. The amount of charge flowing through the ferroelectric capacitor240may again be related to the intrinsic or other capacitance of the digit line215. In a "plate high" read configuration, a read operation associated with the charge states310-aand310-c, or more generally a read operation associated with the logic 1 state, may be associated with a relatively large amount of charge transfer, or a relatively smaller capacitor voltage, Vcap(e.g., compared to a read operation associated with the charge states305-aand305-c, or more generally, compared to reading the logic 0 state). As shown by the transition between the charge state310-aand the charge state310-c, the resulting voltage355may, in some cases, be a relatively small positive value due to the relatively small change in voltage at the capacitor240for the given change in charge. Thus, upon reading a logic 1 in a "plate high" read operation, the digit line voltage, equal to the difference of a plate line voltage, VPL, and Vcap(e.g., Vplate-Vbottom) at the charge state310-c, may be a relatively high voltage. The transition from the charge state310-ato the charge state310-cmay be illustrative of a sensing operation that is associated with a partial reduction or partial reversal in polarization or charge of a ferroelectric capacitor240of a memory cell205(e.g., a reduction in the magnitude of charge Q from the charge state310-ato a charge state310-d).
In other words, according to the properties of the ferroelectric material, after performing the read operation the ferroelectric capacitor240may not return to the charge state310-awhen the read voltage335is removed (e.g., by applying a zero net voltage across the ferroelectric capacitor240, by equalizing the voltage across the ferroelectric capacitor240). Rather, when applying a zero net voltage across the ferroelectric capacitor240after a read operation of the charge state310-awith read voltage335, the charge state may follow path365from the charge state310-cto the charge state310-d, illustrating a net reduction in polarization magnitude (e.g., a less negatively polarized charge state than initial charge state310-a, illustrated by the difference in charge between the charge state310-aand the charge state310-d). Thus, performing a read operation with a positive read voltage on a ferroelectric capacitor240with a charge state310-amay be described as a destructive read process. In some cases, a rewrite operation (e.g., applying a voltage325) may be performed after performing such a read operation, which may cause the memory cell to transition from the charge state310-dto the charge state310-a(e.g., indirectly, such as via a charge state310-b). In various examples, such a rewrite operation may be performed after any read operation, or may be performed based on some circumstances (e.g., when a read voltage is opposite from the write voltage associated with a detected logic state). However, in some sensing schemes, a reduced remnant polarization may still be read as the same stored logic state as a saturated remnant polarization state (e.g., supporting detection of a logic 1 from both the charge state310-aand the charge state310-d), thereby providing a degree of non-volatility for a memory cell205with respect to read operations. In other examples (e.g., when a ferroelectric material is able to maintain polarization in the presence of at least some level of a depolarizing field, when a ferroelectric material has sufficient coercivity, not shown), after performing a read operation the ferroelectric capacitor240may return to the charge state310-awhen a read voltage is removed, and performing such a read operation with a positive read voltage on a ferroelectric capacitor240with a charge state310-amay be described as a non-destructive read process. In such cases, rewrite operations may not be expected after such a read operation. The position of the charge state305-cand the charge state310-cafter initiating a read operation may depend on various factors, including the specific sensing scheme and circuitry. In some cases, the charge associated with a read operation may depend on the net capacitance of the digit line215coupled with the memory cell205, which may include an intrinsic capacitance, integrator capacitors, and others. For example, if a ferroelectric capacitor240is electrically coupled with a digit line215initially at 0V and the read voltage335is applied to a plate line220, the voltage of the digit line215may rise when the memory cell205is selected due to charge flowing from the ferroelectric capacitor240to the net capacitance of the digit line215. Thus, in some examples, a voltage measured at a sense component250may not be equal to the read voltage335, or the resulting voltages350or355, and instead may depend on the voltage of the digit line215following a period of charge sharing. 
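In an idealized charge-sharing picture (a simplification assuming an initially grounded digit line with a net capacitance, a quantity not named in the figures), the digit line voltage following selection may be approximated as:

V_{DL} \approx \frac{\Delta Q_{\mathrm{cell}}}{C_{DL,\mathrm{net}}}, \qquad V_{\mathrm{cap}} = V_{\mathrm{plate}} - V_{DL}

where ΔQcell is the charge transferred from the ferroelectric capacitor240and CDL,net is the net digit line capacitance. Because a stored logic 1 is associated with a larger charge transfer than a stored logic 0 in the plate high case, this approximation yields a higher digit line voltage for a stored logic 1, consistent with the behavior described above.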
The initial state (e.g., charge state, logic state) of the ferroelectric capacitor240may be determined by comparing the voltage of a digit line215, or signal line, where applicable, resulting from the read operation with a reference voltage (e.g., a reference255). In some examples, the digit line voltage may be the difference between the read voltage335and the final voltage across the capacitor240(e.g., (read voltage335-voltage350) when reading the ferroelectric capacitor240having a stored charge state305-a, (read voltage335-voltage355) when reading the ferroelectric capacitor240having a stored charge state310-a). In some examples, the digit line voltage may be the sum of the plate line voltage and the final voltage across the ferroelectric capacitor240(e.g., voltage350when reading the ferroelectric capacitor240having a stored charge state305-a, or voltage355when reading the ferroelectric capacitor240having a stored charge state310-a). In some examples, read operations of a memory cell205may be associated with a fixed voltage of a digit line215, where a charge state of a ferroelectric capacitor240after initiating a read operation may be the same regardless of its initial charge state. For example, in a read operation where a digit line215and plate line220are held at a fixed relative voltage that supports the read voltage335, the ferroelectric capacitor240may proceed to a charge state370for both the case where the ferroelectric capacitor initially stored a charge state305-aand the case where the ferroelectric capacitor initially stored a charge state310-a. Accordingly, rather than using a difference in voltage (e.g., of a digit line215) to detect an initial charge state or logic state, in some examples, the initial charge state or logic state of the ferroelectric capacitor240may be determined based at least in part on the difference in charge associated with the read operation. For example, as illustrated by hysteresis plot300-b, a logic 0 may be detected based on difference in charge, Q, between charge state305-aand charge state370(e.g., a relatively small amount of charge transfer), and a logic 1 may be detected based on a difference in charge, Q, between charge state310-aand charge state370(e.g., a relatively large amount of charge transfer). In some examples, such a detection may be supported by a charge-transfer sensing amplifier, a cascode (e.g., a transistor configured in a cascode arrangement), or other signal development circuitry between a digit line215and a signal line that is coupled with a sense amplifier, where a voltage of the signal line may be based at least in part on the amount of charge transfer of a capacitor240after initiating a read operation (e.g., where the described charge transfer may correspond to an amount of charge that passes through the charge-transfer sensing amplifier, cascode, or other signal development circuitry). In such examples, the voltage of the signal line may be compared with a reference voltage (e.g., at a sense component250) to determine the logic state initially stored by the ferroelectric capacitor240, despite the digit line215being held at a fixed voltage level. In some examples, if a digit line215is held at a fixed read voltage335, a capacitor240may be positively saturated after a read operation irrespective of whether the capacitor240was initially at a charge state305-a(e.g., a logic 0) or initially at a charge state310-a(e.g., a logic 1). 
Accordingly, after such a read operation, the capacitor240may, at least temporarily, be charged or polarized according to a logic 0 state irrespective of its initial or intended logic state. Thus, a rewrite operation may be expected at least when the capacitor240is intended to store a logic 1 state, where such a rewrite operation may include applying a write voltage325to store a logic 1 state as described with reference to hysteresis plot300-a. Such rewrite operations may be configured or otherwise described as a selective rewrite operation, since a rewrite voltage may not be applied when the capacitor240is intended to store a logic 0 state. In some examples, such an access scheme may be referred to as a "2Pr" scheme, where the difference in charge for distinguishing a logic 0 from a logic 1 may be equal to two times the remnant polarization of a memory cell205(e.g., a difference in charge between charge state305-a, a positively saturated charge state, and charge state310-a, a negatively saturated charge state). The examples of hysteresis plots300-aand300-bmay be illustrative of normalized (e.g., equalized) behavior of a memory cell205including a ferroelectric capacitor240when subjected to write biasing or read biasing. However, based on various operating or environmental conditions, ferroelectric capacitors240may become imprinted with a particular logic state, which may refer to various conditions where a ferroelectric capacitor240becomes predisposed toward storing one logic state over another, resistant to being written to a different logic state (e.g., a logic state different than a stored logic state prior to a write operation), or both. For example, as compared with the hysteresis plots300-aand300-b, an imprinted ferroelectric capacitor240may be associated with a different coercivity (e.g., a higher coercivity or shifted coercivity with respect to changing or inverting a polarization state), a reduced saturation polarization, a shallower slope of polarization, or other characteristics that may be asymmetric with respect to different logic states. Memory arrays having imprinted ferroelectric capacitors240may be associated with read errors, write errors, or other behaviors that can impair operations of a memory device, or a system that includes a memory device. In accordance with examples as disclosed herein, imprinted ferroelectric capacitors240may be recovered using various imprint recovery or repair processes, such as applying one or more recovery pulses to memory cells of the memory arrays, where each recovery pulse includes a first portion with a first voltage magnitude and a second portion with a second voltage magnitude that is lower than the first voltage magnitude. FIG.4illustrates an example of non-linear electrical properties of imprinted ferroelectric memory cells with a hysteresis plot400in accordance with examples as disclosed herein. For example, the hysteresis plot400illustrates an example of characteristics of a ferroelectric capacitor240that may shift as a result of imprinting with a state (e.g., an imprinting with a logic 1, an imprinting with a charge state410-a, which may be equal to the charge state310-aor different than the charge state310-adescribed with reference toFIGS.3A and3B), which may be related to an alteration of configuration of electrostatic domains in a ferroelectric memory cell205.
The shifted characteristics of the hysteresis plot400, illustrated by imprinted hysteresis curve440, may result from conditions during which a ferroelectric capacitor240has maintained a charge state for a relatively long duration, or maintained a charge state under relatively high temperature conditions, or both (e.g., under static bake conditions), among other conditions associated with memory cell imprint. In some cases, the hysteresis plot400may be an example of a shift from an unimprinted hysteresis curve430to an imprinted hysteresis curve440, which may be associated with various shifts in coercivity of a ferroelectric capacitor240. For example, a ferroelectric capacitor240may experience a shift420, associated with a shift in coercive voltage to change out of an imprinted polarization state (e.g., an increase in coercive voltage magnitude), or a shift425, associated with a shift in coercive voltage to return to an imprinted polarization state (e.g., a decrease in coercive voltage magnitude), or both, in which case a shift420and a shift425may be associated with a same or similar amount of shift (e.g., along the voltage axis) or a different amount of shift. A shift to the imprinted hysteresis curve440may be associated with an increased resistance (e.g., an asymmetric resistance) to changing polarization during a write operation (e.g., associated with applying a voltage315) or during a read operation (e.g., associated with applying a read voltage335), such as a collective increase of resistance of domains from changing polarization state (e.g., where domains are able to have their polarization reversed, but where such a reversal collectively expects a relatively higher voltage bias). For example, according to the hysteresis plot400, when an imprinted ferroelectric capacitor240storing a charge state410-ais biased with a voltage315(e.g., a write voltage associated with writing a logic 0), charge may accumulate until the charge state405-ais reached. Compared with the charge state305-b, which may correspond to a saturated condition of a normalized ferroelectric capacitor240(e.g., in accordance with the unimprinted hysteresis curve430, where polarization of the ferroelectric capacitor may be fully reversed at the voltage315), the charge state405-amay not correspond to a saturated condition, and instead may illustrate an example of a partial polarization reversal in response to the write voltage315. Such a response may be associated with the shift420, corresponding to a change of the coercive voltage associated with positively saturating the ferroelectric capacitor240, in which case the voltage315may not have a high enough magnitude to positively saturate the negatively imprinted ferroelectric capacitor240. Additionally, or alternatively, removing the voltage315from the ferroelectric capacitor240(e.g., applying a zero net voltage across the terminals of the ferroelectric capacitor240after applying the voltage315) may be associated with a reduction in polarization relative to the charge state405-a, such as during conditions in which a degree of imprinting may prevent domains (e.g., charge domains) from remaining in a written state. For example, when the voltage315is removed from the ferroelectric capacitor240, the charge state of the ferroelectric capacitor240may follow the path450shown between the charge state405-aand the charge state405-bat zero voltage across the ferroelectric capacitor240. 
In various examples, the charge state405-bmay have a lower charge than the charge state305-a(e.g., a charge state of an unimprinted ferroelectric capacitor240corresponding to a logic 0 at an equalized voltage across the ferroelectric capacitor). Moreover, in some examples, the path450may include at least some loss of polarization (e.g., returning towards an imprinted charge state or polarization state when a write bias is removed), which may be referred to as backswitching, drop, or recoil. Such a response may be associated with the shift425, corresponding to a change of the coercive voltage associated with negatively saturating the ferroelectric capacitor240(e.g., or losing a positive polarization), in which case the ferroelectric capacitor240may be unable to maintain at least some magnitude of positive polarization at an equalized voltage (e.g., unable to maintain a positive polarization associated with applying a voltage315, including a relatively lower positive polarization associated with the charge state405-a). Although the hysteresis plot400illustrates the charge state405-bas having a net charge, Q, that is positive, under various circumstances (e.g., various imprint severity, various degrees of coercivity shift, various degrees of polarization reversal among a set of domains of a ferroelectric capacitor240), a net charge of a charge state405-bmay have a positive value or a negative value. Under various circumstances, the charge state405-bmay be illustrative of storing a logic 0 or a logic 1, or may be illustrative of a charge state that may be read by a memory device as storing a logic 0 or a logic 1, or may be considered as an indeterminate state. In other words, as a result of the shift from the unimprinted hysteresis curve430to the imprinted hysteresis curve440, applying the voltage315to an imprinted memory cell205may not successfully write a logic 0 to a ferroelectric capacitor240imprinted with a logic 1, or may not support the ferroelectric capacitor240being successfully read as a logic 0, or both. Although the hysteresis plot400illustrates simplified examples of mechanisms that may be related to imprinting in a ferroelectric capacitor240, other mechanisms or conditions, or combinations thereof, may be associated with memory cell imprint. For example, a memory cell205imprinted with a logic 1 may not be associated with a charge state310-aas described with reference toFIGS.3A and3B, and may have a different charge state410-aafter imprinting (e.g., due to charge degradation during imprint, due to saturation polarization collapse of an imprinted logic state or charge state during imprint itself, due to charge leakage, due to a change in saturation polarization that may change or reduce a charge state410-awhen rewritten with a logic 1 state, or any combination thereof). In another example, imprint may change (e.g., widen) a distribution of polarization reversal voltages across a set of domains in a ferroelectric capacitor240, which may be associated with a shallower slope of Q versus Vcapbetween one polarization state and another (e.g., across a polarization reversal region, in a region associated with a coercive voltage), which may be accompanied by a collective shift in coercivity or a change in polarization reversal capacity. In some examples, imprinting in a ferroelectric capacitor240may be associated with other phenomena, or various combinations of these and other phenomena. 
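The coercivity shifts 420 and 425 described with reference to the hysteresis plot400can be summarized in terms of coercive voltage magnitudes (the symbols below are assumed for illustration and are not used in the figures):

\lvert V'_{c,\mathrm{out}} \rvert = \lvert V_{c,\mathrm{out}} \rvert + \Delta V_{420}, \qquad \lvert V'_{c,\mathrm{return}} \rvert = \lvert V_{c,\mathrm{return}} \rvert - \Delta V_{425}

where Vc,out is the coercive voltage for switching out of the imprinted polarization state and Vc,return is the coercive voltage for returning to it. Under this summary, a write with the voltage315may fail to fully reverse the polarization of an imprinted cell when the magnitude of the voltage315is less than the shifted magnitude |V'c,out|, consistent with the partial polarization reversal described above.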
To reduce a degree of imprint of memory cells205(e.g., to reduce or eliminate a shift420, or a shift425, or both, to return charge mobility of a memory cell205to a normalized state, to return to an unimprinted hysteresis curve430, to restore a remnant polarization capacity, to normalize coercivity), a memory device110(e.g., a memory die200) may perform an imprint recovery operation that includes one or more imprint recovery pulses. In some examples, imprint recovery may be supported by holding a memory cell205in an opposite polarization state (e.g., opposite from an imprinted state) over a long enough duration to alter the local electrostatic configuration that is causing a memory cell205to revert to the imprinted state. In some examples, recovery may be aided by time under applied bias (e.g., via a recovery pulse) and charge state switching (e.g., bias switching, charge switching, polarization switching, via recovery pulses having different polarities). Regarding time under bias, mobile charge defects may change configuration within a memory cell205in alignment with the applied bias, which may also be aligned with an intended polarization state. In some examples, such a process may scale with total cumulative time under bias. However, the time under bias may be beneficial if the internal electric field aligns with the applied electric field. For example, significant buildup of local charge within a memory cell205may screen an applied field and prevent a local reconfiguration of defects in some parts of the memory cell205. Although unipolar (e.g., non-switching, non-cycling) bias can be used to support imprint recovery, and have some advantages, cycling methods may be more effective in some examples. Regarding charge state switching (e.g., polarization switching), in some examples, repeatedly switching polarity of an applied bias may provide repeated opportunities for domains within the memory cell205to undergo a stochastic switching event. For example, for domains that, according to a probability distribution, may or may not undergo a polarization switching event at a given voltage or bias, a repeated charge switching may provide more opportunities for such a domain to switch polarization, enhancing a probability that such a switching will actually occur. In some examples, state or bias switching may also raise an internal temperature of a memory cell205, which may further enhance defect or domain mobility. Accordingly, both an increase in temperature and repeated opportunities for repolarization may aid imprint recovery of a memory cell205. Mechanisms such as these may contribute to phenomena that may be referred to as “wakeup” or “recovery” from an as-processed (e.g., time zero, initial, starting) imprint state of a memory cell205. Such mechanisms may also contribute to recovery from fatigue, which may be related to charge domains that are symmetrically not participating in a polarization switching process (e.g., not participating in polarization switching whether switching from a logic 0 polarization to a logic 1 polarization or switching from a logic 1 polarization to a logic 0 polarization, which may be associated with a decrease in saturation polarization). In some examples, fatigue recovery may be driven by “waking up” domains within a memory cell205that had not previously been participating in polarization switching. 
Since fatigue may be defined as loss of polarization signal induced by repeated switching of a polarization state, recovery from fatigue may rely on variation in an applied bias (e.g., higher bias or longer pulses compared with typical or initial operating conditions). In some examples, an imprint recovery pulse may include applying a voltage (e.g., a polarization voltage) across a memory cell205for a duration. For a memory cell205that includes a ferroelectric capacitor, for example, such a voltage may be associated with at least some degree of polarization that is opposite from an imprinted state (e.g., an imprinted charge, an imprinted polarization). For example, referring to the hysteresis plot400, which may illustrate an imprint with a negative polarization (e.g., a negative imprint polarity), an imprint recovery pulse may include applying a voltage associated with a positive polarization (e.g., a positive polarity) for some duration over which polarization behavior may equalize (e.g., to encourage a return to symmetric coercive voltages, to encourage a return to symmetric polarization characteristics). In some examples, maintaining a relatively high voltage magnitude during an imprint recovery pulse may be associated with unnecessary power consumption. For example, various portions of a memory die200may be associated with charge leakage, including inadvertent leakage paths through dielectric portions of a memory die, or intentional leakage paths that support configured shunting characteristics, among others. Maintaining a relatively high voltage magnitude during an imprint recovery pulse in the presence of such leakage paths may accordingly be associated with relatively high power consumption. However, some memory architectures may not require a relatively high voltage magnitude over an entire duration of an imprint recovery pulse. In an illustrative example, imprint recovery of a ferroelectric capacitor240may be correlated with a duration over which a polarization is maintained at the ferroelectric capacitor240(e.g., as a time under polarization), which may not necessarily involve maintaining a polarizing voltage itself across the ferroelectric capacitor240. Rather, during an imprint recovery pulse, a voltage with a relatively high magnitude may be implemented to establish a level of polarization at the ferroelectric capacitor240and such biasing may be reduced, along a linear region of the associated hysteresis curve (e.g., without reaching or approaching an opposite coercive voltage), in a manner that reduces charge but maintains polarization. Thus, the effectiveness of an imprint recovery pulse may be maintained (e.g., relative to a given degree of polarization), but at a lower voltage that is associated with less charge leakage and therefore lower power consumption. In accordance with examples as disclosed herein, an imprint recovery pulse may include biasing a memory cell205with a first voltage, such as a voltage455, which may be associated with an imprint recovery polarization (e.g., a saturation polarization) of an imprinted memory cell205at a charge state465-a. Although the voltage455is illustrated as having a greater magnitude than the voltage315(e.g., a write voltage), in some examples, the voltage455may have a same magnitude as the voltage315or a lower magnitude than the voltage315. After reaching the charge state465-a, the biasing may be reduced to a voltage having the same polarity as the voltage455but with a lower magnitude, such as a voltage460.
The voltage460may be associated with the same degree of polarization as the voltage455, but at a relatively reduced voltage and charge state465-b(e.g., maintaining the polarization as established by the voltage455). Charge leakage in the associated memory device110may be relatively reduced at the relatively lower magnitude voltage460, which may support a reduction in power consumption for the same or similar effectiveness of imprint recovery. In some examples, a magnitude of the voltage455, or of the voltage460, or both may be based on a detected or inferred degree (e.g., severity) of imprint of memory cells205. For example, a relatively higher magnitude of the voltage455may be implemented for conditions associated with a relatively larger shift420(e.g., to ensure a degree of polarization, such as a saturation polarization), or a relatively higher magnitude of the voltage460may be implemented for conditions associated with a relatively larger shift425(e.g., to prevent or limit a degree of backswitching after applying the voltage455). In various examples, a memory device110, or a host device105, or both may detect various operating conditions to infer a degree of imprint, which may support the determination of the voltage455, or the voltage460, or both. For example, a memory device110, or a host device105, or both may monitor such conditions as a duration of memory cells205storing certain logic states, or a temperature associated with the memory cells205(e.g., while storing certain logic states), among other conditions associated with a degree of imprint (e.g., detected error conditions). In some examples, the memory device110may determine the values of the voltage455, or the voltage460, or both, for performing an imprint recovery procedure at the memory device110. In some other examples, a host device105may determine the values of the voltage455, or the voltage460, or both, for the memory device110to perform an imprint recovery procedure, and may transmit signaling to the memory device110indicating the voltage455, or the voltage460, or both (e.g., indicating a magnitude of such voltages). In some examples, such determinations of the voltage455or the voltage460may support configuring imprint recovery pulses with a magnitude sufficient to support imprint recovery but without a magnitude associated with undue power consumption. Although some aspects of memory cell imprint are described with reference to ferroelectric memory applications, imprint management in accordance with the present disclosure may also be applicable to other memory technologies that undergo drift or other shifts in characteristics that may be asymmetric with respect to different logic states. For example, material memory elements, such as phase change, resistive, or thresholding memories, may undergo material segregation or immobilization as a result of memory cell imprint (e.g., as a result of storing a logic state over a duration, as a result of storing a logic state at an elevated temperature), where such effects may be associated with (e.g., asymmetrically associated with, drift towards) storing or reading a particular logic state over another. 
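Returning briefly to the selection of the voltage 455 and the voltage 460, the following Python sketch illustrates, under stated assumptions only, how a memory device or host device might map monitored conditions to pulse magnitudes; the function name, the severity proxy, the thresholds, and the voltage values are hypothetical and are not taken from the disclosure.

    # Hypothetical sketch: derive imprint recovery pulse magnitudes from an
    # inferred severity of imprint (e.g., time spent storing a logic state and
    # temperature). Thresholds and voltages are illustrative assumptions only.
    def select_recovery_voltages(hours_stored: float, temperature_c: float):
        severity = hours_stored * max(temperature_c - 25.0, 1.0)  # crude severity proxy
        if severity > 10_000.0:
            return 1.8, 0.20  # larger inferred shift: higher polarizing and hold magnitudes
        if severity > 1_000.0:
            return 1.5, 0.10
        return 1.2, 0.05

    v_high, v_hold = select_recovery_voltages(hours_stored=500.0, temperature_c=85.0)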
In some examples, memory cells205in such applications that are imprinted may be associated with an increased resistance to changing from one configurable material property or characteristic to another, which may correspond to such phenomena as a relatively greater resistance to changes from one threshold voltage to another, a relatively greater resistance to changes from one electrical resistance to another, and other characteristics. In various examples, an imprint recovery operation in accordance with examples as disclosed herein may normalize (e.g., equalize) characteristics of material memory elements, such as normalizing material distributions, moving defects to one end or another, distributing defects more evenly through a cell, or mobilizing a material memory element to undergo atomic reconfiguration, among other examples. FIG.5illustrates an example of a timing diagram500that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The timing diagram500illustrates examples of biasing in accordance with voltage pulses505(e.g., imprint recovery pulses) that may be implemented by a memory device110(e.g., a memory die200) during an imprint recovery procedure. In various examples, a memory device110may determine to perform such an imprint recovery procedure (e.g., after a power on, after detecting a failure when reading a reference pattern stored in a memory array170of the memory device110, based on a time or temperature of storing logic states at the memory device110), or the memory device110may perform such an imprint recovery procedure in response to a command from a host device105(e.g., a command transmitted in response to operating conditions detected by the host device105, such as time, temperature, or error conditions, or various combinations thereof). The illustrated biasing may be applied to one or more memory cells205of the memory device110in accordance with various techniques for imprint recovery (e.g., with a voltage source via one or more digit lines215, with a voltage source via one or more plate lines220, while one or more word lines210are activated) where, in various scenarios, such concurrently-biased memory cells205may be associated with the same direction of imprint or different directions of imprint. The voltage pulse505-amay be an example of a recovery pulse associated with a positive polarity. In some examples (e.g., for a unipolar recovery procedure), the voltage pulse505-amay be selected based on the memory device110or a host device105detecting a negative imprint polarity to be corrected by one or more imprint recovery pulses having a positive polarity. In such examples, the voltage pulse505-amay not be followed by a voltage pulse505having a negative polarity (e.g., in an example of the timing diagram500that omits the voltage pulse505-b). In some alternative examples, an initial voltage pulse505having a negative polarity may be implemented in response to detecting a positive imprint polarity. In some other examples, an initial voltage pulse505having a positive polarity, or a negative polarity, may be a default condition and may be followed by one or more voltage pulses having an opposite polarity (e.g., voltage pulse505-b). At t1, the biasing of the voltage pulse505-amay include coupling one or more memory cells205with one or more voltage sources in accordance with a voltage VA. 
The voltage VAmay correspond to an imprint recovery polarization voltage, such as a voltage455, where a magnitude of the voltage VAmay be determined by the memory device110, or may be determined by a host device105and indicated to the memory device110, or may be a value configured at the memory device110(e.g., a default value, a preconfigured value). Over a duration510, voltage across the memory cells205may settle (e.g., rise, in the example of duration510-a) which may be associated with accumulating charge across ferroelectric capacitors240, or along an intrinsic capacitance between the ferroelectric capacitors240and the voltage sources, among other characteristics associated with the voltage transition during a duration510. At t2, the biasing of the memory cells205may reach the voltage VA, which may be held over a duration515(e.g., a duration515-a, where the memory cells205may be coupled with one or more voltage sources in accordance with a first voltage magnitude during both the duration510-aand the duration515-a). At or before t3, the memory cells205may be polarized in accordance with the voltage VA(e.g., having reached a charge state465-a, which may be associated with a saturation polarization). In some examples, a duration between t1 and t3 (e.g., a combination of a duration510and a duration515) may be configured to account for different voltage settling times to ensure a polarization of the memory cells205during a voltage pulse505. In some examples, a duration515may be configured as a relatively high-magnitude voltage hold duration, supporting aspects of imprint recovery at the polarization and the relatively high voltage magnitude associated with the voltage VA, in combination with other aspects of imprint recovery during later durations of a voltage pulse505. In some examples, a duration515may be nearly zero (e.g., a duration associated with reaching a charge state465-a, but not necessarily holding at the charge state465-a), which may support a relatively greater reduction in power consumption during a voltage pulse505(e.g., by limiting a duration at the relatively higher magnitude of voltage VA). At t3, the biasing of the voltage pulse505-amay include initiating a reduction of the biasing (e.g., of the voltage pulse505-a, of the memory cells205, a magnitude reduction) from the voltage VAto a voltage VB. The voltage VBmay correspond to a voltage that maintains a level of polarization associated with the voltage VA(e.g., a voltage higher than a coercive voltage associated with a negative polarization, a voltage or charge state before a depolarization region), such as a voltage460, where a magnitude of the voltage VBmay be determined by the memory device110, or may be determined by a host device105and indicated to the memory device110, or may be a value configured at the memory device110(e.g., a default value, a preconfigured value). In various examples, the reduction in biasing may be implemented at t3 as a decrease in voltage of the voltage sources coupled at t1, or by coupling the memory cells205with one or more different voltage sources, among other techniques. 
Over a duration520, a voltage across the memory cells205may settle (e.g., fall, in the example of duration520-a) which may be associated with a reduction of charge across a ferroelectric capacitor240(e.g., as a transition from a charge state465-ato a charge state465-b), or along an intrinsic capacitance between the ferroelectric capacitors240and the voltage sources, among other characteristics associated with the voltage settling during a duration520. The settling during a duration520may be configured such that the biasing does not overshoot (e.g., fall below, in the example of duration520-a) the voltage VB, or may be configured such that any overshoot past VBis small enough to avoid or limit a loss of polarization established by the biasing during a duration515. At t4, the biasing of the memory cells205may reach the voltage VB, which may be held over a duration525(e.g., a duration525-a, where the memory cells205may be coupled with one or more voltage sources in accordance with a second magnitude during both the duration520-aand the duration525-a). For example, during the duration525-a, the memory cells205may maintain the level of polarization (e.g., a positive polarization in the case of voltage pulse505-a) established with the voltage VA, but at the lower voltage magnitude of VB. Thus, in some examples, a duration515, a duration520, and a duration525may support at least some of the memory cells205reverting to a normalized condition (e.g., recovering from an imprint with a negative polarization) in accordance with a level of polarization (e.g., positive polarization) associated with the voltage VA, but with the duration520and the duration525being associated with a lower power consumption than the duration515, due at least in part to a reduction in leakage charge. For example, in an illustrative configuration where VAis set to 1.5V and VBis set to 100 mV, the described techniques for switch and hold biasing for imprint recovery during a duration525may reduce power consumption associated with charge leakage by over 99% compared to a duration515(e.g., under circumstances where leakage power is proportional to voltage squared). At t5, the biasing of the voltage pulse505-amay include removing the biasing (e.g., of the voltage pulse505-a, of the memory cells205) of the voltage VB, which may include decoupling the memory cells205from the voltage sources or otherwise equalizing a voltage across the memory cells205. Over a duration530, a voltage across the memory cells205may proceed to zero volts, which may be associated with reducing charge across the ferroelectric capacitors240, or along an intrinsic capacitance between the ferroelectric capacitors240and the voltage sources, among other characteristics associated with a voltage settling (e.g., equalization) during a duration530. In some examples, a voltage pulse505may be followed by one or more other voltage pulses505, including one or more voltage pulses505having a same polarity, or one or more voltage pulses505having an opposite polarity, or various combinations thereof (e.g., a sequence of voltage pulses505having alternating polarities). For example, the timing diagram500illustrates an example where the voltage pulse505-ais followed by a voltage pulse505-b, having an opposite polarity (e.g., a negative polarity), which may support aspects of a bipolar imprint recovery procedure (e.g., where a direction of imprint may not be detected, to support imprint recovery techniques associated with charge state switching). 
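Before detailing the opposite-polarity pulse 505-b, the "over 99%" figure noted above can be checked directly. Assuming, as stated, that leakage power is proportional to the square of the applied voltage, the following short calculation uses the illustrative values of 1.5 V and 100 mV:

    # Worked check of the illustrative configuration above, assuming leakage
    # power proportional to the square of the applied voltage.
    v_a = 1.5   # volts, magnitude held during a duration 515
    v_b = 0.1   # volts (100 mV), magnitude held during a duration 525
    ratio = (v_b / v_a) ** 2
    print(f"leakage power at VB is about {ratio:.2%} of that at VA")  # ~0.44%
    print(f"reduction of about {1 - ratio:.2%}")                      # ~99.6%, i.e., over 99%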
At t7, the biasing of the voltage pulse505-bmay include coupling the one or more memory cells205with one or more voltage sources in accordance with a voltage −VA(e.g., a voltage having the same magnitude as voltage VA, but with an opposite polarity) where, over a duration510-b, voltage across the memory cells205may fall. At t8, the biasing of the memory cells205may reach the voltage −VA, which may be held over a duration515-b. At or before t9, the memory cells205may be polarized in accordance with the voltage −VA. At t9, the biasing of the voltage pulse505-bmay include initiating a reduction of a magnitude of the biasing from the voltage −VAto a voltage −VB(e.g., a voltage having the same magnitude as voltage VB, but with an opposite polarity), where the voltage −VBmay be configured to maintain a level of polarization associated with the voltage −VA. Over a duration520-b, a voltage across the memory cells205may rise and, at t10, the biasing of the memory cells205may reach the voltage −VB, which may be held over a duration525-b. During the duration515-b, the duration520-b, and the duration525-b, the memory cells205may maintain the level of polarization (e.g., a negative polarization) established with the voltage −VA, but at the lower voltage magnitude of −VB. Thus, in some examples, the duration515-b, the duration520-b, and the duration525-bmay support at least some of the memory cells205reverting to a normalized condition (e.g., recovering from an imprint with a positive polarization) in accordance with a level of polarization (e.g., a negative polarization) associated with the voltage −VA, but with a power consumption during the duration520-band the duration525-bbeing lower than the duration515-b, due at least in part to a reduction in leakage charge. At t11, the biasing of the voltage pulse505-bmay include removing the biasing of the voltage −VB, which may include decoupling the memory cells205from the voltage sources or otherwise equalizing a voltage across the memory cells205. Although the example of timing diagram500illustrates an example where voltage pulses505-aand505-bare separated by a gap duration535, in some examples, such a gap duration may be omitted. Further, in some examples, the memory cells205may not be explicitly equalized between voltage pulses505. For example, referring to the examples of voltage pulses505-aand505-b, rather than decoupling the memory cells205from voltage sources at t5, the memory cells205may be biased in accordance with the voltage −VA(e.g., in accordance with the operations of t7, but at the timing of t5). The voltages and timing of the operations of timing diagram500are for illustrative purposes and are not meant to indicate a particular relative voltage or a particular relative duration between one operation and another. For example, various operations in accordance with examples as disclosed herein may occur over a duration that is relatively shorter or relatively longer than illustrated, or with voltages that are relatively closer or farther in magnitude, among other differences. Further, various operations illustrated in the timing diagram500may occur over overlapping or concurrent durations in support of the techniques described herein. FIG.6illustrates an example of a timing diagram600that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. 
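Before the staggered timing of the timing diagram 600 is described, the single-cell pulse sequencing of the timing diagram 500 may be summarized by the following Python sketch, which builds a piecewise list of (duration, target voltage) segments for a positive switch-and-hold pulse followed by a mirrored negative pulse; the durations, voltages, optional gap, and function name are illustrative assumptions rather than values from the disclosure.

    # Hypothetical sketch of a bipolar switch-and-hold sequence, expressed as
    # (duration_in_microseconds, target_voltage) segments.
    def switch_and_hold_sequence(v_high, v_hold, t_high_us, t_hold_us, gap_us=0.0):
        positive = [(t_high_us, +v_high), (t_hold_us, +v_hold)]
        negative = [(t_high_us, -v_high), (t_hold_us, -v_hold)]
        gap = [(gap_us, 0.0)] if gap_us > 0 else []  # optional gap duration between pulses
        return positive + gap + negative + [(0.0, 0.0)]  # end with equalization

    # For example, +1.5 V then hold at +0.1 V, followed by the negative mirror:
    print(switch_and_hold_sequence(1.5, 0.1, t_high_us=5.0, t_hold_us=50.0, gap_us=1.0))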
The timing diagram600illustrates an example of imprint recovery biasing in accordance with a staggered application of voltage pulses505, which may be applied to different sections of a memory array (e.g., different rows of memory cells205, different columns of memory cells205, different banks of memory cells205). For example, the timing diagram600illustrates an example of staggering the application of two voltage pulses505(e.g., a voltage pulse505-cassociated with a first section and a voltage pulse505-dassociated with a second section), which may reduce a peak current (e.g., a peak power consumption) associated with an imprint recovery procedure compared with such biasing of the different sections with the same timing. Although the example of timing diagram600illustrates the staggered application of two voltage pulses505(e.g., associated with two different sections of a memory array), the described techniques may be extended to any quantity of voltage pulses505applied in parallel (e.g., with any quantity of sections of memory cells205), which may include various examples of voltage pulses505having the same polarity, or voltage pulses505having different polarities, or various combinations thereof (e.g., a sequence of voltage pulses505applied to a given section in accordance with the same polarity or alternating polarities). The voltage pulse505-cmay be applied in accordance with durations510-c,515-c,520-c,525-c, and535-c, which may be examples of the respective durations described with reference toFIG.5. For example, during the durations510-cand515-c, a first section of one or more memory cells205may be coupled with one or more voltage sources in accordance with the voltage VAand, during the durations520-cand525-c, the first section of one or more memory cells205may be coupled with one or more voltage sources in accordance with the voltage VB. In some examples, the duration510-c, or the durations510-cand515-c, may be associated with relatively high current (e.g., associated with a charge transfer during the duration510-cto settle to the voltage VA, associated with a polarization of memory cells205of the section, associated with charge leakage in accordance with the voltage VA). In some examples, to reduce a peak current associated with an imprint recovery procedure, it may be beneficial to delay the timing of a voltage pulse505for the second section of one or more memory cells205relative to the timing of the voltage pulse505-c. The voltage pulse505-dillustrates an example of such staggering relative to the voltage pulse505-c, where the voltage pulse505-dmay be applied in accordance with durations510-d,515-d,520-d,525-d, and535-d. As illustrated, the durations510-d,515-d,520-d,525-d, and535-dare delayed relative to the timing of the respective durations of the voltage pulse505-c(e.g., in accordance with a delay610). For example, at t1d, memory cells205of the second section may be coupled with one or more voltage sources in accordance with the voltage VA, which may coincide with the memory cells205of the first section being coupled with one or more voltage sources in accordance with the voltage VB(e.g., a voltage magnitude reduction). Thus, in the example of voltage pulses505-cand 505-d, a single section of memory cells205may be coupled with a relatively high voltage magnitude (e.g., a magnitude of the voltage VA) at a time. 
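As a hedged sketch of the staggering just described, the following Python fragment offsets each section's start time by at least the high-magnitude phase of the preceding section so that, under the assumption that the high-magnitude phase dominates current draw, only one section is coupled with the voltage VA at a time; the function name and numeric values are illustrative assumptions.

    # Hypothetical sketch: stagger per-section pulse start times so that the
    # high-magnitude phases of different sections do not overlap.
    def staggered_start_times(num_sections, t_high_us, margin_us=0.0):
        delay = t_high_us + margin_us  # at most one section at the high magnitude at a time
        return [section * delay for section in range(num_sections)]

    print(staggered_start_times(num_sections=4, t_high_us=5.0))  # [0.0, 5.0, 10.0, 15.0]

A smaller delay, as discussed next, trades some overlap of lower-current phases for a shorter overall procedure.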
In another example, voltage pulses505may be staggered in accordance with a reduced timing shift compared to the illustration of timing diagram600(e.g., a delay of less than the delay610). For example, for circumstances in which a current consumption during a duration515is relatively low compared to a duration510, an end of a duration510for one section may be followed by (e.g., directly) a duration510of another section. Referring to the example of operation timing of the voltage pulses505-cand 505-d, such circumstances may correspond to the timing of t1d coinciding with the timing of t2c. More generally, such techniques for staggering may include the duration510-doverlapping, at least in part, with the duration515-c, or the duration520-c, or both. Additionally, or alternatively, such techniques for staggering may include the duration515-doverlapping, at least in part, with the duration520-c, or the duration525-c, or both, among other examples. In some other examples, such staggering may be further tightened to accommodate other combinations of sections (e.g., a greater quantity of sections), such as configuring a time t1d between the times t1c and t2c (e.g., where the duration510-dmay be partially overlapping with the duration510-c), and so on, among other examples. In accordance with these and other examples, imprint recovery procedures may be performed over a shorter overall duration, or across a greater quantity of sections, or both for a given peak current (e.g., a given power consumption). FIG.7shows a block diagram700of a memory device720that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The memory device720may be an example of aspects of a memory device as described with reference toFIGS.1through6. The memory device720, or various components thereof, may be an example of means for performing various aspects of switch and hold biasing for memory cell imprint recovery as described herein. For example, the memory device720may include a recovery procedure management component725, a biasing component730, an imprint evaluation component735, a voltage determination component740, a signaling reception component745, a memory condition evaluation component750, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The recovery procedure management component725may be configured as or otherwise support a means for determining to perform an imprint recovery procedure on one or more memory cells of a memory array. The biasing component730may be configured as or otherwise support a means for biasing a memory cell of the one or more memory cells, during a first duration of a voltage pulse, with a first voltage magnitude having a voltage polarity based at least in part on determining to perform the imprint recovery procedure. In some examples, the biasing component730may be configured as or otherwise support a means for reducing the biasing of the memory cell, during a second duration of the voltage pulse after the first duration, from the first voltage magnitude to a second voltage magnitude having the voltage polarity. In some examples, the biasing component730may be configured as or otherwise support a means for holding the biasing of the memory cell, during a third duration of the voltage pulse after the second duration, at the second voltage magnitude having the voltage polarity. 
In some examples, the biasing of the memory cell may be reduced during the second duration without falling below the second voltage magnitude between the first duration and the third duration. In some examples, the first voltage magnitude may be associated with a polarization of a ferroelectric capacitor of the memory cell, and the second voltage magnitude may be associated with maintaining the polarization of the ferroelectric capacitor. In some examples, the imprint evaluation component735may be configured as or otherwise support a means for identifying an indication of a severity of imprint of the one or more memory cells of the memory array. In some examples, the voltage determination component740may be configured as or otherwise support a means for determining the second voltage magnitude based at least in part on the indication of the severity of imprint, and reducing the biasing of the memory cell during the second duration and holding the biasing of the memory cell during the third duration is based at least in part on the determined second voltage magnitude. In some examples, the memory condition evaluation component750may be configured as or otherwise support a means for determining a duration of storing logic states at the memory array, or a temperature associated with the memory array, or both, and identifying the indication of the severity of imprint is based at least in part on the duration of storing logic states, or the temperature, or both. In some examples, the signaling reception component745may be configured as or otherwise support a means for receiving signaling from a host device, and determining to perform the imprint recovery procedure may be based at least in part on the signaling from the host device. In some examples, the signaling reception component745may be configured as or otherwise support a means for receiving signaling from the host device that indicates the second voltage magnitude, and reducing the biasing of the memory cell during the second duration and holding the biasing of the memory cell during the third duration may be based at least in part on the indicated second voltage magnitude. In some examples, the biasing component730may be configured as or otherwise support a means for biasing the memory cell, during a fourth duration of a second voltage pulse after the third duration, with the first voltage magnitude having a second voltage polarity based at least in part on determining to perform the imprint recovery procedure. In some examples, the biasing component730may be configured as or otherwise support a means for reducing the biasing of the memory cell, during a fifth duration of the second voltage pulse after the fourth duration, from the first voltage magnitude to the second voltage magnitude having the second voltage polarity. In some examples, the biasing component730may be configured as or otherwise support a means for holding the biasing of the memory cell, during a sixth duration of the second voltage pulse after the fifth duration, at the second voltage magnitude having the second voltage polarity. In some examples, the biasing component730may be configured as or otherwise support a means for biasing a second memory cell of the one or more memory cells, during a seventh duration of a third voltage pulse after the first duration, with the first voltage magnitude having the voltage polarity based at least in part on determining to perform the imprint recovery procedure. 
In some examples, the biasing component730may be configured as or otherwise support a means for reducing the biasing of the second memory cell, during an eighth duration of the third voltage pulse after the seventh duration, from the first voltage magnitude to the second voltage magnitude having the voltage polarity. In some examples, the biasing component730may be configured as or otherwise support a means for holding the biasing of the second memory cell, during a ninth duration of the third voltage pulse after the eighth duration, at the second voltage magnitude having the voltage polarity. In some examples, the seventh duration may be overlapping with the second duration, or the third duration, or both the second duration and the third duration. FIG.8shows a block diagram800of a host device820that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The host device820may be an example of aspects of a host device as described with reference toFIGS.1through6. The host device820, or various components thereof, may be an example of means for performing various aspects of switch and hold biasing for memory cell imprint recovery as described herein. For example, the host device820may include an imprint indicator component825, a command transmitter component830, a signaling transmission component840, an imprint evaluation component845, a voltage determination component850, a memory condition evaluation component855, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The imprint indicator component825may be configured as or otherwise support a means for determining, at a host device, a condition indicative of imprinted memory cells of a memory device. The command transmitter component830may be configured as or otherwise support a means for transmitting a command to perform an imprint recovery procedure based at least in part on determining the condition indicative of imprinted memory cells. In some examples, the imprint recovery procedure may include biasing a memory cell of the memory device, during a first duration, with a first voltage magnitude in accordance with a voltage polarity, reducing the biasing of the memory cell, during a second duration after the first duration, from the first voltage magnitude to a second voltage magnitude in accordance with the voltage polarity, and holding the biasing of the memory cell, during a third duration after the second duration, at the second voltage magnitude in accordance with the voltage polarity. In some examples, the first voltage magnitude may be associated with a polarization of a ferroelectric capacitor of the memory cell, and the second voltage magnitude may be configured to maintain the polarization of the ferroelectric capacitor. In some examples, the signaling transmission component840may be configured as or otherwise support a means for transmitting signaling that indicates the second voltage magnitude. In some examples, the imprint evaluation component845may be configured as or otherwise support a means for identifying an indication of a severity of imprint. In some examples, the voltage determination component850may be configured as or otherwise support a means for determining the second voltage magnitude based at least in part on the indication of the severity of imprint. 
In some examples, the memory condition evaluation component855may be configured as or otherwise support a means for detecting a duration of storing logic states at the memory device, or a temperature associated with the memory device, or both, and identifying the indication of the severity of imprint may be based at least in part on the duration of storing logic states, or the temperature, or both. FIG.9shows a flowchart illustrating a method900that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The operations of method900may be implemented by a memory device or its components as described herein. For example, the operations of method900may be performed by a memory device as described with reference toFIGS.1through7. In some examples, a memory device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally, or alternatively, the memory device may perform aspects of the described functions using special-purpose hardware. At905, the method may include determining to perform an imprint recovery procedure on one or more memory cells of a memory array. The operations of905may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of905may be performed by a recovery procedure management component725as described with reference toFIG.7. At910, the method may include biasing a memory cell of the one or more memory cells, during a first duration of a voltage pulse, with a first voltage magnitude having a voltage polarity based at least in part on determining to perform the imprint recovery procedure. The operations of910may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of910may be performed by a biasing component730as described with reference toFIG.7. At915, the method may include reducing the biasing of the memory cell, during a second duration of the voltage pulse after the first duration, from the first voltage magnitude to a second voltage magnitude having the voltage polarity. The operations of915may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of915may be performed by a biasing component730as described with reference toFIG.7. At920, the method may include holding the biasing of the memory cell, during a third duration of the voltage pulse after the second duration, at the second voltage magnitude having the voltage polarity. The operations of920may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of920may be performed by a biasing component730as described with reference toFIG.7. In some examples, an apparatus as described herein may perform a method or methods, such as the method900. 
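For illustration only, the flow of the method 900 may be sketched in Python as follows; the stub class, method names, trigger check, and timing values are hypothetical assumptions rather than the claimed implementation, and a real memory device would realize these steps in hardware.

    # Hypothetical device-side sketch of the method 900 flow:
    # determine -> bias at the first magnitude -> reduce -> hold at the second magnitude.
    import time

    class _StubCell:
        """Illustrative stand-in for a memory cell biasing interface."""
        def suspected_imprint(self) -> bool:
            return True
        def bias(self, volts: float) -> None:
            print(f"bias -> {volts:+.2f} V")

    def run_imprint_recovery(cell, v_high, v_hold, t_high_s, t_hold_s, host_command=None):
        # 905: determine to perform the procedure (host command or local inference)
        if host_command is None and not cell.suspected_imprint():
            return False
        cell.bias(v_high)       # 910: first duration, first voltage magnitude
        time.sleep(t_high_s)
        cell.bias(v_hold)       # 915: reduce to the second magnitude, same polarity
        time.sleep(t_hold_s)    # 920: hold at the second magnitude
        cell.bias(0.0)          # equalize after the pulse
        return True

    run_imprint_recovery(_StubCell(), v_high=1.5, v_hold=0.1, t_high_s=0.001, t_hold_s=0.01)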
The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor), or any combination thereof for performing the following aspects of the present disclosure: Aspect 1: A method, apparatus, or non-transitory computer-readable medium including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining to perform an imprint recovery procedure on one or more memory cells of a memory array; biasing a memory cell of the one or more memory cells, during a first duration of a voltage pulse, with a first voltage magnitude having a voltage polarity based at least in part on determining to perform the imprint recovery procedure; reducing the biasing of the memory cell, during a second duration of the voltage pulse after the first duration, from the first voltage magnitude to a second voltage magnitude having the voltage polarity; and holding the biasing of the memory cell, during a third duration of the voltage pulse after the second duration, at the second voltage magnitude having the voltage polarity. Aspect 2: The method, apparatus, or non-transitory computer-readable medium of aspect 1 where the biasing of the memory cell is reduced during the second duration without falling below the second voltage magnitude between the first duration and the third duration. Aspect 3: The method, apparatus, or non-transitory computer-readable medium of any of aspects 1 through 2 where the first voltage magnitude is associated with a polarization of a ferroelectric capacitor of the memory cell and the second voltage magnitude is associated with maintaining the polarization of the ferroelectric capacitor. Aspect 4: The method, apparatus, or non-transitory computer-readable medium of any of aspects 1 through 3, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for identifying an indication of a severity of imprint of the one or more memory cells of the memory array and determining the second voltage magnitude based at least in part on the indication of the severity of imprint, where reducing the biasing of the memory cell during the second duration and holding the biasing of the memory cell during the third duration is based at least in part on the determined second voltage magnitude. Aspect 5: The method, apparatus, or non-transitory computer-readable medium of aspect 4, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining a duration of storing logic states at the memory array, or a temperature associated with the memory array, or both, where identifying the indication of the severity of imprint is based at least in part on the duration of storing logic states, or the temperature, or both. Aspect 6: The method, apparatus, or non-transitory computer-readable medium of any of aspects 1 through 5, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for receiving signaling from a host device, where determining to perform the imprint recovery procedure is based at least in part on the signaling from the host device. 
Aspect 7: The method, apparatus, or non-transitory computer-readable medium of any of aspects 1 through 6, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for receiving signaling from the host device that indicates the second voltage magnitude, where reducing the biasing of the memory cell during the second duration and holding the biasing of the memory cell during the third duration is based at least in part on the indicated second voltage magnitude. Aspect 8: The method, apparatus, or non-transitory computer-readable medium of any of aspects 1 through 7, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for biasing the memory cell, during a fourth duration of a second voltage pulse after the third duration, with the first voltage magnitude having a second voltage polarity based at least in part on determining to perform the imprint recovery procedure; reducing the biasing of the memory cell, during a fifth duration of the second voltage pulse after the fourth duration, from the first voltage magnitude to the second voltage magnitude having the second voltage polarity; and holding the biasing of the memory cell, during a sixth duration of the second voltage pulse after the fifth duration, at the second voltage magnitude having the second voltage polarity. Aspect 9: The method, apparatus, or non-transitory computer-readable medium of any of aspects 1 through 8, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for biasing a second memory cell of the one or more memory cells, during a seventh duration of a third voltage pulse after the first duration, with the first voltage magnitude having the voltage polarity based at least in part on determining to perform the imprint recovery procedure; reducing the biasing of the second memory cell, during an eighth duration of the third voltage pulse after the seventh duration, from the first voltage magnitude to the second voltage magnitude having the voltage polarity; and holding the biasing of the second memory cell, during a ninth duration of the third voltage pulse after the eighth duration, at the second voltage magnitude having the voltage polarity. Aspect 10: The method, apparatus, or non-transitory computer-readable medium of aspect 9 where the seventh duration is overlapping with the second duration, or the third duration, or both the second duration and the third duration. FIG.10shows a flowchart illustrating a method1000that supports switch and hold biasing for memory cell imprint recovery in accordance with examples as disclosed herein. The operations of method1000may be implemented by a host device or its components as described herein. For example, the operations of method1000may be performed by a host device as described with reference toFIGS.1through6and8. In some examples, a host device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally, or alternatively, the host device may perform aspects of the described functions using special-purpose hardware. At1005, the method may include determining (e.g., at a host device) a condition indicative of imprinted memory cells of a memory device. The operations of1005may be performed in accordance with examples as disclosed herein. 
In some examples, aspects of the operations of1005may be performed by an imprint indicator component825as described with reference toFIG.8. At1010, the method may include transmitting a command (e.g., to a memory device) to perform an imprint recovery procedure based at least in part on determining the condition indicative of imprinted memory cells. In some examples, the imprint recovery procedure may include biasing a memory cell of the memory device, during a first duration, with a first voltage magnitude in accordance with a voltage polarity, reducing the biasing of the memory cell, during a second duration after the first duration, from the first voltage magnitude to a second voltage magnitude in accordance with the voltage polarity, and holding the biasing of the memory cell, during a third duration after the second duration, at the second voltage magnitude in accordance with the voltage polarity. The operations of1010may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1010may be performed by a command transmitter component830as described with reference toFIG.8. In some examples, an apparatus as described herein may perform a method or methods, such as the method1000. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor), or any combination thereof for performing the following aspects of the present disclosure: Aspect 11: A method, apparatus, or non-transitory computer-readable medium including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining (e.g., at a host device) a condition indicative of imprinted memory cells of a memory device and transmitting a command to perform an imprint recovery procedure based at least in part on determining the condition indicative of imprinted memory cells, where the imprint recovery procedure includes biasing a memory cell of the memory device, during a first duration, with a first voltage magnitude in accordance with a voltage polarity, reducing the biasing of the memory cell, during a second duration after the first duration, from the first voltage magnitude to a second voltage magnitude in accordance with the voltage polarity, and holding the biasing of the memory cell, during a third duration after the second duration, at the second voltage magnitude in accordance with the voltage polarity. Aspect 12: The method, apparatus, or non-transitory computer-readable medium of aspect 11 where the first voltage magnitude is associated with a polarization of a ferroelectric capacitor of the memory cell and the second voltage magnitude is configured to maintain the polarization of the ferroelectric capacitor. Aspect 13: The method, apparatus, or non-transitory computer-readable medium of any of aspects 11 through 12, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for transmitting signaling that indicates the second voltage magnitude. Aspect 14: The method, apparatus, or non-transitory computer-readable medium of any of aspects 11 through 13, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for identifying an indication of a severity of imprint and determining the second voltage magnitude based at least in part on the indication of the severity of imprint. 
Aspect 15: The method, apparatus, or non-transitory computer-readable medium of aspect 14, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for detecting a duration of storing logic states at the memory device, or a temperature associated with the memory device, or both, where identifying the indication of the severity of imprint is based at least in part on the duration of storing logic states, or the temperature, or both. It should be noted that the methods described herein are possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined. An apparatus is described. The following provides an overview of aspects of the apparatus as described herein: Aspect 16: An apparatus, including: a memory array including a plurality of memory cells; and circuitry coupled with the memory array and configured to cause the apparatus to: determine to perform an imprint recovery procedure on at least a portion of the memory array; bias a memory cell of the plurality, during a first duration of a voltage pulse, with a first voltage magnitude having a voltage polarity based at least in part on determining to perform the imprint recovery procedure; reduce the biasing of the memory cell, during a second duration of the voltage pulse after the first duration, from the first voltage magnitude to a second voltage magnitude having the voltage polarity; and hold the biasing of the memory cell, during a third duration of the voltage pulse after the second duration, at the second voltage magnitude having the voltage polarity. Aspect 17: The apparatus of aspect 16, where the circuitry is configured to reduce the biasing of the memory cell during the second duration without falling below the second voltage magnitude between the first duration and the third duration. Aspect 18: The apparatus of any of aspects 16 through 17, where: the first voltage magnitude is associated with a polarization of a ferroelectric capacitor of the memory cell; and the second voltage magnitude is associated with maintaining the polarization of the ferroelectric capacitor. Aspect 19: The apparatus of any of aspects 16 through 18, where the circuitry is further configured to cause the apparatus to: identify an indication of a severity of imprint of the memory array; determine the second voltage magnitude based at least in part on the indication of the severity of imprint; and reduce the biasing of the memory cell during the second duration and hold the biasing of the memory cell during the third duration based at least in part on the determined second voltage magnitude. Aspect 20: The apparatus of aspect 19, where the circuitry is further configured to cause the apparatus to: detect a duration of storing logic states at the memory array, or a temperature associated with the memory array, or both; and identify the indication of the severity of imprint based at least in part on the duration of storing logic states, or the temperature, or both. Aspect 21: The apparatus of any of aspects 16 through 20, where the circuitry is further configured to cause the apparatus to: receive a command, where determining to perform the imprint recovery procedure is based at least in part on receiving the command. 
Aspect 22: The apparatus of any of aspects 16 through 21, where the circuitry is further configured to cause the apparatus to: receive an indication of the second voltage magnitude; and reduce the biasing of the memory cell during the second duration and hold the biasing of the memory cell during the third duration based at least in part on the indicated second voltage magnitude. Aspect 23: The apparatus of any of aspects 16 through 22, where the circuitry is further configured to cause the apparatus to: bias the memory cell, during a fourth duration of a second voltage pulse after the third duration, with the first voltage magnitude having a second voltage polarity based at least in part on determining to perform the imprint recovery procedure; reduce the biasing of the memory cell, during a fifth duration of the second voltage pulse after the fourth duration, from the first voltage magnitude to the second voltage magnitude having the second voltage polarity; and hold the biasing of the memory cell, during a sixth duration of the second voltage pulse after the fifth duration, at the second voltage magnitude having the second voltage polarity. Aspect 24: The apparatus of any of aspects 16 through 23, where the circuitry is further configured to cause the apparatus to: bias a second memory cell of the plurality, during a seventh duration of a third voltage pulse after the first duration, with the first voltage magnitude having the voltage polarity based at least in part on determining to perform the imprint recovery procedure; reduce the biasing of the second memory cell, during an eighth duration of the third voltage pulse after the seventh duration, from the first voltage magnitude to the second voltage magnitude having the voltage polarity; and hold the biasing of the second memory cell, during a ninth duration of the third voltage pulse after the eighth duration, at the second voltage magnitude having the voltage polarity. Aspect 25: The apparatus of aspect 24, where the seventh duration is overlapping with the second duration, or the third duration, or both the second duration and the third duration. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (e.g., in conductive contact with, connected with, coupled with) one another if there is any electrical path (e.g., conductive path) between the components that can, at any time, support the flow of signals (e.g., charge, current, voltage) between the components. At any given time, a conductive path between components that are in electronic communication with each other (e.g., in conductive contact with, connected with, coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. 
A conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to a condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components (e.g., over a conductive path) to a closed-circuit relationship between components in which signals are capable of being communicated between components (e.g., over the conductive path). When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components from one another, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOS), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component (e.g., a transistor) discussed herein may represent a field-effect transistor (FET), and may comprise a three-terminal component including a source (e.g., a source terminal), a drain (e.g., a drain terminal), and a gate (e.g., a gate terminal). The terminals may be connected to other electronic components through conductive materials (e.g., metals, alloys). The source and drain may be conductive, and may comprise a doped (e.g., heavily-doped, degenerate) semiconductor region. The source and drain may be separated by a doped (e.g., lightly-doped) semiconductor region or channel. If the channel is n-type (e.g., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (e.g., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. 
For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions (e.g., code) on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. For example, the various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a processor, such as a DSP, an ASIC, an FPGA, discrete gate logic, discrete transistor logic, discrete hardware components, other programmable logic device, or any combination thereof designed to perform the functions described herein. A processor may be an example of a microprocessor, a controller, a microcontroller, a state machine, or any type of processor. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). 
Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a computer, or a processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
127,781
11862222
DETAILED DESCRIPTION In order to make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, various embodiments of the present disclosure will be described below in detail with reference to the drawings. However, those of ordinary skill in the art may understand that, in the embodiments of the present disclosure, numerous technical details are set forth in order to enable a reader to better understand the present disclosure. Nevertheless, the technical solutions claimed in the present disclosure can be implemented without these technical details and with various changes and modifications based on the embodiments below. In the conventional art, the refresh command interval is chosen to meet the data hold time in a medium- to high-temperature environment, and a fixed number of rows are refreshed under each refresh command. Therefore, at normal or low temperatures, where the data hold time is longer, all row addresses may be refreshed early, but refresh commands are still sent at the fixed time interval. In this case, refresh starts from Row0 again; this additional refresh within the data hold time is unnecessary and wastes current. Referring to FIG. 1, a refresh circuit includes: a refresh control module 20 configured to receive a refresh command 11 and output a row address refresh signal 12, the row address refresh signal 12 being output a number of times equal to a preset value each time the refresh command 11 is received, and further configured to receive a temperature signal 10a to adjust the preset value, where the higher the temperature represented by the temperature signal 10a, the greater the adjusted preset value; a row addresser 30 configured to receive the row address refresh signal 12 and output a to-be-refreshed single-row address 13; and an array refresh device 40 configured to perform a single-row refresh operation according to the single-row address 13 and output a single-row refresh end signal 14 after the end of the single-row refresh. Each time the row addresser 30 receives one row address refresh signal 12, it outputs one to-be-refreshed single-row address 13. The single-row address 13 corresponds to a row of the array in the array refresh device 40. Each time the row address refresh signal 12 is received, the address information included in the single-row address 13 is increased by 1; that is, the address information is increased successively in the order of Row0, Row1, Row2, and so on, until all row addresses are sent. Correspondingly, the array refresh device 40 performs refresh row by row based on the received single-row address 13, and when the row addresser 30 has sent all the row addresses, the array refresh device 40 has correspondingly refreshed all the rows. It is to be noted that, if the row addresser 30 continues to receive the row address refresh signal 12 after sending all the row addresses, all the row addresses are sent again in the order of Row0, Row1, Row2, and so on. 
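To make the wrap-around behavior of the row addresser 30 concrete, the following is a minimal behavioral sketch in Python; it is not the claimed circuit, and the class name, method name, and the assumed total of 8 rows are illustrative choices only.

```python
# Behavioral sketch of the row addresser described above (illustrative only).
# TOTAL_ROWS and all names are assumptions, not part of the disclosure.
TOTAL_ROWS = 8  # e.g., Row0 .. Row7


class RowAddresser:
    """Outputs one to-be-refreshed single-row address per row address refresh signal."""

    def __init__(self, total_rows: int = TOTAL_ROWS):
        self.total_rows = total_rows
        self.next_row = 0

    def on_row_address_refresh_signal(self) -> int:
        row = self.next_row
        # The address information is increased by 1 each time; after the last row,
        # addressing starts again from Row0.
        self.next_row = (self.next_row + 1) % self.total_rows
        return row


addresser = RowAddresser()
# Ten refresh signals wrap past Row7 back to Row0, Row1:
print([addresser.on_row_address_refresh_signal() for _ in range(10)])
# -> [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```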
In this embodiment, the refresh control module 20 includes: a control unit 22 configured to receive the row address refresh signal 12 and output a reset signal 15, and further configured to count the row address refresh signal 12 and output the reset signal 15 when a count value is equal to the adjusted preset value; and a refresh signal generation unit 21 configured to receive the refresh command 11, the reset signal 15 and the single-row refresh end signal 14 and output the row address refresh signal 12; output the row address refresh signal 12 when receiving the refresh command 11; and detect the reset signal 15 when receiving the single-row refresh end signal 14, output the row address refresh signal 12 when not receiving the reset signal 15, and suspend outputting the row address refresh signal 12 when receiving the reset signal 15. Further, the control unit 22 includes: a counting subunit 23 configured to receive the row address refresh signal 12, count the received row address refresh signal 12, and output the count value 16; a regulating subunit 24 configured to receive the temperature signal 10a and the count value 16 and output an excitation signal 17, and configured to adjust the preset value based on the temperature signal 10a and output the excitation signal 17 when the count value 16 is equal to the adjusted preset value; and an automatic pulse generator 25 configured to receive the excitation signal 17 and output the reset signal 15. Referring to FIG. 1 and FIG. 2, in this embodiment, the counting subunit 23 includes an asynchronous binary addition counter composed of a plurality of D flip-flops connected in series, and the regulating subunit 24 includes an OR gate and a plurality of AND gates, where at least one input terminal of each AND gate is connected to a counting terminal of the asynchronous binary addition counter, an output terminal of each AND gate is connected to an input terminal of the OR gate, and an output terminal of the OR gate acts as the output terminal of the regulating subunit 24. All input terminals of at least one of the AND gates are connected to counting terminals of the asynchronous binary addition counter, and when the level of the counting terminals represents a default value, that AND gate outputs a high level; one input terminal of at least another one of the AND gates receives the temperature signal 10a, and when the level of the counting terminals represents a first value, that other AND gate outputs a high level, the first value being less than the default value. The number of D flip-flops forming the asynchronous binary addition counter is related to the maximum number of refreshed rows corresponding to a single refresh command 11. The larger the maximum number of refreshed rows, the greater the number of D flip-flops; the smaller the maximum number of refreshed rows, the smaller the number of D flip-flops. Specifically, the maximum count value of n D flip-flops is 2^n - 1, and the maximum count value is required to be greater than or equal to the maximum number of refreshed rows. For example, when the maximum number of refreshed rows is 7, the number of D flip-flops is at least 3. The D flip-flop has a data input terminal D, a clock input terminal CK, a first data output terminal Q, a second data output terminal QB and a reset terminal RST. The first data output terminal Q is complementary to the second data output terminal QB. 
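As a quick check of the sizing rule just stated (n D flip-flops count up to 2^n - 1, which must be at least the maximum number of refreshed rows per refresh command), here is a short sketch; the function name is illustrative and not part of the disclosure.

```python
def min_flip_flops(max_refreshed_rows: int) -> int:
    """Smallest n such that (2 ** n) - 1 >= max_refreshed_rows (the sizing rule above)."""
    n = 1
    while (2 ** n) - 1 < max_refreshed_rows:
        n += 1
    return n


# For a maximum of 7 refreshed rows per refresh command, at least 3 D flip-flops are needed.
assert min_flip_flops(7) == 3
# Expanding the counter, e.g. to cover 12 rows, would require 4 flip-flops (2**4 - 1 = 15).
assert min_flip_flops(12) == 4
```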
A working principle and a connection manner of the asynchronous binary addition counter are described in detail below by taking three D flip-flops as an example. The asynchronous binary addition counter (hereinafter referred to as "counter") includes a first D flip-flop 231 as the low order, a second D flip-flop 232 as the next-higher order, and a third D flip-flop 233 as the high order. The second data output terminal QB of each D flip-flop is connected to its data input terminal D and to the clock input terminal CK of the next-stage D flip-flop. The first data output terminal Q of each D flip-flop acts as a counting terminal. The reset terminal RST resets the level after the reset signal 15 is received, which represents 0. The clock input terminal CK of the first D flip-flop 231 is configured to receive the row address refresh signal 12, and the first D flip-flop 231 has a first counting terminal Q1. The second D flip-flop 232 has a second counting terminal Q2. The third D flip-flop 233 has a third counting terminal Q3. When a D flip-flop represents 1, the corresponding counting terminal is at a high level. When the data recorded by a relatively-low-order D flip-flop reaches 2, a carry is made to the next-stage relatively-high-order D flip-flop. In this case, the relatively-low-order D flip-flop is reset, representing 0, and the relatively-high-order D flip-flop receives the carry, representing 1, which implements the binary rule of "carry 1 to the higher bit when the low bit reaches 2". Specifically, the counter is in an initial state before receiving a first row address refresh signal 12, and the count value Q3Q2Q1 = 000. Each time one row address refresh signal 12 is received, the count value is increased once; that is, the count value Q3Q2Q1 is increased in the order of 000, 001, 010, 011, 100, 101, 110 and 111. The count value Q3Q2Q1, when converted to decimal, represents 0, 1, 2, 3, 4, 5, 6 and 7 respectively. After the counter receives the reset signal, the count value Q3Q2Q1 is reset to the initial state of 000. In addition, the number of AND gates is related to the number of received temperature signals 10a. Different temperature signals 10a represent different temperature intervals, and the preset values of refreshed rows adjusted by the regulating subunit 24 based on the temperature signals 10a are also different; therefore, n temperature signals 10a correspond to n adjusted preset values. In this embodiment, each adjusted preset value corresponds to one AND gate, and n temperature signals require n+1 AND gates. At least two AND gates are provided. In this embodiment, the number of input terminals of each AND gate is less than or equal to the number of D flip-flops in the counting subunit 23. Since the number of D flip-flops determines the maximum count value of the counting subunit 23, the selectable range of preset values at different temperatures may be expanded by increasing the number of D flip-flops, so as to refresh all row addresses in time at extreme temperatures and reduce the waste of refresh currents. A structure of the regulating subunit 24 and a connection relationship between the regulating subunit 24 and the counting subunit 23 are illustrated below with a specific example. In the specific example, the regulating subunit 24 is configured to receive a first temperature signal T1, a second temperature signal T2 and a third temperature signal T3. 
The first temperature signal T1 represents that the current temperature is less than or equal to a first temperature and greater than a second temperature. The second temperature signal T2 represents that the current temperature is less than or equal to the second temperature and greater than a third temperature. The third temperature signal T3 represents that the current temperature is less than or equal to the third temperature. Correspondingly, when the current temperature is greater than the first temperature, the number of refreshed rows corresponding to each refresh command 11 is 7; that is, the default value is 7. Within the temperature interval represented by the first temperature signal T1, the number of refreshed rows corresponding to each refresh command 11 is 6. Within the temperature interval represented by the second temperature signal T2, the number of refreshed rows corresponding to each refresh command 11 is 5. Within the temperature interval represented by the third temperature signal T3, the number of refreshed rows corresponding to each refresh command 11 is 3. In order to realize the above specific example, the regulating subunit 24 according to the embodiment of the present disclosure includes four AND gates, and the number of input terminals of each AND gate is equal to the number of D flip-flops. Specifically, the regulating subunit 24 includes a first AND gate 241, a second AND gate 242, a third AND gate 243, a fourth AND gate 244 and an OR gate 245. The first AND gate 241 has a first input terminal 241a connected to the first counting terminal Q1, a second input terminal 241b connected to the second counting terminal Q2, and a third input terminal 241c connected to the third counting terminal Q3. The second AND gate 242 has a first input terminal 242a configured to receive the first temperature signal T1, a second input terminal 242b connected to the second counting terminal Q2, and a third input terminal 242c connected to the third counting terminal Q3. The third AND gate 243 has a first input terminal 243a connected to the first counting terminal Q1, a second input terminal 243b configured to receive the second temperature signal T2, and a third input terminal 243c connected to the third counting terminal Q3. The fourth AND gate 244 has a first input terminal 244a connected to the first counting terminal Q1, a second input terminal 244b connected to the second counting terminal Q2, and a third input terminal 244c configured to receive the third temperature signal T3. When the regulating subunit 24 does not receive the temperature signal 10a, the first input terminal 242a of the second AND gate 242, the second input terminal 243b of the third AND gate 243 and the third input terminal 244c of the fourth AND gate 244 are always at a low level, and the output terminals of the second AND gate 242, the third AND gate 243 and the fourth AND gate 244 are always at a low level. In this case, only when the first counting terminal Q1, the second counting terminal Q2 and the third counting terminal Q3 are all at a high level, that is, when the count value 16 of the counting subunit 23 is 7, can the first input terminal 241a, the second input terminal 241b and the third input terminal 241c of the first AND gate 241 all receive high levels, so that the first AND gate 241 outputs a high-level signal and the OR gate 245 outputs a high-level signal, which acts as the excitation signal 17 of the automatic pulse generator 25. The automatic pulse generator 25 generates the reset signal 15 after receiving the excitation signal 17. 
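The selection logic described above can be summarized with a small truth-table model: the four AND gates and the OR gate reduce to a Boolean function of the counter bits Q3 Q2 Q1 and the temperature signals. The sketch below is a behavioral model only, under the assumption that at most one of T1, T2, T3 is active at a time; it is not the circuit itself, and the helper names are illustrative.

```python
# Behavioral model of the regulating subunit 24 (illustrative; assumes at most one
# of T1/T2/T3 is high, matching the non-overlapping temperature intervals).
def excitation(q3: int, q2: int, q1: int, t1: int = 0, t2: int = 0, t3: int = 0) -> int:
    and1 = q1 & q2 & q3   # first AND gate 241: fires at count 7 (default preset)
    and2 = t1 & q2 & q3   # second AND gate 242: fires at count 6 when T1 is active
    and3 = q1 & t2 & q3   # third AND gate 243: fires at count 5 when T2 is active
    and4 = q1 & q2 & t3   # fourth AND gate 244: fires at count 3 when T3 is active
    return and1 | and2 | and3 | and4   # OR gate 245 drives the automatic pulse generator 25


def q_bits(count: int):
    """Split a count value 0..7 into (Q3, Q2, Q1)."""
    return (count >> 2) & 1, (count >> 1) & 1, count & 1


# With no temperature signal only count 7 fires; with T3 active the first firing is at
# count 3 (in operation the counter is reset there, so higher counts are never reached).
assert [c for c in range(8) if excitation(*q_bits(c))] == [7]
assert [c for c in range(8) if excitation(*q_bits(c), t3=1)][0] == 3
```

Reading the count at which the output first fires as the adjusted preset value reproduces the 7/6/5/3 rows-per-command behavior described above.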
The reset signal 15, on the one hand, enables the counter to be restored to the initial state, that is, Q3Q2Q1 = 000; on the other hand, it enables the refresh signal generation unit 21 to suspend outputting the row address refresh signal 12. Correspondingly, when the regulating subunit 24 receives the first temperature signal T1, the first input terminal 242a of the second AND gate 242 is at a high level. In this case, when the second counting terminal Q2 and the third counting terminal Q3 are both at a high level, that is, the count value 16 of the counting subunit 23 is 6, the second AND gate 242 outputs a high-level signal to enable the OR gate 245 to send an active-high excitation signal 17 to the automatic pulse generator 25. Similarly, when the second input terminal 243b of the third AND gate 243 receives the second temperature signal T2 and the first counting terminal Q1 and the third counting terminal Q3 are both at a high level, that is, the count value 16 of the counting subunit 23 is 5, the OR gate 245 sends the excitation signal 17 to the automatic pulse generator 25. When the third input terminal 244c of the fourth AND gate 244 receives the third temperature signal T3 and the first counting terminal Q1 and the second counting terminal Q2 are both at a high level, that is, the count value 16 of the counting subunit 23 is 3, the OR gate 245 sends the excitation signal 17 to the automatic pulse generator 25. In general, the counting subunit 23 counts the row address refresh signal 12, and after the count value reaches the preset value corresponding to the temperature signal 10a, the automatic pulse generator 25 generates the reset signal 15, so that the refresh signal generation unit 21 suspends outputting the row address refresh signal 12. The lower the temperature, the smaller the preset value; therefore, as the temperature decreases, fewer row address refresh signals 12 are output under each refresh command 11, and a single refresh command 11 corresponds to a smaller number of refreshed rows. This reduces the total number of refreshed rows within a relatively long data hold time and reduces the waste of refresh currents. In this embodiment, the first temperature signal T1 represents that the current temperature is less than or equal to 85° C. and greater than 45° C. The second temperature signal T2 represents that the current temperature is less than or equal to 45° C. and greater than 0° C. The third temperature signal T3 represents that the current temperature is less than or equal to 0° C. In this embodiment, a temperature signal may be set for every fixed temperature interval; that is, the temperature interval range represented by the first temperature signal T1 is equal to that represented by the second temperature signal T2. The temperature interval range represented by the first temperature signal T1 may range from 20° C. to 50° C., for example, 30° C., 35° C., 40° C., 45° C. or the like. This helps ensure that the data hold times corresponding to different temperature signals differ enough that the preset value needs to be increased or decreased by at least one. In addition, this helps prevent an excessively large difference in the data hold times corresponding to different temperature signals, so as to ensure that all the row addresses can be refreshed within the data hold time and prevent the waste of refresh currents. In addition, the difference between preset values corresponding to adjacent temperature signals may be 1 to 3 rows, for example, 2 rows. 
The difference is limited to a small range, which helps adjust the preset values corresponding to the refresh command 11 at different temperatures more precisely and reduces the waste of refresh currents. The manner in which a specific signal between different functional structures is active is not particularly limited in the present disclosure. The specific signal may be active-high or active-low, or rising-edge or falling-edge triggered. In this embodiment, the refresh command 11 is an active-high pulse signal, the single-row refresh end signal 14 is an active-high pulse signal, and the reset signal 15 is an active-high pulse signal. In this embodiment, the refresh signal generation unit 21 includes: a first NOR gate 21a configured to receive the refresh command 11 and the single-row refresh end signal 14; a refresh window generation unit 21b configured to receive the refresh command 11, the single-row refresh end signal 14 and the reset signal 15 and output a refresh window signal 18, that is, to output the refresh window signal 18 when receiving the refresh command 11, to detect the reset signal 15 when receiving the single-row refresh end signal 14, to continue outputting the refresh window signal 18 when not receiving the reset signal 15, and to suspend outputting the refresh window signal 18 when receiving the reset signal 15; and a second NOR gate 21c configured to receive an output signal of the first NOR gate 21a and the refresh window signal 18 and output the row address refresh signal 12. The refresh window signal 18 is an active-low window signal. The duration of the refresh window signal 18 represents the duration, corresponding to each refresh command 11, during which the row address refresh signal 12 can be sent. In this embodiment, the refresh circuit further includes a switch unit 50. When the switch unit 50 is turned on, the regulating subunit 24 in the refresh control module 20 adjusts the preset value based on the temperature signal 10a; when the switch unit 50 is turned off, the input terminal of the AND gate receiving the temperature signal 10a is at a low level, that AND gate outputs a low level, and the preset value is the default value. The default value depends on the number of input terminals of the first AND gate 241, the number of D flip-flops included in the counter, and the connection relationship between the first AND gate 241 and the counter. Specifically, in this embodiment, the default value is 7. In addition, the switch unit 50 includes a fuse unit, and the refresh control module 20 may receive the temperature signal 10a through the fuse unit. Further, the refresh circuit further includes a temperature sensor 10 configured to detect the temperature of a target chip and output the temperature signal 10a. That is, the refresh circuit adjusts the number of refreshed rows corresponding to each refresh command 11 according to the temperature of the target chip. The operation principle of the refresh circuit is described below with reference to the timing diagram of signal generation of the refresh circuit provided in FIG. 3, using an example in which the regulating subunit 24 receives the third temperature signal T3 and the adjusted preset value is 3. It is to be noted that a dashed line with an arrow in FIG. 3 indicates a causal relationship of signal generation: the start point of the arrow is the cause and the end point is the effect. Specifically: First signal generation process: When an active-high refresh command 11 is received, an output terminal of the first NOR gate 21a changes to a low level. 
The refresh window generation unit 21b outputs an active-low refresh window signal 18, and an output terminal of the second NOR gate 21c changes to a high level; that is, the refresh signal generation unit 21 outputs an active-high row address refresh signal 12. Second signal generation process: The row addresser 30 and the counting subunit 23 receive the row address refresh signal 12. The count value 16 outputted by the counting subunit 23 is 1. The row addresser 30 sends a first to-be-refreshed single-row address 13 to the array refresh device 40. The array refresh device 40 performs a refresh operation according to the single-row address 13. The refresh operation includes a row activation window and a pre-charge window. The row activation window is an active-high window signal. Appearance of a row activation window signal represents the start of the single-row refresh operation. A duration of the row activation window is required to be greater than a row addressing time, so as to ensure effective completion of row addressing. The pre-charge window is an active-high window signal. The end of the pre-charge window represents the end of the single-row refresh operation. A duration of the pre-charge window is required to be greater than a pre-charge time, so as to ensure effective completion of pre-charge. Third signal generation process: The array refresh device 40 outputs the single-row refresh end signal 14 after completing the single-row refresh operation on the single-row address 13. The refresh window generation unit 21b detects the reset signal 15 after receiving the single-row refresh end signal 14. The count value 16 is 1 and has not reached 3 in this case; therefore, the reset signal 15 has not yet been sent, and the refresh window generation unit 21b continuously outputs an active-low refresh window signal 18. At the same time, since the single-row refresh end signal 14 is an active-high pulse signal, the first NOR gate 21a outputs a low level when receiving the single-row refresh end signal 14. In this way, the second NOR gate 21c can output a second row address refresh signal 12, and the second signal generation process is repeated; in this case, the count value 16 changes from 1 to 2. Fourth signal generation process: After the refresh signal generation unit 21 outputs a third row address refresh signal 12, the count value 16 changes from 2 to 3, the three input terminals of the fourth AND gate 244 are all at a high level, the fourth AND gate 244 outputs a high level, and the OR gate 245 outputs the excitation signal 17. The automatic pulse generator 25 generates the reset signal 15 under the excitation of the excitation signal 17. The reset terminal RST of each D flip-flop receives the reset signal 15, and the level is reset to a low level, representing a value of 0. In this case, the count value 16 is reset from 3 to 0. At the same time, the refresh window generation unit 21b receives the reset signal 15. Fifth signal generation process: The array refresh device 40 outputs a third single-row refresh end signal 14 after completing a single-row refresh action according to the single-row address 13 corresponding to the third row address refresh signal 12. The refresh window generation unit 21b detects the reset signal 15 after receiving the single-row refresh end signal 14. 
The automatic pulse generator 25 has already sent the reset signal 15 in this case; therefore, the refresh window signal 18 of the refresh window generation unit 21b is suspended, the output terminal of the refresh window generation unit 21b changes from a low level to a high level, and the output terminal of the second NOR gate 21c changes to a low level; that is, the output of the row address refresh signal 12 is suspended, so that the number of refreshed rows corresponding to each refresh command 11 is 3. It is to be noted that the duration of the reset signal 15 outputted by the automatic pulse generator 25 should be greater than the row addressing time of the row addresser 30 and the single-row refresh time of the array refresh device 40, which ensures that the reset signal 15 is still present when the refresh window generation unit 21b receives the third single-row refresh end signal 14, so as to effectively suspend the output of the row address refresh signal 12. Only one refresh command 11 is taken as an example in the above timing diagram. The refresh window generation unit 21b may output the refresh window signal 18 again when receiving the refresh command 11 again, and the refresh circuit repeats the above signal generation processes. In addition, if the regulating subunit 24 receives another temperature signal 10a, or no longer receives a temperature signal 10a, within the refresh period corresponding to the refresh command 11, leading to an increase in the preset value of the number of refreshed rows corresponding to the refresh command 11, the refresh circuit performs row address refresh according to the increased preset value as long as the automatic pulse generator 25 has not yet generated the reset signal 15. Conversely, if the preset value of the number of refreshed rows corresponding to the refresh command 11 is decreased within the refresh period corresponding to the refresh command 11, it is determined whether the current count value 16 is greater than or equal to the decreased preset value; if yes, the output of the row address refresh signal 12 is suspended, and if no, single-row address refresh continues according to the decreased preset value. In this embodiment, the preset value is adjusted based on the temperature signal, so that each refresh command at a high temperature corresponds to a larger number of refreshed rows and each refresh command at a low temperature corresponds to a smaller number of refreshed rows. In a case where the time interval between adjacent refresh commands remains unchanged, in a high-temperature environment, a larger number of rows are refreshed under each refresh command within a shorter data hold time, which helps ensure completion of refresh of all row addresses; in a low-temperature environment, a smaller number of rows are refreshed under each refresh command within a longer data hold time, which helps prevent repeated refresh caused by early completion of refresh of all row addresses, so as to reduce the waste of refresh currents and reduce the refresh current corresponding to each refresh command. Correspondingly, an embodiment of the present disclosure further provides a memory including the refresh circuit according to any one of the above items, which helps reduce the waste of currents during refresh of the memory. Those of ordinary skill in the art may understand that the above implementations are specific embodiments for implementing the present disclosure. 
However, in practical applications, various changes in forms and details may be made thereto without departing from the spirit and scope of the present disclosure. Any person skilled in the art can make respective changes and modifications without departing from the spirit and scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the scope defined by the claims.
27,451
11862223
DESCRIPTION OF EMBODIMENTS As described in the Background section, in existing techniques, a lower temperature may prolong the write time for writing data into a memory and adversely affect the stability of the data to be written. When a conventional memory works at a low temperature, the resistances of the bit line, the word line, the metal connection line (metal contact part), and the like in the memory would increase due to the low temperature. The write time for writing data into the memory would change or increase due to the resistance increase, thereby affecting the write stability of the memory. The present invention presents a semiconductor structure and a preheating method thereof to address the foregoing issues. The semiconductor structure may include a storage chip, a temperature detection unit, and a control chip. The temperature detection unit may be configured to detect the temperature of the storage chip before the storage chip initiates. The control chip may be configured to, before the storage chip initiates, heat the storage chip and determine whether the temperature detected by the temperature detection unit reaches a specified threshold, and, if the temperature reaches the specified threshold, control the storage chip to initiate. In this specification, the "initiation" of a chip may refer to the start or power on of the chip. Therefore "a chip initiates" may mean the chip is started or powered on. The control chip may be configured to cooperate with the temperature detection unit. The control chip may heat the storage chip before the storage chip initiates. The temperature detection unit may detect the temperature of the storage chip before the storage chip initiates. The control chip may determine whether the temperature detected by the temperature detection unit reaches the specified threshold. If the temperature reaches the specified threshold, the control chip may control the storage chip to initiate. Therefore, when the semiconductor structure provided in the present invention works at a low temperature, the storage chip may be heated to the specified threshold by the control chip, thereby preventing an increase of the resistances of the bit line, the word line, and the metal connection line (metal contact part) in the storage chip due to an excessively low temperature, reducing the write time for writing data into the memory at the low temperature and improving the write stability of the memory. To make the foregoing objects, features, and advantages of the present invention clearer and easier to understand, the specific implementations of the present invention will be described through the following detailed description with reference to the accompanying drawings. When describing the embodiments of the present invention in detail, for the sake of illustration, the schematic diagrams may be enlarged partially. The schematic diagrams are exemplary and should not constitute a limitation on the protection scope of the present invention herein. Additionally, three-dimensional spatial sizes of length, width, and depth should be included in actual production. FIGS. 1, 2, 3, 4, 5, 6, and 7 are schematic structural diagrams of a semiconductor structure according to one or more embodiments of the present invention. FIG. 8 is a schematic flowchart of a semiconductor structure preheating method according to one or more embodiments of the present invention. Referring to FIG. 1, a semiconductor structure is provided. 
The semiconductor structure may include a storage chip 201, a temperature detection unit 203, and a control chip 301. The temperature detection unit 203 may be configured to detect the temperature of the storage chip 201 before the storage chip 201 initiates. The control chip 301 may be configured to, before the storage chip 201 initiates, heat the storage chip 201 and determine whether the temperature detected by the temperature detection unit 203 reaches a specified threshold, and, if the temperature reaches the specified threshold, control the storage chip 201 to initiate. The storage chip 201 may be a memory that can perform data write, data read, and/or data deletion. The storage chip 201 may be formed by using a semiconductor integrated production process. Specifically, the storage chip 201 may include a storage array and a peripheral circuit connected to the storage array. The storage array may include several storage units, and a bit line, a word line, and a metal connection line (metal contact part) that are connected to each storage unit. The storage unit may be configured to store data. The peripheral circuit may be a related circuit used when performing an operation on the storage array. In some embodiments, the storage chip 201 may be a DRAM storage chip. The DRAM storage chip may include several storage units. Each of the storage units may include a capacitor and a transistor. The gate of the transistor may be connected to a word line, the drain of the transistor may be connected to a bit line, and the source of the transistor may be connected to the capacitor. In some embodiments, the storage chip 201 may be another type of storage chip, and this specification is not limited in this regard. The number of storage chips 201 may be at least 1. Specifically, the number of storage chips 201 may be 1 or greater than or equal to 2. When the number of storage chips is greater than or equal to 2, the storage chips may be sequentially stacked vertically to form a stacked storage chip structure. Referring to FIG. 2, in some embodiments, the number of storage chips 201 may be 4. The four storage chips 201 may be sequentially stacked from bottom to top to form a stacked storage chip structure. Adjacent storage chips 201 may be bonded together through a bonding process or an adhering process. In some embodiments, a through-silicon via (TSV) may be formed in the storage chip 201. The storage chip 201 may be electrically connected to the control chip 301 through the through-silicon via (TSV). When multiple storage chips 201 are stacked, each storage chip 201 may be connected to the control chip 301 through a different through-silicon via (TSV). In some embodiments, the storage chip 201 may alternatively be connected to the control chip 301 by using a metal lead (formed by a lead bonding process). In some embodiments, the storage chip 201 may be located on the control chip 301 and electrically connected to the control chip 301. When there is only one storage chip 201, the control chip 301 may be bonded to the storage chip 201. When there are multiple storage chips 201 forming a stacked storage chip structure, the control chip 301 may be bonded to the storage chip 201 at the bottom layer in the stacked structure. In some embodiments, the storage chip 201 and the control chip 301 may be connected in different manners. Referring to FIGS. 4, 5, 6, and 7, the semiconductor structure may further include a line substrate 401. The line substrate 401 may have a connection line. 
The storage chip 201 and the control chip 301 may both be located on the line substrate 401, and the storage chip 201 and the control chip 301 may be connected by using the connection line in the line substrate 401. The line substrate 401 may be a PCB substrate. Referring to FIGS. 1 and 4, the control chip 301 may be formed by using the semiconductor integrated production process. The control chip 301 may be configured to heat the storage chip 201, so that the temperature of the storage chip 201 can reach the specified threshold. The specified threshold may be set in the control chip 301 based on an actual need or experience. The control chip 301 may be further configured to control the initiation of the storage chip 201, which may include power-on and self-testing, and the performing of related operations on the storage chip 201, which may include writing data into the storage chip 201, reading data from the storage chip 201, deleting data stored in the storage chip 201, and the like. The semiconductor structure may further include the temperature detection unit 203. The temperature detection unit 203 may be configured to measure the temperature of the storage chip 201 before the storage chip 201 initiates. The temperature detection unit 203 may be electrically connected to the control chip 301. The temperature detected by the temperature detection unit 203 may be sent to the control chip 301, and may serve as the basis for controlling, by the control chip 301, the storage chip 201 to initiate. Specifically, the control chip 301 may be configured to cooperate with the temperature detection unit 203. The control chip 301 may heat the storage chip 201 before the storage chip 201 initiates. The temperature detection unit 203 may detect the temperature of the storage chip 201 before the storage chip 201 initiates. The control chip 301 may then determine whether the temperature detected by the temperature detection unit 203 reaches the specified threshold, and, if the temperature reaches the specified threshold, control the storage chip 201 to initiate. Therefore, when the semiconductor structure provided in the present invention works at a low temperature, the storage chip 201 may be heated to the specified threshold by using the control chip 301, thereby preventing an increase of the resistances of the bit line, the word line, and the metal connection line (metal contact part) in the storage chip due to an excessively low temperature, reducing the write time for writing data into the memory at the low temperature, and improving the write stability of the memory. The temperature detection unit 203 may include a temperature sensor. The temperature sensor may be configured to sense a temperature and convert the sensed temperature into an electrical signal. In some embodiments, the temperature sensor may be a P-N junction temperature sensor or a capacitive temperature sensor. The temperature sensor may be formed by using the semiconductor integrated production process, and may be located in the storage chip 201, in the control chip 301, or on the line substrate 401 between the storage chip 201 and the control chip 301, as shown in FIG. 5. The number of temperature detection units 203 may be 1 or greater than or equal to 2. The temperature detection unit 203 may be in the control chip 301 or in the storage chip 201. In some embodiments, the number of temperature detection units 203 may be 1. Specifically, the temperature detection unit 203 may be in the control chip 301, as shown in FIGS. 1 and 4. Alternatively, the temperature detection unit 203 may be in the storage chip 201. 
When there is only one storage chip 201, the temperature detection unit 203 may be directly located in the storage chip 201. When there are multiple storage chips 201 forming a stacked structure, the temperature detection unit 203 may be located in one of the storage chips 201, and may be preferably located in the storage chip 201 at the bottom layer in the stacked structure, as shown in FIGS. 2 and 6. The temperature detection unit 203 may be on the line substrate 401 between the storage chip 201 and the control chip 301, as shown in FIG. 5. Upon determining that the temperature detected by the temperature detection unit 203 reaches the specified threshold, the control chip 301 may control all storage chips 201 to initiate. When there are multiple storage chips 201 in the semiconductor structure, the aforementioned control structure and control manner may be relatively simple, and may reduce the write time for writing data into the storage chip 201 at a low temperature and improve the write stability of the storage chip 201. In some embodiments, referring to FIGS. 1, 2, 4, 5, and 6, the number of temperature detection units 203 may be 1 and the number of storage chips 201 may be greater than or equal to 2. Upon determining that the temperature detected by the temperature detection unit 203 reaches the specified threshold, the control chip 301 may first control the storage chip 201 closest to the control chip 301 to initiate, and then control the other storage chips 201 above to sequentially initiate. Referring to FIG. 2, when there are four storage chips 201, upon determining that the temperature detected by the temperature detection unit 203 reaches the specified threshold, the control chip 301 may first control the storage chip 201 closest to the control chip 301 (the storage chip at the bottom layer in the stacked structure) to initiate, and then control the other three storage chips 201 above to sequentially initiate. When there are multiple storage chips 201 in the semiconductor structure, the aforementioned control structure and control manner may cause each storage chip 201 to initiate after the specified threshold temperature is reached, thereby improving the precision of the initiation of each storage chip 201, reducing the write time for writing data into each storage chip 201 at a low temperature and improving the write stability of each storage chip 201. In some embodiments, referring to FIGS. 3 and 7, the number of temperature detection units 203 may be greater than or equal to 2 and the number of storage chips 201 may be greater than or equal to 2. Each storage chip 201 may have one temperature detection unit 203. The control chip 301 may sequentially determine whether the temperature detected by each of the temperature detection units 203 reaches the specified threshold, and, if the temperature detected by one of the temperature detection units 203 reaches the specified threshold, control the storage chip corresponding to that temperature detection unit 203 to initiate. There may be four storage chips 201 in a stacked structure, as shown in FIGS. 3 and 7, and each storage chip 201 may correspondingly have one temperature detection unit 203. Therefore, each temperature detection unit 203 may detect the temperature of a corresponding storage chip 201, thereby obtaining four detected temperatures. 
The control chip 301 may sequentially determine whether the temperature detected by each of the four temperature detection units 203 reaches the specified threshold, and, if the temperature detected by one of the temperature detection units 203 reaches the specified threshold, control the storage chip corresponding to that temperature detection unit 203 to initiate. For example, when the temperature detected by the temperature detection unit 203 in the storage chip 201 at the bottom layer in the stacked structure first reaches the specified threshold, the control chip 301 may first control the storage chip 201 at the bottom layer in the stacked structure to initiate. Then, when the temperature detected by the corresponding temperature detection unit 203 in the storage chip 201 at the penultimate layer in the stacked structure also reaches the specified threshold, the control chip 301 may control the storage chip 201 at the penultimate layer in the stacked structure to initiate. The initiation of the storage chips 201 on the two upper layers may be conducted in the same manner. When there are multiple storage chips 201 in the semiconductor structure, the aforementioned control structure and control manner may further improve the precision of the initiation of each storage chip 201, reducing the write time for writing data into each storage chip 201 at a low temperature and improving the write stability of each storage chip 201. In some embodiments, before the control chip 301 heats the storage chip 201, the control chip 301 may need to initiate. In one example, the control chip 301 may need to be powered on and self-tested. When the control chip 301 initiates, the control chip 301 may not give an instruction to the storage chip 201; only when the temperature detected by the temperature detection unit reaches the specified threshold does the control chip 301 control the storage chip 201 to initiate. The control chip 301 may heat the storage chip using the heat generated by the control chip 301 after the control chip 301 initiates. Therefore, no additional heating circuit may be needed, thereby simplifying the semiconductor structure. The control chip 301 may perform various operations to generate heat to heat the storage chip 201. The operations the control chip 301 may perform to generate heat may be predetermined according to actual needs. For example, the operations may be predetermined according to desired heating speeds and/or power consumption requirements. The specific operations the control chip may perform are not limited in this specification. In some embodiments, after the control chip 301 controls the storage chip 201 to initiate, the control chip 301 may further control the storage chip 201 to perform write, read, and erase operations. The control chip 301 may have a control circuit. The control circuit may be configured to control the storage chip 201 to initiate and control the storage chip 201 to perform write, read, and erase operations. In some embodiments, the control chip 301 may have a heating circuit configured to heat the storage chip 201. Before or after the control chip 301 heats the storage chip 201, the control chip may determine whether the temperature of the storage chip detected by the temperature detection unit reaches the specified threshold. If the temperature does not reach the specified threshold, the control chip may control the heating circuit to heat the storage chip. If the temperature reaches the specified threshold, the control chip may control the heating circuit to stop heating the storage chip. 
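As a rough illustration of the heat, check, and initiate sequence described above, here is a minimal control-loop sketch in Python. The threshold value, polling interval, and all callable names are assumptions made for the example; the disclosure does not specify them.

```python
import time

SPECIFIED_THRESHOLD_C = 0.0   # assumed threshold for illustration; design-dependent in practice
POLL_INTERVAL_S = 0.1         # assumed polling interval


def preheat_then_initiate(read_temperature, heater_on, heater_off, initiate_storage_chip):
    """Heat the storage chip until the detected temperature reaches the threshold,
    then stop heating and initiate (power on / self-test) the storage chip."""
    while read_temperature() < SPECIFIED_THRESHOLD_C:
        heater_on()                     # keep heating while below the specified threshold
        time.sleep(POLL_INTERVAL_S)
    heater_off()                        # threshold reached: stop heating
    initiate_storage_chip()             # only now does the storage chip initiate


# Toy usage with a fake sensor that warms by 5 degrees per poll:
temperature = {"celsius": -20.0}
preheat_then_initiate(
    read_temperature=lambda: temperature["celsius"],
    heater_on=lambda: temperature.update(celsius=temperature["celsius"] + 5.0),
    heater_off=lambda: None,
    initiate_storage_chip=lambda: print("storage chip initiated"),
)
```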
Therefore, the heating process may be accurately controlled, so that the temperature of the storage chip 201 can be kept near the specified threshold, thereby preventing the temperature of the storage chip 201 from being excessively high or low, and preventing the write time of the memory from being prolonged under undesirable temperatures. This invention further presents a semiconductor structure preheating method. Referring to FIG. 8, the method may include the following steps S101 through S105. In step S101, a semiconductor structure may be provided. The semiconductor structure may include a storage chip, a control chip electrically connected to the storage chip, and a temperature detection unit. In step S102, the control chip may initiate. In step S103, before the storage chip initiates, the storage chip may be heated by the control chip. In step S104, the temperature of the storage chip may be detected by the temperature detection unit. In step S105, the control chip may determine whether the temperature detected by the temperature detection unit reaches a specified threshold, and, if the temperature reaches the specified threshold, control the storage chip to initiate. In some embodiments, the number of temperature detection units may be 1 or greater than or equal to 2, and the number of storage chips may be 1 or greater than or equal to 2. When the number of storage chips is greater than or equal to 2, several storage chips may be sequentially stacked vertically. In some embodiments, the number of temperature detection units may be 1. Upon determining that the temperature detected by the temperature detection unit reaches the specified threshold, the control chip may control all storage chips to initiate. In some embodiments, the number of temperature detection units may be 1 and the number of storage chips may be greater than or equal to 2. Upon determining that the temperature detected by the temperature detection unit reaches the specified threshold, the control chip may first control the storage chip closest to the control chip to initiate, and then control the other storage chips stacked above it to sequentially initiate. In some embodiments, the number of temperature detection units may be greater than or equal to 2 and the number of storage chips may be greater than or equal to 2. Each storage chip may have one temperature detection unit. The control chip may sequentially determine whether the temperature detected by each of the temperature detection units reaches the specified threshold, and, if the temperature detected by one of the temperature detection units reaches the specified threshold, control the storage chip corresponding to that temperature detection unit to initiate. In some embodiments, after the control chip controls the storage chip to initiate, the control chip may further control the storage chip to perform write, read, and erase operations. Details of the method embodiments that are the same as or similar to the foregoing semiconductor structure embodiments are not repeated here; reference may be made to the corresponding parts of the foregoing semiconductor structure embodiments. Although the present invention has been disclosed above in the preferred embodiments, such preferred embodiments are not intended to limit the present invention. 
Any person skilled in the art may make a possible change and modification to the technical solutions of the present invention based on the foregoing disclosed method and technical content without departing from the spirit and scope of the present invention. Therefore, any simple alteration, equivalent change, and modification made to the foregoing embodiments based on the technical essence of the present invention without departing from the content of the technical solutions of the present invention fall within the protection scope of the technical solutions of the present invention.
21,265
11862224
DETAILED DESCRIPTION FIG. 1 is a diagram of a system on chip (SoC) integrated circuit (IC) 100 according to an embodiment of the present invention, where the SoC IC 100 may be placed in an electronic device 10, and more particularly, may be mounted on a main board (e.g., a printed circuit board (PCB)) of the electronic device 10, but the present invention is not limited thereto. As shown in FIG. 1, in addition to the SoC IC 100, the electronic device 10 may comprise a dynamic random access memory (DRAM) 100D; for example, the DRAM 100D may also be mounted on the main board. In addition, the SoC IC 100 may comprise a non-volatile memory (NVM) 100N, a processing circuit 110, a physical layer (PHY) circuit 120, a pad set 130 and a static random access memory (SRAM) 140, where the processing circuit 110 may comprise at least one processor (e.g., one or more processors), and the pad set 130 may comprise a plurality of pads as terminals of the SoC IC 100 for coupling the SoC IC 100 to at least one external component (e.g., the DRAM 100D). In the architecture shown in FIG. 1, the NVM 100N is depicted inside the SoC IC 100, but the present invention is not limited thereto. For example, the NVM 100N can be implemented outside the SoC IC 100. In addition, the NVM 100N can be implemented by way of electrically erasable programmable read-only memory (EEPROM), flash memory, etc., but the present invention is not limited thereto. No matter whether the NVM 100N is implemented inside or outside the SoC IC 100, the NVM 100N may store information for the SoC IC 100, and may prevent the information from being lost during power off, where the information may comprise program codes, control parameters, etc. The processing circuit 110 can load the program codes from the NVM 100N to the aforementioned at least one processor, and the program codes running on the aforementioned at least one processor can control the operations of the electronic device 10. For example, a first program code of the above-mentioned program codes can be executed on the above-mentioned at least one processor to control the electronic device 10 to provide services to a user of the electronic device 10, but the present invention is not limited thereto. In some embodiments, a second program code of the above-mentioned program codes can be executed on the above-mentioned at least one processor to control the SoC IC 100 to perform memory calibration for the DRAM 100D. In addition, the SRAM 140 and the DRAM 100D can be regarded as the internal memory and the external memory of the SoC IC 100, respectively, and more particularly, can temporarily store information for the processing circuit 110 (e.g., the above-mentioned at least one processor). For example, the PHY circuit 120 can communicate with the DRAM 100D through the pad set 130 for the processing circuit 110 (e.g., the above-mentioned at least one processor), to allow the processing circuit 110 (e.g., the above-mentioned at least one processor) to access (e.g., write or read) data in the DRAM 100D. When the electronic device 10 is powered up, the SoC IC 100 (such as the processing circuit 110, and more particularly, a calibration control module 100C therein) can use the PHY circuit 120 to perform preparation operations corresponding to multiple preparation phases on the DRAM 100D to make the DRAM 100D enter an idle state, and more particularly, to make the DRAM 100D enter a state of ready-for-use. 
For example, the multiple preparation phases may comprise: a power-up and initialization phase PHASE_1, where in this phase the processing circuit 110 (e.g., the calibration control module 100C therein) can control the PHY circuit 120 to apply power to the DRAM 100D through the pad set 130 and perform a series of operations related to initialization on the DRAM 100D; a ZQ calibration phase PHASE_2, where in this phase the processing circuit 110 (e.g., the calibration control module 100C therein) can control the PHY circuit 120 to trigger the DRAM 100D through the pad set 130 to perform resistance/impedance calibration regarding a set of data pins {DQ}, for example, the DRAM 100D can perform the resistance/impedance calibration with the aid of a precision resistor having a predetermined resistance value that is connected to a pin ZQ thereof; and at least one subsequent phase such as one or more subsequent phases. Regarding some implementation details of the first two phases of the multiple preparation phases, please refer to existing DRAM-related standards such as the DDR3 SDRAM standard (e.g., JESD79-3), the DDR4 SDRAM standard (e.g., JESD79-4), etc. After the preparation operations corresponding to the first two phases are completed, the DRAM 100D may enter the idle state, but it may not yet be in the state of ready-for-use. In order to correctly access the DRAM 100D, the processing circuit 110 (e.g., the calibration control module 100C therein) can perform preparation operations corresponding to the above-mentioned at least one subsequent phase, and these preparation operations may comprise at least one portion (e.g., a part or all) of the following operations: (1) the processing circuit 110 (e.g., the calibration control module 100C therein) can try to configure the PHY circuit 120 and/or the DRAM 100D according to a plurality of control parameters read from the NVM 100N, and more particularly, perform calibration regarding reading (which can be regarded as read training) such as a reading-related calibration operation and calibration regarding writing (which can be regarded as write training) such as a writing-related calibration operation on the PHY circuit 120, and utilize at least one test control unit of the test control units POK1 and POK2 (e.g., one or all of the test control units POK1 and POK2) to perform a data access test to determine whether the configuration is completed; (2) in a case that the above-mentioned data access test is unsuccessful, the processing circuit 110 (e.g., the calibration control module 100C therein) can calibrate at least one control parameter (e.g., one or more control parameters) used for controlling the PHY circuit 120 to access the DRAM 100D, such as at least one portion (e.g., a part or all) of the plurality of control parameters, and utilize the above-mentioned at least one test control unit to perform the data access test to determine whether the configuration is completed; where the calibration operation can be performed multiple times until the above-mentioned data access test is successful, to ensure that the SoC IC 100 can correctly access (e.g., read or write) the DRAM 100D through the PHY circuit 120 after the configuration is completed, but the present invention is not limited thereto. For better comprehension, the data access test may comprise a read test and a write test, such as tests of reading and writing regarding predetermined data, and the correctness of a read result and the correctness of a write result can indicate the success of the read test and the write test respectively. 
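The configure, test, and recalibrate flow sketched in items (1) and (2) above can be outlined as a simple loop. The code below is a hedged outline only: the callables (configure, run_data_access_test, calibrate_parameters) and the retry cap stand in for the calibration control module's internal behavior and are not defined by the disclosure.

```python
def bring_up_dram(params, configure, run_data_access_test, calibrate_parameters,
                  max_attempts=32):
    """Outline of the configure -> data access test -> recalibrate loop described above.
    All callables and the retry cap are illustrative assumptions."""
    configure(params)                          # apply stored control parameters to the PHY/DRAM
    for _ in range(max_attempts):
        if run_data_access_test():             # read test + write test against known data
            return params                      # configuration completed; DRAM ready for use
        params = calibrate_parameters(params)  # e.g., adjust delay taps and/or reference voltage
        configure(params)
    raise RuntimeError("data access test still failing after recalibration attempts")
```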
As shown inFIG.1, a receiving (Rx) direction and a transmitting (Tx) direction of the SoC IC100relative to the DRAM100D can indicate directions of reading and writing, respectively. For example, the processing circuit110(e.g., the calibration control module100C therein) can perform calibration regarding reading, such as the calibration of phase and/or reference voltage, and more particularly, during performing the phase calibration, control the PHY circuit120to adjust a read delay amount stored in a read delay register within a receiver (e.g., a read capture circuit configured to capture data as the read result) therein to correspondingly adjust the number of enabled delay taps among multiple delay taps of the receiver in the PHY circuit120, making the data capturing time point of the SoC side (e.g., the receiver in the PHY circuit120) be aligned to a center of the data eye in the waveforms of a read signal (e.g., a data signal passing through a certain data pin DQ), wherein, the correctness of the read result can indicate that the read test is successful, and this can indicate that the calibration regarding reading is complete. For another example, the processing circuit110(e.g., the calibration control module100C therein) can perform calibration regarding writing, such as phase and/or reference voltage calibration, and more particularly, during performing the phase calibration, control the PHY circuit120to adjust a write delay amount stored in a write delay register within a transmitter therein to correspondingly adjust the number of enabled delay taps among multiple delay taps of the transmitter in the PHY circuit120, to adjust the phase of a write signal (e.g., a data signal passing through a certain data pin DQ) relative to a data strobe signal, making the data capturing time point of the DRAM side (e.g., a receiver in the DRAM100D) be correct, which means that on the DRAM side, the center of the data eye in the waveforms of the write signal is aligned to the edge of the data strobe signal, where the write result being correct can indicate that the write test is successful, which can indicate that the calibration regarding writing is completed. As a result, the DRAM100D can enter the state of ready-for-use. According to some embodiments, the test control unit POK2can perform the read test, and the test control unit POK1can perform the write test, but the invention is not limited thereto. In some embodiments, the implementation of the test control unit POK1and the test control unit POK2may vary. For example, the test control unit POK1can be integrated into the test control unit POK2. For another example, the test control unit POK2can be integrated into the test control unit POK1. According to some embodiments, the PHY circuit120(e.g., the test control unit POK2) can set a mode control register (not shown inFIG.1) in the DRAM100D, to make the DRAM100D enter a test mode or a normal mode. In the test mode, the DRAM100D can switch the internal access path thereof, to make the read or write data stream be redirected from the memory units in the DRAM100D to a set of multi-purpose registers (MPR) (not shown inFIG.1) of the DRAM100D, where these memory units can be used for storing data for the SoC IC100in the normal mode. The PHY circuit120(e.g., the test control unit POK2) can write the predetermined data to the set of MPRs in advance for performing the read test. 
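For illustration, the test-mode preparation and comparison involved in such a read test may be sketched as follows; the register-access helpers, the 0x55 pattern and the number of repeated reads are assumptions rather than the actual interface of the PHY circuit120or the DRAM100D:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers; not part of the disclosed architecture. */
void dram_enter_mpr_test_mode(void);     /* set the mode control register */
void dram_exit_mpr_test_mode(void);      /* return to the normal mode */
void dram_write_mpr_pattern(uint8_t pattern);
uint8_t phy_read_mpr_byte(void);

bool mpr_read_test(void)
{
    const uint8_t pattern = 0x55;        /* 01010101: alternating bits */
    bool pass = true;

    dram_enter_mpr_test_mode();
    dram_write_mpr_pattern(pattern);     /* predetermined data written in advance */

    /* Read the known pattern back repeatedly; any mismatch fails the test. */
    for (int i = 0; i < 16; i++) {
        if (phy_read_mpr_byte() != pattern) {
            pass = false;
            break;
        }
    }
    dram_exit_mpr_test_mode();
    return pass;
}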
The PHY circuit120(e.g., the test control unit POK2) can trigger the DRAM100D to continuously and/or repeatedly send the predetermined data back to the PHY circuit120in the SoC IC100during the read test. For example, the predetermined data may comprise a set of alternating bits (such as 01010101 or 10101010, rather than continuous bit1or continuous bit0), and the data signal through a certain data pin DQ can carry a corresponding bit stream (such as {01010101, 01010101, . . . } or {10101010, 10101010, . . . }), allowing the data eye in the waveform of the data signal to be detected, but the present invention is not limited thereto. As the predetermined data is already known to the SoC IC100(e.g., the processing circuit110, the calibration control module100C and/or the PHY circuit120), the PHY circuit120(e.g., the test control unit POK2) can read a read result from the DRAM100D and compare the read result with the predetermined data to determine whether the read result is correct, to further determine whether the calibration regarding reading is complete. In addition, after the calibration regarding reading is completed, as all the read results are regarded as reliable, the processing circuit110(e.g., the calibration control module100C therein) can perform the calibration regarding writing. For example, as any written data (e.g., data to be written) such as the predetermined data is already known to the SoC IC100(e.g., the processing circuit110, the calibration control module100C and/or the PHY circuit120), the calibration control module100C (e.g., the test control unit POK1) can control the PHY circuit120to write the any written data, to read a read result from the DRAM100D, and compare the read result with the any written data such as the predetermined data to determine whether the read result is correct, to further determine whether the calibration regarding writing is completed. FIG.2is a diagram illustrating some implementation details of the SoC IC100shown inFIG.1according to an embodiment of the present invention. The architecture shown inFIG.2(such as a SoC IC200and a processing circuit210, a calibration control program200C, etc. therein) can be regarded as an example of the architecture shown inFIG.1(such as the SoC IC100and the processing circuit110, the calibration control module100C, etc. therein). The above-mentioned at least one processor may be collectively referred to as the processor211in this embodiment. In addition to the processor211, the processing circuit210may further comprise a bus210B and a DRAM controller212, and further comprise at least one additional controller, which may be collectively referred to as a controller213. The DRAM controller212can control the operations of the DRAM100D through the PHY circuit120, and the controller213can control some other operations. In this embodiment, the above-mentioned calibration control module100C can be implemented by way of a calibration control program200C running on the processor211. For example, the second program code among the above-mentioned program codes can be loaded into the processor211to perform the calibration control program200C running on the processor211. For brevity, similar descriptions for this embodiment are not repeated in detail here. FIG.3is a diagram illustrating some implementation details of the SoC IC100shown inFIG.1according to another embodiment of the present invention. The architecture shown inFIG.3(such as a SoC IC300and a processing circuit310, a calibration control circuit300C, etc. 
therein) can be regarded as an example of the architecture shown inFIG.1(such as the SoC IC100and the processing circuit110, the calibration control module100C, etc. therein). The above-mentioned at least one processor may be collectively referred to as the processor311in this embodiment. In addition to the processor311, the processing circuit310may further comprise the bus210B, a DRAM controller312and the controller213. The DRAM controller312can control the operations of the DRAM100D through the PHY circuit120. In this embodiment, the above-mentioned calibration control module100C can be implemented by way of a hardware circuit, and more particularly, can be implemented as one of multiple sub-circuits of the DRAM controller312, such as the calibration control circuit300C. For brevity, similar descriptions for this embodiment are not repeated in detail here. In some subsequent embodiments, the above-mentioned data eyes can be illustrated as hexagons for better comprehension, where the hexagons illustrated as a multilayer stack may represent the data eyes of a set of data signals passing through the set of data pins {DQ}, respectively, where the PHY circuit120may comprise sub-circuits of multiple slices (comprising respective receivers and transmitters thereof) corresponding to the set of data pins {DQ}, respectively, and the processing circuit110may selectively calibrate one or more slices when needed, but the present invention is not limited thereto. For example, the shape of the data eye in a typical eye diagram may be visualized as a hexagon or any of some other shapes. In addition, the set of data signals can carry a set of bits in any byte of one or more bytes. For example, the one or more bytes may represent the bytes read from the DRAM100D. For another example, the one or more bytes may represent the bytes written to the DRAM100D. Additionally, regarding the above-mentioned reference voltage calibration, the processing circuit110(e.g., the calibration control module100C therein) can calibrate a reference voltage Vref used for determining whether a data bit is the bit0or the bit1. For example, the reference voltage Vref may represent the reference voltage of the data signal of a certain data pin DQ (e.g., any data pin of the set of data pins {DQ}, and more particularly, each data pin of the set of data pins {DQ}), and therefore can be written as the reference voltage VrefDQ for better comprehension. FIG.4is a diagram illustrating a horizontal timing calibration control scheme regarding writing of a method for performing memory calibration according to an embodiment of the present invention. When the DRAM100D is a DDR3 SDRAM, the reference voltage Vref (e.g., the reference voltage VrefDQ) regarding writing may be equal to 750 millivolts (mV). As the reference voltage Vref is fixed, the calibration regarding writing may comprise horizontal timing calibration, and can be performed in a per-slice calibration manner, and the above-mentioned at least one control parameter may comprise a horizontal timing control parameter O_X, but the present invention is not limited thereto. For example, the calibration regarding writing can be performed in an all-slice calibration manner. 
Under the control of the calibration control module100C, the processing circuit110can perform the calibration regarding writing according to the horizontal timing calibration control scheme, and more particularly, can perform operations of the following Steps S31A-S37A:(Step S31A) the processing circuit110can read a default value O_X0of the horizontal timing control parameter O_X from the NVM100N, for being written into the write delay register to be the write delay amount, wherein, regarding the horizontal coordinates, the default value O_X0can correspond to the center point O (e.g., a candidate position O1among the multiple candidate positions O1, O2, O3, O4, O5, etc. thereof) of a predetermined mask MASK_AB to indicate the data capturing time point on the DRAM side (e.g., the receiver in the DRAM100D), and the predetermined mask MASK_AB can be defined by the mask coefficient n and the horizontal timing interval HT (e.g., the delay amount of each delay tap of the multiple delay taps of the transmitter);(Step S32A) the processing circuit110can determine a set of test values corresponding to the predetermined mask MASK_AB according to the default value O_X0of the horizontal timing control parameter O_X, where the set of test values may comprise two test values represented by a test point A and a test point B on the predetermined mask MASK_AB, for example, the respective horizontal coordinates of these test points, such as the horizontal coordinates obtained by adjusting the horizontal timing (e.g., by fixed or unfixed multiples) to the left or right relative to the central point O corresponding to the default value O_X0;(Step S33A) the processing circuit110can respectively write these two test values (such as the above horizontal coordinates) in Step S32A into the write delay register to be the write delay amount to check whether the write test is passed, to determine whether to stop performing the calibration regarding writing, wherein, if the write test can be passed for the two cases that these two test values (such as the above horizontal coordinates) are used as the write delay amount, respectively, the processing circuit110can stop performing the calibration regarding writing, otherwise, the processing circuit110can continue subsequent operations to continue performing the calibration regarding writing at the next candidate position;(Step S34A) the processing circuit110may adjust the default value O_X0of the horizontal timing control parameter O_X according to a predetermined adjustment sequence such as the sequence of the multiple candidate positions O1, O2, O3, O4, O5, etc. 
to generate a candidate value O_Xc of the horizontal timing control parameter O_X, for being written into the write delay register to be the write delay amount, wherein, regarding the horizontal coordinates, the candidate value O_Xc may correspond to a subsequent candidate position of the multiple candidate positions O1, O2, O3, O4, O5, etc., such as one of the candidate positions O2, O3, O4, O5, etc., to indicate the data capturing time point on the DRAM side (e.g., the receiver in the DRAM100D);(Step S35A) the processing circuit110may determine a set of test values corresponding to the predetermined mask MASK_AB according to the candidate value O_Xc of the horizontal timing control parameter O_X, where the set of test values may comprise two test values represented by the test points A and B on the predetermined mask MASK_AB, for example, the respective horizontal coordinates of these test points, such as the horizontal coordinates obtained by adjusting the horizontal timing (e.g., by fixed or unfixed multiples) to the left or right relative to the point corresponding to the candidate value O_Xc (similar to the way of Step S32A) with fixed multiple horizontal timing adjustment (e.g., n times the horizontal timing interval HT);(Step S36A) the processing circuit110can respectively write these two test values (such as the above horizontal coordinates) in Step S35A into the write delay register to be the write delay amount to check whether the write test is passed, to determine whether to stop performing the calibration regarding writing, wherein, if the write test can be passed for the two cases that these two test values (such as the above horizontal coordinates) are used as the write delay amount, respectively, the processing circuit110can stop performing the calibration regarding writing, otherwise, the processing circuit110can perform similar operations to continue performing the calibration regarding writing at the next candidate position, until all candidate positions among the multiple candidate positions O1, O2, O3, O4, O5, etc. are used up;(Step S37A) when it is determined to stop performing the calibration regarding writing, the processing circuit110can update the horizontal timing control parameter O_X in the NVM100N to be the latest candidate value O_Xc, such as the last candidate value O_Xc obtained and used in the loop of Steps S34A-S36A above; where the success of the write test on the test points A and B can indicate that the write test on all possible or available test points in the region enclosed by the predetermined mask MASK_AB is expected to be successful, but the present invention is not limited thereto. For example, if the failure of the write test continues to occur until all candidate positions among the multiple candidate positions O1, O2, O3, O4, O5, etc. are used up, the processing circuit110may issue an error message, rather than executing Step S37A. In addition, in the above operations, the processing circuit110can selectively move the predetermined mask MASK_AB (together with the test points A and B thereon) in multiple rounds to perform the write test corresponding to the predetermined mask MASK_AB according to the multiple candidate positions O1, O2, O3, O4, O5, etc., respectively. For brevity, similar descriptions for this embodiment are not repeated in detail here. According to some embodiments, the multiple candidate positions O1, O2, O3, O4, O5, etc. of the predetermined mask MASK_AB may vary. 
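As a non-limiting sketch of Steps S31A-S37A, the per-candidate sweep over the two test points of the predetermined mask MASK_AB may be expressed in C as follows; the helper routines, the candidate-offset table and the assumption that the first candidate position corresponds to the default value O_X0are illustrative only:

#include <stdbool.h>

/* Hypothetical helpers; not part of the disclosed architecture. */
void write_delay_register_set(int taps);     /* program the write delay amount */
bool write_test_passes(void);                /* write test with predetermined data */
int  nvm_read_o_x(void);                     /* default value O_X0 from the NVM */
void nvm_update_o_x(int value);              /* Step S37A parameter update */

bool calibrate_write_timing_1d(int n, int ht,
                               const int *candidate_offsets, int num_candidates)
{
    int o_x0 = nvm_read_o_x();               /* Step S31A */

    for (int i = 0; i < num_candidates; i++) {
        /* candidate_offsets[0] is assumed to be 0, i.e. candidate position O1. */
        int o_x = o_x0 + candidate_offsets[i];
        int a = o_x - n * ht;                /* test point A of MASK_AB */
        int b = o_x + n * ht;                /* test point B of MASK_AB */

        write_delay_register_set(a);
        bool pass_a = write_test_passes();
        write_delay_register_set(b);
        bool pass_b = write_test_passes();

        if (pass_a && pass_b) {              /* Steps S33A/S36A */
            if (i > 0)
                nvm_update_o_x(o_x);         /* Step S37A: latest candidate value O_Xc */
            write_delay_register_set(o_x);   /* assumed: operate at the mask center */
            return true;
        }
    }
    return false;                            /* all candidate positions used up */
}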
For example, the number and/or arrangement of candidate positions of the predetermined mask MASK_AB may vary. FIG.5is a diagram illustrating a horizontal timing and reference voltage calibration control scheme regarding writing of the method according to an embodiment of the present invention. In comparison with the horizontal timing calibration control scheme that can provide one-dimensional calibration as shown inFIG.4, this horizontal timing and reference voltage calibration control scheme can provide two-dimensional calibration. For example, when the DRAM100D is a DDR4 SDRAM, the reference voltage Vref (e.g., the reference voltage VrefDQ) regarding writing is adjustable. The calibration regarding writing may comprise the horizontal timing calibration and the reference voltage calibration, and can be performed in the all-slice calibration manner, and the above-mentioned at least one control parameter may comprise the horizontal timing control parameter O_X and a reference voltage parameter O_Y, where the reference voltage parameter O_Y can indicate a predetermined voltage level of the reference voltage Vref for writing, but the invention is not limited thereto. In some embodiments, the reference voltage parameter O_Y can be illustrated as the reference voltage Vref for better comprehension. Under the control of the calibration control module100C, the processing circuit110can perform the calibration regarding writing according to the horizontal timing and reference voltage calibration control scheme, and more particularly, can perform operations of the following Steps S31B-S37B(Step S31B) in addition to reading the default value O_X0of the horizontal timing control parameter O_X from the NVM100N for being written into the write delay register to be the write delay amount, the processing circuit110can read a default value O_Y0of the reference voltage parameter O_Y from the NVM100N for being written into a reference voltage control register to be the predetermined voltage level of the reference voltage Vref, wherein, regarding the horizontal and vertical coordinates, the default value (O_X0, O_Y0) can correspond to the center point O of a predetermined mask MASK_A2D_Tx (e.g., the candidate position O1among multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc. 
thereof) to indicate the data capturing time point and the predetermined voltage level of the reference voltage Vref on the DRAM side (e.g., the receiver in the DRAM100D), and the predetermined mask MASK_A2D_Tx can be defined by the mask coefficients m and n and the horizontal timing interval HT;(Step S32B) the processing circuit110can determine a set of test values corresponding to the predetermined mask MASK_A2D_Tx according to the default values (O_X0, O_Y0), where the set of test values may comprise a series of test values represented by the test points A, B, C, and D on the predetermined mask MASK_A2D_Tx, for example, the respective horizontal and vertical coordinates of these test points, such as the horizontal coordinates obtained (with similar method ofFIG.4) by performing horizontal adjustment (e.g., the adjustment being performed with n times the horizontal timing interval HT) relative to the central point O corresponding to the default value O_X0, and the vertical coordinates obtained by performing vertical adjustments with a fixed proportion (e.g., m %) or non-fixed proportion upward and downward relative to the central point O corresponding to the default value O_Y0, respectively;(Step S33B) the processing circuit110can respectively write this series of test values (e.g., the above coordinates, such as the sets of horizontal and vertical coordinates of these test points) in Step S32B to the write delay register (to be the write delay amount) and the reference voltage control register (to be the predetermined voltage level) to check whether the write test is passed, to determine whether to stop performing the calibration regarding writing, wherein, if the write test can be passed for the four cases that this series of test values (such as the above coordinates) are used as the write delay amount and the predetermined voltage level, respectively, the processing circuit110can stop performing the calibration regarding writing, otherwise, the processing circuit110can continue subsequent operations to continue performing the calibration regarding writing at the next candidate position;(Step S34B) the processing circuit110may adjust the respective default values O_X0and O_Y0of the horizontal timing control parameter O_X and the reference voltage parameter O_Y according to a predetermined adjustment sequence such as a sequence of the multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc. 
to generate the respective candidate values O_Xc and O_Yc of the horizontal timing control parameter O_X and the reference voltage parameter O_Y for being written into the write delay register (to be the write delay amount) and the reference voltage control register (to be the predetermined voltage level), where regarding the horizontal and vertical coordinates, the candidate values (O_Xc, O_Yc) can correspond to a subsequent candidate position of the multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc., such as one of the candidate positions O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc., to indicate the data capturing time point and the predetermined voltage level of the reference voltage Vref on the DRAM side (e.g., the receiver in the DRAM100D);(Step S35B) the processing circuit110may determine a set of test values corresponding to the predetermined mask MASK_A2D_Tx according to the candidate values (O_Xc, O_Yc), where the set of test values may comprise a series of test values represented by the test points A, B, C and D on the predetermined mask MASK_A2D_Tx, for example, the respective horizontal and vertical coordinates of these test points, and the method of obtaining the coordinates of this series of test values is similar to that of Step S32B (and the default values (O_X0, O_Y0) are replaced with the candidate values (O_Xc, O_Yc)), so similar descriptions are not repeated in detail here;(Step S36B) the processing circuit110can respectively write this series of test values (e.g., the above coordinates, such as the sets of horizontal and vertical coordinates of these test points) in Step S35B to the write delay register (to be the write delay amount) and the reference voltage control register (to be the predetermined voltage level) to check whether the write test is passed, to determine whether to stop performing the calibration regarding writing, wherein, if the write test can be passed for the four cases that this series of test values (e.g., the above coordinates) are used as the write delay amount and the predetermined voltage level, respectively, the processing circuit110can stop performing the calibration regarding writing, otherwise, the processing circuit110can perform similar operations to continue performing the calibration regarding writing at the next candidate position until all the candidate positions among the multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc. are used up;(Step S37B) when it is determined to stop performing the calibration regarding writing, the processing circuit110may update the horizontal timing control parameter O_X and the reference voltage parameter O_Y in the NVM100N to be their respective latest candidate values (O_Xc, O_Yc), such as the last candidate values (O_Xc, O_Yc) obtained and used in the loop of Steps S34B-S36B above;where the success of the write test on the test points A, B, C, and D can indicate that the write test on all possible or available test points in the region enclosed by the predetermined mask MASK_A2D_Tx is expected to be successful, but the present invention is not limited thereto. For example, if the failure of the write test continues to occur until all candidate positions among the multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc. are used up, the processing circuit110may issue an error message, rather than executing Step S37B. 
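For better comprehension, the derivation of the four test points A, B, C and D of the predetermined mask MASK_A2D_Tx from a candidate center may be sketched as follows; the exact corner assignment and the percentage-based vertical step are assumptions drawn from the n*HT and m % description above rather than a normative definition:

typedef struct {
    int    delay_taps;   /* horizontal coordinate: write delay amount */
    double vref;         /* vertical coordinate: reference voltage for writing */
} test_point;

void mask_a2d_tx_points(int o_x, double o_y, int n, int ht, double m_percent,
                        test_point out[4])
{
    double up   = o_y * (1.0 + m_percent / 100.0);   /* Vref raised by m % */
    double down = o_y * (1.0 - m_percent / 100.0);   /* Vref lowered by m % */

    out[0] = (test_point){ o_x - n * ht, up   };     /* A: left, upper */
    out[1] = (test_point){ o_x + n * ht, up   };     /* B: right, upper */
    out[2] = (test_point){ o_x - n * ht, down };     /* C: left, lower */
    out[3] = (test_point){ o_x + n * ht, down };     /* D: right, lower */
}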
In addition, in the above operations, the processing circuit110can selectively move the predetermined mask MASK_A2D_Tx (together with the test points A, B, C, and D thereon) in multiple rounds to perform the write test corresponding to the predetermined mask MASK_A2D_Tx according to the multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc., respectively. For brevity, similar descriptions for this embodiment are not repeated in detail here. According to some embodiments, the multiple candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc. of the predetermined mask MASK_A2D_Tx may vary. For example, the number and/or arrangement of candidate positions of the predetermined mask MASK_A2D_Tx may vary. FIG.6is a diagram illustrating a horizontal timing and reference voltage calibration control scheme regarding reading of the method according to an embodiment of the present invention. In comparison with the embodiment shown inFIG.5, the horizontal timing and reference voltage calibration control scheme of this embodiment uses the predetermined mask MASK_A2D_Rx corresponding to reading instead of the predetermined mask MASK_A2D_Tx corresponding to writing, and can also provide two-dimensional calibration. For example, no matter whether the DRAM100D belongs to DDR3 SDRAM, DDR4 SDRAM, etc., the reference voltage Vref (e.g., the reference voltage VrefDQ) regarding reading is adjustable. The calibration regarding reading may comprise horizontal timing calibration and reference voltage calibration, and can be performed in the all-slice calibration manner, and the above-mentioned at least one control parameter may comprise another horizontal timing control parameter O_X and another reference voltage parameter O_Y, but the present invention is not limited thereto. For example, related symbols such as Vref (e.g., VrefDQ), O, A, B, C, D, O_X, O_Y, O_X0, O_Y0, O_Xc, O_Yc, etc. can be added “(1)” as suffix thereof in any of the embodiments respectively shown inFIG.4andFIG.5to be rewritten as Vref (1) (e.g., VrefDQ (1)), O(1), A(1), B(1), C(1), D(1), O_X(1), O_Y(1), O_X0(1), O_Y0(1), O_Xc(1), O_Yc(1), etc., or can be added “(0)” as suffix thereof in this embodiment to be rewritten as Vref(0) (e.g., VrefDQ(0)), O(0), A(0), B(0), C(0), D(0), O_X(0), O_Y(0), O_X0(0), O_Y0(0), O_Xc(0), O_Yc(0), etc., where the symbols without the suffix “(0)” are used below to illustrate for brevity. Under the control of the calibration control module100C, the processing circuit110can perform the calibration regarding reading according to the horizontal timing and reference voltage calibration control scheme of this embodiment, and more particularly, can perform operations of the following Steps S31C-S37C:(Step S31C) in addition to reading the default value O_X0of the horizontal timing control parameter O_X from the NVM100N for being written into the read delay register to be the read delay amount, the processing circuit110can read the default value O_Y0of the reference voltage parameter O_Y from the NVM100N for being written into another reference voltage control register to be a predetermined voltage level of the reference voltage Vref, where regarding the horizontal and vertical coordinates, the default values (O_X0, O_Y0) can correspond to the center point O (e.g., the candidate position O1among the multiple candidate positions O1, O2, O3, O4, O5, etc. 
thereof) of the predetermined mask MASK_A2D_Rx to indicate the data capturing time point and the predetermined voltage level of the reference voltage Vref on the SoC side (e.g., the receiver in the PHY circuit120), and the predetermined mask MASK_A2D_Rx can be defined by the mask coefficients x and y and the inter-tap period IP (e.g., the delay amount of each delay tap of the multiple delay taps of the receiver);(Step S32C) the processing circuit110may determine a set of test values corresponding to the predetermined mask MASK_A2D_Rx according to the default values (O_X0, O_Y0), wherein, the set of test values may comprise a series of test values represented by the test points A, B, C and D on the predetermined mask MASK_A2D_Rx, for example, the respective horizontal and vertical coordinates of these test points, such as the horizontal coordinates obtained (with similar method ofFIG.5) by performing horizontal adjustment (e.g., the adjustment being performed with y times the inter-tap period IP) relative to the central point O corresponding to the default value O_X0, and the vertical coordinates obtained by performing vertical adjustments with a fixed proportion (e.g., x %) or non-fixed proportion upward and downward relative to the central point O corresponding to the default value O_Y0, respectively;(Step S33C) the processing circuit110can respectively write this series of test values (e.g., the above coordinates, such as the sets of horizontal and vertical coordinates of these test points) in Step S32C to the read delay register (to be the read delay amount) and the other reference voltage control register (to be the predetermined voltage level) to check whether the read test is passed, to determine whether to stop performing the calibration regarding reading, wherein, if the read test can be passed for the four cases that this series of test values (such as the above coordinates) are used as the read delay amount and the predetermined voltage level, respectively, the processing circuit110can stop performing the calibration regarding reading, otherwise, the processing circuit110can continue subsequent operations to continue performing the calibration regarding reading at the next candidate position;(Step S34C) the processing circuit110may adjust the respective default values O_X0and O_Y0of the horizontal timing control parameter O_X and the reference voltage parameter O_Y according to a predetermined adjustment sequence such as a sequence of the multiple candidate positions O1, O2, O3, O4, O5, etc. 
to generate the respective candidate values O_Xc and O_Yc of the horizontal timing control parameter O_X and the reference voltage parameter O_Y for being written into the read delay register (to be the read delay amount) and the other reference voltage control register (to be the predetermined voltage level), where regarding the horizontal and vertical coordinates, the candidate values (O_Xc, O_Yc) can correspond to a subsequent candidate position of the multiple candidate positions O1, O2, O3, O4, O5, etc., such as one of the candidate positions O2, O3, O4, O5, etc., to indicate the data capturing time point and the predetermined voltage level of the reference voltage Vref on the SoC side (e.g., the receiver in the PHY circuit120);(Step S35C) the processing circuit110may determine a set of test values corresponding to the predetermined mask MASK_A2D_Rx according to the candidate values (O_Xc, O_Yc), where the set of test values may comprise a series of test values represented by the test points A, B, C and D on the predetermined mask MASK_A2D_Rx, for example, the respective horizontal and vertical coordinates of these test points, and the method of obtaining the coordinates of this series of test values is similar to that of Step S32C (and the default values (O_X0, O_Y0) are replaced with the candidate values (O_Xc, O_Yc)), so similar descriptions are not repeated in detail here;(Step S36C) the processing circuit110can respectively write this series of test values (e.g., the above coordinates, such as the sets of horizontal and vertical coordinates of these test points) in Step S35C to the read delay register (to be the read delay amount) and the other reference voltage control register (to be the predetermined voltage level) to check whether the read test is passed, to determine whether to stop performing the calibration regarding reading, wherein, if the read test can be passed for the four cases that this series of test values (e.g., the above coordinates) are used as the read delay amount and the predetermined voltage level, respectively, the processing circuit110can stop performing the calibration regarding reading, otherwise, the processing circuit110can perform similar operations to continue performing the calibration regarding reading at the next candidate position until all the candidate positions among the multiple candidate positions O1, O2, O3, O4, O5, etc. are used up;(Step S37C) when it is determined to stop performing the calibration regarding reading, the processing circuit110may update the horizontal timing control parameter O_X and the reference voltage parameter O_Y in the NVM100N to be their respective latest candidate values (O_Xc, O_Yc), such as the last candidate values (O_Xc, O_Yc) obtained and used in the loop of Steps S34C-S36C above;where the success of the read test on the test points A, B, C, and D can indicate that the read test on all possible or available test points in the region enclosed by the predetermined mask MASK_A2D_Rx is expected to be successful, but the present invention is not limited thereto. For example, if the failure of the read test continues to occur until all candidate positions among the multiple candidate positions O1, O2, O3, O4, O5, etc. are used up, the processing circuit110may issue an error message, rather than executing Step S37C. 
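Because Steps S31C-S37C mirror the write-side flow and differ mainly in which registers are programmed (the read delay register and the other reference voltage control register) and which test is run, a single parameterized sweep could in principle serve both directions, as sketched below with hypothetical callbacks:

#include <stdbool.h>

typedef struct {
    void (*set_delay)(int taps);      /* read delay or write delay register */
    void (*set_vref)(double vref);    /* Rx or Tx reference voltage control register */
    bool (*test_passes)(void);        /* read test or write test */
} calib_ops;

/* Returns the index of the first candidate center whose four mask corners all
 * pass, or -1 if every candidate position is used up (the error case). */
int sweep_2d_mask(const calib_ops *ops,
                  const int *cand_x, const double *cand_y, int num_candidates,
                  int n, int step, double m_percent)
{
    for (int i = 0; i < num_candidates; i++) {
        bool all_pass = true;
        for (int dx = -1; dx <= 1 && all_pass; dx += 2) {       /* left/right corners */
            for (int dy = -1; dy <= 1 && all_pass; dy += 2) {   /* lower/upper corners */
                ops->set_delay(cand_x[i] + dx * n * step);
                ops->set_vref(cand_y[i] * (1.0 + dy * m_percent / 100.0));
                if (!ops->test_passes())
                    all_pass = false;
            }
        }
        if (all_pass)
            return i;
    }
    return -1;
}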
In addition, in the above operations, the processing circuit110can selectively move the predetermined mask MASK_A2D_Rx (together with the test points A, B, C, and D thereon) in multiple rounds to perform the read test corresponding to the predetermined mask MASK_A2D_Rx according to the multiple candidate positions O1, O2, O3, O4, O5, etc., respectively. For brevity, similar descriptions for this embodiment are not repeated in detail here. According to some embodiments, the multiple candidate positions O1, O2, O3, O4, O5, etc. of the predetermined mask MASK_A2D_Rx may vary. For example, the number and/or arrangement of candidate positions of the predetermined mask MASK_A2D_Rx may vary. More particularly, the candidate positions O1, O2, O3, O4, O5, etc. shown inFIG.6can be regarded as the candidate positions in one-dimensional arrangement, but the present invention is not limited thereto. When there is a need, the candidate positions in two-dimensional arrangement (e.g., the candidate positions O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, etc. shown inFIG.5) can be used as the candidate positions of the predetermined mask MASK_A2D_Rx. According to some embodiments, the read test involved with the predetermined mask MASK_A2D_Rx may vary. For example, the read test can be implemented by way of a horizontal timing margin test, etc. FIG.7illustrates an example of the reference voltage associated with the predetermined mask MASK_A2D_Rx shown inFIG.6. The reference voltage Vref_P passing through the test points A and B and the reference voltage Vref_N passing through the test points C and D can be expressed with the reference voltage Vref (e.g., the reference voltage Vref(0)) passing through the center point O as follows: Vref_P=Vref*(1+x%); and Vref_N=Vref*(1−x%); wherein, the reference points E and F on the predetermined mask MASK_A2D_Rx may represent the intersections of the predetermined mask MASK_A2D_Rx and a central vertical line (e.g., a vertical line passing through the center point O) thereof, and may be used in the above-mentioned horizontal timing margin test. FIG.8is a diagram illustrating a horizontal timing and reference voltage calibration control scheme regarding reading of the method according to another embodiment of the present invention, where the read test can be implemented as the horizontal timing margin test. Regarding that the center point O of the predetermined mask MASK_A2D_Rx is equal to a certain candidate position (e.g., one of the multiple candidate positions O1, O2, O3, O4, O5, etc.), the processing circuit110can calculate the three time differences TD, TD_P, and TD_N represented by the three horizontal line segments obtained from cutting the central horizontal line (e.g., the horizontal line passing through the center point O), the upper horizontal line (e.g., the horizontal line passing through reference point E), and the lower horizontal line (e.g., the horizontal line passing through reference point F) of the predetermined mask MASK_A2D_Rx by the data eye, respectively, and determine whether the read test is successful according to whether the three time differences TD, TD_P and TD_N are all greater than the width (2*(y*(IP))) of the predetermined mask MASK_A2D_Rx. 
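For illustration, the horizontal timing margin check of FIG.8may be sketched as the following predicate, where the measurement of the three time differences is left abstract and the strict inequality follows the description above:

#include <stdbool.h>

/* td, td_p and td_n are the data-eye widths measured along the lines through
 * the center point O and the reference points E and F, respectively. */
bool margin_test_passes(int td, int td_p, int td_n, int y, int ip)
{
    int mask_width = 2 * y * ip;      /* width of the predetermined mask MASK_A2D_Rx */
    return td > mask_width && td_p > mask_width && td_n > mask_width;
}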
If the three time differences TD, TD_P and TD_N are all greater than a predetermined horizontal timing margin such as the width (2*(y*(IP))) of the predetermined mask MASK_A2D_Rx, which may indicate that the whole of the predetermined mask MASK_A2D_Rx is located in the data eye, the processing circuit110may determine that the read test is successful; otherwise (e.g., the boundary of the predetermined mask MASK_A2D_Rx exceeds the data eye), the processing circuit110may determine that the read test is unsuccessful. For brevity, similar descriptions for this embodiment are not repeated in detail here. FIG.9illustrates, in the lower half thereof, a fast calibration control scheme of the method according to an embodiment of the present invention, wherein for better comprehension,FIG.9illustrates a scanning calibration control scheme (e.g., performing tests with respect to all possible parameter combinations) in the upper half thereof. The predetermined mask MASK may represent one of the above-mentioned predetermined masks MASK_A2D_Rx, MASK_A2D_Tx, MASK_AB, etc., and the fast calibration control scheme may represent the corresponding control scheme in the above embodiments. As the fast calibration control scheme does not need to perform tests with respect to all possible parameter combinations, the architecture of the present invention can efficiently perform the memory calibration to shorten the boot time of the electronic device10and bring a better user experience. For brevity, similar descriptions for this embodiment are not repeated in detail here. FIG.10illustrates a working flow of the method according to an embodiment of the present invention. The processing circuit110(e.g., the calibration control module100C therein) can perform the operation of the Step S10, the operation of the Step S20and the operations of the Steps S31-S38in the power-up and initialization phase PHASE_1, the ZQ calibration phase PHASE_2, and the above-mentioned at least one subsequent phase such as the phase and/or reference voltage calibration phase PHASE_3, respectively. For better comprehension, Steps S31A-S37A, Steps S31B-S37B, and Steps S31C-S37C in some of the above embodiments can be taken as examples of the Steps S31-S37in the working flow, respectively, but the present invention is not limited thereto. For example, in a situation where the respective default values of all control parameters are quite accurate to make the respective candidate positions (e.g., the respective candidate position counts thereof) of the calibration regarding reading and the calibration regarding writing be sufficient for dealing with any possible parameter drift, the processing circuit110(e.g., the calibration control module100C therein) may execute at least one portion (e.g., a part or all) of Steps S31-S37to perform and complete the calibration regarding reading, and then execute Step S38to determine that it has not completed all calibrations (e.g., the calibration regarding reading and the calibration regarding writing), and execute at least one portion (e.g., a part or all) of Steps S31-S37to perform and complete the calibration regarding writing, and subsequently execute Step S38to determine that all the calibrations are completed. In Step S10, the processing circuit110(e.g., the calibration control module100C) can control the PHY circuit120to apply power to the DRAM100D through the pad set130and to perform the initialization (e.g., the series of operations thereof) on the DRAM100D. 
In Step S20, the processing circuit110(e.g., the calibration control module100C) can control the PHY circuit120to trigger the DRAM100D through the pad set130to perform the resistance/impedance calibration. In Step S31, the processing circuit110(e.g., the calibration control module100C) can read at least one default value of at least one control parameter (such as the horizontal timing control parameter O_X and/or the reference voltage parameter O_Y) from the NVM100N. For example, when the processing circuit110is performing the calibration regarding reading, the above-mentioned at least one control parameter may comprise the horizontal timing control parameter O_X(0) and the reference voltage parameter O_Y(0), and the above-mentioned at least one default value may comprise the default values (O_X0(0), O_Y0(0)). When the processing circuit110is performing the calibration regarding writing, for example, in a situation where the DRAM100D belongs to DDR4 SDRAM, etc., the above-mentioned at least one control parameter may comprise the horizontal timing control parameter O_X(1) and the reference voltage parameter O_Y(1), and the above-mentioned at least one default value may comprise the default values (O_X0(1), O_Y0(1)); for another example, in a situation where the DRAM100D is a DDR3 SDRAM, the at least one control parameter may comprise the horizontal timing control parameter O_X(1), and the at least one default value may comprise the default value O_X0(1). In Step S32, the processing circuit110(e.g., the calibration control module100C) can determine a set of test values corresponding to a predetermined mask MASK according to the at least one default value of the at least one control parameter. For example, when the processing circuit110is performing the calibration regarding reading, the predetermined mask MASK may represent the predetermined mask MASK_A2D_Rx. When the processing circuit110is performing the calibration regarding writing, for example, in a situation where the DRAM100D belongs to DDR4 SDRAM, etc., the predetermined mask MASK may represent MASK_A2D_Tx; for another example, in a case that the DRAM100D belongs to DDR3 SDRAM, the predetermined mask MASK may represent the predetermined mask MASK_AB. In Step S33, the processing circuit110(for example, the calibration control module100C) can check whether the test (for example: the read test such as the horizontal timing margin test, for the calibration regarding reading; or the write test, for the calibration regarding writing) is passed. If Yes, Step S38is entered; if No, Step S34is entered. In Step S34, the processing circuit110(e.g., the calibration control module100C) may adjust the default value of the at least one control parameter according to a predetermined adjustment sequence to generate at least one candidate value of the at least one control parameter. For example, when the processing circuit110is performing the calibration regarding reading, the above-mentioned at least one control parameter may comprise the horizontal timing control parameter O_X(0) and the reference voltage parameter O_Y(0), and the above-mentioned at least one candidate value may comprise the candidate values (O_Xc(0), O_Yc(0)). 
When the processing circuit110is performing the calibration regarding writing, for example, in a case that the DRAM100D belongs to DDR4 SDRAM, etc., the above-mentioned at least one control parameter may comprise the horizontal timing control parameter O_X(1) and the reference voltage parameter O_Y(1), and the above-mentioned at least one candidate value may comprise the candidate values (O_Xc(1), O_Yc(1)); for another example, in a case that the DRAM100D belongs to DDR3 SDRAM, the above-mentioned at least one control parameter may comprise the horizontal timing control parameter O_X(1), and the aforementioned at least one candidate value may comprise the candidate value O_Xc(1). In Step S35, the processing circuit110may determine a set of test values corresponding to the predetermined mask MASK (e.g., one of the predetermined masks MASK_A2D_Rx, MASK_A2D_Tx, MASK_AB, etc., as described in Step S32) according to the at least one candidate value of the at least one control parameter. In Step S36, the processing circuit110(e.g., the calibration control module100C) may check whether the test (for example: the read test such as the horizontal timing margin test, for the calibration regarding reading; or the write test, for the calibration regarding writing) is passed. If Yes, Step S37is entered; if No, Step S34is entered. In Step S37, the processing circuit110(e.g., the calibration control module100C) may update the above-mentioned at least one control parameter in the NVM100N to be the latest candidate value thereof. In Step S38, the processing circuit110(e.g., the calibration control module100C) can check whether all calibrations are completed. If Yes, the working flow comes to the end; if No, Step S31is entered to perform the next calibration. For example, all calibrations may comprise the calibration regarding reading and the calibration regarding writing, and the processing circuit110may perform and complete the calibration regarding reading first. When Step S38is executed for the first time, the processing circuit110may determine that it has not completed all calibrations. In this case, the next calibration may represent the calibration regarding writing. As a result, the processing circuit110can subsequently perform and complete the calibration regarding writing. When Step S38is executed for the second time, the processing circuit110may determine that all calibrations have been completed. For brevity, similar descriptions for this embodiment are not repeated in detail here. For better comprehension, the method can be illustrated by the working flow shown inFIG.10, but the present invention is not limited thereto. According to some embodiments, one or more steps may be added, deleted, or changed in the working flow shown inFIG.10. For example, one or more error handling steps may be inserted in the partial working flow from Step S36to Step S34(e.g., when the determination result of Step S36is "No") for performing error handling. 
In the one or more error handling steps, the processing circuit110may first check whether the loop comprising Steps S34, S35, and S36has used up all candidate positions among the multiple candidate positions of the center point O of the predetermined mask MASK, wherein, if this loop has used up all candidate positions (which means that the failure of the read test continues to occur until all candidate positions are used up), the processing circuit110can issue an error message and then execute step S38, otherwise, the processing circuit110can execute step S34to continue the operations of this loop. For brevity, similar descriptions for this embodiment are not repeated in detail here. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
54,990
11862225
DETAILED DESCRIPTION In order to make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, various embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. However, a person of ordinary skill in the art would appreciate that, in various embodiments of the present disclosure, many technical details are provided for the reader to better understand the present disclosure. However, even without these technical details, and with various changes and modifications based on the following embodiments, the technical solutions claimed in the present disclosure can still be realized. The offset voltage is mainly caused by a manufacturing process deviation of a functional device, and there may be one or more functional devices having the process deviation. Therefore, how to eliminate the offset voltage introduced by the process deviation has become a focus of current research. With reference toFIG.1, a comparison circuit includes a reference adjustment module10, a signal receiving module20, and a control module30. The reference adjustment module10is configured to receive a first reference signal100and output a second reference signal10a. A voltage value of the second reference signal10ais equal to a voltage value of the first reference signal100multiplied by an equivalent coefficient. The reference adjustment module10is further configured to: receive an adjustment signal30a, and unidirectionally adjust the equivalent coefficient within a preset value interval when the adjustment signal30ais received. A minimum value in the preset value interval is less than 1 and a maximum value in the preset value interval is greater than 1. The signal receiving module20is configured to receive the second reference signal10aand an external signal200, and output a comparison signal20a. The second reference signal10aafter experiencing a mismatch of the signal receiving module20is equivalent to a third reference signal (not illustrated). When a voltage value of the external signal200is greater than a voltage value of the third reference signal, a first comparison signal is output. When the voltage value of the external signal200is smaller than the voltage value of the third reference signal, a second comparison signal is output. The control module30is configured to: receive an enable signal300and the comparison signal20a; and during a period of continuously receiving the enable signal300, when one of the first comparison signal or the second comparison signal is received, output the adjustment signal30a; and when the received comparison signal20ajumps from one of the first comparison signal or the second comparison signal to the other, terminate the output of the adjustment signal30a. In this embodiment, the signal receiving module20may have an offset voltage. The offset voltage comes from an inevitable deviation in the device manufacturing process. The existence of the offset voltage causes the comparison signal20ato actually represent the magnitude relationship between the voltage value of the third reference signal (i.e., the second reference signal10aafter experiencing the mismatch of the signal receiving module20) and the voltage value of the external signal200. 
If it is required that the comparison signal20arepresents the magnitude relationship between the voltage values of the first reference signal100and the external signal200, it is necessary to offset the offset voltage by the setting of the equivalent coefficient, so that the voltage value of the third reference signal which has experienced the mismatch of the signal receiving module20is equal to the voltage value of the first reference signal100. In this way, in an operation process of a DRAM circuit, the magnitude relationship between the voltage values of the external signal200and the first reference signal100can be determined by the type of the comparison signal20a, specifically, by the voltage value of the comparison signal20a. The operation process of the DRAM circuit can be divided into an initialization phase, a reset phase and an operation phase. The DRAM circuit enters the operation phase after completing the initialization or reset. In the initialization and reset phases, the DRAM may receive a ZQCL command sent by a DRAM controller for ZQ calibration. In this phase, the ZQ calibration is mainly to calibrate the output driver and chip terminal circuit, specifically, to calibrate an output resistance and ODT resistance of the DRAM. The ZQ calibration in the initialization and reset phases takes a long time. In the operation phase, the DRAM can receive a ZQCS command sent by the DRAM controller to perform the ZQ calibration. In this phase, the ZQ calibration is mainly to calibrate voltage and temperature changes. The ZQ calibration in the operation phase takes a shorter time. During the ZQ calibration, the signal receiving module20in the DRAM circuit does not receive the data signal201. In the data writing phase of the operation phase, the signal receiving module20in the DRAM circuit starts to receive the data signal201. In this embodiment, an enable signal300is triggered by the ZQCL command received by the DRAM circuit. The duration of the enable signal300is equal to the duration of the ZQCL command. In this way, during the ZQ calibration period of the initialization and reset phases, the enable signal300is triggered, and the signal receiving module20receives the first reference signal100. During the data writing phase, the enable signal300is terminated, and the signal receiving module20receives the written data signal201. That is to say, the same port of the signal receiving module20can be used to receive the first reference signal100and the data signal201successively, that is, the external signal200can be set as the first reference signal100and the data signal201successively, so as to avoid the occurrence of a conflict between the reception of the first reference signal100and the reception of the data signal201. In other words, no additional functional unit is required to be provided to adjust and control the sequential reception of the first reference signal100and the data signal201, which is beneficial to simplifying the comparison circuit and reducing its complexity. The enable signal300is triggered by the ZQCL command received by the DRAM circuit, which can be understood as meaning that the enable signal300is triggered after the ZQCL command is received and a clock cycle has elapsed. The enable signal300can be either a high-level active signal or a low-level active signal. Accordingly, the enable signal300may be triggered by either jumping from a low level to a high level, or jumping from a high level to a low level. 
The threshold value range of the preset value interval is related to physical characteristics of the signal receiving module20. Specifically, the larger the offset voltage of the signal receiving module20, the larger the maximum value (or the smaller the minimum value) in the preset value interval needs to be, and the greater the absolute value of the difference between the voltage value of the second reference signal10aand the voltage value of the first reference signal100becomes. In this way, it is beneficial to make the voltage value interval of the third reference signal include the voltage value of the first reference signal100, so that, in the process of unidirectionally adjusting the equivalent coefficient, the point at which the voltage value of the third reference signal comes closest to the voltage value of the first reference signal100can be determined according to the jump of the comparison signal20a, and the equivalent coefficient of the reference adjustment module10can then be determined to effectively compensate the offset voltage of the signal receiving module20. After the offset voltage is compensated, the third reference signal is equivalent to the first reference signal, and the comparison signal20aactually represents the magnitude relationship between the voltage values of the first reference signal100and the data signal201. In the case where the voltage value of the first reference signal100is a fixed value, the voltage value of the data signal201can be determined according to the type of the comparison signal20a. In this embodiment, the external signal200includes the first reference signal100or a data signal201. When the enable signal300is not received, the data signal201is taken as the external signal200. The comparison circuit further includes a signal input module40that is connected to an input terminal of the signal receiving module20and configured to receive the enable signal300and the first reference signal100. During the period of continuously receiving the enable signal300, the first reference signal100is taken as the external signal200. Specifically, with reference toFIG.2, the signal input module40includes a first MOS transistor M1, a drain of the first MOS transistor M1 is configured to receive the first reference signal100, a gate of the first MOS transistor M1 is configured to receive the enable signal300, and a source of the first MOS transistor M1 is connected to a first input terminal of the signal receiving module20. The first input terminal of the signal receiving module20is further configured to receive the data signal201. A second input terminal of the signal receiving module20is configured to receive the second reference signal10a. That is to say, during the ZQ calibration period, the enable signal300is triggered, the drain of the first MOS transistor M1 receives the first reference signal100, the gate of the first MOS transistor M1 receives the enable signal300, and the enable signal300controls the source and drain of the first MOS transistor M1 to be turned on, and the first reference signal100is input to the first input terminal of the signal receiving module20as an external signal200. In the data writing phase, the drain of the first MOS transistor M1 receives the first reference signal100, the source and drain of the first MOS transistor M1 are turned off, the first input terminal of the signal receiving module20receives the data signal201, and the data signal201is input to the first input terminal of the signal receiving module20as the external signal200. 
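For illustration only, the offset-trim behavior of the control module30and the reference adjustment module10described above may be sketched as follows; the helper routines are hypothetical and do not correspond to actual circuit nodes:

#include <stdbool.h>

/* Hypothetical helpers; not part of the disclosed circuit. */
bool enable_active(void);                 /* enable signal 300 (ZQCL-triggered) */
bool comparator_output(void);             /* true: first comparison signal, false: second */
void step_equivalent_coefficient(void);   /* one unidirectional adjustment step */
bool steps_remaining(void);               /* preset value interval not yet exhausted */

void trim_comparator_offset(void)
{
    if (!enable_active())
        return;

    bool initial = comparator_output();

    /* Keep issuing adjustment pulses until the comparison signal 20a jumps
     * from one type to the other, or the adjustment range runs out. */
    while (enable_active() && steps_remaining() &&
           comparator_output() == initial) {
        step_equivalent_coefficient();
    }
    /* Output of the adjustment signal terminated: the selected equivalent
     * coefficient now compensates the offset voltage. */
}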
Since both the signal input module40and the control module30need to receive the enable signal300, the enable terminal of the signal input module40that is used to receive the enable signal300can be connected to the enable terminal of the control module30that is used to receive the enable signal300. When the signal input module40is the first MOS transistor M1, the gate of the first MOS transistor M1 is taken as the enable terminal of the signal input module40, and the gate of the first MOS transistor M1 is connected to the enable terminal of the control module30. The first MOS transistor M1 can be either an NMOS transistor or a PMOS transistor. When the first MOS transistor M1 is an NMOS transistor, the enable signal300is a high-level active signal. When the first MOS transistor M1 is a PMOS transistor, the enable signal300is a low-level active signal. In this embodiment, with reference toFIG.3, the reference adjustment module10includes an operational amplifier11and a plurality of series resistors. The operational amplifier11has a non-inverting input terminal, an inverting input terminal and an output terminal. The non-inverting input terminal is configured to receive the first reference signal100. A first number of resistors are connected in series between the inverting input terminal and the output terminal. A second number of resistors are connected in series between the inverting input terminal and a ground terminal14. The operation of unidirectionally adjusting the equivalent coefficient within a preset value interval includes that an output terminal of the reference adjustment module10is controlled to be connected to a far ground terminal of a resistor in an order from the output terminal of the operational amplifier11to the ground terminal14, or from the ground terminal14to the output terminal of the operational amplifier11. Each resistor has a near ground terminal and a far ground terminal. According to the connection relationship between the resistor and the ground terminal14, the near ground terminal is the terminal of the resistor close to the ground terminal14, and the far ground terminal is the other terminal of the resistor away from the ground terminal14. When the current flows through the resistor, the voltage at the far ground terminal is greater than the voltage at the near ground terminal. Exemplarily, in the order from the output terminal of the operational amplifier11to the ground terminal14, a first resistor121, a second resistor122, a third resistor123, a fourth resistor124, a fifth resistor125, a sixth resistor126and a seventh resistor127are sequentially connected in series between the output terminal of the operational amplifier11and the ground terminal14. The ground terminal14is connected to the near ground terminal of the seventh resistor127. The inverting input terminal of the operational amplifier11is connected to the near ground terminal of the third resistor123and the far ground terminal of the fourth resistor124. The output terminal of the operational amplifier11is connected to the far ground terminal of the first resistor121. When the operational amplifier11receives the first reference signal100, under the action of the operational amplifier11, the voltage value of the output terminal of the operational amplifier11is greater than the voltage value of the first reference signal100, and the voltage value of the inverting input terminal of the operational amplifier11is equal to the voltage value of the first reference signal100. 
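As a purely numerical illustration of the ladder just described, the following sketch assumes that all seven series resistors have the same resistance; the 0.6 V reference value is likewise an assumption. Because the operational amplifier holds the node between the third and fourth resistors at the first reference voltage, each far ground tap is that voltage scaled by a resistance ratio, which is the equivalent coefficient discussed in the next paragraph.

```python
# Sketch of the tap voltages of the reference adjustment module's resistor chain,
# assuming equal resistances and a 0.6 V first reference signal.

V_REF1 = 0.60
R = [1.0] * 7                        # first resistor .. seventh resistor, equal by assumption

# The inverting input (far ground terminal of the fourth resistor) is held at
# V_REF1, and only the fourth through seventh resistors lie between it and ground.
i_chain = V_REF1 / sum(R[3:])

# Voltage at the far ground terminal of resistor k is the drop across everything
# between that tap and the ground terminal.
for k in range(7):
    v_tap = i_chain * sum(R[k:])
    print(f"far ground tap of resistor {k + 1}: {v_tap:.3f} V "
          f"(equivalent coefficient {v_tap / V_REF1:.2f})")
```

With these assumed values the coefficients step from 1.75 at the output terminal of the operational amplifier down to 0.25 at the far ground terminal of the seventh resistor, passing through exactly 1 at the far ground terminal of the fourth resistor.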
That is to say, in the order from the output terminal of the operational amplifier11to the ground terminal14, the output terminal of the reference adjustment module10is controlled to be connected to the far ground terminal of a resistor, so that the voltage value of the reference adjustment module10at the output terminal is gradually changed from a voltage value greater than that of the first reference signal100to a voltage value smaller than that of the first reference signal100, which is equivalent to the transition of the equivalent coefficient of the reference adjustment module10from a value greater than 1 to a value less than 1. In this way, no matter whether the mismatch of the signal receiving module20causes the voltage value of the third reference signal to be greater than the voltage value of the second reference signal10a, or causes the voltage value of the third reference signal to be smaller than the voltage value of the second reference signal10a, the mismatch of the signal receiving module20may be offset by adjusting the equivalent coefficient, so that the voltage value of the third reference signal is equal to the voltage value of the first reference signal100, that is, the third reference signal is equivalent to the first reference signal100. In addition, by controlling the numerical values of the first number and the second number and the resistance value of each resistor, the voltage values of the far ground terminals of different resistors and the voltage value difference between the far ground terminals of different resistors can be adjusted, so as to adjust the voltage threshold value range and the change gradient at the output terminal of the reference adjustment module10. When the threshold value range of the voltage value at the output terminal of the reference adjustment module10is larger and the change gradient is smaller, no matter whether the mismatch voltage of the signal receiving module20is large or small, the equivalent coefficient of the reference adjustment module10can offset the mismatch of the signal receiving module20by switching the far ground terminal of the resistor connected to the output terminal of the reference adjustment module10, so that the third reference signal is equivalent to the first reference signal100. In this embodiment, the reference adjustment module10further includes a plurality of switches. The plurality of switches are located between the far ground terminal of each of the resistors and the output terminal of the reference adjustment module10, and the switches located between the far ground terminals of the different resistors and the output terminal of the reference adjustment module10are different from one another. The operation of controlling the output terminal of the reference adjustment module10to be connected to a far ground terminal of a resistor includes that one of the plurality of switches is controlled to be turned on. Exemplarily, the reference adjustment module10includes a first switch131, a second switch132, a third switch133, a fourth switch134, a fifth switch135and a sixth switch136. 
The first switch131is connected to the far ground terminal of the second resistor122, the second switch132is connected to the far ground terminal of the third resistor123, the third switch133is connected to the far ground terminal of the fourth resistor124, and the fourth switch134is connected to the far ground terminal of the fifth resistor125, the fifth switch135is connected to the far ground terminal of the sixth resistor126, and the sixth switch136is connected to the far ground terminal of the seventh resistor127. In this embodiment, the control module30includes a control unit31, the reference adjustment module10, and an enabling unit32. The control unit31is configured to: receive the comparison signal20a; when a voltage value of the current comparison signal20ais the same as a voltage value of the previous comparison signal20a, adjust the parameters of the adjustment signal30aaccording to the preset unidirectional adjustment order, and output the adjustment signal30aafter adjusting the parameter information; and when the voltage value of the current comparison signal20ais different from the voltage value of the previous comparison signal20a, store the parameter information of the adjustment signal30a. The reference adjustment module10is further configured to take the parameter information included in the adjustment signal30aas the equivalent coefficient. The enabling unit32is configured to receive the enable signal300, and enable the control unit31during the period of continuously receiving the enable signal300. The enabled control unit31is configured to eliminate the offset according to the comparison signal20a, i.e., to adjust the equivalent coefficient of the reference adjustment module10. When the voltage value of the current comparison signal20ais different from the voltage value of the previous comparison signal20a, the control unit31no longer sends the adjustment signal30a. That is, the equivalent coefficient of the reference adjustment module10is determined. Based on the determined equivalent coefficient, in the subsequent data writing phase, the input first reference voltage is equivalent to the second reference voltage through the reference adjustment module10. Since the voltage value of the signal may fluctuate due to environmental influences, the voltage values of the comparison signal20aare the same, which actually means that the difference between the voltage values of the different comparison signals20ais smaller than a first preset value. Accordingly, the voltage values of the different comparison signals are different, which actually means that the difference between the voltage values of the different comparison signals20ais greater than a second preset value. The first preset value is a maximum allowable error value set according to actual needs. The second preset value is a minimum change value set according to actual needs. In addition, the parameter information of the adjustment signal30astored by the control unit31can be used to set other reference adjustment modules to offset the mismatch of other similar signal receiving devices. In addition, it is also possible to adjust the equivalent coefficient of the reference adjustment module10within a small threshold value range with the stored parameter information as a center after the signal receiving module20is subjected to the interference which may affect the performance, without adjusting the equivalent coefficient from the extreme value in the maximum threshold value range of the equivalent coefficient. 
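For illustration, the feedback performed by the control unit can be sketched as the loop below. The receiver model, the tap voltages and the offset value are assumptions; only the control flow, stepping the selection in one direction and latching the setting at which the comparison signal jumps, follows the description above.

```python
# Calibration-loop sketch: walk the taps unidirectionally while the external
# signal is held at the first reference signal, and store the setting at which
# the comparison signal jumps.

def receiver(v_ref2: float, v_external: float, v_offset: float) -> int:
    """Mismatched signal receiving module: the effective (third) reference is
    v_ref2 shifted by the offset voltage."""
    return 1 if v_external > (v_ref2 + v_offset) else 0

def calibrate(taps, v_ref1: float, v_offset: float):
    """Return (stored index, tap voltage) at which the comparison signal jumps."""
    previous = None
    for index, v_ref2 in enumerate(taps):
        current = receiver(v_ref2, v_ref1, v_offset)
        if previous is not None and current != previous:
            return index, v_ref2         # parameter information is stored here
        previous = current
    return None                          # offset outside the adjustable range

taps = [0.90, 0.75, 0.60, 0.45, 0.30, 0.15]   # assumed tap voltages, high to low
print(calibrate(taps, v_ref1=0.60, v_offset=0.05))
```

In this sketch the jump is detected at the fourth tap and that index is stored; a later calibration can be restarted from a small range around the stored setting instead of from the extreme value of the interval.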
In this way, it is beneficial to shorten the adjustment time of the reference adjustment module10, thereby quickly offsetting the mismatch of the signal receiving module20, and to avoid the adjustment time of the reference adjustment module10exceeding the initialization time of the DRAM circuit, that is, to avoid the occurrence of a conflict between the adjustment of the equivalent coefficient and the reception of the data signal201, thereby ensuring the effective operation of the comparison circuit. In this embodiment, the parameter information includes turn-on information of the switches, which represents which switch in the plurality of switches is to be turned on. Specifically, the parameter information is code<N:0>, where code<N:0> is a binary number of N+1 bits and the maximum value of N+1 is equal to the number of switches; the bit in code<N:0> that is at a high level indicates that the corresponding switch is turned on, and the remaining switches are turned off. Exemplarily, there are 6 switches in total and the parameter information is code<5:0>=001000; at this time, the fourth switch134is turned on, the remaining switches are turned off, and the voltage of the reference adjustment module10at the output terminal is the same as the voltage of the fifth resistor125at the far ground terminal. Correspondingly, the above adjustment of the parameter information of the adjustment signal30aaccording to the preset unidirectional adjustment order refers to adjusting the parameter information code<N:0> according to the unidirectional adjustment order, so that the bits at different positions are sequentially set to 1 and the corresponding switches are sequentially turned on until the comparison signal20ajumps. In this embodiment, the enabling unit32includes a second MOS transistor M2. A drain of the second MOS transistor M2 is connected to the output terminal of the signal receiving module20. A source of the second MOS transistor M2 is connected to the input terminal of the control unit31. A gate of the second MOS transistor M2 is configured to receive the enable signal300. When the second MOS transistor M2 is turned on, the control unit31receives the comparison signal20a, and then outputs the adjustment signal30aor terminates the output of the adjustment signal30aaccording to the currently received comparison signal20aand the previously received comparison signal20a. When the second MOS transistor M2 is turned off, the control unit31cannot receive the comparison signal20a, and the control unit31suspends operation. The second MOS transistor M2 can be either an NMOS transistor or a PMOS transistor. When the gate of the first MOS transistor M1 is electrically connected to the gate of the second MOS transistor M2, the first MOS transistor M1 is of the same type as the second MOS transistor M2. When the first MOS transistor M1 and the second MOS transistor M2 are independent of each other, the types of the first MOS transistor M1 and the second MOS transistor M2 may be the same or different. In this embodiment, the adjustment signal30aincludes parameter information, and the reference adjustment module10takes the parameter information in the adjustment signal30aas the equivalent coefficient. In other embodiments, the adjustment signal is a trigger signal that controls one of the plurality of switches to be turned on, which includes controlling one of the plurality of switches to be turned on according to a preset unidirectional adjustment order. 
For example, each time an adjustment signal is received, the position of the turned-on switch is shifted by one bit. Specifically, at first, the fifth switch is turned on and the other switches are turned off. After receiving an adjustment signal, the fourth switch is turned on and the fifth switch is turned off. After receiving another adjustment signal, the third switch is turned on and the fourth switch is turned off, and so on. In this embodiment, the signal receiving module20includes a signal amplifying unit21and a data comparison unit22. The signal amplifying unit21has a first input terminal that is configured to receive the external signal200, a second input terminal that is configured to receive the second reference signal10a, a first output terminal that is configured to output a reference amplified signal10b, and a second output terminal that is configured to output an external amplified signal202. The absolute value of the difference between the voltage values of the reference amplified signal10band the external amplified signal202is greater than the absolute value of the difference between the voltage values of the second reference signal10aand the external signal200. The non-inverting input terminal of the data comparison unit22is configured to receive the reference amplified signal10b, and the inverting input terminal of the data comparison unit22is configured to receive the external amplified signal202. The data comparison unit22is further configured to output the first comparison signal when the voltage value of the external amplified signal202is greater than the voltage value of the reference amplified signal10b, and output the second comparison signal when the voltage value of the external amplified signal202is smaller than the voltage value of the reference amplified signal10b. By amplifying the voltage value difference between the second reference signal10aand the external signal200before comparing the second reference signal10aand the external signal200, the accuracy of the comparison result of the data comparison unit22can be improved. At the same time, the signal amplifying unit21may introduce an offset voltage. In this case, adjusting the equivalent coefficient of the reference adjustment module10to compensate the offset voltage helps to further ensure the accuracy of the comparison result of the data comparison unit22. In this embodiment, the signal amplifying unit21includes a differential amplifying circuit that is configured to receive the second reference signal10aand the external signal200, and to output the reference amplified signal10band the external amplified signal202. Specifically, the differential amplifying circuit includes a third MOS transistor M3 and a fourth MOS transistor M4. The third MOS transistor M3 and the fourth MOS transistor M4 are of the same type and size. A gate of the third MOS transistor M3 is configured to receive the second reference signal10a. A drain of the third MOS transistor M3 is configured to connect to a first load R1. A gate of the fourth MOS transistor M4 is configured to receive the external signal200. A drain of the fourth MOS transistor M4 is configured to connect to a second load R2. A source of the third MOS transistor M3 and a source of the fourth MOS transistor M4 are connected to a same current source. 
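The role of the signal amplifying unit, and the way device or load mismatch produces an offset, can be illustrated with a heavily idealized small-signal model. The gain, supply, tail current and the 2% load mismatch below are assumptions chosen only to show a non-zero output difference for equal inputs.

```python
# Idealized differential pair: two drain voltages developed across the first and
# second loads from a shared tail current. A small load mismatch shifts the
# amplified difference even when the two inputs are equal.

def amplify(v_ref2, v_external, gm=5e-3, i_tail=100e-6, vdd=1.1,
            r1=10_000.0, r2=10_200.0):            # 2% load mismatch, assumed
    """Return the two amplified output voltages taken across the loads."""
    v_diff = v_external - v_ref2
    i_first = i_tail / 2 - gm * v_diff / 2        # current through the first load
    i_second = i_tail / 2 + gm * v_diff / 2       # current through the second load
    return vdd - i_first * r1, vdd - i_second * r2

out_first, out_second = amplify(0.60, 0.60)       # equal inputs
print(out_second - out_first)                     # non-zero: the offset to be compensated
```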
It should be noted that, in an ideal situation, the size of the third MOS transistor M3 and the size of the fourth MOS transistor M4 may be the same, but in the actual process preparation environment, due to process non-uniformity, there may be a certain degree of deviation in the sizes of the third MOS transistor M3 and the fourth MOS transistor M4, so that in the case where the voltage value difference between the second reference signal10aand the external signal200is zero, the voltage value difference between the reference amplified signal10band the external amplified signal202is not equal to zero. That is, the data comparison unit22outputs an erroneous comparison result. In this embodiment, resistors with the same resistance value are adopted as the first load R1 and the second load R2. It should be noted that, in an ideal situation, the resistance values of the first load R1 and the second load R2 may be the same. However, in the actual process preparation environment, due to the process non-uniformity, there may be a certain degree of deviation in the resistance values of the first load R1 and the second load R2, so that in the case where the voltage value difference between the second reference signal10aand the external signal200is zero, the voltage value difference between the reference amplified signal10band the external amplified signal202is not zero. That is, the data comparison unit22outputs an erroneous comparison result. In this embodiment, during the period of continuously receiving the enable signal, feedback adjustment is performed on the equivalent coefficient of the reference adjustment module based on the magnitude relationship between the voltage values of the external signal and the third reference signal, so as to continuously adjust the voltage value of the second reference signal, and then continuously adjust the voltage value of the third reference signal, which corresponds to the voltage value of the second reference signal combined with the offset voltage, and finally make the voltage value of the third reference signal equal to the voltage value of the external signal. In other words, by setting the voltage value of the external signal equal to the voltage value of the first reference signal, the voltage value of the third reference signal is made equal to the voltage value of the first reference signal, so that the offset voltage is compensated through the adjustment of the equivalent coefficient and the accuracy of the comparison signal generated by the comparison circuit is ensured. Correspondingly, the embodiments of the present disclosure provide a memory including the comparison circuit according to any one of the embodiments described above. The memory with the above comparison circuit can compensate the offset voltage of the signal receiving module in the ZQ calibration phase, so as to ensure the accuracy of the comparison signal generated by the comparison circuit in the data writing phase and to correctly execute the internal action represented by the external signal according to the correct contents of the comparison signal. 
Compared with some implementations, the technical solutions provided by the embodiments of the present disclosure have the following advantages: In the above technical solution, during the period of continuously receiving the enable signal, feedback adjustment is performed on the equivalent coefficient of the reference adjustment module based on the magnitude relationship between the voltage values of the external signal and the third reference signal, so as to continuously adjust the voltage value of the second reference signal, and then continuously adjust the voltage value of the third reference signal, which corresponds to the voltage value of the second reference signal combined with the offset voltage, and finally make the voltage value of the third reference signal equal to the voltage value of the external signal. In other words, by setting the voltage value of the external signal equal to the voltage value of the first reference signal, the voltage value of the third reference signal is made equal to the voltage value of the first reference signal, so that the offset voltage is compensated through the adjustment of the equivalent coefficient and the accuracy of the comparison signal generated by the comparison circuit is ensured. In addition, a DRAM controller controls a DRAM circuit to perform ZQ calibration by sending a ZQCL command to the DRAM circuit. ZQ calibration can be performed during the DRAM power-on initialization and reset phases. During the ZQ calibration period, the DRAM circuit does not receive a data signal; the enable signal is triggered by the ZQCL command, and the duration of the enable signal is equal to the duration of the ZQCL command, which makes it possible to use the port that receives the data signal to also receive the first reference signal while avoiding the occurrence of a conflict between the reception of the first reference signal and the reception of the data signal. That is to say, there is no need to set an additional functional unit to regulate the sequential reception of the first reference signal and the data signal, which is beneficial for simplifying the comparison circuit and reducing its complexity. In addition, since the ZQ calibration period is used to enable the control module, there is no need to set an additional time period to adjust the equivalent coefficient; the timing sequence is simplified, the initialization time and reset time of the DRAM are shortened, the reading and writing time of the DRAM is maintained, and the operating efficiency of the DRAM is improved. Further, it should be noted that the modules or units for executing operations of the comparison circuit according to the embodiment of the present disclosure, for example, the reference adjustment module, the signal receiving module and the control module, can be implemented by hardware such as circuits and processors. A person of ordinary skill in the art would understand that the above embodiments are specific embodiments for realizing the present disclosure, and in practical applications, various changes in form and details can be made without departing from the spirit and the scope of the present disclosure. Any person skilled in the art can make changes and modifications without departing from the spirit and scope of the present disclosure, and therefore, the protection scope of the present disclosure shall be subject to the scope defined by the claims.
33,750
11862226
DETAILED DESCRIPTION One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. Memories generally include an array of memory cells, where each memory cell is coupled between at least two access lines. For example, a memory cell may be coupled to access lines, such as a bitline and a wordline. Each access line may be coupled to a large number of memory cells. To select a memory cell, one or more drivers may provide selection signals (e.g., a voltage and/or a current) on the access lines to access storage capacities of the memory cell. By applying voltages and/or currents to the respective access lines, the memory cell may be accessed, such as to write data to the memory cell and/or read data from the memory cell. In some memories, memory cells of the array may be organized into decks of memory cells. A deck of memory cells may be a single plane of memory cells disposed between a layer of wordlines and a layer of bitlines. The array may be a stack of decks that includes any number of decks of memory cells (e.g., 1 deck, 2 decks, 4 decks, any number of decks) as different layers of the array. In some embodiments, a logic state of 1 (e.g., a SET state of a memory cell, which may also be referred to as a SET cell or bit) may correspond to a set of threshold voltages (Vths) lower than a set of threshold voltages associated with a logic state of 0 (e.g., a RESET state of a memory cell, which may also be referred to as a RESET cell or bit). Accordingly, a lower voltage may be used to read SET cells when compared to RESET cells. During operations, the threshold voltage for one or more memory cells may “drift”. That is, as time increases, a higher threshold may now be used to read data when compared to an original starting threshold. Accordingly, the memory array may use active media management, such as monitoring tiles (e.g., tiles in a partition) and attempting to mitigate the impacts of drifting by deriving a new demarcation bias voltage (VDM) to be used to read the memory cells. Instead of using active media management (e.g., tile level management), a pre-read scan technique described herein may apply a two-step (or more) read approach. In one embodiment, a first step (e.g., scan step) may include the application of multiple voltages (e.g., read voltages) to a memory array. In some embodiments, the read voltages may be applied in parallel via partitions, with each voltage having a different value, as further described below. The applied read voltages may initiate a series of switching events by activating the group of memory cells storing the data to be read. 
The switching event may be attributed to a memory cell turning on (e.g., conducting an appreciable amount of current) when the applied voltage across the memory cell exceeds the memory cell's threshold voltage (Vth). The memory cells that have turned on may then be read, for example, as storing logic 1 (e.g., SET cells), and the remaining cells that have not turned on may be read as storing logic 0 (e.g., RESET cells). The read data may then be analyzed to determine which of the multiple read voltages applied was more optimal as further described below. The more optimal voltage may then be used for deriving a more optimal VDM and then applying the VDM during a second read step. Accordingly, the pre-read scan technique may more efficiently provide for read voltages when accessing data in the memory array. Turning now to the figures,FIG.1is a block diagram of a portion of a memory device100. The memory device100may be any suitable form of memory, such as non-volatile memory (e.g., a cross-point memory) and/or volatile memory. The memory device100may include one or more memory cells102, one or more bitlines104(e.g.,104-0,104-1,104-2,104-3), one or more wordlines106(e.g.,106-0,106-1,106-2,106-3), one or more wordline decoders108(e.g., wordline decoding circuitry), and one or more bitline decoders110(e.g., bitline decoding circuitry). The memory cells102, bitlines104, wordlines106, wordline decoders108, and bitline decoders110may form a memory array112. Each of the memory cells102may include a selector and/or a storage element. When a voltage across a selector of a respective memory cell reaches a threshold, the storage element may be accessed to read a data value from and/or write a data value to the storage element. In some embodiments, each of the memory cells102may not include a separate selector and storage element, and have a configuration such that the memory cell nonetheless acts as having a selector and storage element (e.g., may include use of a material that behaves both like a selector material and a storage element material). For ease of discussion,FIG.1may be discussed in terms of bitlines104, wordlines106, wordline decoders108, and bitline decoders110, but these designations are non-limiting. The scope of the present disclosure should be understood to cover memory cells102that are coupled to multiple access lines and accessed through respective decoders, where an access line may be used to store data into a memory cell and read data from the memory cell102. Furthermore, the memory device100may include other circuitry, such as a biasing circuitry configured to bias the bitlines104or wordlines106in a corresponding direction. For example, the bitlines104may be biased with positive biasing circuitry while the wordlines106may be biased with negative biasing circuitry. The bitline decoders110may be organized in multiple groups of decoders. For example, the memory device100may include a first group of bitline decoders114(e.g., multiple bitline decoders110) and/or a second group of bitline decoders116(e.g., different group of multiple bitline decoders110). Similarly, the wordline decoders108may also be arranged into groups of wordline decoders108, such as a first group of wordline decoders118and/or a second group of wordline decoders120. Decoders may be used in combination with each other to drive the memory cells102(e.g., such as in pairs and/or pairs of pairs on either side of the wordlines106and/or bitlines104) when selecting a target memory cell102A from the memory cells102. 
For example, bitline decoder110-3may operate in conjunction with bitline decoder110′-3and/or with wordline decoders108-0,108′-0to select the memory cell102A. As may be appreciated herein, decoder circuitry on either ends of the wordlines106and/or bitlines104may be different. Each of the bitlines104and/or wordlines106may be metal traces disposed in the memory array112, and formed from metal, such as copper, aluminum, silver, tungsten, or the like. Accordingly, the bitlines104and the wordlines106may have a uniform resistance per length and a uniform parasitic capacitance per length, such that a resulting parasitic load may uniformly increase per length. It is noted that the depicted components of the memory device100may include additional circuitry not particularly depicted and/or may be disposed in any suitable arrangement. For example, a subset of the wordline decoders108and/or bitline decoders110may be disposed on different sides of the memory array112and/or on a different physical side of any plane including the circuitries. The memory device100may also include a control circuit122. The control circuit122may communicatively couple to respective of the wordline decoders108and/or bitline decoders110to perform memory operations, such as by causing the decoding circuitry (e.g., a subset of the wordline decoders108and/or bitline decoders110) to generate selection signals (e.g., selection voltage and/or selection currents) for selecting a target of the memory cells. In some embodiments, a positive voltage and a negative voltage may be provided on one or more of the bitlines104and/or wordlines106, respectively, to a target of the memory cells102. In some embodiments, the decoder circuits may provide biased electrical pulses (e.g., voltage and/or current) to the access lines to access the memory cell. The electrical pulse may be a square pulse, or in other embodiments, other shaped pulses may be used. In some embodiments, a voltage provided to the access lines may be a constant voltage. Activating the decoder circuits may enable the delivery of an electrical pulse to the target of the memory cells102such that the control circuit122is able to access data storage of the target memory cell, such as to read from or write to the data storage. After a target of the memory cells102is accessed, data stored within storage medium of the target memory cell may be read or written. Writing to the target memory cell may include changing the data value stored by the target memory cell. As previously discussed, the data value stored by a memory cell may be based on a threshold voltage of the memory cell. In some embodiments, a memory cell may be “set” to have a first threshold voltage or may be “reset” to have a second threshold voltage. A SET memory cell may have a lower threshold voltage than a RESET memory cell. By setting or resetting a memory cell, different data values may be stored by the memory cell. Reading a target of the memory cells102may include determining whether the target memory cell was characterized by the first threshold voltage and/or by the second threshold voltage. In this way, a threshold voltage window may be analyzed to determine a value stored by the target of the memory cells102. 
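As a toy illustration of the threshold-window read described above, the short sketch below treats a cell as switching on whenever the applied demarcation voltage exceeds its threshold voltage; the threshold values and the demarcation voltages are assumptions.

```python
# SET cells have a lower threshold voltage than RESET cells, so a demarcation
# voltage placed between the two windows reads SET cells as logic 1 and RESET
# cells as logic 0.

SET_VTH, RESET_VTH = 1.0, 1.4                        # assumed thresholds, in volts

def read_cell(v_demarcation: float, v_threshold: float) -> int:
    """Logic 1 if the cell switches on under the applied read voltage."""
    return 1 if v_demarcation > v_threshold else 0

cells = [SET_VTH, RESET_VTH, SET_VTH, SET_VTH]       # stored pattern 1, 0, 1, 1
print([read_cell(1.2, vth) for vth in cells])        # demarcation inside the window: correct read
print([read_cell(0.9, vth) for vth in cells])        # demarcation too low: everything reads as 0
```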
The threshold voltage window may be created by applying programming pulses with opposite polarity biasing to the memory cells102(e.g., in particular, writing to selector device (SD) material of the memory cell) and reading the memory cells102(e.g., in particular, reading a voltage stored by the SD material of the memory cell102) using a signal with a given (e.g., known) fixed polarity. In some embodiments, a selection input may be received from a host device128, such as a host processor reading data from the memory device100to cause the control circuit122to access particular memory cells102. The control circuit122may additionally utilize a pre-read scan technique when reading data that may be stored in the memory cells102, for example, by applying multiple read voltages to the memory cells102based on partitions. In one embodiment, the multiple read voltages may be applied in parallel, with one voltage being used per partition, where each voltage has a different value. The read voltages may be applied via bitlines104and wordlines106. The applied read voltage may then cause an activation event (e.g., switching event) readable via the wordline decoders108and the bitline decoders110. SET cells may activate at a first voltage threshold (Vth) lower than a second Vth of RESET cells. In certain embodiments, the resulting read data (e.g., logic 1's and 0's) may have certain statistical assumptions. For example, the total number of logic 1's may be in a desired range between 35% and 65% of the total number of logic 0's. The read voltages applied may return data that may be outside of the desired range, for example, because the voltages applied may be too high or too low. A more optimal read voltage may result in data closer to or inside of the desired range. The control circuit122may derive the more optimal VDM, and then adjust one or more subsequent voltages based on the derivation. Accordingly, variances due to drift, write endurance (e.g., as more writes are performed later read voltages may be lower), and/or memory cell102distances (e.g., different read voltages may vary based on distances of stored data in bitlines104), for example, may be accounted for using the pre-scan read technique described herein. FIG.2is a diagram illustrating a portion of a memory array130in accordance with an embodiment of the present disclosure. Inside the memory array130, the memory cells are located at intersections of certain lines (e.g., orthogonal lines). The memory array130may be a cross-point array including wordlines106(e.g.,106-0,106-1, . . . ,106-N) and bitlines104(e.g.,104-0,104-1, . . . ,104-M). A memory cell102may be located at each of the intersections of the wordlines106and bitlines104. The memory cells102may function in a two-terminal architecture (e.g., with a particular wordline106and the bitline104combination serving as the electrodes for the memory cell102). Each of the memory cells102may be resistance variable memory cells, such as resistive random-access memory (RRAM) cells, conductive-bridging random access memory (CBRAM) cells, phase-change memory (PCM) cells, and/or spin-transfer torque magnetic random-access memory (STT-RAM) cells, among other types of memory cells. Each of the memory cells102may include a memory element (e.g., memory material) and a selector element (e.g., a selector device (SD) material) and/or a material layer that functionally replaces a separate memory element layer and selector element layer. 
The selector element (e.g., SD material) may be disposed between a wordline contact (e.g., a layer interface between a respective one of the wordlines106and the memory material) and a bitline contact (e.g., a layer interface between a respective one of the bitlines104and the selector element) associated with a wordline or bitline forming the memory cell. Electrical signals may transmit between the wordline contact and the bitline contact when reading or writing operations are performed to the memory cell. The selector element may be a diode, a non-ohmic device (NOD), or a chalcogenide switching device, among others, or formed similar to the underlying cell structure. The selector element may include, in some examples, selector material, a first electrode material, and a second electrode material. The memory element of memory cell102may include a memory portion of the memory cell102(e.g., the portion programmable to different states). For instance, in resistance variable memory cells102, a memory element can include the portion of the memory cell having a resistance that is programmable to particular levels corresponding to particular states responsive to applied programming voltage and/or current pulses. In some embodiments, the memory cells102may be characterized as threshold-type memory cells that are selected (e.g., activated) based on a voltage and/or current crossing a threshold associated with the selector element and/or the memory element. Embodiments are not limited to a particular resistance variable material or materials associated with the memory elements of the memory cells102. For example, the resistance variable material may be a chalcogenide formed of various doped or undoped chalcogenide-based materials. Other examples of resistance variable materials that may be used to form storage elements include binary metal oxide materials, colossal magnetoresistive materials, and/or various polymer-based resistance variable materials, among others. In operation, the memory cells102may be programmed by applying a voltage (e.g., a write voltage) across the memory cells102via selected wordlines106and bitlines104. A sensing (e.g., read) operation may be performed to determine a state of one or more memory cells102by sensing current. For example, the current may be sensed on one or more bitlines104/one or more wordlines106corresponding to the respective memory cells102in response to a particular voltage applied to the selected of the bitlines104/wordlines106forming the respective memory cells102. As illustrated, the memory array130may be arranged in a cross-point memory array architecture (e.g., a three-dimensional (3D) cross-point memory array architecture) that extends in any direction (e.g., x-axis, y-axis, z-axis). The multi-deck cross-point memory array130may include a number of successive memory cells (e.g.,102B,102C,102D) disposed between alternating (e.g., interleaved) decks of wordlines106and bitlines104. The number of decks may be expanded in number or may be reduced in number and should not be limited to the depicted volume or arrangement. Each of the memory cells102may be formed between wordlines106and bitlines104(e.g., between two access lines), such that a respective one of the memory cells102may be directly electrically coupled with (e.g., electrically coupled in series) with its respective pair of the bitlines104and wordlines106and/or formed from electrodes (e.g., contacts) made by a respective portion of metal of a respective pair of bitlines104and wordlines106. 
For example, the memory array130may include a three-dimensional matrix of individually-addressable (e.g., randomly accessible) memory cells102that may be accessed for data operations (e.g., sense and write) at a granularity as small as a single storage element and/or multiple storage elements. In some cases, the memory array130may include more or fewer bitlines104, wordlines106, and/or memory cells102than shown in the examples ofFIG.2. Each deck may include one or more memory cells102aligned in a same plane. FIG.3is a block diagram of an embodiment of a memory device200where the memory array112has been partitioned into multiple partitions202(e.g., partitions202-a,202-b. . .202-n). Each partition202may include an associated local control circuit204(e.g., local control circuit204-a,204-b. . .204-n). Further, each partition may be subdivided into tiles205(e.g., tiles205-a,205-b. . .205-n). In some embodiments, the local control circuit204may be included in the control circuit122or interface with the control circuit122. Accordingly, each partition202may operate independently from other partitions202, which may enable parallel reads and writes of the memory array112, including the use of parallel pre-read scans. In some embodiments, the memory array112is a 3D cross point (3DXP) memory array, and each individual partition202is a 1 Gigabyte partition. The memory device200may include 16 of the 1 Gigabyte partitions. In some examples, the memory within each partition202may be accessed with 16 bytes of granularity, thus providing 26 bits of memory address information to memory array112. Further, four bits can be used in this example to provide partition identification. It is to be noted that the particular partition sizes, number of partitions, and bits used for the commands and address operations described above are provided as examples only, and in other embodiments different partition sizes, numbers of partitions, and command/address bits may be used. While the pre-scan read techniques described herein may work with any type of stored data, in some embodiments, the data stored in the memory array112may be encoded, for example, by adding certain encoding bits. The encoded data bits may enable faster reads, as further described below. Turning now toFIG.4, the figure illustrates an embodiment of a user data pattern400that may be used by the pre-scan read techniques described herein. The user data pattern400illustrates a non-encoded user data402which may then be encoded as user data404. The non-encoded user data402may be referred to as an input vector in some cases. The encoded user data404may include additional bits (e.g., b1 through b4). The additional bits may be referred to as flip-bits and may indicate a status of the user data, as described below. The encoding technique described may generate an encoded user data having a weight (e.g., a number of bits having the logic state of 1 out of a total number of bits in the user data) within a predetermined interval. In some embodiments, the interval is 50% through (50+50/k)% where k is a predetermined factor further described below. In some cases, the interval is expressed as [50%, (50+50/k)%]. For example, when k is equal to 4, the interval may be 50% to 62.5% (e.g., [50%, 62.5%]). A different weight other than 50% as a lower bound of the interval may be used. 
Illustrations inFIG.4refer to 50% as a lower bound of the interval for a more concise description of the depicted features; however, other alternatives and different variations may be contemplated and fall within the scope of this disclosure. By way of example, the user data402is shown as having 16 bits (e.g., a1 through a16). In a case in which k is equal to 4, the predetermined interval for the encoded user data to meet may be [50%, 62.5%]. Various forms of the encoded user data404, when k=4, are illustrated inFIG.4. The encoding technique may add k number of flip-bits (e.g., b1 through b4 when k=4) to the user data402(e.g., a1 through a16) to generate the encoded user data404. In addition, the original user data pattern may be partitioned into k number of portions (e.g., four portions or segments when k=4). For example, a first portion may include bits a1 through a4. The first portion may be associated with a first flip bit, b1. A second portion may include bits a5 through a8. The second portion may be associated with the second flip bit, b2. A third portion may include bits a9 through a12. The third portion may be associated with the third flip bit, b3. A fourth portion may include bits a13 through a16. The fourth portion may be associated with the fourth flip bit, b4. In some embodiments, initial values of b1 through b4 correspond to the logic state of 1 (e.g., 1111 of the encoded user data pattern406). The logic state of 1 in the flip-bits may indicate that corresponding portions of the original user data are not inverted. Conversely, the logic state of 0 in the flip-bits may indicate that corresponding portions of the original user data are inverted. As described above, the pre-scan read techniques described herein may determine a weight of the encoded user data pattern404as a percentage (e.g., adding the logic 1 bits and dividing the sum by the total number of unencoded bits). For example, the encoded user data560-ahas a weight of 25% (e.g., 4 bits having the logic state of 1 out of 16 bits in the user data), which does not meet the predetermined interval of [50%, 62.5%] when k=4. Further, the encoding technique may vary the logic states of the flip-bits throughout all possible combinations of logic states of the flip-bits to find a particular encoded user data that has a particular weight within the predetermined interval (e.g., an interval of [50%, 62.5%] when k=4). When there are k flip-bits (e.g., k=4), there are a total of 2^k (e.g., 2^4=16) combinations, such as 1111, 1110, 1101, 1100, . . . , 0001, and 0000. When a logic state of a flip-bit corresponds to the logic state of 0, the pre-scan read may invert the logic states of the corresponding portion of the user data and evaluate a weight. As illustrated, user data406does not include any inversions, and thus all flip-bits are set to 1. Inversion of data may then occur. By way of example, when the flip-bits are 1110 as shown in the encoded user data408, the logic states of the fourth portion (e.g., bits a13 through a16) are inverted to 1001 from 0110. Then, the encoding technique may determine that the encoded user data pattern408has a weight of 25% (e.g., 4 bits having the logic state of 1 out of 16 bits in the user data), which does not meet the predetermined condition of the weight within the interval of [50%, 62.5%]. 
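The following sketch condenses the flip-bit search that the next paragraphs walk through combination by combination. The 16-bit input vector is an assumption chosen to be consistent with the worked example (its segments contain one, zero, one and two logic 1 bits); the search order and the acceptance interval of [50%, 62.5%] for k=4 follow the description above.

```python
# Flip-bit encoding sketch: try flip-bit patterns from all ones downward; a flip
# bit of 0 inverts its segment. Stop when the weight of the 16 data bits lands
# inside the acceptance interval.

def encode(user_bits, k=4, low=0.50, high=0.625):
    n = len(user_bits)
    seg = n // k                                        # segment length (4 here)
    for pattern in range(2 ** k - 1, -1, -1):           # 1111, 1110, ..., 0000
        flips = [(pattern >> (k - 1 - i)) & 1 for i in range(k)]   # b1..bk
        encoded = []
        for i, b in enumerate(flips):
            segment = user_bits[i * seg:(i + 1) * seg]
            encoded += segment if b == 1 else [1 - bit for bit in segment]
        weight = sum(encoded) / n
        if low <= weight <= high:
            return encoded, flips
    return list(user_bits), [1] * k                     # fallback: store unmodified

# Assumed input vector: segments 1000 | 0000 | 0100 | 0110.
data = [1,0,0,0, 0,0,0,0, 0,1,0,0, 0,1,1,0]
encoded, flips = encode(data)
print(flips, sum(encoded) / len(encoded))               # flip bits [1, 0, 1, 1], weight 0.5
```

Decoding simply re-inverts every segment whose stored flip bit is 0, restoring the original data bits.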
The encoding technique may restore the logic states of the fourth portion back to 0110 and vary the content of the flip-bits to a next combination (e.g., 1101 as shown in the encoded user data410). The encoding technique may invert the logic states of the third portion (e.g., bits a9 through a12) to 1011 from 0100 as shown in the encoded user data410and determine that the encoded user data pattern410has a weight of 38% (e.g., 6 bits having the logic state of 1 out of 16 bits in the user data), which also does not meet the predetermined condition of the weight within the interval of [50%, 62.5%]. The pre-scan read may continue varying the content of the flip-bits, inverting logical values of the bits of corresponding portions of the user data according to the flip-bits, and thereby evaluating weights of the encoded user data until an encoded user data meets the predetermined condition (e.g., the interval of [50%, 62.5%]). For example, the encoded user data412has a weight of 38% and does not meet the predetermined condition of [50%, 62.5%] weight interval. The encoded user data pattern414has the flip-bit contents of 1011 and the second portion of the user data (e.g., bits a5 through a8) are inverted to 1111 from 0000. The weight of the encoded user data414is 50% (e.g., 8 bits having the logic state of 1 out of 16 bits in the user data), which meets the predetermined condition of having the weight between [50%, 62.5%]. The coding technique may stop varying the content of the flip-bits based on determining that the encoded user data pattern414meets the predetermined condition and the coded user data pattern414may be stored in memory cells. The flip-bit contents (e.g., 1011) may then be used to decode the encoded user data when reading the encoded user data from the memory cells. For example, the logic states of bits a5 through a8 (e.g., 1111) of the encoded user data414may be inverted back to their original logic states (e.g., 0000) based on the value of the flip-bit, b2 (e.g., the logic state of 0 of b2 indicating the bits a5 through a8 having been inverted) when reading the encoded user data414. By storing encoded bits at a desired weight range, the techniques described herein may more quickly read the data stored in the memory device100. It may be beneficial to describe a ramping read technique illustrating certain reading of data. Turning now toFIG.5, the figure is a timing diagram or graph450illustrating ramping voltages that may be applied through bitlines104and/or wordlines106to result in the reading of data. In the illustrated embodiment, the graph450includes an X axis representative of time, and a Y axis representative of voltage. As time progresses in the Y direction, voltages LBL and LWL may be applied, representative of bitline104voltages and wordline106voltages respectively, creating bias voltages at a memory cell102. Using a ramping approach, the LBL voltage may start at approximately 0 volts, and then ramp up at time T1to a higher voltage, with LWL going to a negative voltage at time T2. The LBL voltage may then be ramped up to a higher voltage at time T3. Some SET memory cells102may be switched on beginning at time T2, with more SET cells being switched on at time T3, and so on, until all or most all SET cells have switched on. A sensing of memory cells102may then occur at time T4to read data (e.g., based on memory cells102switching). “Snapbacks” may occur as the data is read, which may lead to undesired changes in voltages LBL and/or LWL. 
For example, read voltages may be disturbed due to snapback discharge effects of the memory cells102. For example, at a time range Tr the snapback effects are shown such that the LBL voltage is lower and the LWL is not as negative. For example, snapbacks452related to SET memory cells102, snapbacks454related to RESET memory cells102, and snapbacks456related to wordline use may be experienced by the memory array112. Using the ramp approach shown in the figure to derive a VDM value via drift analysis may be more complex because of the extra accounting of the sense impact time due, for example, to the snapback effect. For example, a drift tracking system may be used that tracks drift over time for each tile205by analyzing a sense time as a function of drift. For example, SET and RESET threshold voltages and their respective drifts may be a function of different sense times (e.g., time spent at a given sub-threshold bias voltage). However, the drift tracking system may not only use more memory (e.g., because it may have to track all tiles205in a partition202), but also may have to include extra complexity to account for the snapback effect. Rather than tracking drift over time, for example, for each tile205, the techniques described herein may use a modified read process, where a first pre-read scan and data analysis step are performed, before following up with a subsequent read step, as shown inFIG.6. FIG.6illustrates multiple partitions202that may be used to generate pre-read voltages that may then be used to determine a more optimal VDM. In the depicted embodiment, each partition202may receive a different VDM pre-scan read voltage. In some embodiments, all of the partitions202may each receive a different VDM pre-scan read voltage. In other embodiments, a subset of the partitions202may be used. In certain embodiments, the VDM pre-scan read voltages may use the parallelism provided by the partitions202to deliver the VDM pre-scan read voltages in parallel to each partition202(or subset of the partitions202). That is, a first pre-scan read step may be performed in parallel by transmitting multiple VDM pre-scan voltages, one VDM pre-scan voltage per partition. In the depicted embodiment, a timing graph or diagram500illustrates the use of the pre-scan read step. More specifically graph500includes an X axis representative of time and a Y axis representative of voltage (e.g., bitline voltage VBL). During time range T1, multiple partitions202may be receiving different VDM pre-scan voltages in parallel, and data results may then be sensed for the data stored in each partition. Based on the data sensing, a distribution of bits (e.g., percent bits found as having logic 1) may be derived for each partition, for example at time range T2. As mentioned earlier, the bits should have an approximate 50% logic 1 distribution, or be in a range where logic 1 is between 35% to 65% of the total number of bits. Additionally, encodings such as those described inFIG.4may be used to provide for more evenly distributed logic 1 and logic 0 bits. Some of the data distributions may be outside of the desired range. Accordingly, it may be derived that the VDM pre-scan voltages delivered to partitions that resulted in data distributions outside of the desired range may not be as optimal as VDM pre-scan voltages that resulted in more evenly distributed data. In some embodiments, time range T2may be used for further data analysis. 
For example, certain statistical techniques, such as Montecarlo methods, probability analysis, and so on, may be used at time T2. For example, the control circuitry122may include Montecarlo analysis and/or model building that derives possible results (e.g., a more optimal VDM) by substituting a range of values (e.g., probability distribution such as a bell curve) based on the results from the pre-scan step at time range T1. In other embodiments, the more optimal VDM may be derived by adding/subtracting based on the pre-scan read results. For example, if a read step was going to use a VDM value V, V would now be adjusted up or down based on the value for the more optimal VDM pre-scan voltage. As mentioned, multiple VDM pre-scan voltages may be used. If higher voltages are found to derive improved data distributions, then the VDM value V would be adjusted up, and likewise, if lower voltages are found to derive improved data distributions, then the VDM value V would be adjusted down. The adjustment amount may be pre-calculated and/or derived for a read request by the control circuitry122based on how “far” the read values are from an ideal data distribution (e.g., 50% distribution of logic 1s). FIG.7is a flowchart of an embodiment of a process550that may be used to apply the pre-scan read techniques described herein. The process550may be implemented, for example, by the control circuit122. In the depicted example, the process550may begin by applying (block552) multiple pre-scan read voltages. In certain embodiments, each partition202of a memory device10may receive a different read voltage. In other embodiments, a subset of the partitions202may be used, and each partition202in the selected subset may then receive a different read voltage. The read voltages may be applied (block552), for example, via the bitlines104and the wordlines106for each partition202. In one embodiment, the read voltages may be applied in parallel. In other embodiments, the read voltages may be applied serially, in parallel, or a combination thereof (a first subset applied serially and/or a second subset applied in parallel). In some embodiments, some of the read voltages may be the same, for example, for redundancy, while in other embodiments each read voltage may have a different value. The process550may then receive (block554) the data being read, for example, from the partitions202. The process550may derive (block554) the data distribution for the data that was read. In one embodiment, the data distribution may be a metric such as percentage of logic 1's found in the data for a given partition (e.g., codeword in a partition). The derived data distributions (block554) may then be analyzed (block556). For example, the derived data distributions may be compared against a desired range (e.g., 35% to 65%) to see which one or more of the derived data distributions more closely falls inside of the desired range. The process550may then derive (block558) a more optimal VDM to apply in a second read step. For example, Montecarlo methods, probability analysis, and so on, may be used. As mentioned above, Montecarlo analysis and/or model building may be used to derive a more optimal VDM (block558) by substituting a range of values (e.g., probability distribution such as a bell curve) based on the results from the pre-scan step554. In other embodiments, the more optimal VDM may be derived (block558) by adding or subtracting based on the pre-scan read results (block554). 
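A simplified sketch of the selection step is given below; the per-partition read-back fractions, the nominal VDM and the adjustment step are assumptions, and the simple nudge toward the best pre-scan voltage stands in for the statistical analysis mentioned above.

```python
# Pick the pre-scan voltage whose fraction of logic 1 bits is closest to the
# ideal 50% distribution, then adjust the nominal VDM toward it.

def choose_vdm(prescan_results, nominal_vdm, target=0.50, step=0.02):
    """prescan_results maps each pre-scan voltage to its fraction of logic 1 bits."""
    best_v = min(prescan_results, key=lambda v: abs(prescan_results[v] - target))
    if best_v > nominal_vdm:
        return nominal_vdm + step        # higher voltages gave better distributions
    if best_v < nominal_vdm:
        return nominal_vdm - step        # lower voltages gave better distributions
    return nominal_vdm

# Example: four partitions scanned in parallel with four different voltages.
results = {1.10: 0.22, 1.15: 0.38, 1.20: 0.51, 1.25: 0.67}   # assumed fractions of logic 1s
print(choose_vdm(results, nominal_vdm=1.15))                  # adjusted upward, toward 1.20
```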
For example, if a read step was going to use a VDM value V, V would now be adjusted up or down based on the value for the more optimal VDM pre-scan voltage. The process550may then apply (block560) the more optimal VDM during a second step. In one embodiment, the second step may be a final step for the application of read voltages. In other embodiments, one or more further read voltage steps may be used. By applying a first pre-scan read and a second more optimal VDM read, the techniques described herein may enable a more optimal and efficient read of data stored in the memory device10. While the present disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the present disclosure is not intended to be limited to the particular forms disclosed. Rather, the present disclosure is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the following appended claims. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ,” it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
36,254
11862227
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Preferred embodiments of the present invention will be described in detail hereinbelow with reference to the attached drawings. FIG.1shows a block diagram of a driver circuit including a pair of voltage supply circuits for programming, reading, or erasing operation of memory devices. The driver circuit101, for example, includes a circuit node112that is connected to a respective word line of a memory device114including one or more memory cell arrays (not shown). A memory cell in the memory device114can be programmed by injecting electrical charge into its floating gate to increase the cell threshold voltage therefore. A voltage generator102, for example, includes a voltage pump (not shown) for generating and transmitting various high voltage levels to both a control circuit104and a CMOS transfer circuit106via a voltage node VPP connected. For programming, erasing, or reading the memory cell, the voltage generator102generates a voltage ranging between ground potential and a high voltage value defined as approximately a voltage potential equal to a supply voltage (VDD) multiplied by various ratios, e.g., four times VDD voltage. The value of the ratios is set based on the use of circuit node112in response to the different operations on the memory cell as described inFIGS.3to6. In an exemplary but not-limitative embodiment of the present invention, the control circuit104includes at least one transistor and is adapted to operate the PMOS transfer circuit108to drive the circuit node. The control circuit104is also adapted to apply different control signals to the PMOS transfer circuit108to reduce expected voltage stress on the PMOS transfer circuit108during the programming or erasing operation. The control circuit104can be further adapted to selectively operate transistors in the PMOS transfer circuit108to prevent an electrical leakage from the circuit node112when the circuit node112is discharged to a ground potential level. The CMOS transfer circuit106includes a plurality of PMOS and NMOS transistors and charges the memory device114to the high voltage via the circuit node112upon receiving the high voltage from the voltage generator102for programming or erasing the memory cell. The CMOS transfer circuit106is further adapted to discharge the memory cell to a potential ground level for idle operation. For reading the memory cell, the CMOS transfer circuit106is adapted to apply a voltage to the circuit node112based on a voltage at a VRD1node generated by a voltage source110. In an exemplary but not-limitative embodiment of the present invention, PMOS transfer circuit108includes a plurality of PMOS transistors to drive the circuit node. Upon receiving control signals of the control circuit104, the PMOS transfer circuit108applies a read reference voltage to the circuit node112to determine whether the memory cell is in a programmed state or not (erased). A voltage source110is configured to generate various voltage levels between a ground potential and a standard supply voltage VDD (e.g., 2.5 volts). The voltage source110supplies the same or different voltages to the CMOS transfer circuit106and the PMOS transfer circuit108via VRD1and VRD2nodes, respectively. For the operations of the memory cell, the circuit node112transmits voltages generated by the CMOS transfer circuit106and/or the PMOS transfer circuit108to the memory cell. 
For example, the circuit node112receives currents from the activated CMOS transfer circuit106and currents from the activated PMOS transfer circuit108. Thus, a voltage at the circuit node112is responsive to the currents received. The memory device114is adapted to include the memory cell array in which the memory cells are connected to respective circuit nodes112. Each of the memory cells has a programmed threshold voltage range and an erased voltage range, as shown inFIGS.7A and7B. Circuits and methods for implementing charge and discharge the circuit node112are described in reference toFIGS.2-6. The circuits and methods set forth in more detail may be implemented within a single integrated circuit die or may be implemented in a multiple-chip module. FIG.2is a schematic form of the driver circuit ofFIG.1according to a preferred embodiment of the present invention. In an exemplary but non-limitative embodiment of the present invention, the CMOS transfer circuit106includes nine stacked P and N-type metal oxide semiconductor transistors. In the CMOS transfer circuit106, the four P-type metal oxide semiconductor (PMOS) transistors CM1, CM2, CM3and CM4are arranged to connect the voltage node VPP to the circuit node112. The CM1, CM2, CM3and CM4are adapted to receive gate signals through their gate terminals coupled to the nodes N1, N2, N3and N4, respectively. The CM1has a source coupled to the VPP node having a voltage ranging from the VDD voltage to a voltage of four times the VDD voltage, a drain coupled to a source of the CM2, and the gate coupled to a node N1to provide a current path through CM1. The CM2has a source connected to the drain of the CM1, a drain coupled to a source of the transistor CM3, and a gate coupled to a node N2to provide a current path between the CM1and CM3. The CM3has a source connected to the drain of the CM2, a drain coupled to a source of the transistor CM4, and a gate coupled to a node N3to provide a current path between the CM2and CM4. The CM4has a source connected to the drain of the CM3, a gate coupled to a node N4to provide a current path through the transistor CM4, and a drain coupled to the circuit node112. In the CMOS transfer circuit106, the five N-type metal oxide semiconductor (NMOS) transistors CM5, CM6, CM7, CM8and CM9are arranged to connect the circuit node112to either a ground node VSS and the VRD1node. The CM5has a drain connected to both the drain of the CM4and the circuit node112, a source coupled to a drain of the CM6, and a gate coupled to the node N4to provide a charge path through the CM5, The CM6has a drain coupled to the source of the CM5, a source coupled to a drain of the CM7, and a gate coupled to a node N5to provide a charge path through the CM6. The CM7has a drain coupled to the source of the CM6, a source coupled to the drains of the CM8and CM9, and a gate coupled to a node N6to provide a charge path through the CM7. The CM8has a drain coupled to the source of the CM7, a source coupled to the ground node VSS, and a gate coupled to a node N7to provide a charge path through the CM8. The CM9has the drain coupled to the source of the CM7, a source coupled to the VRD1node that can have a voltage level between ground and the supply voltage VDD, and a gate coupled to a gate node N8to provide a charge path through the CM9. The PMOS transfer circuit108is arranged between the VRD2node and the circuit node112. The VRD2node can have a voltage level between ground and the standard supply voltage VDD (e.g., 2.5 volts). 
The VRD1node coupled to the CMOS transfer circuit106and the VRD2node coupled to the PMOS transfer circuit108can have the same or different voltage. In one embodiment, PMOS transfer circuit108includes four stacked PMOS transistors PM1, PM2, PM3and PM4. The PM1has a source coupled to the VRD2node, a drain coupled to a source of the PM2, and a gate coupled to a node N12to provide a charge path through the PM1. The PM2between the PM1and PM3has a source connected to a drain of the PM1, a drain connected to a source of the PM3, and a gate coupled to a node N11to provide a charge path through the PM2. The PM3between the PM2and PM4has a source connected to a drain of the PM2, a drain connected to a source of the PM4, and a gate coupled to a node N10to provide a charge path through the PM3. The PM4between the PM3and the circuit node112has a source connected to a drain of the PM3, a drain connected to the circuit node112, and a gate coupled to a node N9to provide a charge path through the PM4. In one embodiment, when the VRD2node has a voltage equivalent to a supply voltage VDD, the activated PM1, PM2, PM3and PM4allow charge to flow from the VRD2node to the circuit node112, which is set to be charged up to a predefined level. The results of the activation or deactivation of the PM1, PM2, PM3and PM4in providing the charge path are described in detail with reference toFIGS.3-6. The control circuit104is coupled to the PM1, PM2, PM3and PM4via the respective nodes N9, N10, N11and N12to transmit gate control signals based on received external input signals via SW1and SW2. The control circuit104can include one or more MOS transistors for controlling the PMOS transistors based on input signals received. FIG.3shows a charging operation of the driver circuit ofFIG.1to read the memory cell according to one embodiment of the present invention. InFIG.3, paths A and B are provided to supply a read reference voltage of which the range can be about from GND to VDD voltage to the circuit node112. In an exemplary embodiment, during the reading operation of the memory cell, the circuit node112can be charged to a targeted supply voltage level VDD by the path B. The paths A and B are effective by activation of the selected transistors in the CMOS and PMOS transfer circuits106and108, respectively. The path A is effective when a voltage at the VRD1node is less than the VDD voltage minus threshold voltage that is needed to turn on the transistors CM5, CM6, CM7and CM9in the CMOS transfer circuit106. The CM5, CM6, CM7and CM9are activated by the VDD voltage at their gates via the coupled nodes N4, N5, N6and N8, respectively. The path A enables the supply of current from the VRD1node to the circuit node112via the activated CM5, CM6, CM7and CM9. As a result of this current through the path A, the circuit node112can be charged to a voltage level of which the range is nearly from ground level (GND) to supply voltage VDD minus the threshold voltage. The deactivated CM8prevents unwanted leakage of the charges flowing out of the path A to the ground node VSS. The path B is effective when a voltage at the VRD2node is greater than a threshold voltage that is needed to turn on the transistors PM1, PM2, PM3and PM4in the PMOS transfer circuit108. A current flows through the PM1, PM2, PM3and PM4to the circuit node112when these transistors are activated with ground voltage (GND) on the respective gates from the control circuit104. 
The control circuit104is coupled to the PM1, PM2, PM3and PM4via the respective nodes N9, N10, N11and N12to transmit the GND signals based on received external input signals of VDD and GND via SW1and SW2, respectively. As a result of this current through the path B, the circuit node112can be charged up to nearly the supply voltage level VDD at the VRD2node. Therefore, a higher read reference voltage to accurately verify the programmed memory cell can be supplied to the circuit node112through the path B compared to when the circuit node112is charged by the path A alone. In an exemplary embodiment, when the path A, path B, or the paths A and B are effective for driving the memory cell to the read reference level, no other paths are provided for charging the circuit node112. Further, unwanted leakage current paths from the circuit node112are prevented. For example, a path between the VPP node to the circuit node112does not carry currents from the VPP node to the circuit node112by turning off the CM1. The CM1is turned off by having a gate coupled to the node N1to receive VDD and a source coupled to the VPP node to receive the VDD. The voltage at the gate of the CM2is ground. The CM3and CM4are turned off by receiving the VDD through their gates such that (i) the current from the VPP node is prevented and (ii) unwanted leakage current paths from the circuit node that is charged is prevented. FIG.4shows a discharging operation of the driver circuit ofFIG.1according to one embodiment of the present invention. A path C is provided by the CM5, CM6, CM7and CM8between the circuit node112and the ground node VSS. The path C enables the current flow from the circuit node112to the VSS node when the CM5, CM6, CM7and CM8have a gate voltage VDD via the nodes N4, N5, N6and N7, respectively. When the path C is effective, no other paths are provided. A path between the circuit node112to the VPP node is inoperative by turning off the CM1and CM4. The CM1, CM3and CM4are turned off as their gates receive the VDD voltage, rendering a magnitude of respective gate-to-source voltages less than a magnitude of the threshold voltage that is needed to turn the CM1and CM4on. Also, a path between the circuit node112and the VRD2node is inoperative by turning off the PM1. The PM1is turned off when its gate receives the VDD voltage from the control circuit104, rendering a magnitude of the respective gate-source voltage less than a magnitude of the threshold voltage that is needed to turn the PM1on. The control circuit104is coupled to the PM1, PM2, PM3and PM4via the respective nodes N9, N10, N11and N12to transmit the VDD, GND, GND, GND signals based on received external input signals of GND via. SW1and SW2. The CM9is deactivated to prevent unwanted loss of charge from the path C when its gate terminal receives a ground voltage at the node N8.FIG.4is merely illustrative, and it should be noted that the selective operations of the transistors can be various to implement inoperative paths for discharging the circuit node112for operating the memory cells. FIG.5shows a charging operation of the driver circuit ofFIG.1to program or erase the memory cell according to one embodiment of the present invention. The path D between the VPP node and the circuit node112supplies a high voltage to the circuit node112to program or write data in the memory cell. The path D is effective by the activated CM1, CM2, CM3and CM4. 
Given that these transistor's voltage tolerance limit is about a VDD voltage level, a gate voltage of three times the VDD to the CM1, CM2, CM3and CM4reduces the voltage stress on those transistors within its nominal operating voltage of VDD (=4×VDD−3×VDD) while the path D is effective. In other words, the stacked CM1, CM2, CM3and CM4each can operate at its nominal voltage level of VDD (Gate-Source Voltage) for providing the path D by receiving the gate input voltage, a voltage of three times VDD, respectively. When the path D is effective, unwanted leakage currents from the circuit node112are prevented. A path leading to the VRD2node from the circuit node112is inoperative by turning off the PMOS transfer circuit108. For instance, when the path D is effective, a path through the PM4is inoperative as the PM4's gate voltage of 4×VDD at the node N9places a magnitude of the gate-source voltage of the PM4not greater than a Vth (threshold voltage) of the PM4. Also, when the path D is effective, a path through the PM3is inoperative as the PM3's gate voltage of 3×VDD at the node N10drives the common source/drain region between the PM3and PM4to about a voltage of 3×VDD plus a Vth of the PM3. This gate voltage places a magnitude of the gate-source voltage of the PM3not being greater than the PM3threshold voltage. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to the drain-source of the PM4to be not higher than about VDD level (a nominal tolerable voltage stress level). Also, when the path D is effective, a path through the PM2is inoperative as the PM2's gate voltage of 2×VDD at the node N11drives the common source/drain region between the PM2and PM3to about a voltage of 2×VDD plus a threshold voltage Vth of the PM2. This gate voltage places a magnitude of the gate-source voltage of the PM2not being greater than the PM2threshold voltage. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the PM3to be not higher than about the VDD level. When the path D is effective, a path through the PM1is inoperative as the PM1's gate voltage of VDD at the node N12drives the common source/drain region between PM1and PM2to about VDD plus Vth of the PM1. This gate voltage places a magnitude of the gate-source voltage of the PM1not being greater than the PM1threshold voltage. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the PM2to be not higher than about the VDD level. The control circuit104is coupled to the PM1, PM2, PM3and PM4via the respective nodes N9, N10, N11and N12to transmit the VDD, 2×VDD, 3×VDD, 4×VDD signals based on received external input signals of GND and VDD via SW1and SW2. Also, when the path D is effective, a path between the circuit node112and the ground node VSS blocks leakage currents from the circuit node112by the deactivated NMOS transistors of CMOS transfer circuit. In one embodiment, when the path D is effective, a path through the CM8is inoperative as the CM8has a gate-source voltage not greater than a threshold voltage of the CM8by having a ground gate voltage through the N7. Also, when the path D is effective, a path through the CM7is inoperative as the CM7's gate voltage at the N6drives the common source/drain region between CM7and CM8to about VDD minus Vth of the CM7. This gate voltage places a magnitude of the gate-source voltage of the CM7not being greater than the CM7threshold voltage. 
Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the CM8to be not higher than about the VDD level. Also, when the path D is effective, a path through the CM6is inoperative as the CM6's gate voltage at the N5drives the common source/drain region between CM6and CM7to about 2×VDD minus Vth of the CM6. This gate voltage places a magnitude of the gate-source voltage of the CM6not being greater than the CM6threshold voltage. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the CM7to be not higher than about the VDD level. Also, when the path D is effective, a path through the CM5is inoperative as the CM5's gate voltage at the N4drives the common source/drain region between CM5and CM6to about 3×VDD minus Vth of the CM5. This gate voltage places a magnitude of the gate-source voltage of the CM5not being greater than the CM5threshold voltage. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the CM6to be not higher than about the VDD level. FIG.6shows a discharging operation of the driver circuit ofFIG.1according to another embodiment of the present invention. During the discharging operation, a path E between the circuit node112and a ground node VSS is provided by activating the CMOS transfer circuit106. The VSS node is set to a ground voltage GND for enabling the charge flow from the circuit node112to the VSS when the CM5, CM6, CM7and CM8have the gate voltage VDD via the nodes N4, N5, N6and N7, respectively. When the path E is effective, no other paths are provided. A path through the CM1is inoperative when the CM1's gate receives a high voltage 4×VDD, rendering the gate-source voltage magnitude through the coupled node N1not higher than the threshold voltage that is needed to turn the CM1on. When the path E is effective, a path through the CM2is inoperative when the CM2's gate voltage from the N2drives the common source/drain region between CM1and CM2to about 3×VDD voltage plus Vth of the CM2. This gate voltage places a magnitude of the gate-source voltage of the CM2not being greater than the threshold voltage to turn on the CM2. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the CM1to be not higher than about VDD voltage level (a nominal tolerable voltage stress level). When the path E is effective, a path through the CM3is inoperative as the CM3's gate voltage from the N3drives the common source/drain region between CM2and CM3to about 2×VDD plus Vth of the CM3. This gate voltage places a magnitude of the gate-source voltage of the CM3not being greater than the threshold voltage to turn on the CM3. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the CM2to be not higher than about the VDD level. When the path E is effective, a path through the CM4is inoperative as the CM4's gate voltage from the N4drives the common source/drain region between CM3and CM4to about VDD plus Vth of the CM4. This gate voltage places a magnitude of the gate-source voltage of the CM4not being greater than the threshold voltage to turn on the CM4. Subsequently, the source/drain region's driven voltage reduces voltage stress to be applied to a drain-source of the CM3to be not higher than about the VDD level. 
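The arithmetic behind this stacked-gate biasing can be made concrete with a short sketch. It works through the path D case for the off PMOS transfer stack PM1 to PM4 described earlier (circuit node 112 driven to 4×VDD, gates stepped from VDD to 4×VDD), using the simplified model implied above in which each intermediate source/drain node settles near its neighbouring gate voltage plus a threshold voltage. The VDD and Vth values, and the assumption that the VRD2 node sits at VDD, are illustrative.

    # Worked example of the path D bias scheme for the off PMOS transfer stack
    # PM1-PM4 (illustrative supply and threshold values; simplified model only).

    VDD = 2.5   # volts, example supply
    VTH = 0.6   # volts, example threshold magnitude

    def path_d_pmos_stack_stress(vdd=VDD, vth=VTH, vrd2=None):
        vrd2 = vdd if vrd2 is None else vrd2   # assume the VRD2 node is held at VDD here
        # Gate voltages applied while path D is effective (PM4's gate at 4*VDD keeps it off):
        gate = {"PM3": 3 * vdd, "PM2": 2 * vdd, "PM1": 1 * vdd}
        # Node voltages from the circuit node 112 side down to the VRD2 node:
        nodes = [4 * vdd,             # circuit node 112 driven to VPP = 4*VDD
                 gate["PM3"] + vth,   # PM3/PM4 common node ~ 3*VDD + Vth
                 gate["PM2"] + vth,   # PM2/PM3 common node ~ 2*VDD + Vth
                 gate["PM1"] + vth,   # PM1/PM2 common node ~ VDD + Vth
                 vrd2]
        names = ["PM4", "PM3", "PM2", "PM1"]
        return {n: top - bottom for n, top, bottom in zip(names, nodes, nodes[1:])}

    # Each drain-source stress comes out at roughly VDD (2.5 V here) or less,
    # which is the "nominal tolerable voltage stress level" the text refers to.
    print(path_d_pmos_stack_stress())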
It also needs to be noted that the gate voltages of the cascaded PMOS transistors in the CMOS transfer circuit106can be reduced by a predefined ratio depending on the number of cascaded transistors coupled to a high voltage supply node. As a result, the voltage stress on each transistor CM1, CM2, CM3, and CM4can be avoided or substantially reduced. A path between the VRD2node and the circuit node112does not carry a current from the circuit node112by turning off the PM1. The PM1is turned off when the PM1's gate receives the VDD voltage from the control circuit104, placing a magnitude of the gate-source voltage through the coupled node N12not higher than the threshold voltage that is needed to turn the PM1on. The unwanted leakage current from the discharging path is prevented. For instance, current leakage through the path between the circuit node112and the VRD2node is prevented by turning off the PM1. The PM1is turned off because the magnitude of the gate-source voltage of the PM1is not higher than the threshold voltage level to turn on the PM1. The control circuit104is coupled to the PM1, PM2, PM3and PM4via the respective nodes N9, N10, N11and N12to transmit the VDD, GND, GND, GND signals based on received external input signals of GND via. SW1and SW2. To summarize, the proposed circuits enable switching the circuit node112between VDD and GND during the low voltage switching mode as shown inFIGS.3and4, while the circuits operate switching the circuit node112between 4×VDD voltage and GND addressing the high voltage stress issue stated above during the high voltage switching mode, as shown inFIGS.5and6. FIGS.7A and7Bare graphic diagrams illustrating distribution profiles of cell threshold voltages in the embedded flash memory device. The circuit node112in the proposed circuits can be connected to the embedded flash memory and provide a suitable read reference voltage (VRD1or VRD2, collectively VRD) level to the embedded flash memory cell. The flash memory stores data in the form of a cell threshold voltage, the lowest voltage at which the flash memory cell can be switched on. During a read operation to the cell, the cell in an “erased” state is turned on by having a cell threshold voltage less than a read reference voltage (VRD) that is applied to the output circuit node112. In contrast, the cell in a programmed state is turned off by having a cell threshold voltage greater than the read reference voltage (VRD). Due to the process and voltage variation, memory cell threshold voltage of “E” and “P” states can have a voltage window in which the cell's threshold voltage lies. Referring toFIG.7A, the threshold voltage distribution “E” can span from a certain negative voltage to a minimum possible VRD to verify the cell threshold level in “E” state voltage distribution. Also, the threshold voltage distribution “P” can span from a maximum possible VRD to verify the cell threshold level in “P” state voltage distribution to a certain positive voltage. Thus, to determine with high accuracy whether the cell is erased or programmed, a large voltage gap of the cell threshold voltage between the erased and programmed states is preferred, since read reference voltage level at the middle of the gap can safely determine whether it is higher or less than the cell threshold voltage considering the wide distribution of the cell threshold voltage. 
The proposed circuit in this disclosure can provide a wider range of VRD between GND and VDD voltage compared to a conventional circuit having a range of VRD between GND and VDD-Vth (transistor threshold voltage) as shown inFIG.7A. Thus, as illustrated inFIG.7B, a larger voltage gap of the cell threshold voltage between the erased and programmed states can be enabled, resulting in a much more reliable embedded flash memory cell connected to the proposed circuit by having a larger gap between “E” and “P” states. As a result, embedded flash memory lifetime can be extended significantly by adopting the proposed high voltage switching circuit connected to the embedded flash memory cell.
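The margin argument of FIGS. 7A and 7B reduces to simple arithmetic, sketched below with purely illustrative numbers: a conventional NMOS-limited path can drive the read reference only up to about VDD minus a threshold voltage, whereas path B can drive it to about VDD, so the usable window between the erased and programmed distributions widens by roughly one threshold voltage.

    # Illustrative read-reference window comparison (example values only).
    VDD = 2.5            # volts
    VTH = 0.6            # volts, transistor threshold magnitude

    conventional_vrd_max = VDD - VTH   # NMOS-limited charging of the circuit node
    proposed_vrd_max = VDD             # path B charges the circuit node to ~VDD

    # With the erased-state upper bound unchanged, the extra headroom lets the
    # programmed-state verify level sit higher, widening the E-to-P window:
    window_gain = proposed_vrd_max - conventional_vrd_max
    print(conventional_vrd_max, proposed_vrd_max, window_gain)   # roughly 1.9, 2.5 and 0.6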
24,921
11862228
DETAILED DESCRIPTION Regardless of a charge pump or an LDO, the load unit needs to be charged during start, which may generate a surge current to an external input voltage VDD or VPPEX. Referring toFIG.1, for example, when an LDO is started, an output voltage VEQ is initially low and much lower than a target voltage, resulting in a low output voltage of an operational amplifier DIFF. At this time, a gate-to-source voltage difference of an output PMOS transistor is relatively great, resulting in a large on-current of the output PMOS transistor. The load unit C draws a large current from an external voltage source VPPEX. At present, referring toFIG.2, after a flag signal POR, which characterizes the effectiveness of the voltage source, is in an effective state, a plurality of enable signals (DC1_EN . . . DC4_EN) for controlling the start of the multiple power supply circuits are generated simultaneously. That is, after the voltage source is effective, the multiple power supply circuits are started at the same time. If the multiple power supply circuits are started at the same time, the surge currents generated by different power supply circuits during the start will be superimposed to produce a greater surge current. An excessive surge current will pull down the external input voltage VDD or VPPEX, resulting in the output voltage of the power supply circuit not meeting the requirements, which may cause a functional circuit that depends on the output voltage not to start normally, or even cause a chip to fail to start. The embodiments of the present disclosure provide a power supply circuit and a memory. A plurality of external signals are received after transmitting a first enable signal used for starting a first-type power supply circuit. Since the second enable signal is generated based on a flag signal and an external signal, a start time of a second-type power supply circuit is later than a start time of the first-type power supply circuit. Moreover, since each external signal corresponds to a second-type power supply circuit, asynchronous start of different second-type power supply circuits may be implemented by controlling start times of different external signals to be different. As such, this helps to reduce the number of power supply circuits that are started at the same time, thereby ensuring the normal start of the chip. In addition, in the embodiments of the present disclosure, external signals are used to control the asynchronous start of the power supply circuits without providing a special delay circuit, which helps to save the chip area. In order to make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, various embodiments of the present disclosure will be described in detail below in combination with the accompanying drawings. However, it can be understood by those skilled in the art that in various embodiments of the present disclosure, many technical details have been proposed in order to give the reader a better understanding of the present disclosure. However, the technical solutions claimed in the present disclosure may be implemented even without these technical details and may be implemented through various changes and modifications based on the following various embodiments. Referring toFIG.3, the power supply circuit includes a voltage source10, multiple power supply circuits11and a control circuit. The multiple power supply circuits are connected to the voltage source10. 
The multiple power supply circuits11have power supply terminals (not shown) and load units (not shown). If the voltage source10is effective and the multiple power supply circuits11are in an enable state, the multiple power supply circuits11pull up voltages of the power supply terminals to a preset voltage, and supplies power to the load units during the pulling up operation. The multiple power supply circuit11include at least one first-type power supply circuit111and second-type power supply circuits112. The first-type power supply circuit111is configured to receive a first enable signal12a, and enters the enable state if the first enable signal12ais received. Each of second-type power supply circuits112is configured to receive a second enable signal12b, and enter the enable state if the second enable signal12bis received. The control circuit12is configured to receive a flag signal POR, and transmit the first enable signal12ato the first-type power supply circuit111if the received flag signal POR is in an effective state, the effective state characterizing the effectiveness of the voltage source10. The control circuit is further configured to receive a plurality of external signals12cafter transmitting the first enable signal12a, each of the external signals12ccorresponding to one of the second-type power supply circuits112, and start times of different external signals12cbeing different, and transmit the second enable signal12bto a corresponding second-type power supply circuit112if the flag signal POR and the external signal12care received. The first-type power supply circuit111and the second-type power supply circuits112are distinguished according to the received enable signals. The first-type power supply circuit111includes at least one power supply circuit11, and the second-type power supply circuits112include at least two power supply circuits. The difference between the first enable signal12aand the second enable signals12bis mainly due to different generating conditions, and the difference between different second enable signals12bis mainly due to different start times. The effectiveness of the voltage source10means that the voltage source10has a rated output voltage. In some embodiments, referring toFIG.4, the flag signal POR is effective at a high level, and the level of the flag signal POR increases with the start of the voltage VDD/VPPEX of the voltage source10. When the voltage of the voltage source10increases to the rated value, the level of the flag signal POR is at the high level, that is, the effective state, and then the level of the flag signal POR falls back from the high level to a low level. In some other embodiments, the flag signal POR is effective at the low level, or, the level of the flag signal POR is at the high level as long as the voltage of the voltage source10is at the rated value. In some embodiments, the control circuit12includes a first enable unit121and a plurality of second enable units122. An output terminal of each of the second enable units122is connected to an enable terminal of one corresponding second-type power supply circuit112. The first enable unit121is configured to receive the flag signal POR, and transmit the first enable signal12aafter the flag signal POR reaches the effective state. The second enable units122are configured to receive the flag signal POR and the external signals12cand transmit the second enable signals12b. The external signals12creceived by different second enable units122are different. 
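A minimal behavioural sketch (not RTL, and using hypothetical signal names and times) of the enable generation described above: the first enable unit releases its enable once the flag signal has reached the effective state, while each second enable unit releases its enable only after both the flag signal has been seen and its own external signal has started.

    # Behavioural model of the control circuit's enable generation (times are in
    # arbitrary units; names such as DC1_EN..DC4_EN follow the figures referenced
    # in the text, but the values here are illustrative assumptions).

    def enable_times(por_effective_time, external_start_times):
        """external_start_times maps each second-type supply's enable name to the
        start time of the external signal routed to its second enable unit."""
        first_enable = {"DC1_EN": por_effective_time}
        second_enables = {
            name: max(por_effective_time, t_ext)  # needs POR seen AND external signal started
            for name, t_ext in external_start_times.items()
        }
        return first_enable, second_enables

    # Staggered external signals give staggered second enables, so the supplies
    # are not all started at the same time:
    first, second = enable_times(
        por_effective_time=1.0,
        external_start_times={"DC2_EN": 3.0, "DC3_EN": 5.0, "DC4_EN": 7.0},
    )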
It can be understood that functions of the first enable unit121and the second enable units122are limited by functions of the control circuit12. The first enable unit121transmits the first enable signal12aonly after the flag signal POR is in the effective state, and the second enable units122transmit the second enable signals12bonly after receiving the flag signal POR and the external signals12c. In some embodiments, the flag signal POR, the external signals12c, the first enable signal12a, and the second enable signals12bare the high-level effective signals, which means that the flag signal POR, the external signals12c, the first enable signal12a, and the second enable signals12bare effective at high level. The first enable unit121generates the first enable signal12abased on the flag signal POR. The waveform parameter of the flag signal POR and the waveform parameter of the first enable signal12amay be the same or different, and the waveform parameter includes the duration of the high level. The second enable units122generate the second enable signals12bbased on the flag signal POR and the external signals12c. The device structure of the second enable units122may be adjusted according to the level of the flag signal POR at the start time of the external signal12c. Moreover, the start time of the second enable signal12bmay be the same or different from the start time of the corresponding external signal12c. It is only necessary to ensure that the start times of different second enable signals12bgenerated based on different external signals12care different, and the intervals between start times of adjacent second enable signals12bare greater than a preset duration. The setting of the preset duration is related to the current-time change relationship of the surge current generated when a single power supply circuit11is started. In some embodiments, the preset duration is greater than or equal to the duration of the surge current during start of one power supply circuit11. That is, after one power supply circuit11is fully started, another power supply circuit11is started. As such, the surge current may be minimized and the voltage stability of the voltage source10may be ensured. In some other embodiments, the preset duration is less than the duration of the surge current generated during the start of the single power supply circuit11, but the surge current of two power supply circuits11after superposition is less than a preset surge threshold of the voltage source10. The preset surge threshold means that a surge current with a current value less than this threshold will not pull down the voltage of the voltage source10. That is, another power supply circuit11is started before one power supply circuit11is fully started, so that the two surge currents are superimposed, but the superimposed current value will not pull down the output voltage of the voltage source10. In some embodiments, the second enable unit122includes an SR latch. 
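Before the specific FIG. 5 connection is described below, a brief behavioural sketch of such an SR-latch-based second enable unit may help. It assumes the cross-coupled NOR arrangement detailed in the next paragraph (the external signal into the first gate, the flag signal POR into the second gate, and the second gate's output serving as the second enable signal); it is an illustrative model, not a timing-accurate simulation.

    # Gate-level model of a second enable unit built from two cross-coupled NOR
    # gates; the settle loop simply iterates the two NOR equations to a fixed point.

    def nor(a, b):
        return 0 if (a or b) else 1

    def second_enable_unit(external, por, q1=0, q2=0):
        """Returns (q1, q2); q2 is the unit's output, i.e. the second enable signal."""
        for _ in range(4):              # settle the cross-coupled loop
            q1 = nor(external, q2)      # first NOR gate (external signal input)
            q2 = nor(q1, por)           # second NOR gate (POR input, drives the output)
        return q1, q2

    # POR high, external signal low: the enable output stays low.
    state = second_enable_unit(external=0, por=1)
    # POR falls back low, external signal still low: the output remains latched low.
    state = second_enable_unit(external=0, por=0, q1=state[0], q2=state[1])
    # External signal rises while POR is low: the output goes high (enable asserted).
    state = second_enable_unit(external=1, por=0, q1=state[0], q2=state[1])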
Specifically, referring toFIG.5, the SR latch includes a first NOR gate21and a second NOR gate22. A first input terminal of the first NOR gate21is configured to receive the external signal12c, and a second input terminal of the first NOR gate21is connected to an output terminal of the second NOR gate22. An output terminal of the first NOR gate21is connected to a first input terminal of the second NOR gate22, and a second input terminal of the second NOR gate22is configured to receive the flag signal POR. The output terminal of the second NOR gate22serves as an output terminal of the second enable unit122and is configured to output the second enable signal12b. It can be understood that when the external signal12cis at a low level and the flag signal POR is at a high level, the second NOR gate22outputs the low level, the first NOR gate21outputs the high level, and the second enable unit122outputs the low level. When the external signal12cremains at the low level and the flag signal POR falls back to the low level, the second enable unit122continues to output the low level. When the flag signal POR is at the low level and the external signal12cincreases to the high level, the first NOR gate21outputs the low level, the output terminal of the second NOR gate22outputs the high level, and the second enable unit122outputs the high level, i.e., generating the second enable signal12b. The external signal12cbeing at the low level includes the following two cases. First, the first NOR gate21does not receive the external signal12c, and the external signal12cis pulled down in a suspended state, thus exhibiting the low level. Second, the external signal12creceived by the first NOR gate21is at the low level. In some embodiments, with continued reference toFIG.4, the external signals12cinclude anti-fuse address signals, and the start times of different anti-fuse address signals are different. The anti-fuse address signals are high-level signals inside the chip. By using anti-fuse address signals as external signals, there is no need to provide an additional circuit inside the chip to generate external signals, which helps to save the chip area. Moreover, there is no need to provide an additional circuit to receive the external signals from the outside of the chip, thereby avoiding failure of normal start of the chip caused by fluctuations in the external signals, and improving the start stability of the chip. In some embodiments, the control circuit12is further configured to receive m anti-fuse address signals, recorded as first anti-fuse address signals, and select n anti-fuse address signals therefrom, recorded as second anti-fuse address signals. Intervals between start times of adjacent second anti-fuse address signals are the same, and the intervals between the start times of the adjacent second anti-fuse address signals are greater than intervals between start times of adjacent first anti-fuse address signals. The second anti-fuse address signals are used as the external signals12c. That is, the intervals between the start times of the adjacent first anti-fuse address signals may be less than the above preset duration. 
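The screening of the first anti-fuse address signals can be sketched as a simple greedy selection over their start times; the preset duration, the start times, and the function name below are illustrative assumptions (in this embodiment the remaining intervals are also meant to be equal, which the example happens to satisfy).

    # Keep a subset of the available start times whose adjacent gaps are all at
    # least the preset duration (analogous to keeping, e.g., XADD<0>, XADD<2> and
    # XADD<4> out of XADD<0:4> when neighbouring signals start too close together).

    def screen_start_times(start_times, preset_duration):
        selected = []
        for t in sorted(start_times):
            if not selected or t - selected[-1] >= preset_duration:
                selected.append(t)
        return selected

    # Five candidate signals spaced 0.5 apart, screened with a preset duration of 1.0:
    print(screen_start_times([1.0, 1.5, 2.0, 2.5, 3.0], preset_duration=1.0))
    # -> [1.0, 2.0, 3.0]: every remaining adjacent interval meets the requirement.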
In other words, when the intervals between the start times of the adjacent first anti-fuse address signals are unequal, the first anti-fuse address signals are screened, so that the intervals between the start times of the screened adjacent second anti-fuse address signals meets the requirement of being greater than or equal to the preset duration. It should be noted that an independent screening unit may be provided in the control circuit12. The screening unit performs a screening operation on the m anti-fuse address signals according to requirements for the intervals between the start times. The requirements include the equality and specific values of the intervals between adjacent start times. In some embodiments, referring toFIG.6, the control circuit12is further connected to an anti-fuse scanning unit14. The anti-fuse scanning unit14is configured to scan address information of the anti-fuse array and generate anti-fuse address signals XADD. The control circuit12is configured to receive the anti-fuse address signals XADD generated by the anti-fuse scanning unit14. In some embodiments, the anti-fuse scanning unit14is further configured to receive a reset signal Reset_n. The reset signal Reset_n is configured to trigger the anti-fuse scanning unit14to scan the address information of the anti-fuse array. A reception time of the reset signal Reset_n is later than a reception time of the effective state of the flag signal POR. The control circuit12may receive the flag signal POR in the effective state firstly, and generate the first enable signal12abased on the flag signal POR in the effective state to enable the first-type power supply circuit111. The control circuit then receives the external signals12c, and generates the second enable signals12bbased on the external signals12cand the flag signal POR to enable the second-type power supply circuits112. That is, the control circuit12can effectively control the asynchronous start of the first-type power supply circuit111and the second-type power supply circuits112. In some embodiments, the control circuit12is further connected to the anti-fuse scanning unit14through a local latch13. The anti-fuse scanning unit14is further configured to transmit the generated anti-fuse address signals XADD to the local latch13. The control circuit12is further configured to receive the anti-fuse address signals XADD from the local latch13. In some embodiments, each of the load units includes a filter capacitor. For example, the power supply circuit11may be an LDO structure, and the filter capacitor may be connected to the power supply terminal of the power supply circuit11. In some embodiments, the control circuit12is configured to transmit the first enable signal12aand the second enable signals12bbefore generation of a clock enable signal CKE to enable the first-type power supply circuit111and the second-type power supply circuits112. Before the generation of the clock enable signal CKE, the DRAM has not started a read-write or refresh operation, and it is not necessary to enable all power supply circuits11. Therefore, it is only necessary to enable all power supply circuits11before the generation of the CKE signal. A specific embodiment of the present disclosure will be explained in detail below in conjunction withFIG.4toFIG.6. In this specific embodiment, the first-type power supply circuit111includes one power supply circuit11, which is recorded as a first power supply circuit. 
The second-type power supply circuits112include three power supply circuits, which are recorded as a second power supply circuit, a third power supply circuit and a fourth power supply circuit respectively. After the flag signal POR is in the effective state, the first enable unit121transmits the first enable signal12a, that is, DC1_EN, to the first power supply circuit. After the reset signal Reset_n is generated, the control circuit12acquires, through the local latch13, the high-level anti-fuse address signals XADD scanned by the anti-fuse scanning unit14, for example, XADD<0:4>. After acquiring the anti-fuse address signals XADD, the control circuit12selects 3 groups of the signals from XADD<0:4>, specifically XADD<0>, XADD<2>, and XADD<4>. Since the flag signal POR is at the low level after acquiring the anti-fuse address signals XADD, the second enable units122may respectively generate the second enable signals12bbased on the selected anti-fuse address signals XADD and the flag signal POR, specifically DC2_EN, DC3_EN, and DC4_EN. The start time of DC2_EN is the same as the start time of XADD<0>, the start time of DC3_EN is the same as the start time of XADD<2>, and the start time of DC4_EN is the same as the start time of XADD<4>. DC2_EN is transmitted to the second power supply circuit to enable the second power supply circuit, DC3_EN is transmitted to the third power supply circuit to enable the third power supply circuit, and DC4_EN is transmitted to the fourth power supply circuit to enable the fourth power supply circuit, thereby implementing the asynchronous start of the second power supply circuit, the third power supply circuit and the fourth power supply circuit. In this embodiment, since the second enable signals are generated based on the flag signal and the external signals, the asynchronous start of the first-type power supply circuit and the second-type power supply circuits may be implemented by receiving a plurality of external signals after transmitting the first enable signal. Moreover, since each of the external signals corresponds to one second-type power supply circuit, the asynchronous start of different second-type power supply circuits may be implemented by controlling the start times of different external signals to be different. As such, this helps to reduce the number of power supply circuits that are started at the same time and to prevent a surge current caused by simultaneous start from pulling down the voltage of the voltage source, thereby ensuring that the voltage of the power supply terminal meets the requirement and that the functional circuit connected to the power supply terminal starts normally. In addition, the use of the external signals to control the asynchronous start of the power supply circuits eliminates the need to provide a special delay circuit, which helps to save the chip area. Accordingly, the embodiments of the present disclosure also provide a memory including the power supply circuit of any of the above. In some embodiments, referring toFIG.7, the power supply circuits33are located in a peripheral circuit area32between adjacent memory banks31. Each memory bank31is a memory array area composed of memory units. Further, a plurality of power supply circuits33are evenly distributed in an extension direction of the peripheral circuit area32. In this embodiment, the multiple power supply circuits in the memory are asynchronously started. The voltage of the voltage source has high stability. 
The power supply terminal voltages of the power supply circuits can meet driving requirements. The functional circuits connected to the power supply terminals can be started normally and effectively. In some embodiments, the units mentioned in the disclosure may be sub-circuits or hardware components. For example, the control unit may be a control sub-circuit or a control component, the first enable unit may be a first enable sub-circuit or a first enable component, the anti-fuse scanning unit may be an anti-fuse scanning sub-circuit or an anti-fuse scanning component, and the memory unit may be a memory sub-circuit or a memory component, etc. It should be understood that, singular forms "a/an", "one", and "the" may include the plural forms, unless the context clearly indicates otherwise. It is also to be understood that, terms such as "comprising/containing" or "having" specify the presence of the stated features, wholes, steps, operations, components, parts or combinations thereof, but do not exclude the possibility of the presence or addition of one or more other features, wholes, steps, operations, components, parts or combinations thereof. Meanwhile, in the specification, term "and/or" includes any and all combinations of the related listed items. It can be understood by those skilled in the art that the above-mentioned implementations are specific embodiments for implementing the present disclosure, and in practical applications, various changes may be made in form and details without departing from the spirit and scope of the present disclosure. Any person skilled in the art may make changes and modifications without departing from the spirit and scope of the present disclosure. Therefore, the scope of protection of the present disclosure shall be subject to the scope defined in the claims. INDUSTRIAL APPLICABILITY In the embodiments of the present disclosure, the power supply circuit includes: a voltage source, multiple power supply circuits and a control circuit. The multiple power supply circuits are connected to the voltage source. If the voltage source is effective and the multiple power supply circuits are in an enable state, a voltage of a power supply terminal is pulled up to a preset voltage by the multiple power supply circuits and the load unit is supplied with power by the multiple power supply circuits during the pulling up operation. A first-type power supply circuit enters the enable state if a first enable signal is received, and second-type power supply circuits enter the enable state if second enable signals are received. The control circuit is configured to receive a flag signal, and transmit the first enable signal if the received flag signal is in an effective state, the effective state characterizing the effectiveness of the voltage source. The control circuit is further configured to receive a plurality of external signals after transmitting the first enable signal, each of the external signals corresponding to one of the second-type power supply circuits and start times of different external signals being different, and transmit a corresponding second enable signal if the flag signal and an external signal are received. The embodiments of the present disclosure help to ensure the effective start of a chip connected to the power supply terminals.
23,646